CN105474175B - Computer, article of manufacture, and method for allocation and scheduling for multiple queues - Google Patents
- Publication number
- CN105474175B (application CN201380077438.3A)
- Authority
- CN
- China
- Prior art keywords
- worker thread
- priority
- thread
- queue
- request
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
An operating system provides a pool of worker threads that service queues of requests at multiple different priority levels. A concurrency controller limits the number of concurrently executing threads. The system tracks the number of concurrently executing threads at each priority level, and preempts lower-priority worker threads in favor of higher-priority worker threads. The system can have multiple worker thread pools, each with its own priority queues and concurrency controller. A thread can also change its priority mid-operation. If a thread becomes lower priority while it is currently active, steps are taken to ensure that priority inversion does not occur. In particular, the current thread for an item that is now lower priority can be preempted by a thread for a higher-priority item, and the preempted item is placed in the lower-priority queue.
Description
Technical field
The present invention relates generally to thread management, and more particularly to allocating and scheduling threads for multiple prioritized queues.
Background
In modern computing systems, it is common for an operating system to allow a computer program to have multiple threads of execution that share access to computing resources managed by the operating system. The operating system itself can also have multiple threads available to service requests from applications for operating-system resources, herein called worker threads.

There are several ways to manage the complexity of multiple applications competing for resources. Generally, queues are provided to manage requests from applications that use threads, such as requests to access a resource using a worker thread. If all requests are treated equally, worker threads are assigned to requests on a first-come, first-served basis. In some instances, some requests have higher priority than others. In such a case, a separate request queue is used for each priority level, and each priority level has its own pool of worker threads. A scheduler assigns threads to requests in a queue based on arrival time, and manages the shared access of multiple threads to resources by making threads active and by stopping or preempting threads based on priority.

With such a system, each worker thread has a fixed priority that cannot be changed during execution; otherwise there is a risk of priority inversion. Moreover, with a separate pool of worker threads for each priority level, the number of priority levels is limited by system resources. Finally, such systems can experience dependency deadlocks.
Summary of the invention
This Summary introduces, in simplified form, a selection of concepts that are further described below in the Detailed Description. This Summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to limit the scope of the claimed subject matter.
An operating system provides a pool of worker threads that service queues of requests at multiple different priority levels. A concurrency controller limits the number of concurrently executing (i.e., active) worker threads. The operating system tracks the number of concurrently executing threads at each priority level, and preempts lower-priority worker threads in favor of higher-priority worker threads. The system can have multiple worker thread pools, each with its own priority queues and concurrency controller.

The concurrency controller is integrated with the scheduler. Thus, a thread can also change its priority mid-operation by directly notifying the scheduler. If a thread becomes lower priority while it is currently active, steps are taken to ensure that priority inversion does not occur. In particular, the scheduler preempts active worker threads that are now at a lower priority in favor of higher-priority worker threads.

In the following description, reference is made to the accompanying drawings, which form a part hereof and in which specific example implementations of this technique are shown by way of illustration. It is to be understood that other embodiments may be used, and structural changes may be made, without departing from the scope of the disclosure.
Brief description of the drawings
Fig. 1 is a block diagram of an example computer with which components of such a system can be implemented.

Fig. 2 is a diagram of an example implementation of multiple queues at different priority levels for a worker thread pool managed by an operating system for a computer system.

Fig. 3 is a flow chart of an example implementation of adding items to a queue.

Fig. 4 is a flow chart of an example implementation of selecting an item after a worker thread terminates.

Fig. 5 is a flow chart of an example implementation of changing the priority of a thread.

Fig. 6 is a flow chart of an example implementation of assigning a worker thread to an item selected from a queue.

Fig. 7 is a diagram of an illustrative example of concurrency control in operation.
Detailed description
The following section describes an example computer system on which can be implemented an operating system that assigns items from multiple prioritized queues to worker threads and controls the scheduling of those threads.

The following description is intended to provide a brief, general description of a suitable computer with which such a system can be implemented. The computer can be any of a variety of general-purpose or special-purpose computing hardware configurations. Examples of well-known computers that may be suitable include, but are not limited to, personal computers, server computers, hand-held or laptop devices (for example, media players, notebook computers, tablet computers, cellular phones, personal digital assistants, voice recorders), multiprocessor systems, microprocessor-based systems, set-top boxes, game consoles, programmable consumer electronics, network PCs, minicomputers, mainframe computers, and distributed computing environments that include any of the above systems or devices.
Fig. 1 illustrates an example of a suitable computer. This is only one example of a suitable computer and is not intended to suggest any limitation as to the scope of use or functionality of such a computer.

With reference to Fig. 1, in its most basic configuration, an example computer 100 includes at least one processing unit 102 and memory 104. The computer can include multiple processing units and/or additional co-processing units, such as a graphics processing unit 120. Depending on the exact configuration and type of computer, the memory 104 can be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.), or some combination of the two. This configuration is illustrated in Fig. 1 by dashed line 106.
Additionally, computer 100 can have additional features and functionality. For example, computer 100 can also include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated in Fig. 1 by removable storage 108 and non-removable storage 110. Computer storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer program instructions, data structures, program modules, or other data. Memory 104, removable storage 108, and non-removable storage 110 are all examples of computer storage media. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVDs) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by computer 100. Any such computer storage media can be part of computer 100.
Computer 100 can also contain communications connection(s) 112 that allow the device to communicate with other devices over a communication medium. Communication media typically carry computer program instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal, thereby changing the configuration or state of the signal's receiving device. By way of example, and not limitation, communication media include wired media, such as a wired network or direct-wired connection, and wireless media, such as acoustic, RF, infrared, and other wireless media. Communications connections 112 are devices, such as network interfaces, that interface with the communication media to transmit data over, and receive data from, the communication media.
Computer 100 can have various input device(s) 114, such as a keyboard, mouse, pen, camera, touch input device, and so on. Output device(s) 116, such as a display, speakers, a printer, and so on, can also be included. All of these devices are well known in the art and need not be discussed at length here. The various input and output devices can implement a natural user interface (NUI), which is any interface technology that enables a user to interact with a device in a "natural" manner, free from artificial constraints imposed by input devices such as mice, keyboards, remote controls, and the like.

Examples of NUI methods include those relying on speech recognition, touch and stylus recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, voice and speech, vision, touch, gestures, and machine intelligence, and can include the use of touch-sensitive displays, voice and speech recognition, intention and goal understanding, motion gesture detection using depth cameras (such as stereoscopic camera systems, infrared camera systems, and other camera systems, and combinations of these), motion gesture detection using accelerometers or gyroscopes, facial recognition, three-dimensional displays, head, eye, and gaze tracking, immersive augmented reality and virtual reality systems (all of which provide a more natural interface), and technologies for sensing brain activity using electric field sensing electrodes (EEG and related methods).
Each component of the system that operates on a computer is generally implemented by software, such as one or more computer programs, which include computer-executable instructions and/or computer-interpreted instructions, such as program modules, that are processed by the computer. Generally, program modules include routines, programs, objects, components, data structures, and so on that, when processed by a processing unit, perform particular tasks or implement particular abstract data types. This computer system may be practiced in distributed computing environments, where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules can be located in both local and remote computer storage media, including memory storage devices.

Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on-a-Chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
Having now described a computer, an operating system is a computer program executed on the computer that manages access to the computer's resources by other applications executing on the computer. One way in which an operating system provides access to such resources is through a pool 200 of worker threads 202, as shown in Fig. 2. The worker threads 202 represent instances of a computer program that can be invoked to access resources of the operating system. Such resources can include, but are not limited to, input devices, output devices, storage devices, processor time, and so on.

Various tasks involving such a pool are performed using the application programming interface of the operating system, such as, but not limited to, requesting the pool, specifying work items to be processed by the pool, and changing the priority of work items processed by the pool.

In one implementation, the pool of available worker threads is managed using a last-in, first-out (LIFO) queue, or stack. With such an implementation, work is assigned first to the thread that has been waiting the least amount of time, to optimize the likelihood of cache hits on data structures related to the thread.
To manage the use of such a pool of worker threads, the pool has an associated set of queues 206 of items. Each item in a queue represents a request from an application for work to be performed by a worker thread. Each queue has an associated priority level, with one queue for each priority level; different queues have different priority levels. In one example implementation, for each queue 206, or for each priority level, the system maintains a thread count 208 indicating the number of worker threads assigned to items at that queue's priority level. The queues can be implemented in various ways, but each queue is implemented in the form of a first-in, first-out (FIFO) queue, so that requests with earlier arrival times are processed first.
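As a rough sketch (the class and method names here are hypothetical, not from the patent), the per-pool state just described — one FIFO queue per priority level (cf. queues 206) plus a per-level thread count (cf. 208) — might be modeled as:

```python
from collections import deque

# Hypothetical sketch of the pool state: one FIFO queue of pending items
# per priority level and a count of worker threads assigned at each level.
# Higher numeric level = higher priority, as in the Fig. 7 example.
class PoolState:
    def __init__(self, levels):
        self.queues = {lvl: deque() for lvl in levels}   # FIFO per level
        self.thread_count = {lvl: 0 for lvl in levels}

    def enqueue(self, level, item):
        # Requests with earlier arrival times are processed first (FIFO).
        self.queues[level].append(item)

state = PoolState([0, 4, 10, 15, 31])
state.enqueue(10, "request-A")
state.enqueue(10, "request-B")
print(state.queues[10][0])  # request-A: first in, first out
```

This only models the bookkeeping; the actual worker threads and scheduler are outside the sketch.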
As described in more detail below, a concurrency controller 220 for the pool assigns items from the queues to worker threads while limiting the number of concurrently active worker threads. To perform this function, the concurrency controller maintains a limit 222 on the number of worker threads that can be concurrently active (i.e., accessing operating-system resources). The concurrency controller ensures that the number of active worker threads does not exceed the concurrency limit. As an example, the operating system can set the concurrency limit based on the number of processors, so that the number of worker threads does not oversubscribe the central processing units. An example implementation handles a "batch" mode, in which worker threads run to completion rather than being time-multiplexed with other threads in the system.

Each worker thread includes data representing its priority 210 at the time an item is assigned to it, a changed priority 212 in the event that thread priorities can be changed, and, if thread priorities can be changed, a concurrency flag 214 indicating whether the thread is charged against the concurrency limit. Any thread that is blocked is not charged against the concurrency count for its priority level. A preempted thread continues to contribute to the thread count, or to be charged against the concurrency count, for its priority level.
Given the structures described above, the concurrency controller can manage the assignment of each new request to a worker thread while maintaining the concurrency limit. The concurrency controller also integrates the scheduler that manages the concurrent access of active worker threads to system resources. Using the additional per-thread data, namely the changed priority 212 and the concurrency flag 214, the concurrency controller can manage the scheduling of active worker threads when thread priorities change.

Example implementations of how the concurrency controller manages adding new requests to the queues, changing the priority of items and threads, and assigning worker threads to items in the prioritized queues will now be described in connection with Figs. 3-6.
In connection with Fig. 3, an example implementation of the operation of the concurrency controller when adding items to the prioritized queues for a worker thread pool will now be described. This description assumes that the pool of worker threads has already been created, and that information about the pool has been provided to an application, so that the application can access the pool using the application programming interface. After an application has access to the pool, one command it can issue is a request for a worker thread in the pool to perform some specified work at a specified priority level. The priority level can be specified by the application, or its specification can be inferred from context, for example from the application's priority level or the nature of the resource request.

After an application submits such a request, the operating system receives 300 the request and its priority level. The request is placed 302 in the queue for the specified priority level.

It is next determined whether the request can be serviced. If the new request is not the next unassigned item in its queue, it is not serviced until the item(s) ahead of it in the queue, within the concurrency limit, have terminated. The concurrency controller determines 306 the concurrency count for the priority level of the new request. The concurrency count is the number of active worker threads at the priority level of the queue to which the request is added and at all higher priority levels.

For example, if there are two worker threads at priority level five and one worker thread at priority level four, then the concurrency count for priority level four is three and the concurrency count for priority level five is two.
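Under the convention of this example (a higher numeric level is higher priority), the concurrency count is simply a sum over the given level and all higher levels. A minimal sketch, reproducing the worked numbers above:

```python
def concurrency_count(active_per_level, level):
    """Number of active worker threads at `level` and at all
    higher-priority levels (higher number = higher priority)."""
    return sum(n for lvl, n in active_per_level.items() if lvl >= level)

# Two active workers at priority level five, one at level four:
active = {5: 2, 4: 1}
print(concurrency_count(active, 4))  # 3
print(concurrency_count(active, 5))  # 2
```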
If the concurrency count for the priority level of the new item exceeds the concurrency limit, as determined at 308, the new item is simply added 310 to the queue; otherwise, it is selected 312 to be immediately assigned to a worker thread. The assignment of items to worker threads is described in more detail in connection with Fig. 6. When an item is assigned to a worker thread, when a worker thread terminates, when a worker thread is stopped, and when a worker thread's priority changes, the count of worker threads at that priority level (e.g., 208 in Fig. 2) is updated.
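The decision at 308/310/312 — leave the request queued, or assign it immediately — can be sketched as follows. This is a simplified model under assumed values (a limit of two, the Fig. 7 priority levels); in this sketch "dispatching" just increments the active count rather than running a real thread:

```python
from collections import deque

CONCURRENCY_LIMIT = 2  # assumed limit (cf. 222)
LEVELS = (0, 4, 10, 15, 31)

queues = {lvl: deque() for lvl in LEVELS}
active = {lvl: 0 for lvl in LEVELS}  # active workers per level

def concurrency_count(level):
    return sum(n for lvl, n in active.items() if lvl >= level)

def submit(level, item):
    """Receive a request (300); either leave it queued (310) or
    select it for immediate assignment to a worker thread (312)."""
    if queues[level] or concurrency_count(level) >= CONCURRENCY_LIMIT:
        queues[level].append(item)   # not next, or limit reached (310)
        return "queued"
    active[level] += 1               # immediately assigned (312)
    return "dispatched"

print(submit(10, "r1"))  # dispatched
print(submit(10, "r2"))  # dispatched
print(submit(4, "r3"))   # queued: levels >= 4 already hold 2 active workers
```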
Referring to Fig. 4, another time at which the concurrency controller can assign a worker thread to an item from the prioritized queues is when another worker thread terminates. The concurrency controller receives 400 an indication that a worker thread has terminated. The thread count for the priority level of that worker thread can be decremented 402. As long as the concurrency count for a priority level is less than the concurrency limit, the concurrency controller iterates 404 through any inactive threads and queued items, highest priority first, to identify the highest priority level having either an inactive thread or at least one item in its queue. The next highest-priority item is selected 406: if the item is in a queue, it is assigned to a worker thread, which is made active; if the item has already been assigned a worker thread, that worker thread is made active. Inactive threads are selected over queued items at the same priority level. Thus, a preempted thread is scheduled when the concurrency limit has not yet been reached, no higher-priority thread can run, and no equal-priority thread is ahead of it. Accordingly, as long as a thread is preempted, the queues cannot release any item to be assigned to a worker thread at the same priority level as, or at a priority level below, the preempted thread. The assignment of items to worker threads is described in more detail in connection with Fig. 6.
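The selection loop at 404/406 — highest priority first, preferring preempted (inactive) threads over queued items at the same level — might be sketched as follows. Names and the limit of two are assumptions for illustration; resumed and assigned work is again modeled only as counter updates:

```python
from collections import deque

LIMIT = 2
levels = [31, 15, 10, 4, 0]                 # iterate highest priority first
queues = {lvl: deque() for lvl in levels}   # pending items
preempted = {lvl: 0 for lvl in levels}      # inactive (preempted) threads
active = {lvl: 0 for lvl in levels}

def concurrency_count(level):
    return sum(active[lvl] for lvl in levels if lvl >= level)

def on_thread_terminated(level):
    """A worker thread at `level` terminated (400): decrement its count
    (402), then activate the next highest-priority work (404/406),
    preferring a preempted thread over a queued item at the same level."""
    active[level] -= 1
    for lvl in levels:
        if concurrency_count(lvl) >= LIMIT:
            break                             # no capacity at or below lvl
        if preempted[lvl]:                    # resume a preempted thread
            preempted[lvl] -= 1
            active[lvl] += 1
            return ("resumed", lvl)
        if queues[lvl]:                       # else assign a queued item
            queues[lvl].popleft()
            active[lvl] += 1
            return ("assigned", lvl)
    return ("idle", None)

active[10] = 2
queues[4].append("r3")
print(on_thread_terminated(10))  # ('assigned', 4)
```

The early `break` is valid because the concurrency count can only grow as the loop moves to lower priority levels.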
Referring now to Fig. 5, another event that affects queued items and worker threads is a change in the priority of an item or worker thread. For example, an application can attempt to change the priority of a request, which can involve changing the priority of an item in a queue or of the worker thread to which an item has been assigned. Such a change is performed by the concurrency controller, which integrates the scheduler, so that the priority of a worker thread is changed directly by the scheduler. When the priority of an item in a queue changes, the item is removed from the queue for its original priority level and then added to the queue for its new priority level. If the priority of the item or worker thread is being raised, the effect on the system is small: the priority of the item or worker thread can simply be changed. An item can be removed from a lower-priority queue and placed in another, higher-priority queue; the priority of a worker thread can be raised in the scheduler.

However, if an item has been assigned to a worker thread and its priority becomes lower, there is a risk of priority inversion, in which the lower-priority thread prevents a higher-priority thread, assigned to a work item from a higher-priority queue, from running to completion. In particular, priority inversion occurs if the lower-priority thread is prevented from running (for example, by another, medium-priority thread) and a higher-priority thread is prevented from running because of the lower-priority thread.
An example implementation of managing priority changes, shown in Fig. 5, will now be described. Given the conditions that can lead to priority inversion, the concurrency controller receives 500 an indication that the priority level of an item has changed. For example, an application can notify the concurrency controller of a change in priority through the application programming interface. If the item has not yet been assigned to a worker thread, i.e., the item is still in a queue, as determined at 502, then the item can be queued 506 in the queue for the new priority level and treated as a new request (see Fig. 3). If the item has been assigned to a worker thread but the new priority is higher, as determined at 504, then the priority of the worker thread can simply be changed in the scheduler. The thread counts and concurrency counts for each affected priority level can be updated. Otherwise, if the priority is being lowered and the item has been assigned to a worker thread, additional steps are taken to preempt the now lower-priority request so as to allow a thread of higher priority to become active, which may include assigning a request from a higher-priority queue to a worker thread.

To assign a higher-priority request to a worker thread, or to allow a higher-priority thread to become active, the concurrency controller first locks 508 the queues so that the changes can be made without interference. The worker thread assigned to the item whose priority is changing is preempted 510. The thread count and the corresponding concurrency count for the previous priority level are decremented; the thread count and concurrency count for the new priority level are incremented. The priority of the thread is set 512 to the new priority level. The concurrency controller then selects 514 the next highest-priority item, either by activating a higher-priority thread or by selecting an item that has not yet been assigned a worker thread. Such assignment is described in more detail below in connection with Fig. 6. After the worker thread for the next highest-priority item is active, the queuing data is updated and the lock on the queues can be released 516.

As noted above, an item can be assigned to a worker thread when it is the next highest-priority item in the queues and the concurrency count for its priority level does not exceed the current concurrency limit. This assignment can occur, for example, when an item is added to a queue, when a worker thread terminates its item, and when the priority of a worker thread changes.
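The three branches of the Fig. 5 flow — re-queue if still queued (506), raise in place if the new priority is higher (504), otherwise preempt under the queue lock (508-516) — can be sketched as below. This is a simplified model with assumed names; the selection of the next item to activate (514) is elided to a comment:

```python
import threading
from collections import deque

lock = threading.Lock()          # lock on the queues (cf. 508)
queues = {lvl: deque() for lvl in (0, 4, 10, 15, 31)}
assigned = {}                    # item -> priority of its worker thread

def change_priority(item, old, new):
    if item in queues[old]:                  # still queued (502)
        queues[old].remove(item)
        queues[new].append(item)             # treated as a new request (506)
        return "requeued"
    if new > old:                            # assigned, priority raised (504)
        assigned[item] = new                 # scheduler simply raises it
        return "raised"
    with lock:                               # assigned, priority lowered
        assigned[item] = new                 # preempt (510), set (512)
        # ...select and activate the next highest-priority item or
        # thread (514) before the lock is released (516)
        return "preempted"

queues[10].append("q-item")
assigned["run-item"] = 10
print(change_priority("q-item", 10, 4))     # requeued
print(change_priority("run-item", 10, 15))  # raised
print(change_priority("run-item", 15, 4))   # preempted
```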
An example implementation of assigning an item to a worker thread will now be described in connection with Fig. 6.

The concurrency controller receives 600 an indication of the next selected item to be assigned to a worker thread. This step can be the same step as the selection of the next item by the concurrency controller described above. It then determines at 602 whether a worker thread is available from the pool. In some instances, a worker thread is expected to be available, such as when the controller selects an item after another worker thread has terminated. In other instances, there may be one or more worker threads of lower priority than the current item, one of which is preempted if resources are unavailable. If a worker thread is available, it is assigned 604 to the current item. If a worker thread is not available, a lower-priority worker thread is selected and preempted 606, and the current item is then assigned 604 to the now-available worker thread. When an item is assigned a worker thread, the thread can be marked 608 as charged against the concurrency limit, and the thread count for that priority level can be incremented.

When a worker thread is assigned, it is given the priority of the item assigned to it. As indicated above, if that priority changes, the concurrency controller can manage the change because the scheduler is integrated with the concurrency controller, such that priorities are set as part of the scheduling operation, and the scheduler is therefore directly notified of any priority change.

With worker threads assigned to items from the prioritized queues, the scheduler residing in the concurrency controller can then schedule the concurrent operation of the active worker threads.
With such an implementation, an application can have a single pool of process-wide worker threads assigned to work items at multiple different priorities. The concurrency controller can ensure that the highest-priority items received are assigned to worker threads first. In addition, the priority of items in the queues and/or of threads can be changed during operation, and the scheduler integrated in the concurrency controller can manage the assignment and activation of threads so as to reduce the likelihood of priority inversion.
An illustrative example of such a system at work is shown in Fig. 7. In Fig. 7, a pool 700 of worker threads is shown, with priority queues 702, thread counts, and a concurrency limit of "2". Each priority queue has a priority level, in this example 0, 4, 10, 15, and 31, respectively. A sequence of work items to be queued is illustrated at 704. As shown at "A", all queues are empty. Then, as shown at "B", a work item of priority 10 is added to the queue for priority 10. That work item is assigned a worker thread with priority 10, and the thread count for the priority-10 queue is incremented. As shown at "C", a second work item of priority 10 is added to the queue for priority 10. That work item is assigned a worker thread with priority 10, and the thread count for priority 10 is incremented to "2". As shown at "D", the next item has priority 4 and is added to the priority-4 queue. Because the concurrency limit is "2", and the thread counts for all queues at and above priority four already total "2" (since two items are at priority 10), this item is not assigned a worker thread. Then, as shown at "E", a work item of priority 15 is added to its corresponding queue. The sum of the thread counts at priority levels 15 and above is less than the concurrency limit, so this item is immediately assigned a worker thread, and that thread is set to priority 15. If the machine has only two available computing resources, the global scheduler preempts a thread running at priority 10 to allow the thread at priority 15 to run.
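The "B"-"E" sequence above can be reproduced with a small simulation under the same assumptions (limit of two, levels 0/4/10/15/31, higher number = higher priority). Note that, as in the description, item "E" is assigned a worker thread based only on the counts at its level and above; the preemption of a priority-10 thread is left to the global scheduler and is not modeled here:

```python
from collections import deque

LIMIT = 2
levels = [31, 15, 10, 4, 0]
queues = {lvl: deque() for lvl in levels}
thread_count = {lvl: 0 for lvl in levels}

def count_at_or_above(level):
    return sum(thread_count[lvl] for lvl in levels if lvl >= level)

def add_work_item(level, item):
    if count_at_or_above(level) < LIMIT and not queues[level]:
        thread_count[level] += 1          # assigned a worker thread
        return "assigned"
    queues[level].append(item)            # waits in its queue
    return "queued"

print(add_work_item(10, "B"))  # assigned  (thread count at 10 -> 1)
print(add_work_item(10, "C"))  # assigned  (thread count at 10 -> 2)
print(add_work_item(4, "D"))   # queued    (levels >= 4 already at limit)
print(add_work_item(15, "E"))  # assigned  (levels >= 15 total 0 < 2)
```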
It is also possible to provide a queuing implementation with guaranteed worker threads, which ensures that, at the time a job is queued, a worker thread will be available to service the item. One condition for making such a guarantee is that the number of waiting worker threads is greater than the number of jobs ahead of the new item in the prioritized queues. Because higher-priority items can arrive after an item has been queued, a mechanism is provided to account for such situations. In one implementation, each job is tagged with the guarantee. When an item is added to a queue with a guarantee, a counter is incremented; when a worker thread is assigned to the item, the counter is decremented. When a new higher-priority item is added to a queue, a check is made for guaranteed lower-priority items that have not yet been assigned worker threads. Such an implementation helps systems that may experience heavy load, and can eliminate the duplicated worker-thread logic that would otherwise provide dedicated threads to guarantee performance.
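A minimal sketch of the guarantee bookkeeping described above, under assumed names (`GuaranteedQueue`, `try_enqueue_guaranteed` are hypothetical): the counter is incremented when a guaranteed item is enqueued and decremented when a worker thread is assigned, and a new guarantee is granted only while enough waiting workers remain to cover both the items ahead of the new item and the guarantees already outstanding.

```python
class GuaranteedQueue:
    """Illustrative model of the guarantee counter: a guarantee is made only
    if the waiting worker threads exceed the items ahead of the new item,
    after accounting for guarantees already outstanding."""

    def __init__(self, waiting_workers):
        self.waiting_workers = waiting_workers  # idle worker threads available
        self.guaranteed = 0                     # guaranteed items not yet assigned

    def try_enqueue_guaranteed(self, items_ahead):
        # Workers not already reserved by outstanding guarantees must exceed
        # the number of items queued ahead of this one.
        if self.waiting_workers - self.guaranteed > items_ahead:
            self.guaranteed += 1    # counter incremented on enqueue
            return True
        return False                # no guarantee can be made

    def on_assigned(self):
        # Counter decremented when a worker thread is assigned to the item.
        self.guaranteed -= 1
        self.waiting_workers -= 1
```

With two waiting workers, the first two guaranteed enqueues succeed and the third is refused, matching the condition stated in the text.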
Having now described example implementations and variations, it should be apparent that any or all of the aforementioned alternative implementations described herein may be used in any desired combination to form additional hybrid implementations. It should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific implementations described above. The specific implementations described above are disclosed as examples only.
Claims (10)
1. A computer comprising a memory and at least one processor, the computer having an operating system executing on the at least one processor, the operating system being operable to:
assign a pool of worker threads to an application in response to a request from the application executing on the computer, wherein each worker thread is a thread provided by the operating system, and wherein each worker thread services requests from the application to access operating system resources;
associate a plurality of queues with the pool of worker threads assigned to the application, each queue having a different priority and storing items to be assigned to worker threads, each item representing a request with a priority from the application to access the operating system resources by using one of the worker threads in the pool of worker threads assigned to the application;
the operating system further comprising a concurrency controller operable to assign items from the plurality of queues to worker threads and to limit a number of concurrently executing worker threads;
wherein, in response to a request with a priority from the application to access the operating system resources, the concurrency controller is operable to:
add an item corresponding to the request to the queue with the priority of the request;
determine, based at least on a thread count of concurrently executing worker threads associated with the queue, whether a concurrency limit for the queue with the priority of the request has been reached; and
in response to a determination that the concurrency limit for the queue with the priority of the request has not been reached, assign the item corresponding to the request to one of the worker threads and increment the thread count associated with the queue.
2. The computer of claim 1, wherein the operating system has an application programming interface through which an application can add items to a queue of an assigned priority.
3. The computer of claim 2, wherein each worker thread has a concurrency flag indicating whether the worker thread is counted against the concurrency limit.
4. The computer of claim 3, wherein each worker thread has an indication of an original priority and an indication of any changed priority.
5. The computer of claim 4, wherein, in response to a change of a worker thread's priority to a lower priority, the worker thread is preempted and a higher-priority worker thread is made active.
6. The computer of claim 1, wherein the concurrency controller marks an item with a guarantee that a worker thread will be available to service the item.
7. The computer of claim 1, wherein, in response to the addition of an item to a queue of higher priority than a currently executing worker thread, the concurrency controller preempts the current worker thread and assigns the higher-priority item to a worker thread.
8. An article of manufacture, comprising:
a computer storage medium; and
computer program instructions stored on the computer storage medium that, when read from the computer storage medium and processed by a processing device, configure the processing device to provide an operating system executing on at least one processor, the operating system being operable to:
assign a pool of worker threads to an application in response to a request from the application executing on the computer, wherein each worker thread is a thread provided by the operating system, and wherein each worker thread services requests from the application to access operating system resources;
associate a plurality of queues with the pool of worker threads assigned to the application, each queue having a different priority and storing items to be assigned to worker threads, each item representing a request with a priority from the application to access the operating system resources by using one of the worker threads in the pool of worker threads assigned to the application;
the operating system further comprising a concurrency controller operable to assign items from the plurality of queues to worker threads and to limit a number of concurrently executing worker threads;
wherein, in response to a request with a priority from the application to access the operating system resources, the concurrency controller is operable to:
add an item corresponding to the request to the queue with the priority of the request;
determine, based at least on a thread count of concurrently executing worker threads associated with the queue, whether a concurrency limit for the queue with the priority of the request has been reached; and
in response to a determination that the concurrency limit for the queue with the priority of the request has not been reached, assign the item corresponding to the request to one of the worker threads and increment the thread count associated with the queue.
9. A computer-implemented method, comprising:
executing an operating system on a computer, the operating system being operable to:
assign a pool of worker threads to an application in response to a request from the application executing on the computer, wherein each worker thread is a thread provided by the operating system, and wherein each worker thread services requests from the application to access operating system resources;
associate a plurality of queues with the pool of worker threads assigned to the application, each queue having a different priority and storing items to be assigned to worker threads, each item representing a request with a priority from the application to access the operating system resources by using one of the worker threads in the pool of worker threads assigned to the application;
the operating system further comprising a concurrency controller operable to assign items from the plurality of queues to worker threads and to limit a number of concurrently executing worker threads;
wherein, in response to a request with a priority from the application to access the operating system resources, the concurrency controller is operable to:
add an item corresponding to the request to the queue with the priority of the request;
determine, based at least on a thread count of concurrently executing worker threads associated with the queue, whether a concurrency limit for the queue with the priority of the request has been reached; and
in response to a determination that the concurrency limit for the queue with the priority of the request has not been reached, assign the item corresponding to the request to one of the worker threads and increment the thread count associated with the queue.
10. The computer-implemented method of claim 9, wherein each worker thread has a concurrency flag indicating whether the worker thread is counted against the concurrency limit.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/918749 | 2013-06-14 | ||
US13/918,749 US9715406B2 (en) | 2013-06-14 | 2013-06-14 | Assigning and scheduling threads for multiple prioritized queues |
PCT/US2013/061086 WO2014200552A1 (en) | 2013-06-14 | 2013-09-21 | Assigning and scheduling threads for multiple prioritized queues |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105474175A CN105474175A (en) | 2016-04-06 |
CN105474175B true CN105474175B (en) | 2019-07-16 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5826081A (en) * | 1996-05-06 | 1998-10-20 | Sun Microsystems, Inc. | Real time thread dispatcher for multiprocessor applications |
US6477561B1 (en) * | 1998-06-11 | 2002-11-05 | Microsoft Corporation | Thread optimization |
EP0961204A3 (en) * | 1998-05-28 | 2004-01-21 | Hewlett-Packard Company, A Delaware Corporation | Thread based governor for time scheduled process execution |
CN1574776A (en) * | 2003-06-03 | 2005-02-02 | 微软公司 | Method for providing contention free quality of service to time constrained data |
Non-Patent Citations (1)
Title |
---|
Dynamic Policy-Driven Quality of Service in Service-Oriented Systems; Joseph P. Loyall, Matthew Gillen, Aaron Paulos; Object/Component/Service-Oriented Real-Time Distributed Computing (ISORC), 2010 13th IEEE International Symposium on; 2010-06-07; main text, page 1, left column, last paragraph through page 8, right column, first paragraph; figures 1-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6320520B2 (en) | Thread assignment and scheduling for many priority queues | |
US10649664B2 (en) | Method and device for scheduling virtual disk input and output ports | |
US9277003B2 (en) | Automated cloud workload management in a map-reduce environment | |
US9542229B2 (en) | Multiple core real-time task execution | |
EP3132355B1 (en) | Fine-grained bandwidth provisioning in a memory controller | |
US9135059B2 (en) | Opportunistic multitasking | |
CN110489213B (en) | Task processing method and processing device and computer system | |
US8566830B2 (en) | Local collections of tasks in a scheduler | |
CN111406250A (en) | Provisioning using prefetched data in a serverless computing environment | |
CN105378668B (en) | The interruption of operating system management in multicomputer system guides | |
KR20150084098A (en) | System for distributed processing of stream data and method thereof | |
WO2013041366A1 (en) | Concurrent processing of queued messages | |
US10037225B2 (en) | Method and system for scheduling computing | |
CN114327894A (en) | Resource allocation method, device, electronic equipment and storage medium | |
CN105474175B (en) | Computer, article of manufacture, and method for assigning and scheduling threads for multiple queues | |
CN113703945B (en) | Micro service cluster scheduling method, device, equipment and storage medium | |
KR20180082560A (en) | Method and apparatus for time-based scheduling of tasks | |
US20140237149A1 (en) | Sending a next request to a resource before a completion interrupt for a previous request | |
CN106155810A (en) | The input/output scheduling device of workload-aware in software definition mixing stocking system | |
US10162571B2 (en) | Systems and methods for managing public and private queues for a storage system | |
US20240118920A1 (en) | Workload scheduling using queues with different priorities | |
CN116737365A (en) | Request processing method, device, equipment and medium of intelligent automobile operating system | |
CN117453386A (en) | Memory bandwidth allocation in a multi-entity system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |