CN110011936A - Thread scheduling method and device based on multi-core processor - Google Patents
- Publication number
- CN110011936A (Application CN201910199086.8A)
- Authority
- CN
- China
- Prior art keywords
- queue
- thread
- scheduling
- lock
- read
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
Abstract
The invention discloses a thread scheduling method based on a multi-core processor. The method is applied in a multi-core environment comprising a lock-free scheduler and multiple thread processors. The lock-free scheduler is configured with a scheduling queue, and each thread processor is configured with an insertion (PUT) queue and a deletion (GET) queue. The method comprises: the lock-free scheduler cyclically traverses all PUT queues at a predetermined period, each time reading one thread from a PUT queue and writing it into the scheduling queue; and the lock-free scheduler reads the scheduling queue at the predetermined period and cyclically writes the threads read out into the GET queues. The thread scheduling method based on a multi-core processor provided by the embodiments of the present invention solves the prior-art problem that lock conflicts during thread scheduling on a multi-core processor degrade CPU performance.
Description
Technical field
The present invention relates to the field of communication technology, and in particular to a thread scheduling method and device based on a multi-core processor.
Background art
Forwarding performance is an important parameter for measuring the quality of a routing device. To shorten the processing path of forwarded data packets, routers generally use fast forwarding (the fast path) to improve forwarding performance. The fast-forwarding packet path is: physical-layer reception → link-layer processing → physical-layer transmission. Processing fast-path packets occupies considerable processor (CPU) resources, so multi-core CPUs came into being.
In a fast-forwarding framework on a multi-core CPU, there are usually multiple threads to be processed. These threads need to be distributed fairly across multiple cores by a multi-core scheduler in order to improve overall forwarding performance. In current fair schedulers there is a single Ready queue storing the threads to be scheduled. Multiple cores access this Ready queue, but only one core may access it at a time, so a multi-core lock is required: the core that acquires the lock accesses the queue and releases the lock afterwards, while cores that fail to acquire the lock spin idly waiting for its release.
To achieve fair scheduling of threads on a multi-core CPU, each core enqueues the threads it has finished processing into the same Ready queue. Threads are dequeued in first-in-first-out order, and multiple cores access the same Ready queue when dequeuing to obtain a thread to execute. Since multiple cores thus access the same Ready queue concurrently, a multi-core lock is required and lock contention arises. Especially in the 48-core case, lock conflicts can seriously degrade overall CPU performance.
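As an illustrative sketch (not part of the patent text), the prior-art scheme above amounts to every core serializing on one mutex-guarded Ready queue; the class and method names here (`LockedReadyQueue`, `enqueue`, `dequeue`) are hypothetical, with Python `threading` standing in for raw cores:

```python
import threading
from collections import deque

class LockedReadyQueue:
    """Prior-art single Ready queue: every core takes the same lock
    both to enqueue a finished thread and to dequeue the next one,
    so cores that lose the race block waiting for the lock."""
    def __init__(self):
        self._q = deque()
        self._lock = threading.Lock()

    def enqueue(self, thread):
        with self._lock:          # multi-core lock on the write side
            self._q.append(thread)

    def dequeue(self):
        with self._lock:          # the same lock is contended on the read side
            return self._q.popleft() if self._q else None

# Four "cores" race to enqueue their finished threads through the one lock.
rq = LockedReadyQueue()
workers = [threading.Thread(target=rq.enqueue, args=(f"T{i}",)) for i in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(sorted(rq.dequeue() for _ in range(4)))  # → ['T0', 'T1', 'T2', 'T3']
```

Every `enqueue` and `dequeue` crosses the same lock, which is exactly the contention point the lock-free scheme below removes.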
Summary of the invention
The embodiments of the present invention provide a thread scheduling method and device based on a multi-core processor, to solve the prior-art problem that lock conflicts during thread scheduling on a multi-core processor degrade CPU performance.
A thread scheduling method based on a multi-core processor, the method being applied in a multi-core environment comprising a lock-free scheduler and multiple thread processors, wherein the lock-free scheduler is configured with a scheduling queue and each thread processor is configured with an insertion (PUT) queue and a deletion (GET) queue, the method comprising:
the lock-free scheduler cyclically traversing all PUT queues at a predetermined period, each time reading one thread from a PUT queue and writing it into the scheduling queue; and
the lock-free scheduler reading the scheduling queue at the predetermined period and cyclically writing the threads read out into the GET queues, for the thread processors to read.
Further, the method also comprises each thread processor performing the following operations:
the thread processor sequentially reads and executes the threads in its GET queue;
the thread processor sequentially writes the threads whose execution has finished into its PUT queue;
wherein the GET queue is a dual-port ring first-in-first-out (FIFO) queue for buffering the threads about to be executed, and the PUT queue is a dual-port ring FIFO queue for buffering the threads whose execution has finished.
The scheduling queue is a dual-port ring FIFO queue whose two ports are an IN port and an OUT port.
The lock-free scheduler reading a thread from a PUT queue and writing it into the scheduling queue comprises: the lock-free scheduler reads a thread from a PUT queue and writes the thread into the scheduling queue through the IN port.
The lock-free scheduler reading the scheduling queue at the predetermined period comprises: the lock-free scheduler reads, through the OUT port at the predetermined period, the threads about to be executed.
A thread scheduling device based on a multi-core processor, the device comprising a lock-free scheduling unit and multiple thread processing units; the lock-free scheduling unit is configured with a scheduling queue, and each thread processing unit is configured with a PUT queue and a GET queue; wherein
the lock-free scheduling unit is configured to cyclically traverse all PUT queues at a predetermined period, each time reading one thread from a PUT queue and writing it into the scheduling queue; and to read the scheduling queue at the predetermined period and cyclically write the threads read out into the GET queues;
the thread processing unit is configured to read the threads in its GET queue.
The thread processing unit is further configured to sequentially read and execute the threads in its GET queue, and to sequentially write the threads whose execution has finished into its PUT queue;
wherein the GET queue is a dual-port ring first-in-first-out (FIFO) queue for buffering the threads about to be executed, and the PUT queue is a dual-port ring FIFO queue for buffering the threads whose execution has finished.
The scheduling queue is a dual-port ring FIFO queue whose two ports are an IN port and an OUT port.
The lock-free scheduling unit is specifically configured to read a thread from a PUT queue, write the thread into the scheduling queue through the IN port, read the scheduling queue at the predetermined period, and cyclically write the threads read out into the GET queues.
The lock-free scheduling unit is further specifically configured to read, through the OUT port at the predetermined period, the threads about to be executed, and to cyclically write the threads read out into the GET queues.
The present invention has the following beneficial effects:
In the thread scheduling method and device based on a multi-core processor provided by the embodiments of the present invention, two queues are configured for each thread processor and one scheduling queue is configured for the lock-free scheduler, each configured queue being a dual-port ring FIFO queue that two processors can read and write simultaneously. Multiple thread processors thereby obtain and release threads without locks, and the periodic write/read operations distribute the threads in the PUT queues fairly into the GET queues, achieving continuous thread scheduling on a multi-core processor. The entire scheduling process requires no multi-core lock, lock conflicts are avoided, and processors never spin waiting for a lock, which improves CPU utilization and performance.
Brief description of the drawings
Fig. 1 is the flow chart of the thread scheduling method based on multi-core processor in the embodiment of the present invention;
Fig. 2 is another flow chart of the thread scheduling method based on the multi-core processor in the embodiment of the present invention;
Fig. 3 is the structural schematic diagram of the thread scheduling device based on multi-core processor in the embodiment of the present invention.
Detailed description of the embodiments
To make the purposes, technical schemes and advantages of the application clearer, the technical scheme of the application is described clearly and completely below in conjunction with specific embodiments of the application and the corresponding drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the application without creative work shall fall within the protection scope of the application.
To address the prior-art problem that lock conflicts during thread scheduling on a multi-core processor degrade CPU performance, the thread scheduling method based on a multi-core processor provided by the embodiments of the present invention configures two queues for each thread processor and one scheduling queue for the lock-free scheduler, each queue being a dual-port ring first-in-first-out (First In First Out, FIFO) queue that two processors can read and write simultaneously, thereby achieving lock-free thread scheduling. The method is applied in a multi-core environment comprising a lock-free scheduler and multiple thread processors; the lock-free scheduler is configured with a scheduling queue, and each thread processor is configured with an insertion (PUT) queue and a deletion (GET) queue. The flow of the method is shown in Fig. 1 and comprises the following steps:
Step 101: the lock-free scheduler cyclically traverses all PUT queues at a predetermined period, each time reading one thread from a PUT queue and writing it into the scheduling queue; and
Step 102: the lock-free scheduler reads the scheduling queue at the predetermined period and cyclically writes the threads read out into the GET queues, for the thread processors to read.
Here, the predetermined period is set according to actual needs; the governing principle is that threads must not accumulate in the PUT queues and the GET queues must not run empty.
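One pass of the scheduler over steps 101 and 102 can be sketched single-threaded with plain Python deques; this is an illustrative model, not the patent's implementation, and the names `scheduler_tick`, `put_queues`, `sched_queue` and `get_queues` are hypothetical:

```python
from collections import deque

def scheduler_tick(put_queues, sched_queue, get_queues):
    """One period of the lock-free scheduler, sketched.

    Step 101: cyclically traverse all PUT queues, moving at most one
    thread from each into the scheduling queue (its IN port).
    Step 102: drain the scheduling queue, distributing threads
    round-robin into the GET queues (read via its OUT port).
    """
    # Step 101: taking one thread per PUT queue per period keeps the traversal fair.
    for pq in put_queues:
        if pq:
            sched_queue.append(pq.popleft())

    # Step 102: cyclic (round-robin) write into the GET queues.
    i = 0
    n = len(get_queues)
    while sched_queue:
        get_queues[i % n].append(sched_queue.popleft())
        i += 1

# Example: two cores have finished threads T1..T3; one tick redistributes them.
puts = [deque(["T1", "T2"]), deque(["T3"])]
gets = [deque(), deque()]
sched = deque()
scheduler_tick(puts, sched, gets)
print(list(gets[0]), list(gets[1]))  # → ['T1'] ['T3']
```

Note that `T2` stays in its PUT queue until the next period, which is why the period must be short enough that threads do not accumulate there.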
Further, as shown in Fig. 2, the method also comprises the following operations performed by each thread processor:
Step 201: the thread processor sequentially reads and executes the threads in its GET queue;
Step 202: the thread processor sequentially writes the threads whose execution has finished into its PUT queue.
Here, the GET queue is a dual-port ring FIFO queue for buffering the threads about to be executed, and the PUT queue is a dual-port ring FIFO queue for buffering the threads whose execution has finished. A dual-port ring FIFO queue comprises one write port and one read port: the write port supports only one thread processor writing data into the FIFO at a time, and the read port supports only one thread processor reading data from the FIFO at a time. A dual-port ring FIFO supports simultaneous reading and writing without a mutual-exclusion lock, so two thread processors can write and read the FIFO concurrently without any multi-core lock.
In summary, steps 201 and 202 are two independent processes with no fixed order between them.
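The dual-port ring FIFO described above can be sketched as a single-producer/single-consumer ring buffer: the write index is modified only by the one writer and the read index only by the one reader, which is why no mutual-exclusion lock is needed. This is an illustrative sketch, not the patent's implementation; the class name `RingFifo` and its methods are hypothetical:

```python
class RingFifo:
    """Dual-port ring FIFO sketch: one write port, one read port.

    The writer touches only `head`, the reader only `tail`, so a single
    writer and a single reader can operate concurrently without a lock.
    One slot is kept empty to distinguish full from empty.
    """
    def __init__(self, capacity):
        self.buf = [None] * (capacity + 1)
        self.head = 0  # next slot to write (owned by the writer)
        self.tail = 0  # next slot to read (owned by the reader)

    def put(self, item):
        """Write port: returns False when the FIFO is full."""
        nxt = (self.head + 1) % len(self.buf)
        if nxt == self.tail:
            return False  # full
        self.buf[self.head] = item
        self.head = nxt
        return True

    def get(self):
        """Read port: returns None when the FIFO is empty."""
        if self.tail == self.head:
            return None  # empty
        item = self.buf[self.tail]
        self.tail = (self.tail + 1) % len(self.buf)
        return item

f = RingFifo(2)
assert f.put("T1") and f.put("T2")
assert not f.put("T3")      # capacity 2: a third put is rejected
assert f.get() == "T1"      # first-in, first-out
assert f.put("T3")          # space freed by the read
print(f.get(), f.get(), f.get())  # → T2 T3 None
```

A real multi-core implementation would additionally need atomic loads/stores with appropriate memory ordering on `head` and `tail`; the single-writer/single-reader ownership shown here is what makes the lock unnecessary.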
Further, the scheduling queue is a dual-port ring FIFO queue whose two ports are an IN port and an OUT port. The scheduling queue queues threads in first-in-first-out order, which guarantees fairness.
Correspondingly, in step 101, the lock-free scheduler reading a thread from a PUT queue and writing it into the scheduling queue comprises: the lock-free scheduler reads a thread from a PUT queue and writes the thread into the scheduling queue through the IN port.
Correspondingly, in step 102, the lock-free scheduler reading the scheduling queue at the predetermined period comprises: the lock-free scheduler reads, through the OUT port at the predetermined period, the threads about to be executed.
The write (IN) operation and the read (OUT) operation performed by the lock-free scheduler must execute at a stable period, so that the PUT queues never fill up and the GET queues never run empty, guaranteeing that thread scheduling always proceeds normally.
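On the other side of those queues, the thread-processor loop of steps 201 and 202 reduces to: read from the private GET queue, execute, write back to the private PUT queue. This sketch uses hypothetical names (`worker_step`, `execute`) and plain deques in place of the ring FIFOs:

```python
from collections import deque

def worker_step(get_queue, put_queue, execute):
    """One iteration of steps 201-202 for one thread processor, sketched.

    Step 201: read the next thread from this core's GET queue and run it.
    Step 202: write the finished thread into this core's PUT queue,
    where the scheduler will pick it up on a later period.
    """
    if not get_queue:
        return None                    # GET queue should not run empty in steady state
    thread = get_queue.popleft()       # step 201: read ...
    execute(thread)                    # ... and execute
    put_queue.append(thread)           # step 202: hand back to the scheduler
    return thread

ran = []
get_q = deque(["T1", "T2"])
put_q = deque()
worker_step(get_q, put_q, ran.append)
worker_step(get_q, put_q, ran.append)
print(ran, list(put_q))  # → ['T1', 'T2'] ['T1', 'T2']
```

Because each core owns the read end of its GET queue and the write end of its PUT queue exclusively, this loop never contends with the other cores.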
Based on the same inventive concept, an embodiment of the present invention provides a thread scheduling device based on a multi-core processor. The structure of the device is shown in Fig. 3 and comprises a lock-free scheduling unit 31 and multiple thread processing units 32. The lock-free scheduling unit 31 is configured with a scheduling queue, and each thread processing unit 32 is configured with a PUT queue and a GET queue; wherein
the lock-free scheduling unit 31 is configured to cyclically traverse all PUT queues at a predetermined period, each time reading one thread from a PUT queue and writing it into the scheduling queue; and to read the scheduling queue at the predetermined period and cyclically write the threads read out into the GET queues;
the thread processing unit 32 is configured to read the threads in its GET queue. Here, the predetermined period is set according to actual needs; the governing principle is that threads must not accumulate in the PUT queues and the GET queues must not run empty.
Further, the thread processing unit 32 is configured to sequentially read and execute the threads in its GET queue, and to sequentially write the threads whose execution has finished into its PUT queue.
Here, the GET queue is a dual-port ring FIFO queue for buffering the threads about to be executed, and the PUT queue is a dual-port ring FIFO queue for buffering the threads whose execution has finished. A dual-port ring FIFO queue comprises one write port and one read port: the write port supports only one thread processing unit 32 writing data into the FIFO at a time, and the read port supports only one thread processing unit 32 reading data from the FIFO at a time. A dual-port ring FIFO supports simultaneous reading and writing without a mutual-exclusion lock, so two thread processing units 32 can write and read the FIFO concurrently without any multi-core lock.
The scheduling queue is a dual-port ring FIFO queue whose two ports are an IN port and an OUT port; it queues threads in first-in-first-out order, which guarantees fairness.
The lock-free scheduling unit 31 is specifically configured to read a thread from a PUT queue, write the thread into the scheduling queue through the IN port, read the scheduling queue at the predetermined period, and cyclically write the threads read out into the GET queues.
The lock-free scheduling unit 31 is further specifically configured to read, through the OUT port at the predetermined period, the threads about to be executed, and to cyclically write the threads read out into the GET queues.
The write (IN) operation and the read (OUT) operation performed by the lock-free scheduling unit 31 must execute at a stable period, so that the PUT queues never fill up and the GET queues never run empty, guaranteeing that thread scheduling always proceeds normally.
It should be appreciated that the realization principle and process of the thread scheduling device based on a multi-core processor provided by this embodiment of the present invention are similar to those of the embodiments shown in Figs. 1 and 2, and are not repeated here.
In the thread scheduling method and device based on a multi-core processor provided by the embodiments of the present invention, two queues are configured for each thread processor and one scheduling queue is configured for the lock-free scheduler, each configured queue being a dual-port ring FIFO queue that two processors can read and write simultaneously. Multiple thread processors thereby obtain and release threads without locks, and the periodic write/read operations distribute the threads in the PUT queues fairly into the GET queues, achieving continuous thread scheduling on a multi-core processor. The entire scheduling process requires no multi-core lock, lock conflicts are avoided, and processors never spin waiting for a lock, which improves CPU utilization and performance. With the thread scheduling scheme provided by the embodiments of the present invention, fully lock-free thread scheduling is achieved, idle spinning of cores due to lock conflicts is effectively reduced, and CPU utilization is improved, thereby improving fast-forwarding performance. In actual tests on a 48-core CPU in particular, fast-forwarding performance improved by nearly one third, greatly accelerating the packet forwarding rate.
Those of ordinary skill in the art will appreciate that the drawings are schematic diagrams of one embodiment, and the modules or processes in the drawings are not necessarily required for implementing the present invention.
As can be seen from the above description of the embodiments, those skilled in the art can clearly understand that the present invention can be realized by means of software plus a necessary general hardware platform. Based on this understanding, the essence of the technical solution of the present invention, or the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a storage medium, such as ROM/RAM, magnetic disk or optical disc, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments, or in certain parts of the embodiments, of the present invention.
All the embodiments in this specification are described in a progressive manner; the same or similar parts of the embodiments may refer to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the device and system embodiments are described relatively simply because they are substantially similar to the method embodiments, and related details can be found in the description of the method embodiments. The device and system embodiments described above are merely schematic: the units described as separate parts may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment's solution, which those of ordinary skill in the art can understand and implement without creative effort.
In addition, some of the processes described in the above embodiments and drawings contain multiple operations appearing in a particular order, but it should be clearly understood that these operations need not be executed in the order in which they appear herein and may be executed in parallel. Operation numbers such as 201 and 202 are only used to distinguish different operations; the numbers themselves do not represent any execution order. These processes may also contain more or fewer operations, which may be executed in order or in parallel. It should be noted that terms such as "first" and "second" herein are used to distinguish different messages, devices, modules, etc.; they do not represent an order, nor do they require that "first" and "second" be of different types.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory that can guide a computer or other programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although alternative embodiments of the present invention have been described, those skilled in the art can make additional changes and modifications to these embodiments once they know the basic inventive concept. Therefore, the appended claims are intended to be interpreted as including the alternative embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various modifications and variations to the embodiments of the present invention without departing from the spirit and scope of the embodiments of the present invention. Thus, if these modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.
Claims (10)
1. A thread scheduling method based on a multi-core processor, characterized in that the method is applied in a multi-core environment comprising a lock-free scheduler and multiple thread processors; the lock-free scheduler is configured with a scheduling queue, and each thread processor is configured with an insertion (PUT) queue and a deletion (GET) queue; the method comprising:
the lock-free scheduler cyclically traversing all PUT queues at a predetermined period, each time reading one thread from a PUT queue and writing it into the scheduling queue; and
the lock-free scheduler reading the scheduling queue at the predetermined period and cyclically writing the threads read out into the GET queues, for the thread processors to read.
2. The method according to claim 1, characterized in that the method further comprises each thread processor performing the following operations:
the thread processor sequentially reading and executing the threads in its GET queue;
the thread processor sequentially writing the threads whose execution has finished into its PUT queue;
wherein the GET queue is a dual-port ring first-in-first-out (FIFO) queue for buffering the threads about to be executed, and the PUT queue is a dual-port ring FIFO queue for buffering the threads whose execution has finished.
3. The method according to claim 1 or 2, characterized in that the scheduling queue is a dual-port ring FIFO queue, the dual ports comprising an IN port and an OUT port.
4. The method according to claim 3, characterized in that the lock-free scheduler reading a thread from a PUT queue and writing it into the scheduling queue comprises:
the lock-free scheduler reading a thread from a PUT queue and writing the thread into the scheduling queue through the IN port.
5. The method according to claim 3, characterized in that the lock-free scheduler reading the scheduling queue at the predetermined period comprises:
the lock-free scheduler reading, through the OUT port at the predetermined period, the threads about to be executed.
6. A thread scheduling device based on a multi-core processor, characterized in that the device comprises a lock-free scheduling unit and multiple thread processing units; the lock-free scheduling unit is configured with a scheduling queue, and each thread processing unit is configured with a PUT queue and a GET queue; wherein
the lock-free scheduling unit is configured to cyclically traverse all PUT queues at a predetermined period, each time reading one thread from a PUT queue and writing it into the scheduling queue; and to read the scheduling queue at the predetermined period and cyclically write the threads read out into the GET queues;
the thread processing unit is configured to read the threads in its GET queue.
7. The device according to claim 6, characterized in that the thread processing unit is specifically configured to sequentially read and execute the threads in its GET queue, and to sequentially write the threads whose execution has finished into its PUT queue;
wherein the GET queue is a dual-port ring first-in-first-out (FIFO) queue for buffering the threads about to be executed, and the PUT queue is a dual-port ring FIFO queue for buffering the threads whose execution has finished.
8. The device according to claim 6 or 7, characterized in that the scheduling queue is a dual-port ring FIFO queue, the dual ports comprising an IN port and an OUT port.
9. The device according to claim 8, characterized in that the lock-free scheduling unit is specifically configured to read a thread from a PUT queue, write the thread into the scheduling queue through the IN port, read the scheduling queue at the predetermined period, and cyclically write the threads read out into the GET queues.
10. The device according to claim 8, characterized in that the lock-free scheduling unit is specifically configured to read, through the OUT port at the predetermined period, the threads about to be executed, and to cyclically write the threads read out into the GET queues.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910199086.8A CN110011936B (en) | 2019-03-15 | 2019-03-15 | Thread scheduling method and device based on multi-core processor |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910199086.8A CN110011936B (en) | 2019-03-15 | 2019-03-15 | Thread scheduling method and device based on multi-core processor |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110011936A true CN110011936A (en) | 2019-07-12 |
CN110011936B CN110011936B (en) | 2023-02-17 |
Family
ID=67167183
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910199086.8A Active CN110011936B (en) | 2019-03-15 | 2019-03-15 | Thread scheduling method and device based on multi-core processor |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110011936B (en) |
- 2019
- 2019-03-15 CN CN201910199086.8A patent/CN110011936B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101631139A (en) * | 2009-05-19 | 2010-01-20 | 华耀环宇科技(北京)有限公司 | Load balancing software architecture based on multi-core platform and method therefor |
CN102331923A (en) * | 2011-10-13 | 2012-01-25 | 西安电子科技大学 | Multi-core and multi-threading processor-based functional macropipeline implementing method |
CN102591722A (en) * | 2011-12-31 | 2012-07-18 | 龙芯中科技术有限公司 | NoC (Network-on-Chip) multi-core processor multi-thread resource allocation processing method and system |
Non-Patent Citations (1)
Title |
---|
HUANG Yibin et al., "Research on high-performance parallel processing of network data packets", Computer and Modernization |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111143079A (en) * | 2019-12-24 | 2020-05-12 | 浪潮软件股份有限公司 | Method for realizing multi-read multi-write lock-free queue |
CN111143079B (en) * | 2019-12-24 | 2024-04-16 | 浪潮软件股份有限公司 | Multi-read multi-write lock-free queue implementation method |
CN111522643A (en) * | 2020-04-22 | 2020-08-11 | 杭州迪普科技股份有限公司 | Multi-queue scheduling method and device based on FPGA, computer equipment and storage medium |
CN111787185A (en) * | 2020-08-04 | 2020-10-16 | 成都云图睿视科技有限公司 | Method for real-time processing of multi-path camera data under VPU platform |
CN111787185B (en) * | 2020-08-04 | 2023-09-05 | 成都云图睿视科技有限公司 | Method for processing multi-path camera data in real time under VPU platform |
CN112511460A (en) * | 2020-12-29 | 2021-03-16 | 安徽皖通邮电股份有限公司 | Lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment |
CN112511460B (en) * | 2020-12-29 | 2022-09-09 | 安徽皖通邮电股份有限公司 | Lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment |
Also Published As
Publication number | Publication date |
---|---|
CN110011936B (en) | 2023-02-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110011936A (en) | Thread scheduling method and device based on multi-core processor | |
US10963306B2 (en) | Managing resource sharing in a multi-core data processing fabric | |
CN105893126B (en) | A kind of method for scheduling task and device | |
CN108363615B (en) | Method for allocating tasks and system for reconfigurable processing system | |
US8402466B2 (en) | Practical contention-free distributed weighted fair-share scheduler | |
CN106503791A (en) | System and method for the deployment of effective neutral net | |
CN109308214A (en) | Data task processing method and system | |
CN104268018B (en) | Job scheduling method and job scheduler in a kind of Hadoop clusters | |
US20160241481A1 (en) | Traffic scheduling device | |
CN109144699A (en) | Distributed task dispatching method, apparatus and system | |
US8219716B2 (en) | Methods for accounting seek time of disk accesses | |
CN112380020A (en) | Computing power resource allocation method, device, equipment and storage medium | |
CN106648461A (en) | Memory management device and method | |
US20180150333A1 (en) | Bandwidth aware resource optimization | |
CN111104210A (en) | Task processing method and device and computer system | |
CN110287022A (en) | A kind of scheduling node selection method, device, storage medium and server | |
CN110389843A (en) | A kind of business scheduling method, device, equipment and readable storage medium storing program for executing | |
CN110177146A (en) | A kind of non-obstruction Restful communication means, device and equipment based on asynchronous event driven | |
CN109213607A (en) | A kind of method and apparatus of multithreading rendering | |
Phan et al. | Real-time MapReduce scheduling | |
WO2023174037A1 (en) | Resource scheduling method, apparatus and system, device, medium, and program product | |
Chao et al. | F-mstorm: Feedback-based online distributed mobile stream processing | |
IL264794B2 (en) | Scheduling of tasks in a multiprocessor device | |
Li et al. | Endpoint-flexible coflow scheduling across geo-distributed datacenters | |
Ranganath et al. | Speeding up collective communications through inter-gpu re-routing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||