CN112486702B - Global message queue implementation method based on multi-core multi-processor parallel system - Google Patents
- Publication number
- CN112486702B (application number CN202011360414.7A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/546—Message passing systems or structures, e.g. queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F5/00—Methods or arrangements for data conversion without changing the order or content of the data handled
- G06F5/06—Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, i.e. speed regularising or timing, e.g. delay lines, FIFO buffers; over- or underrun control therefor
- G06F5/065—Partitioned buffers, e.g. allowing multiple independent queues, bidirectional FIFO's
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/52—Program synchronisation; Mutual exclusion, e.g. by means of semaphores
- G06F9/526—Mutual exclusion algorithms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/544—Buffers; Shared memory; Pipes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/54—Indexing scheme relating to G06F9/54
- G06F2209/548—Queue
Abstract
The invention discloses a global message queue implementation method based on a multi-core multi-processor parallel system. Each processing core of the parallel system performs system initialization and maps a section of shared storage space through SRIO (Serial RapidIO) for storing a global message queue and a global message queue name table. A global message queue is created and initialized from the global message queue buffer pool corresponding to the processing core to which a thread belongs, and its information is filled into the global message queue name table. When a thread sends a message to, or receives a message from, the global message queue, the queue resource is managed through global semaphores, thereby realizing message transfer. By using global semaphores together with the shared storage area to control message passing between threads, the method meets the high-speed communication requirements of multi-core multi-processor systems, is fast and efficient, and can greatly simplify the work of application developers.
Description
Technical Field
The invention relates to a method for implementing a global message queue, and in particular to a global message queue implementation method based on a multi-core multi-processor parallel system, belonging to the field of embedded computers.
Background
With the development of computer systems and the increasing demands of field applications for real-time performance and parallelism, multi-core multi-processor parallel systems have become an important trend of embedded computer system development.
As the number of cores and processor nodes in a multi-core multi-processor parallel system grows, its message communication becomes more complex than that of a traditional multi-core processor: messages must travel both between processing cores within a multi-core processor chip and between processor chips. Message communication thus becomes one of the bottlenecks affecting the performance of multi-core multi-processor parallel systems, and efficient communication between processing cores and between processors becomes a key issue for such systems.
In addition, when communicating between processing cores and between processors, threads on the processing cores must access the hardware resources of the multi-core processor in a mutually exclusive manner; otherwise, resource contention can corrupt accessed data and cause communication between processing cores and between processors to fail.
Disclosure of Invention
The invention aims to solve these problems by providing a global message queue implementation method based on a multi-core multi-processor parallel system. It uses global semaphores and a shared storage area to control message passing between threads: any thread using a global message queue may operate on it only after acquiring the corresponding global semaphore. The method meets the high-speed communication requirements of multi-core multi-processor systems; it is fast, efficient, and safe, and greatly simplifies the work of application developers.
The invention realizes the above purpose through the following technical scheme: a global message queue implementation method based on a multi-core multi-processor parallel system comprises the following steps:
s1) each processing core of the multi-core multi-processor parallel system performs system initialization, and a section of shared storage space is mapped through SRIO and used for storing a global message queue and a global message queue name table;
s2) creating and initializing a global message queue from a global message queue buffer pool corresponding to a processing core to which the thread belongs, and filling global message queue information into a global message queue name table;
s3) when the thread sends the message to the global message queue or receives the message from the global message queue, the global message queue resource is managed and controlled through the global semaphore, so that the message transmission is realized.
Preferably, the multi-core multi-processor parallel system has at least one processor node; each processor node has at least one processing core; and the processor nodes or processing cores support SRIO bus interconnection.
Preferably, in step S1), the process of initializing the system includes:
s11) initializing SRIO by each processing core, and mapping a section of shared storage space for storing a global message queue and a global message queue name table through the SRIO;
s12) selecting any processing core as a main processing core, creating and initializing a shared global message queue name table for recording all created global message queues;
s13) creating a global message queue buffer pool and a message buffer pool in the shared memory space mapped by each processing core.
Preferably, in step S12), the content of the global message queue name table includes a global semaphore controlling mutually exclusive access to the name table, the number of all created global message queues in the name table, and the information of all created global message queues;
the global message queue information comprises names, types, processing cores, message attributes, opening times, data queues, idle queues and global semaphores;
wherein the message attributes include a maximum number of messages, a message size, a message identification, and a current number of messages, and the global semaphores include a global semaphore for controlling access to the data queue and a global semaphore for controlling access to the free queue.
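A possible C layout of the name table and per-queue descriptors described above can be sketched as follows. All identifiers, field types, and size constants here are illustrative assumptions, not taken from the patent text:

```c
#include <stdint.h>

#define GMQ_NAME_LEN   32   /* assumed maximum queue-name length */
#define GMQ_MAX_QUEUES 64   /* assumed table capacity */

/* One global message queue descriptor, following the fields listed above. */
typedef struct {
    char     name[GMQ_NAME_LEN]; /* queue name */
    int      type;               /* queue type */
    int      owner_core;         /* processing core the queue belongs to */
    uint32_t max_msgs;           /* message attributes: maximum message count */
    uint32_t msg_size;           /*   fixed size of each message */
    uint32_t msg_id;             /*   message identification */
    uint32_t cur_msgs;           /*   current number of messages */
    uint32_t open_count;         /* number of times the queue has been opened */
    void    *data_queue;         /* list of filled message buffers */
    void    *free_queue;         /* list of empty message buffers */
    int      sem_data;           /* global semaphore guarding the data queue */
    int      sem_free;           /* global semaphore guarding the idle queue */
} gmq_info_t;

/* The name table: a mutual-exclusion semaphore, a count, and the entries. */
typedef struct {
    int        sem_table;              /* semaphore for exclusive table access */
    uint32_t   num_queues;             /* number of created queues */
    gmq_info_t queues[GMQ_MAX_QUEUES]; /* descriptors of all created queues */
} gmq_name_table_t;

/* Initialize an empty name table (sketch). */
static void gmq_table_init(gmq_name_table_t *t) {
    t->sem_table  = 1;  /* binary semaphore: table initially free */
    t->num_queues = 0;
}
```

In the patent's scheme this structure would reside in the SRIO-mapped shared storage space created by the main processing core, so every core sees the same table.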
Preferably, in step S13), the global message queue buffer pool is used for allocating a global message queue, and the message buffer pool is used for allocating a free queue of the global message queue;
the global message queue buffer pool and the message buffer pool are bidirectional linked lists with head pointers, and the head of each linked list is placed on the corresponding processing core and can be accessed in a mutually exclusive mode by using spin locks or global semaphores.
Preferably, in step S2), when the global message queue is initialized, the value of the global semaphore of the control data queue is initialized to 0, and the value of the global semaphore of the control idle queue is initialized to the maximum number of messages allowed to be carried by the message queue.
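The two initial counter values above can be sketched with POSIX counting semaphores standing in for the patent's global semaphores (an assumption made purely for illustration; the real global semaphores span processors over SRIO):

```c
#include <semaphore.h>

typedef struct {
    sem_t sem_data;  /* counts filled message slots: starts at 0        */
    sem_t sem_free;  /* counts empty message slots: starts at capacity  */
} gmq_ctrl_t;

/* Initialise the semaphore pair at queue-creation time:
 * data queue empty (0), idle queue full (max_msgs). */
static int gmq_ctrl_init(gmq_ctrl_t *q, unsigned max_msgs) {
    if (sem_init(&q->sem_data, 0, 0) != 0) return -1;
    return sem_init(&q->sem_free, 0, max_msgs);
}
```

With these starting values, a receiver blocks immediately on an empty queue and a sender blocks only once all `max_msgs` buffers are in flight, which is exactly the behaviour steps S31–S36 below rely on.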
Preferably, in step S3), the operation of sending the message to the global message queue by the thread specifically includes:
s31) acquiring global semaphores for controlling idle queues in the message queues;
s32) if the global semaphore controlling the idle queue is 0, indicating that the message queue reaches the maximum message number and the global message queue is full, blocking the thread through the global semaphore;
s33) if the global semaphore of the control idle queue is not 0, taking a block of idle area from the idle queue, copying the message to the idle area, adding the message into the data queue according to the message priority, modifying the message attribute, and releasing the global semaphore of the control data queue.
Preferably, in step S3), the operation of the thread to receive the message from the message queue specifically includes:
s34) acquiring global semaphores of control data queues in the message queues;
s35) if the global semaphore of the control data queue is 0, indicating that the message queue is empty, blocking the thread through the global semaphore;
s36) if the global semaphore of the control data queue is not 0, according to the message priority, according to the FIFO principle, taking down a message from the data queue, modifying the message attribute, copying the message, adding the message into the idle queue, and releasing the global semaphore of the control idle queue.
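The send path (S31–S33) and receive path (S34–S36) are symmetric. The following single-process sketch models them with POSIX semaphores standing in for global semaphores and plain arrays for the idle and data queues; `sem_trywait` models the "semaphore is 0" check (a real implementation would block instead of returning -1), and priority ordering is omitted so messages are handled strictly FIFO. All names and sizes are illustrative assumptions:

```c
#include <semaphore.h>
#include <string.h>

#define MSG_SIZE 64
#define MAX_MSGS 4

typedef struct {
    char  slots[MAX_MSGS][MSG_SIZE];
    int   free_stack[MAX_MSGS]; int n_free;          /* idle queue */
    int   data_fifo[MAX_MSGS];  int head, tail;      /* data queue */
    sem_t sem_free, sem_data;
} gmq_t;

static int gmq_init(gmq_t *q) {
    q->n_free = MAX_MSGS; q->head = q->tail = 0;
    for (int i = 0; i < MAX_MSGS; i++) q->free_stack[i] = i;
    if (sem_init(&q->sem_data, 0, 0) != 0) return -1;
    return sem_init(&q->sem_free, 0, MAX_MSGS);
}

/* S31-S33: acquire a free buffer, copy the message in, publish it. */
static int gmq_send(gmq_t *q, const char *msg) {
    if (sem_trywait(&q->sem_free) != 0) return -1;  /* full: would block */
    int slot = q->free_stack[--q->n_free];          /* take an idle area */
    strncpy(q->slots[slot], msg, MSG_SIZE - 1);
    q->slots[slot][MSG_SIZE - 1] = '\0';
    q->data_fifo[q->tail] = slot;                   /* append to data queue */
    q->tail = (q->tail + 1) % MAX_MSGS;
    sem_post(&q->sem_data);                         /* release data semaphore */
    return 0;
}

/* S34-S36: take the oldest message, copy it out, recycle the buffer. */
static int gmq_recv(gmq_t *q, char *out) {
    if (sem_trywait(&q->sem_data) != 0) return -1;  /* empty: would block */
    int slot = q->data_fifo[q->head];               /* FIFO order */
    q->head = (q->head + 1) % MAX_MSGS;
    strncpy(out, q->slots[slot], MSG_SIZE);
    q->free_stack[q->n_free++] = slot;              /* back to idle queue */
    sem_post(&q->sem_free);                         /* release free semaphore */
    return 0;
}
```

Note the cross-over that gives the scheme its flow control: the sender waits on `sem_free` but posts `sem_data`, and the receiver does the reverse.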
The beneficial effects of the invention are as follows: the global message queue implementation method based on a multi-core multi-processor parallel system uses global semaphores and a shared storage area to control message passing among threads, meets the high-speed communication requirements of multi-core multi-processor systems, is fast and efficient, and can greatly simplify the work of application developers.
Drawings
FIG. 1 is a block diagram of a multi-core multiprocessor parallel system to which embodiments of the present invention are applied;
FIG. 2 is a diagram illustrating a method for implementing global message queues of a multi-core multiprocessor parallel system according to an embodiment of the present invention;
FIG. 3 is a system initialization flow chart for an embodiment of the present invention;
FIG. 4 is a global message queue name table structure used in an embodiment of the present invention;
FIG. 5 is a flow chart of a global message queue send message for an embodiment of the present invention;
FIG. 6 is a flow chart of a global message queue receive message for use in an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
A global message queue implementation method based on a multi-core multi-processor parallel system comprises the following steps:
s1) each processing core of the multi-core multi-processor parallel system performs system initialization, and a section of shared storage space is mapped through SRIO and used for storing a global message queue and a global message queue name table.
In step S1), the system initialization process includes:
s11) initializing SRIO by each processing core, and mapping a section of shared storage space for storing a global message queue and a global message queue name table through the SRIO;
s12) selecting any processing core as a main processing core, creating and initializing a shared global message queue name table for recording all created global message queues;
the content of the global message queue name table comprises global semaphores for controlling mutually exclusive access of the name table, the number of all created global message queues in the name table and information of all created global message queues;
the global message queue information comprises names, types, processing cores, message attributes, opening times, data queues, idle queues and global semaphores;
wherein the message attributes include a maximum number of messages, a message size, a message identification, and a current number of messages, and the global semaphores include a global semaphore for controlling access to the data queue and a global semaphore for controlling access to the free queue.
S13) creating a global message queue buffer pool and a message buffer pool in the shared memory space mapped by each processing core.
The global message queue buffer pool is used for distributing global message queues, and the message buffer pool is used for distributing idle queues of the global message queues;
the global message queue buffer pool and the message buffer pool are bidirectional linked lists with head pointers, and the head of each linked list is placed on the corresponding processing core and can be accessed in a mutually exclusive mode by using spin locks or global semaphores.
S2) creating and initializing a global message queue from a global message queue buffer pool corresponding to the processing core to which the thread belongs, and filling global message queue information into a global message queue name table.
When the global message queue is initialized, the value of the global semaphore controlling the data queue is initialized to 0, and the value of the global semaphore controlling the idle queue is initialized to the maximum number of messages the queue is allowed to hold.
S3) when the thread sends the message to the global message queue or receives the message from the global message queue, the global message queue resource is managed and controlled through the global semaphore, so that the message transmission is realized.
The operation of the thread sending the message to the global message queue specifically includes:
s31) acquiring global semaphores for controlling idle queues in the message queues;
s32) if the global semaphore controlling the idle queue is 0, indicating that the message queue reaches the maximum message number and the global message queue is full, blocking the thread through the global semaphore;
s33) if the global semaphore of the control idle queue is not 0, taking a block of idle area from the idle queue, copying the message to the idle area, adding the message into the data queue according to the message priority, modifying the message attribute, and releasing the global semaphore of the control data queue.
The operation of the thread to receive messages from the message queue specifically includes:
s34) acquiring global semaphores of control data queues in the message queues;
s35) if the global semaphore of the control data queue is 0, indicating that the message queue is empty, blocking the thread through the global semaphore;
s36) if the global semaphore of the control data queue is not 0, according to the message priority, according to the FIFO principle, taking down a message from the data queue, modifying the message attribute, copying the message, adding the message into the idle queue, and releasing the global semaphore of the control idle queue.
The multi-core multi-processor parallel system has at least one processor node; each processor node has at least one processing core; and the processor nodes or processing cores support SRIO bus interconnection.
Examples
It should be noted that:
1) As shown in fig. 1, the present embodiment is an embodiment implemented on an embedded multi-core multiprocessor parallel system;
2) The embedded multi-core multi-processor parallel system used in this embodiment comprises four processing boards S0, S1, S2 and S3. Processing boards S0, S1 and S2 each comprise two MPC8641D dual-core processors and an SRIO switching device, while processing board S3 comprises one MPC8641D dual-core processor and an SRIO switching device. Each MPC8641D processor contains two e600 processing cores, denoted C0, C1, ..., Ci, ..., C13; processing core C0 is selected as the main processing core;
3) A thread A runs on processing core C0 and is used for sending messages; a thread B runs on processing core C13 and is used for receiving messages;
4) The MPC8641D processors applied to the embodiment all support SRIO bus interconnection, and each MPC8641D dual-core processor is connected through an SRIO switching device;
5) The embodiment is a multi-core multi-processor parallel system based on SRIO.
As shown in fig. 2, the method for implementing the global message queue of the multi-core multiprocessor parallel system in this embodiment includes the following steps:
s1) each e600 processing core of the MPC8641D multi-core multi-processor parallel system performs system initialization, and a section of shared storage space is mapped through SRIO and used for storing a global message queue and a global message queue name table.
S2) each e600 processing core creates and initializes a global message queue from a global message queue buffer pool corresponding to the processing core to which the thread belongs, and fills global message queue information into a global message queue name table.
S3) when the thread sends the message to the global message queue or receives the message from the global message queue, the global message queue resource is managed and controlled through the global semaphore, so that the message transmission is realized.
As shown in fig. 3, in step S1, the system initialization process specifically includes:
s11) initializing SRIO by each e600 processing core, and mapping a section of shared storage space for storing a global message queue and a global message queue name table through the SRIO;
s12) selecting processing core C0 as the main processing core, and creating and initializing a shared global message queue name table for recording all created global message queues;
s13) creating a global message queue buffer pool and a message buffer pool in the shared memory space mapped out by each e600 processing core.
In step S11, each e600 processing core maps a 4 MB region from its local address space into the SRIO address space as shared memory, accessible to all processing cores of the MPC8641D multi-core multi-processor parallel system. The start addresses of these shared regions are, in order, 0xA4000000, 0xA4400000, ..., 0xA4000000 + i × 0x400000, ..., 0xA4000000 + 13 × 0x400000.
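The per-core window base addresses follow directly from this layout: cores C0 through C13 each export a 4 MB (0x400000-byte) window, laid out back to back. A quick sketch of the arithmetic (macro names are illustrative):

```c
#include <stdint.h>

#define SRIO_SHM_BASE   0xA4000000u  /* window of core C0 */
#define SRIO_SHM_STRIDE 0x400000u    /* 4 MB per core */

/* SRIO base address of the shared-memory window exported by core i. */
static uint32_t core_shm_base(unsigned core_id) {
    return SRIO_SHM_BASE + core_id * SRIO_SHM_STRIDE;
}
```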
As shown in fig. 4, in step S12, the contents of the global message queue name table include the global semaphore controlling mutually exclusive access to the name table, the number of all created global message queues in the name table, and the information of all created global message queues. The global message queue information comprises the name, type, owning processing core, message attributes, open count, data queue, idle queue, and global semaphores; the message attributes comprise the maximum number of messages, message size, message identification, and current number of messages; and the global semaphores comprise a global semaphore for controlling access to the data queue and a global semaphore for controlling access to the idle queue.
In step S13, the global message queue buffer pool is used for allocating a global message queue, and the message buffer pool is used for allocating a free queue of the global message queue. The global message queue buffer pool and the message buffer pool are bidirectional linked lists with head pointers, and the head of each linked list is placed on the corresponding processing core and can be accessed in a mutually exclusive mode by using spin locks or global semaphores.
In step S2, when the global message queue is initialized, the value of the global semaphore of the control data queue is initialized to 0, and the value of the global semaphore of the control idle queue is initialized to the maximum number of messages allowed to be carried by the message queue.
As shown in fig. 5, in step S3, the operation of sending a message to the global message queue by the thread a specifically includes:
s31) acquiring global semaphores for controlling idle queues in the message queues;
s32) if the global semaphore controlling the idle queue is 0, indicating that the message queue reaches the maximum message number and the global message queue is full, blocking the thread through the global semaphore;
s33) if the global semaphore of the control idle queue is not 0, taking a block of idle area from the idle queue, copying the message to the idle area, adding the message into the data queue according to the message priority, modifying the message attribute, and releasing the global semaphore of the control data queue.
As shown in fig. 6, in step S3, the operation of the thread B to receive a message from the message queue specifically includes:
s34) acquiring global semaphores of control data queues in the message queues;
s35) if the global semaphore of the control data queue is 0, indicating that the message queue is empty, blocking the thread through the global semaphore;
s36) if the global semaphore controlling the data queue is not 0, take a message from the data queue according to the message priority and the FIFO (or other) principle, modify the message attributes, copy the message out, add the message buffer to the idle queue, and release the global semaphore controlling the idle queue.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution; this manner of description is adopted merely for clarity, and the embodiments may be combined as appropriate to form other implementations apparent to those skilled in the art.
Claims (2)
1. The method for realizing the global message queue based on the multi-core multi-processor parallel system is characterized by comprising the following steps of:
s1) each processing core of the multi-core multi-processor parallel system performs system initialization, and a section of shared storage space is mapped through SRIO and used for storing a global message queue and a global message queue name table;
in step S1), the system initialization process includes:
s11) initializing SRIO by each processing core, and mapping a section of shared storage space for storing a global message queue and a global message queue name table through the SRIO;
s12) selecting any processing core as a main processing core, creating and initializing a shared global message queue name table for recording all created global message queues;
in step S12), the content of the global message queue name table includes a global semaphore controlling mutually exclusive access to the name table, the number of all created global message queues in the name table, and information of all created global message queues;
the global message queue information comprises names, types, processing cores, message attributes, opening times, data queues, idle queues and global semaphores;
the message attribute comprises the maximum message number, the message size, the message identification and the current message number, and the global semaphore comprises a global semaphore for controlling access to the data queue and a global semaphore for controlling access to the idle queue;
s13) creating a global message queue buffer pool and a message buffer pool in the shared memory space mapped by each processing core;
in step S13), the global message queue buffer pool is used for allocating a global message queue, and the message buffer pool is used for allocating an idle queue of the global message queue;
the global message queue buffer pool and the message buffer pool are bidirectional linked lists with head pointers, the head of each linked list is placed on the processing core to which it belongs, and mutually exclusive access can be managed using spin locks or global semaphores;
s2) creating and initializing a global message queue from a global message queue buffer pool corresponding to a processing core to which the thread belongs, and filling global message queue information into a global message queue name table;
in step S2), when the global message queue is initialized, the value of the global semaphore controlling the data queue is initialized to 0, and the value of the global semaphore controlling the idle queue is initialized to the maximum number of messages the queue is allowed to hold;
s3) when the thread sends the message to the global message queue or receives the message from the global message queue, the global message queue resource is managed and controlled through the global semaphore, so that the message transmission is realized;
in step S3), the operation of the thread sending the message to the global message queue specifically includes:
s31) acquiring global semaphores for controlling idle queues in the message queues;
s32) if the global semaphore controlling the idle queue is 0, indicating that the message queue reaches the maximum message number and the global message queue is full, blocking the thread through the global semaphore;
s33) if the global semaphore of the control idle queue is not 0, taking a block of idle area from the idle queue, copying the message to the idle area, adding the message into the data queue according to the message priority, modifying the message attribute, and releasing the global semaphore of the control data queue;
in step S3), the operation of the thread to receive the message from the message queue specifically includes:
s34) acquiring global semaphores of control data queues in the message queues;
s35) if the global semaphore of the control data queue is 0, indicating that the message queue is empty, blocking the thread through the global semaphore;
s36) if the global semaphore of the control data queue is not 0, according to the message priority, according to the FIFO principle, taking down a message from the data queue, modifying the message attribute, copying the message, adding the message into the idle queue, and releasing the global semaphore of the control idle queue.
2. The method for implementing the global message queue based on the multi-core multi-processor parallel system according to claim 1, wherein: the multi-core multi-processor parallel system has at least one processor node; each processor node has at least one processing core; and the processor nodes or processing cores support SRIO bus interconnection.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011360414.7A CN112486702B (en) | 2020-11-27 | 2020-11-27 | Global message queue implementation method based on multi-core multi-processor parallel system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112486702A CN112486702A (en) | 2021-03-12 |
CN112486702B true CN112486702B (en) | 2024-02-13 |
Family
ID=74936028
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011360414.7A Active CN112486702B (en) | 2020-11-27 | 2020-11-27 | Global message queue implementation method based on multi-core multi-processor parallel system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112486702B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113300946B (en) * | 2021-05-24 | 2022-05-10 | 北京理工大学 | Multi-core multi-communication protocol gateway and management scheduling method thereof |
CN113672364B (en) * | 2021-08-02 | 2023-09-01 | 北京奇艺世纪科技有限公司 | Task scheduling method and device, electronic equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102841810A (en) * | 2011-05-14 | 2012-12-26 | 国际商业机器公司 | Techniques for executing threads in a computing environment |
CN108595282A (en) * | 2018-05-02 | 2018-09-28 | 广州市巨硅信息科技有限公司 | A kind of implementation method of high concurrent message queue |
CN109144749A (en) * | 2018-08-14 | 2019-01-04 | 苏州硅岛信息科技有限公司 | A method of it is communicated between realizing multiprocessor using processor |
CN111722942A (en) * | 2020-05-29 | 2020-09-29 | 天津大学 | Transformation method of distributed real-time operating system |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080148280A1 (en) * | 2006-12-13 | 2008-06-19 | Stillwell Joseph W | Apparatus, system, and method for autonomically managing multiple queues |
US7937532B2 (en) * | 2007-03-30 | 2011-05-03 | Intel Corporation | Method and apparatus for speculative prefetching in a multi-processor/multi-core message-passing machine |
Non-Patent Citations (2)
Title |
---|
An application experiment on an inter-processor communication protocol for embedded systems; Chen Jinzhong; Geng Rui; Microcontrollers & Embedded Systems (05); full text * |
Research on middleware technology for embedded parallel computing management; Gong Jing; China Master's Theses Full-text Database, Information Science and Technology (No. 12); full text * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11010681B2 (en) | Distributed computing system, and data transmission method and apparatus in distributed computing system | |
US8301717B2 (en) | Extended virtual memory system and method in a computer cluster | |
US7738443B2 (en) | Asynchronous broadcast for ordered delivery between compute nodes in a parallel computing system where packet header space is limited | |
JP5159884B2 (en) | Network adapter resource allocation between logical partitions | |
US8381230B2 (en) | Message passing with queues and channels | |
CN114741207B (en) | GPU resource scheduling method and system based on multi-dimensional combination parallelism | |
CN112486702B (en) | Global message queue implementation method based on multi-core multi-processor parallel system | |
US20070011687A1 (en) | Inter-process message passing | |
US7809918B1 (en) | Method, apparatus, and computer-readable medium for providing physical memory management functions | |
US11620254B2 (en) | Remote direct memory access for container-enabled networks | |
CN112612623B (en) | Method and equipment for managing shared memory | |
CN112148467A (en) | Dynamic allocation of computing resources | |
CN112256457A (en) | Data loading acceleration method and device based on shared memory, electronic equipment and storage medium | |
CN112860458A (en) | Inter-process communication method and system based on shared memory | |
EP0769740B1 (en) | Inter-object communication | |
US20110246582A1 (en) | Message Passing with Queues and Channels | |
CN112486704A (en) | Multi-core multiprocessor synchronization and communication system based on shared storage | |
CN111797497B (en) | Communication method and system for electromagnetic transient parallel simulation | |
CN104769553A (en) | System and method for supporting work sharing muxing in a cluster | |
CN112463716B (en) | Global semaphore implementation method based on multi-core multi-processor parallel system | |
US20040117372A1 (en) | System and method for controlling access to system resources | |
CN112486703B (en) | Global data memory management method based on multi-core multi-processor parallel system | |
Peng et al. | Fast interprocess communication algorithm in microkernel | |
US20020016899A1 (en) | Demand usable adapter memory access management | |
US9251100B2 (en) | Bitmap locking using a nodal lock |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||