CN112486702A - Global message queue implementation method based on multi-core multi-processor parallel system


Info

Publication number
CN112486702A
CN112486702A (application CN202011360414.7A)
Authority
CN
China
Prior art keywords
global
message
queue
message queue
semaphore
Prior art date
Legal status
Granted
Application number
CN202011360414.7A
Other languages
Chinese (zh)
Other versions
CN112486702B (en)
Inventor
舒红霞
常轩
胡舒婷
Current Assignee
CSIC (WUHAN) LINCOM ELECTRONICS CO LTD
Original Assignee
CSIC (WUHAN) LINCOM ELECTRONICS CO LTD
Priority date
Filing date
Publication date
Application filed by CSIC (WUHAN) LINCOM ELECTRONICS CO LTD
Priority to CN202011360414.7A
Publication of CN112486702A
Application granted
Publication of CN112486702B
Legal status: Active


Classifications

    • G: PHYSICS; G06: COMPUTING, CALCULATING OR COUNTING; G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/546: Interprogram communication; message passing systems or structures, e.g. queues
    • G06F 5/065: Partitioned buffers, e.g. allowing multiple independent queues, bidirectional FIFOs
    • G06F 9/526: Program synchronisation, mutual exclusion, e.g. by means of semaphores; mutual exclusion algorithms
    • G06F 9/544: Interprogram communication; buffers, shared memory, pipes
    • G06F 2209/548: Indexing scheme relating to G06F 9/54; queue

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

The invention discloses a method for implementing a global message queue based on a multi-core multi-processor parallel system. Each processing core of the multi-core multi-processor parallel system performs system initialization and maps a segment of shared storage space through SRIO (Serial RapidIO) for storing global message queues and a global message queue name table. A global message queue is created and initialized from the global message queue buffer pool corresponding to the processing core to which the creating thread belongs, and the global message queue information is filled into the global message queue name table. When a thread sends a message to or receives a message from a global message queue, the global message queue resources are managed and controlled by global semaphores, thereby realizing message transmission. The method uses global semaphores and a shared storage area, with message transmission between threads controlled by the global semaphores; it meets the requirement for high-speed communication among multi-core multi-processors, is fast and efficient, and can greatly simplify the work of application developers.

Description

Global message queue implementation method based on multi-core multi-processor parallel system
Technical Field
The invention relates to a method for implementing a global message queue, in particular to a method for implementing a global message queue based on a multi-core multi-processor parallel system, and belongs to the field of embedded computers.
Background
With the development of computer systems and the increasing demand of field applications for real-time performance and parallelism, multi-core multi-processor parallel systems have become an important trend in the development of embedded computer systems.
As the number of cores and processor nodes in a multi-core multi-processor parallel system grows, its message communication becomes more complex than that of a traditional multi-core processor: communication must be realized not only between processing cores within a multi-core processor chip but also between multi-core processor chips. Message communication therefore becomes one of the bottlenecks affecting the performance of a multi-core multi-processor parallel system, and efficient communication between processing cores and between processors becomes a key issue for such systems.
In addition, when processing cores and processors communicate with each other, their threads must access the hardware resources of the multi-core processor mutually exclusively; otherwise, resource contention causes data access errors and similar problems, and communication between the processing cores and processors fails.
Disclosure of Invention
The invention aims to solve the above problems by providing a method for implementing a global message queue based on a multi-core multi-processor parallel system. The method uses global semaphores and a shared storage area, and message transmission between threads is controlled by the global semaphores: any thread that uses a global message queue can perform operations on that queue only after acquiring the corresponding global semaphore. This meets the requirement for high-speed communication among multi-core multi-processors, is fast, efficient and safe, and greatly simplifies the work of application developers.
The invention achieves this purpose through the following technical scheme: a global message queue implementation method based on a multi-core multi-processor parallel system, comprising the following steps:
S1) each processing core of the multi-core multi-processor parallel system performs system initialization and maps a segment of shared storage space through SRIO for storing global message queues and a global message queue name table;
S2) a global message queue is created and initialized from the global message queue buffer pool corresponding to the processing core to which the thread belongs, and the global message queue information is filled into the global message queue name table;
S3) when a thread sends a message to or receives a message from the global message queue, the global message queue resources are managed and controlled by global semaphores, thereby realizing message transmission; an illustrative usage sketch in C follows these steps.
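For illustration only, the following C sketch shows how a sending thread and a receiving thread might use such a global message queue. The function names gmq_create, gmq_open, gmq_send and gmq_receive and their signatures are assumptions made for this sketch; they are not an API defined by the invention.

#include <stdint.h>

typedef struct gmq gmq_t;   /* opaque handle to a global message queue */

/* Hypothetical interface corresponding to steps S2) and S3). */
gmq_t *gmq_create(const char *name, uint32_t max_msgs, uint32_t msg_size);
gmq_t *gmq_open(const char *name);
int    gmq_send(gmq_t *q, const void *msg, uint32_t len, int prio);
int    gmq_receive(gmq_t *q, void *buf, uint32_t maxlen);

/* Sending thread: creates the queue in the buffer pool of its own processing
 * core (the queue is registered in the shared name table) and posts a message;
 * sending is guarded by the global semaphore of the idle queue. */
void sender_thread(void)
{
    gmq_t *q = gmq_create("demo_queue", 64, 256);
    uint8_t msg[256] = { 0 };
    gmq_send(q, msg, sizeof msg, 0);
}

/* Receiving thread, possibly on another processing core or processor: opens the
 * queue by looking it up in the shared name table and receives a message;
 * receiving is guarded by the global semaphore of the data queue. */
void receiver_thread(void)
{
    gmq_t *q = gmq_open("demo_queue");
    uint8_t buf[256];
    gmq_receive(q, buf, sizeof buf);
}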
Preferably, the multi-core multi-processor parallel system has at least one processor node; each processor node has at least one processing core; and the processor nodes and processing cores support SRIO bus interconnection.
Preferably, in step S1), the system initialization process includes:
S11) each processing core initializes SRIO and maps a segment of shared storage space through SRIO for storing global message queues and a global message queue name table;
S12) any processing core is selected as the main processing core, which creates and initializes a shared global message queue name table for recording all created global message queues;
S13) a global message queue buffer pool and a message buffer pool are created in the shared storage space mapped by each processing core.
Preferably, in step S12), the global message queue name table contains a global semaphore controlling exclusive access to the name table, the number of global message queues created in the name table, and the information of all created global message queues;
the global message queue information comprises a name, a type, the processing core to which the queue belongs, message attributes, the number of times the queue has been opened, a data queue, an idle queue and global semaphores;
the message attributes comprise the maximum number of messages, the message size, the message identification and the current number of messages, and the global semaphores comprise a global semaphore used for controlling data queue access and a global semaphore used for controlling idle queue access.
Preferably, in step S13), the global message queue buffer pool is used to allocate global message queues, and the message buffer pool is used to allocate the idle queues of the global message queues;
the global message queue buffer pool and the message buffer pool are doubly linked lists with head pointers, the list heads are placed on the processing cores to which they belong, and mutually exclusive access can be managed with spin locks or global semaphores.
Preferably, in step S2), when the global message queue is initialized, the value of the global semaphore controlling the data queue is initialized to 0, and the value of the global semaphore controlling the idle queue is initialized to the maximum number of messages allowed to be carried by the message queue.
Preferably, in step S3), the operation of a thread sending a message to the global message queue specifically includes:
S31) acquiring the global semaphore controlling the idle queue of the message queue;
S32) if the global semaphore controlling the idle queue is 0, the message queue has reached its maximum number of messages and the global message queue is full, and the thread is blocked on the global semaphore;
S33) if the global semaphore controlling the idle queue is not 0, taking a free area from the idle queue, copying the message into the free area, adding the message to the data queue according to the message priority, modifying the message attributes, and releasing the global semaphore controlling the data queue.
Preferably, in step S3), the operation of a thread receiving a message from the message queue specifically includes:
S34) acquiring the global semaphore controlling the data queue of the message queue;
S35) if the global semaphore controlling the data queue is 0, the message queue is empty and the thread is blocked on the global semaphore;
S36) if the global semaphore controlling the data queue is not 0, taking a message from the data queue according to the message priority and the FIFO principle, modifying the message attributes, copying the message out, adding the message buffer back to the idle queue, and releasing the global semaphore controlling the idle queue.
The invention has the beneficial effects that: the method for implementing a global message queue based on a multi-core multi-processor parallel system uses global semaphores and a shared storage area, with message transmission between threads controlled by the global semaphores; it meets the requirement for high-speed communication among multi-core multi-processors, is fast and efficient, and can greatly simplify the work of application developers.
Drawings
FIG. 1 is a block diagram of a multi-core multiprocessor parallel system applied in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a method for implementing a global message queue of a multi-core multiprocessor parallel system according to an embodiment of the present invention;
FIG. 3 is a flowchart of system initialization applied in accordance with an embodiment of the present invention;
FIG. 4 is the structure of the global message queue name table applied in an embodiment of the present invention;
FIG. 5 is a flow chart of the global message queue message sending process applied in the embodiment of the present invention;
FIG. 6 is a flow chart of the global message queue message receiving process applied in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A global message queue implementation method based on a multi-core multi-processor parallel system comprises the following steps:
S1) each processing core of the multi-core multi-processor parallel system performs system initialization and maps a segment of shared storage space through SRIO for storing global message queues and a global message queue name table.
In step S1), the system initialization process includes:
S11) each processing core initializes SRIO and maps a segment of shared storage space through SRIO for storing global message queues and a global message queue name table;
S12) any processing core is selected as the main processing core, which creates and initializes a shared global message queue name table for recording all created global message queues;
the global message queue name table contains a global semaphore controlling exclusive access to the name table, the number of global message queues created in the name table, and the information of all created global message queues;
the global message queue information comprises a name, a type, the processing core to which the queue belongs, message attributes, the number of times the queue has been opened, a data queue, an idle queue and global semaphores;
the message attributes comprise the maximum number of messages, the message size, the message identification and the current number of messages, and the global semaphores comprise a global semaphore used for controlling data queue access and a global semaphore used for controlling idle queue access.
S13) a global message queue buffer pool and a message buffer pool are created in the shared storage space mapped by each processing core.
The global message queue buffer pool is used for allocating global message queues, and the message buffer pool is used for allocating the idle queues of the global message queues;
the global message queue buffer pool and the message buffer pool are doubly linked lists with head pointers, the list heads are placed on the processing cores to which they belong, and mutually exclusive access can be managed with spin locks or global semaphores. A C sketch of these shared data structures follows.
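As a concrete, non-limiting illustration of the shared structures described in steps S11) to S13), a C sketch follows. All field names, array sizes and the gsem_t semaphore handle type are assumptions made for this sketch; the invention does not prescribe a particular memory layout.

#include <stdint.h>

#define GMQ_NAME_LEN   32
#define GMQ_MAX_QUEUES 64

typedef uint32_t gsem_t;            /* global semaphore handle (assumed representation) */

typedef struct gmq_node {           /* node of a doubly linked list kept in shared memory */
    struct gmq_node *prev;
    struct gmq_node *next;
} gmq_node_t;

typedef struct gmq_info {           /* one entry of the global message queue name table */
    char       name[GMQ_NAME_LEN];
    uint32_t   type;
    uint32_t   owner_core;          /* processing core to which the queue belongs */
    uint32_t   max_msgs;            /* message attributes */
    uint32_t   msg_size;
    uint32_t   msg_id;
    uint32_t   cur_msgs;
    uint32_t   open_count;          /* number of times the queue has been opened */
    gmq_node_t data_queue;          /* head of the data queue (posted messages) */
    gmq_node_t free_queue;          /* head of the idle queue (free message buffers) */
    gsem_t     sem_data;            /* global semaphore controlling data queue access */
    gsem_t     sem_free;            /* global semaphore controlling idle queue access */
} gmq_info_t;

typedef struct gmq_name_table {     /* created once by the main processing core (S12) */
    gsem_t     sem_table;           /* global semaphore for exclusive access to the table */
    uint32_t   queue_count;         /* number of created global message queues */
    gmq_info_t queues[GMQ_MAX_QUEUES];
} gmq_name_table_t;

typedef struct gmq_pool {           /* per-core buffer pool (S13) */
    gsem_t     lock;                /* spin lock or global semaphore for mutual exclusion */
    gmq_node_t head;                /* list head resides on the owning processing core */
} gmq_pool_t;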
S2) a global message queue is created and initialized from the global message queue buffer pool corresponding to the processing core to which the thread belongs, and the global message queue information is filled into the global message queue name table.
When the global message queue is initialized, the value of the global semaphore controlling the data queue is initialized to 0, and the value of the global semaphore controlling the idle queue is initialized to the maximum number of messages the message queue is allowed to carry, as in the sketch below.
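A minimal sketch of this initialization, assuming a hypothetical counting global semaphore primitive gsem_init() and a simplified queue descriptor:

#include <stdint.h>

typedef uint32_t gsem_t;                         /* global semaphore handle (assumed) */
extern void gsem_init(gsem_t *s, uint32_t val);  /* hypothetical: create a counting global semaphore */

typedef struct {
    uint32_t max_msgs;      /* maximum number of messages the queue may carry */
    uint32_t msg_size;
    uint32_t cur_msgs;
    gsem_t   sem_data;      /* controls access to the data queue */
    gsem_t   sem_free;      /* controls access to the idle queue */
    /* name, type, owning core, queue heads and other fields omitted */
} gmq_info_t;

static void gmq_init_semaphores(gmq_info_t *q, uint32_t max_msgs, uint32_t msg_size)
{
    q->max_msgs = max_msgs;
    q->msg_size = msg_size;
    q->cur_msgs = 0;
    gsem_init(&q->sem_data, 0);         /* no messages yet: receivers will block */
    gsem_init(&q->sem_free, max_msgs);  /* all message buffers initially free */
}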
S3) when a thread sends a message to or receives a message from the global message queue, the global message queue resources are managed and controlled by global semaphores, thereby realizing message transmission.
The operation of a thread sending a message to the global message queue specifically includes the following steps (a C sketch follows them):
S31) acquiring the global semaphore controlling the idle queue of the message queue;
S32) if the global semaphore controlling the idle queue is 0, the message queue has reached its maximum number of messages and the global message queue is full, and the thread is blocked on the global semaphore;
S33) if the global semaphore controlling the idle queue is not 0, taking a free area from the idle queue, copying the message into the free area, adding the message to the data queue according to the message priority, modifying the message attributes, and releasing the global semaphore controlling the data queue.
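Steps S31) to S33) can be sketched in C as below. The primitives gsem_take()/gsem_give() stand in for the global semaphore take and release operations, and the list helpers stand in for the shared-memory queue operations; all of them, together with the simplified singly linked buffer list, are assumptions made for this sketch.

#include <stdint.h>
#include <string.h>

typedef uint32_t gsem_t;
typedef struct gmq_buf {            /* one message buffer taken from the message buffer pool */
    struct gmq_buf *next;
    uint32_t        len;
    int             prio;
    uint8_t         data[];         /* msg_size bytes, allocated when the pool is created */
} gmq_buf_t;

typedef struct {
    gmq_buf_t *data_queue;          /* posted messages, ordered by priority */
    gmq_buf_t *free_queue;          /* free message buffers */
    uint32_t   msg_size;
    uint32_t   cur_msgs;
    gsem_t     sem_data;
    gsem_t     sem_free;
} gmq_info_t;

extern void       gsem_take(gsem_t *s);                                /* blocks while the semaphore value is 0 */
extern void       gsem_give(gsem_t *s);                                /* increments the semaphore value */
extern gmq_buf_t *list_pop_front(gmq_buf_t **head);                    /* remove and return the first node */
extern void       list_insert_by_prio(gmq_buf_t **head, gmq_buf_t *b); /* keep the list ordered by priority */

int gmq_send(gmq_info_t *q, const void *msg, uint32_t len, int prio)
{
    if (len > q->msg_size)
        return -1;

    /* S31/S32: take the idle-queue semaphore; if its value is 0 the queue has
     * reached its maximum number of messages and the calling thread blocks here. */
    gsem_take(&q->sem_free);

    /* S33: take a free buffer, copy the message in, enqueue it by priority,
     * update the message attributes, then release the data-queue semaphore. */
    gmq_buf_t *buf = list_pop_front(&q->free_queue);
    memcpy(buf->data, msg, len);
    buf->len  = len;
    buf->prio = prio;
    list_insert_by_prio(&q->data_queue, buf);
    q->cur_msgs++;
    gsem_give(&q->sem_data);
    return 0;
}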
The operation of a thread receiving a message from the message queue specifically includes the following steps (a C sketch follows them):
S34) acquiring the global semaphore controlling the data queue of the message queue;
S35) if the global semaphore controlling the data queue is 0, the message queue is empty and the thread is blocked on the global semaphore;
S36) if the global semaphore controlling the data queue is not 0, taking a message from the data queue according to the message priority and the FIFO principle, modifying the message attributes, copying the message out, adding the message buffer back to the idle queue, and releasing the global semaphore controlling the idle queue.
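Correspondingly, steps S34) to S36) might look as follows, reusing the same hypothetical semaphore and list primitives as the send sketch; the types are repeated here so the fragment stands on its own.

#include <stdint.h>
#include <string.h>

typedef uint32_t gsem_t;
typedef struct gmq_buf {
    struct gmq_buf *next;
    uint32_t        len;
    int             prio;
    uint8_t         data[];
} gmq_buf_t;

typedef struct {
    gmq_buf_t *data_queue;
    gmq_buf_t *free_queue;
    uint32_t   msg_size;
    uint32_t   cur_msgs;
    gsem_t     sem_data;
    gsem_t     sem_free;
} gmq_info_t;

extern void       gsem_take(gsem_t *s);                            /* blocks while the semaphore value is 0 */
extern void       gsem_give(gsem_t *s);                            /* increments the semaphore value */
extern gmq_buf_t *list_pop_front(gmq_buf_t **head);                /* highest priority first, FIFO within equal priority */
extern void       list_push_back(gmq_buf_t **head, gmq_buf_t *b);  /* append to the end of the list */

int gmq_receive(gmq_info_t *q, void *out, uint32_t maxlen)
{
    /* S34/S35: take the data-queue semaphore; if its value is 0 the queue is
     * empty and the calling thread blocks here until a message is posted. */
    gsem_take(&q->sem_data);

    /* S36: dequeue the message, copy it out, return its buffer to the idle
     * queue, update the attributes, then release the idle-queue semaphore so
     * that a sender blocked on a full queue can continue. */
    gmq_buf_t *buf = list_pop_front(&q->data_queue);
    uint32_t len = buf->len < maxlen ? buf->len : maxlen;
    memcpy(out, buf->data, len);
    q->cur_msgs--;
    list_push_back(&q->free_queue, buf);
    gsem_give(&q->sem_free);
    return (int)len;
}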
The multi-core multi-processor parallel system has at least one processor node; each processor node has at least one processing core; and the processor nodes and processing cores support SRIO bus interconnection.
Examples
It should be noted that:
1) as shown in FIG. 1, this embodiment is implemented on an embedded multi-core multi-processor parallel system;
2) the embedded multi-core multi-processor parallel system applied in this embodiment comprises four processing boards S0, S1, S2 and S3; each of the processing boards S0, S1 and S2 comprises two MPC8641D dual-core processors and an SRIO switching device, and the processing board S3 comprises one MPC8641D dual-core processor and an SRIO switching device; each MPC8641D processor comprises two e600 processing cores, the processing cores being denoted C0, C1, ..., Ci, ..., C13, and processing core C0 is selected as the main processing core;
3) thread A, which is used for sending messages, runs on processing core C0; thread B, which is used for receiving messages, runs on processing core C13;
4) the MPC8641D processors applied in this embodiment all support SRIO bus interconnection, and the MPC8641D dual-core processors are connected to one another through the SRIO switching devices;
5) the embodiment is a multi-core multi-processor parallel system based on SRIO.
As shown in FIG. 2, the method for implementing a global message queue of a multi-core multi-processor parallel system in this embodiment includes the following steps:
S1) each e600 processing core of the MPC8641D multi-core multi-processor parallel system performs system initialization and maps a segment of shared storage space through SRIO for storing global message queues and a global message queue name table.
S2) each e600 processing core creates and initializes a global message queue from the global message queue buffer pool corresponding to the processing core to which the thread belongs, and fills the global message queue information into the global message queue name table.
S3) when a thread sends a message to or receives a message from the global message queue, the global message queue resources are managed and controlled by global semaphores, thereby realizing message transmission.
As shown in FIG. 3, in step S1 the system initialization process specifically includes:
S11) each e600 processing core initializes SRIO and maps a segment of shared storage space through SRIO for storing global message queues and a global message queue name table;
S12) processing core C0 is selected as the main processing core, and it creates and initializes a shared global message queue name table for recording all created global message queues;
S13) a global message queue buffer pool and a message buffer pool are created in the shared storage space mapped by each e600 processing core.
In step S11, each e600 processing core maps a 4 MB space from its local address space into the SRIO address space as shared memory, which can be accessed by all processing cores of the MPC8641D multi-core multi-processor parallel system; the start addresses of these shared memory windows are, in order, 0xA4000000, 0xA4400000, ..., 0xA4000000 + i × 0x400000, ..., 0xA4000000 + 13 × 0x400000.
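The window arithmetic described above can be reproduced with a small, self-contained C program (4 MB = 0x400000 bytes per core, 14 e600 cores, C0 to C13). The program only prints the assumed address ranges; it performs no actual SRIO mapping.

#include <stdint.h>
#include <stdio.h>

#define GMQ_SRIO_BASE   0xA4000000u
#define GMQ_WINDOW_SIZE 0x400000u      /* 4 MB shared memory window per processing core */
#define GMQ_NUM_CORES   14u

int main(void)
{
    for (uint32_t i = 0; i < GMQ_NUM_CORES; i++) {
        uint32_t base = GMQ_SRIO_BASE + i * GMQ_WINDOW_SIZE;
        printf("core C%-2u shared window: 0x%08X - 0x%08X\n",
               (unsigned)i, (unsigned)base, (unsigned)(base + GMQ_WINDOW_SIZE - 1u));
    }
    return 0;
}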
As shown in FIG. 4, in step S12 the global message queue name table contains a global semaphore controlling exclusive access to the name table, the number of global message queues created in the name table, and the information of all created global message queues. The global message queue information mainly comprises a name, a type, the processing core to which the queue belongs, message attributes, the number of times the queue has been opened, a data queue, an idle queue and global semaphores; the message attributes comprise the maximum number of messages, the message size, the message identification and the current number of messages; and the global semaphores mainly comprise the global semaphore used for controlling access to the data queue and the global semaphore used for controlling access to the idle queue.
In step S13, the global message queue buffer pool is used to allocate global message queues, and the message buffer pool is used to allocate the idle queues of the global message queues. The global message queue buffer pool and the message buffer pool are doubly linked lists with head pointers; the list heads are placed on the processing cores to which they belong, and mutually exclusive access can be managed with spin locks or global semaphores.
In step S2, when the global message queue is initialized, the value of the global semaphore controlling the data queue is initialized to 0, and the value of the global semaphore controlling the idle queue is initialized to the maximum number of messages allowed to be carried by the message queue.
As shown in FIG. 5, in step S3 the operation of thread A sending a message to the global message queue specifically includes:
S31) acquiring the global semaphore controlling the idle queue of the message queue;
S32) if the global semaphore controlling the idle queue is 0, the message queue has reached its maximum number of messages and the global message queue is full, and the thread is blocked on the global semaphore;
S33) if the global semaphore controlling the idle queue is not 0, taking a free area from the idle queue, copying the message into the free area, adding the message to the data queue according to the message priority, modifying the message attributes, and releasing the global semaphore controlling the data queue.
As shown in FIG. 6, in step S3 the operation of thread B receiving a message from the message queue specifically includes:
S34) acquiring the global semaphore controlling the data queue of the message queue;
S35) if the global semaphore controlling the data queue is 0, the message queue is empty and the thread is blocked on the global semaphore;
S36) if the global semaphore controlling the data queue is not 0, taking a message from the data queue according to the message priority and a FIFO or other ordering principle, modifying the message attributes, copying the message out, adding the message buffer back to the idle queue, and releasing the global semaphore controlling the idle queue.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution; this manner of description is adopted only for clarity, the specification should be taken as a whole by those skilled in the art, and the technical solutions in the embodiments may be combined appropriately to form other embodiments understandable to those skilled in the art.

Claims (8)

1. A global message queue implementation method based on a multi-core multi-processor parallel system is characterized by comprising the following steps:
S1) each processing core of the multi-core multi-processor parallel system performs system initialization and maps a segment of shared storage space through SRIO for storing global message queues and a global message queue name table;
S2) a global message queue is created and initialized from the global message queue buffer pool corresponding to the processing core to which the thread belongs, and the global message queue information is filled into the global message queue name table;
S3) when a thread sends a message to or receives a message from the global message queue, the global message queue resources are managed and controlled by global semaphores, thereby realizing message transmission.
2. The method according to claim 1, characterized in that: the multi-core multi-processor parallel system has at least one processor node; each processor node has at least one processing core; and the processor nodes and processing cores support SRIO bus interconnection.
3. The method according to claim 1, characterized in that: in step S1), the system initialization process includes:
S11) each processing core initializes SRIO and maps a segment of shared storage space through SRIO for storing global message queues and a global message queue name table;
S12) any processing core is selected as the main processing core, which creates and initializes a shared global message queue name table for recording all created global message queues;
S13) a global message queue buffer pool and a message buffer pool are created in the shared storage space mapped by each processing core.
4. The method according to claim 3, characterized in that: in step S12), the global message queue name table contains a global semaphore controlling exclusive access to the name table, the number of global message queues created in the name table, and the information of all created global message queues;
the global message queue information comprises a name, a type, the processing core to which the queue belongs, message attributes, the number of times the queue has been opened, a data queue, an idle queue and global semaphores;
the message attributes comprise the maximum number of messages, the message size, the message identification and the current number of messages, and the global semaphores comprise a global semaphore used for controlling data queue access and a global semaphore used for controlling idle queue access.
5. The method according to claim 3, characterized in that: in step S13), the global message queue buffer pool is used to allocate global message queues, and the message buffer pool is used to allocate the idle queues of the global message queues;
the global message queue buffer pool and the message buffer pool are doubly linked lists with head pointers, the list heads are placed on the processing cores to which they belong, and mutually exclusive access can be managed with spin locks or global semaphores.
6. The method according to claim 1, characterized in that: in step S2), when the global message queue is initialized, the value of the global semaphore controlling the data queue is initialized to 0, and the value of the global semaphore controlling the idle queue is initialized to the maximum number of messages the message queue is allowed to carry.
7. The method according to claim 1, characterized in that: in step S3), the operation of a thread sending a message to the global message queue specifically includes:
S31) acquiring the global semaphore controlling the idle queue of the message queue;
S32) if the global semaphore controlling the idle queue is 0, the message queue has reached its maximum number of messages and the global message queue is full, and the thread is blocked on the global semaphore;
S33) if the global semaphore controlling the idle queue is not 0, taking a free area from the idle queue, copying the message into the free area, adding the message to the data queue according to the message priority, modifying the message attributes, and releasing the global semaphore controlling the data queue.
8. The method according to claim 1, characterized in that: in step S3), the operation of a thread receiving a message from the message queue specifically includes:
S34) acquiring the global semaphore controlling the data queue of the message queue;
S35) if the global semaphore controlling the data queue is 0, the message queue is empty and the thread is blocked on the global semaphore;
S36) if the global semaphore controlling the data queue is not 0, taking a message from the data queue according to the message priority and the FIFO principle, modifying the message attributes, copying the message out, adding the message buffer back to the idle queue, and releasing the global semaphore controlling the idle queue.

Priority Applications (1)

Application CN202011360414.7A, priority date 2020-11-27, filing date 2020-11-27: Global message queue implementation method based on multi-core multi-processor parallel system (granted as CN112486702B)


Publications (2)

CN112486702A, published 2021-03-12
CN112486702B, published 2024-02-13 (granted publication)

Family

ID=74936028

Family Applications (1)

Application CN202011360414.7A (Active, granted as CN112486702B): Global message queue implementation method based on multi-core multi-processor parallel system

Country Status (1)

CN: CN112486702B


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080148280A1 (en) * 2006-12-13 2008-06-19 Stillwell Joseph W Apparatus, system, and method for autonomically managing multiple queues
US20080244231A1 (en) * 2007-03-30 2008-10-02 Aaron Kunze Method and apparatus for speculative prefetching in a multi-processor/multi-core message-passing machine
US20120291034A1 (en) * 2011-05-14 2012-11-15 International Business Machines Corporation Techniques for executing threads in a computing environment
CN102841810A (en) * 2011-05-14 2012-12-26 国际商业机器公司 Techniques for executing threads in a computing environment
CN108595282A (en) * 2018-05-02 2018-09-28 广州市巨硅信息科技有限公司 A kind of implementation method of high concurrent message queue
CN109144749A (en) * 2018-08-14 2019-01-04 苏州硅岛信息科技有限公司 A method of it is communicated between realizing multiprocessor using processor
CN111722942A (en) * 2020-05-29 2020-09-29 天津大学 Transformation method of distributed real-time operating system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
弓静 (Gong Jing): "嵌入式并行计算管理中间件技术研究" [Research on embedded parallel computing management middleware technology], 中国优秀硕士学位论文全文数据库 信息科技辑 [China Master's Theses Full-text Database, Information Science and Technology], no. 12 *
陈金忠; 耿锐 (Chen Jinzhong; Geng Rui): "一种嵌入式系统多处理器间通信协议的应用实验" [An application experiment of an inter-multiprocessor communication protocol for embedded systems], 单片机与嵌入式系统应用 [Microcontrollers & Embedded Systems], no. 05 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113300946A (en) * 2021-05-24 2021-08-24 北京理工大学 Multi-core multi-communication protocol gateway and management scheduling method thereof
CN113300946B (en) * 2021-05-24 2022-05-10 北京理工大学 Multi-core multi-communication protocol gateway and management scheduling method thereof
CN113672364A (en) * 2021-08-02 2021-11-19 北京奇艺世纪科技有限公司 Task scheduling method and device, electronic equipment and storage medium
CN113672364B (en) * 2021-08-02 2023-09-01 北京奇艺世纪科技有限公司 Task scheduling method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112486702B (en) 2024-02-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant