CN115080277B - Inter-core communication system of multi-core system - Google Patents


Info

Publication number
CN115080277B
Authority
CN
China
Legal status
Active
Application number
CN202210855849.1A
Other languages
Chinese (zh)
Other versions
CN115080277A
Inventor
徐坤林
崔国勋
梁煜键
高萌
黄键
Current Assignee
Foshan Institute Of Intelligent Equipment Technology
Original Assignee
Foshan Institute Of Intelligent Equipment Technology
Application filed by Foshan Institute Of Intelligent Equipment Technology
Priority to CN202210855849.1A
Publication of CN115080277A
Application granted
Publication of CN115080277B
Legal status: Active

Classifications

    • G06F9/544 Buffers; Shared memory; Pipes
    • G06F9/526 Mutual exclusion algorithms
    • G06F9/546 Message passing systems or structures, e.g. queues
    • G06F2209/548 Queue (indexing scheme relating to G06F9/54)

Abstract

The invention relates to the field of communication technology and provides an inter-core communication system for a multi-core system. The multi-core system comprises a plurality of cores, each of which hosts a plurality of nodes. The inter-core communication system comprises a shared memory and a shared memory communication module provided in each node; the shared memory communication module comprises a cross-platform interface, a shared memory manager, a buffer message processor created by the shared memory manager, and a shared memory allocator. The shared memory provides each node with memory space for transferring buffer messages and shared variables. The shared memory manager maps the shared memory into its own address space through the cross-platform interface and then initializes itself. The buffer message processor reads the buffer messages of other nodes from the shared memory and writes the buffer messages it generates into the shared memory. The shared memory allocator applies to the shared memory for allocation of memory space. By building, for the nodes of each core, a shared memory communication module that accesses the shared memory independently, the invention realizes inter-core communication.

Description

Inter-core communication system of multi-core system
Technical Field
The invention relates to the field of communication technology, and in particular to an inter-core communication system for a multi-core system.
Background
In a multi-core processor, different cores can run different operating systems in parallel, satisfying specific integration requirements such as application ecosystem and real-time performance; the manner and performance of inter-core data exchange in such a multi-core system are therefore particularly important.
At present, inter-core communication in a multi-core system is mainly realized either by virtualization or by combining shared memory with inter-core interrupts, but both techniques have drawbacks. First, virtualization exchanges data through an intermediate virtual machine, which reduces real-time performance, and creating the intermediate virtual machine requires additional hardware resources, increasing system overhead. Second, combining shared memory with inter-core interrupts requires mutual exclusion when several cores access the shared memory simultaneously, so that only one core can access the shared memory at a time, which lowers efficiency; moreover, sending one piece of data triggers two inter-core interrupts, which again reduces real-time performance.
Disclosure of Invention
The invention provides an inter-core communication system for a multi-core system, which solves one or more technical problems in the prior art and at least provides a useful alternative.
An embodiment of the invention provides an inter-core communication system for a multi-core system. The multi-core system comprises a plurality of cores, each provided with a plurality of nodes. The inter-core communication system comprises a shared memory and a shared memory communication module provided in each node, the shared memory communication module comprising a cross-platform interface, a shared memory manager, a buffer message processor, and a shared memory allocator.
The shared memory provides each node with memory space for transferring buffer messages and shared variables. The shared memory manager maps the shared memory into its own address space through the cross-platform interface, initializes itself, and at the same time creates the buffer message processor and the shared memory allocator. The buffer message processor reads buffer messages transmitted by other nodes from the shared memory and writes the buffer messages it generates into the shared memory. The shared memory allocator applies to the shared memory for allocation of memory space for storing shared variables.
A message buffer region is partitioned inside the shared memory and contains a number of node buffers. Each node buffer is a circular queue with separated read and write positions, used to cache buffer messages transferred between one node that executes write commands and another node that executes read commands.
Furthermore, a common information area and a free allocation area are also partitioned inside the shared memory.
The common information area stores the common information each node needs during initialization together with the state information of each node; the common information includes the number of nodes, the node IDs, the start address and size of the message buffer region, and the start address and size of the free allocation area.
The free allocation area stores the shared variables transferred between the nodes.
Further, the initialization of the shared memory manager proceeds as follows:
when the node hosting the shared memory manager is the master node, the content of the common information area is initialized and the current node ID is taken as this node's ID;
or, when the node hosting the shared memory manager is a slave node, the content of the common information area is read and the current node ID is acquired as this node's ID through a CAS operation;
the shared memory manager maintains a mapping table from node ID to node name; it registers its own node information in the mapping table according to the ID of the node hosting it, and then receives the node information fed back by the remaining nodes and registers that information in the mapping table as well.
Further, the message structure of a buffer message includes a head magic number, a source node ID, a destination node ID, a message length, a message type, a synchronizer ID, a message ID, an original message ID, a function code, message data, and a tail magic number.
Further, the buffer message processor reads the buffer messages transmitted by other nodes from the shared memory as follows:
all node buffers in which the node hosting the buffer message processor is designated to execute read commands are selected from the message buffer region, and buffer messages are read from these node buffers one by one in turn.
Further, the read operation performed on each node buffer comprises the following steps:
according to the read position and write position stored in the node buffer, byte stream data conforming to the message structure of a buffer message is extracted from the node buffer, read into the address space of the node hosting the buffer message processor, and the read position is updated;
the byte stream data is deserialized into a pending buffer message; after every field of the pending buffer message is confirmed to hold a valid value, the user handler corresponding to the message type of the pending buffer message is invoked to process it.
Further, the buffer message processor writes the buffer messages it generates into the shared memory as follows:
a buffer message is extracted from the send queue designated by the buffer message processor, and the corresponding node buffer is located in the message buffer region according to the source node ID and destination node ID recorded in the extracted buffer message;
the remaining writable space in the located node buffer is determined from the read position and write position stored in that node buffer;
the extracted buffer message is serialized into pending byte stream data, the pending byte stream data is copied in full into the writable space starting at the write position, and the write position is then updated.
Further, the shared memory allocator applies to the shared memory for allocation of memory space for storing a shared variable as follows:
the start position of the memory space currently applied for is queried from the free allocation area, and the end position of that memory space is then determined from the requested size;
when the end position falls within the bounds of the free allocation area, the requested memory space is obtained and the recorded start position is updated according to the end position;
when the end position does not fall within the bounds of the free allocation area, the recorded start position is left unchanged.
The invention has at least the following beneficial effects. A message buffer region composed of node buffers is provided in the shared memory, and each node buffer is specified to be a circular queue with separated read and write positions, so that buffer messages in the message buffer region can be read and written by several cores at the same time without mutual exclusion, which improves the transfer efficiency of variable-length data between cores; extremely fast transfer of fixed-length data is additionally possible through shared variables. By adopting a master/slave node design across the cores and assigning each node a distinct node ID through CAS operations, real-time threads on different cores are kept mutually exclusive and access conflicts on the shared memory are avoided. By equipping every node of every core with its own shared memory communication module interfacing with the shared memory, a core can support several instances communicating across cores simultaneously.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention without limiting it.
Fig. 1 is a schematic diagram of a framework structure of an inter-core communication system of a multi-core system according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a shared memory according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It should be noted that although functional blocks are divided in the system drawings and logical orders are shown in the flowcharts, in some cases the steps shown or described may be performed in a different order or with a different block division. The terms "first", "second", and the like in the description, the claims, and the drawings are used to distinguish similar elements and do not necessarily describe a particular sequence or chronological order.
Referring to fig. 1, fig. 1 is a schematic diagram of the framework of an inter-core communication system of a multi-core system according to an embodiment of the present invention. The multi-core system includes a plurality of cores, each containing a plurality of nodes (for the sake of description, fig. 1 shows only two cores, CPU0 and CPU1, each provided with one node). The inter-core communication system includes a shared memory and a shared memory communication module inside each node; the shared memory communication module includes a cross-platform interface, a shared memory manager, a buffer message processor, and a shared memory allocator.
In the implementation of the invention, the shared memory is accessible to all the cores and provides each node with memory space for transferring buffer messages and shared variables; when the shared memory is formally put into use, the user is required to set its access attribute to the cache-enabled state. The shared memory manager maps the shared memory into its own address space through the cross-platform interface, initializes itself, and at the same time creates the buffer message processor and the shared memory allocator. The buffer message processor reads buffer messages transmitted by other nodes from the shared memory and writes the buffer messages it generates into the shared memory. The shared memory allocator applies to the shared memory for allocation of memory space for storing shared variables.
In addition, the cross-platform interface also provides interfaces for thread mutual exclusion and thread synchronization; the shared memory manager also provides interfaces for querying a node's ID or state and for converting between the physical addresses used by the shared memory and the virtual addresses used by a node; the buffer message processor also provides interfaces for sending and handling buffer messages.
In the embodiment of the present invention, the shared memory is divided into a common information area, a message buffer region, and a free allocation area, as shown in fig. 2. The common information area stores the common information each node needs during initialization together with the state information of each node. The message buffer region contains a number of node buffers, each of which is a circular queue with separated read and write positions, used to cache buffer messages transferred between one node that executes write commands and another node that executes read commands. The free allocation area stores the shared variables transferred between the nodes.
The common information includes a version identifier, the initialization time, the number of nodes, a node ID, the message buffer region start address, the message buffer region size, the free allocation area start address, the free allocation area size, and a reserved area; the message buffer region start address is the offset of the start of the message buffer region relative to the start of the shared memory, and the free allocation area start address is likewise the offset of the start of the free allocation area relative to the start of the shared memory. The state information of each node includes that node's state and heartbeat count.
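As a concrete illustration of the common information layout described above, the following sketch packs the header fields into a fixed byte layout. All field widths, the field order, and the version string are assumptions for illustration; the patent does not specify them.

```python
import struct

# Hypothetical layout of the common information area (all widths assumed):
# version(4s) init_time(Q) node_count(I) next_node_id(I)
# msg_buf_offset(I) msg_buf_size(I) free_area_offset(I) free_area_size(I)
COMMON_INFO = struct.Struct("<4sQIIIIII")

def pack_common_info(version, init_time, node_count, next_node_id,
                     msg_buf_offset, msg_buf_size, free_offset, free_size):
    """Serialize the common information header into shared memory bytes."""
    return COMMON_INFO.pack(version, init_time, node_count, next_node_id,
                            msg_buf_offset, msg_buf_size, free_offset, free_size)

def unpack_common_info(raw):
    """Deserialize the header a slave node reads during initialization."""
    return COMMON_INFO.unpack(raw[:COMMON_INFO.size])

# Offsets here are offsets relative to the start of the shared memory,
# as the description specifies for the two start addresses.
hdr = pack_common_info(b"v1.0", 1_700_000_000, 4, 1, 64, 4096, 4160, 8192)
fields = unpack_common_info(hdr)
```

A slave node would read such a header first and derive the positions of the message buffer region and the free allocation area from the stored offsets.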
In the embodiment of the present invention, according to the node count N obtained from the common information, the message buffer region dynamically generates N × (N-1) node buffers; that is, any given node has N-1 node buffers that it alone writes while each of the other N-1 nodes reads one of them, and another N-1 node buffers that it alone reads while each of the other N-1 nodes writes one of them.
In addition, for any node buffer storing a read position and a write position, only the first node associated with that buffer and executing read commands may update the read position, and the first node never reads the write position or the data beyond it; likewise, only the second node associated with that buffer and executing write commands may update the write position, and the second node never writes data at the read position. Consequently, the second node can write data into the node buffer while the first node is reading data from it, and no mutual exclusion between the nodes is required.
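The behavior just described can be sketched as a single-producer/single-consumer circular queue in which the writer only ever advances the write index and the reader only ever advances the read index; class and method names are illustrative, not taken from the patent, and a real implementation would additionally need memory barriers on the index updates.

```python
class NodeBuffer:
    """Circular queue with separated read/write positions (SPSC sketch).
    One slot is kept empty so that read == write unambiguously means 'empty'."""

    def __init__(self, capacity):
        self.buf = bytearray(capacity)
        self.capacity = capacity
        self.read = 0   # advanced only by the reading node
        self.write = 0  # advanced only by the writing node

    def free_space(self):
        # Writable bytes: from the write position up to just before the read position.
        return (self.read - self.write - 1) % self.capacity

    def push(self, data):
        """Writer side: copy data at the write position, then update it."""
        if len(data) > self.free_space():
            return False  # would overflow: the caller logs and drops the message
        for b in data:
            self.buf[self.write] = b
            self.write = (self.write + 1) % self.capacity
        return True

    def pop(self, n):
        """Reader side: copy n bytes from the read position, then update it."""
        out = bytearray()
        for _ in range(n):
            out.append(self.buf[self.read])
            self.read = (self.read + 1) % self.capacity
        return bytes(out)

q = NodeBuffer(8)
q.push(b"abc")
msg = q.pop(3)
```

Because each index has exactly one updater, the reader and writer can operate on the same buffer concurrently without locks, which is the property the patent relies on.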
In the embodiment of the present invention, any node may be in one of seven states: uninitialized, meaning the node has not begun initialization; initializing, meaning the node is executing initialization; ready, meaning the node has completed basic initialization; running, meaning the node can use the communication function; error, meaning an error occurred during communication; ending, meaning the node is shutting down the communication function; and unknown, meaning a data error has occurred in the shared memory. The communication function here refers to the nodes performing access operations on the shared memory.
In this embodiment of the present invention, the initialization of the shared memory manager comprises the following steps:
step S11, when the node hosting the shared memory manager is the master node, the common information area is initialized and the current node ID is taken as this node's ID; since the master node initializes before any slave node, the current node ID is 0 at this moment, and it is then updated to 1;
or, when the node hosting the shared memory manager is a slave node, the content of the common information area is read, the current node ID is acquired as this node's ID through a CAS (compare-and-swap) operation, and the current node ID is then incremented by one; the CAS operation guarantees that several slave nodes initializing simultaneously never acquire the same node ID, thereby keeping threads on different cores mutually exclusive;
step S12, the shared memory manager maintains a mapping table from node ID to node name; it registers its own node information in the mapping table according to the ID of the node hosting it, then receives the node information fed back by the other nodes and completes their registration in the mapping table; the node information consists of the node ID and a node name defined in advance by the user.
More specifically, step S12 includes: after the node hosting the shared memory manager has registered its own node information in the mapping table, it sets its own state to ready; it then sends its node information to the other nodes that have entered the ready state for registration while receiving and registering the node information those nodes send back, until every node has been registered; at that point the user may set the state of the node hosting the shared memory manager to running, and the node can begin to use the communication function.
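The CAS-based ID handout of step S11 can be sketched as follows; `cas` here simulates an atomic compare-and-swap on the current-node-ID word in the common information area (on real hardware this is a single instruction), and all names are illustrative.

```python
# "cell" stands for the current-node-ID word in the common information area.
cell = {"next_id": 1}  # the master node took ID 0 and set the counter to 1

def cas(cell, key, expected, new):
    """Simulated compare-and-swap: succeeds only if the cell still holds
    the expected value, so two racing nodes can never both win."""
    if cell[key] == expected:
        cell[key] = new
        return True
    return False

def acquire_node_id(cell):
    """Slave-node initialization: loop until a CAS claims a unique ID."""
    while True:
        current = cell["next_id"]
        if cas(cell, "next_id", current, current + 1):
            return current  # this ID now belongs exclusively to this node

a = acquire_node_id(cell)
b = acquire_node_id(cell)
```

If two slave nodes read the same `current` value, only one CAS succeeds; the loser simply re-reads and retries, which is why no two nodes can end up with the same ID.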
In the embodiment of the present invention, the packet structure of a buffer message comprises a head magic number (two bytes), a source node ID (two bytes), a destination node ID (two bytes), a message length (two bytes), a message type (two bytes), a synchronizer ID (two bytes), a message ID (two bytes), an original message ID (two bytes), a function code (two bytes), message data (whose byte count is given by the message length), and a tail magic number (two bytes). The source node ID and destination node ID differ from each other: the source node ID names the node executing write commands and the destination node ID the node executing read commands, or the source node ID names the node executing read commands and the destination node ID the node executing write commands. The message length is the byte length occupied by the message data; the message types comprise a synchronization request type, a response type, and a notification type; and the user handler can select a suitable execution mode for processing the buffer message according to the function code.
The use of the synchronizer ID, message ID, and original message ID is illustrated by a buffer message exchange between node i and node j: when node i transmits to node j a first buffer message written with the synchronization request type, node i automatically sets a first synchronizer ID and a first message ID in the message structure of that first buffer message; when node j transmits to node i a second buffer message written with the response type in reply to the first buffer message, the second synchronizer ID in the message structure of the second buffer message is the first synchronizer ID, and its original message ID is the first message ID.
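A minimal serialization of the packet structure above, with every header field two bytes wide as stated; the concrete magic-number values 0xA55A and 0x5AA5 are invented placeholders, since the patent does not give them.

```python
import struct

HEAD_MAGIC, TAIL_MAGIC = 0xA55A, 0x5AA5  # placeholder values
HEADER = struct.Struct("<9H")  # nine two-byte fields precede the payload

def pack_msg(src, dst, mtype, sync_id, msg_id, orig_id, func, data):
    """Frame: head magic, src ID, dst ID, length, type, synchronizer ID,
    message ID, original message ID, function code, payload, tail magic."""
    head = HEADER.pack(HEAD_MAGIC, src, dst, len(data), mtype,
                       sync_id, msg_id, orig_id, func)
    return head + data + struct.pack("<H", TAIL_MAGIC)

def unpack_msg(raw):
    """Reverse of pack_msg; both magic numbers are checked for validity."""
    fields = HEADER.unpack(raw[:HEADER.size])
    if fields[0] != HEAD_MAGIC:
        raise ValueError("bad head magic")
    length = fields[3]
    data = raw[HEADER.size:HEADER.size + length]
    (tail,) = struct.unpack_from("<H", raw, HEADER.size + length)
    if tail != TAIL_MAGIC:
        raise ValueError("bad tail magic")
    return fields, data

frame = pack_msg(src=1, dst=2, mtype=0, sync_id=7, msg_id=42,
                 orig_id=0, func=5, data=b"ping")
fields, payload = unpack_msg(frame)
```

The two magic numbers bracketing the payload let the reader detect a torn or corrupted frame before dispatching it to a user handler.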
In this embodiment of the present invention, the buffer message processor reads the buffer messages transmitted by other nodes from the shared memory as follows: all node buffers in which the node hosting the buffer message processor is designated to execute read commands are selected from the message buffer region, and buffer messages are read from these node buffers one by one in turn.
Specifically, the operation of reading buffer messages from each node buffer comprises the following steps:
step S21, according to the read position and write position stored in the node buffer, byte stream data conforming to the message structure of a buffer message is extracted from the node buffer and read into the address space of the node hosting the buffer message processor, and the read position is then updated, i.e. moved backward by the total length of the byte stream data;
step S22, the byte stream data is deserialized into a pending buffer message; after every field of the pending buffer message is confirmed to hold a valid value, the user handler corresponding to the message type of the pending buffer message is invoked to process it.
In addition, step S21 also covers the case where the byte stream data extracted from the node buffer, according to the stored read and write positions, does not conform to the message structure of a buffer message (i.e. the message data it carries does not match the message length it records); in that case the buffer message processor writes a log entry and releases the byte stream data, then moves the read position backward by the total length of the byte stream data.
In step S22, when the pending buffer message is of the notification type and, according to the function code it carries, is recognized as carrying the node information sent by another node, that node information is extracted from the pending buffer message and handed to the shared memory manager for initialization; when the pending buffer message is of the response type, the corresponding response message is extracted from it for processing and the thread waiting on the synchronization request message is woken; when the pending buffer message is of the synchronization request type, the corresponding synchronization request message is extracted from it and processed through the handling interface designated by the user.
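Steps S21 and S22 reduce to a validate-then-dispatch pattern; the handler table, the dictionary message representation, and the validity check below are illustrative simplifications of the field checks the patent describes.

```python
SYNC_REQUEST, RESPONSE, NOTIFY = 0, 1, 2  # assumed type codes

def fields_valid(msg):
    """Step S22 pre-check sketch: the recorded message length must
    match the payload actually carried (one of the field checks)."""
    return msg.get("len") == len(msg.get("data", b""))

def dispatch(msg, handlers, log):
    """Call the user handler matching the message type, or log a bad frame."""
    if not fields_valid(msg):
        log.append("invalid frame dropped")  # S21: log, release, advance read pos
        return
    handlers[msg["type"]](msg)

seen = []
handlers = {
    NOTIFY: lambda m: seen.append(("notify", m["data"])),      # node info -> manager
    RESPONSE: lambda m: seen.append(("wake", m["data"])),      # wake waiting thread
    SYNC_REQUEST: lambda m: seen.append(("user", m["data"])),  # user-designated interface
}
log = []
dispatch({"type": RESPONSE, "len": 2, "data": b"ok"}, handlers, log)
dispatch({"type": NOTIFY, "len": 9, "data": b"short"}, handlers, log)
```

The second call models the malformed case of step S21: the frame is logged and released rather than delivered to a handler.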
In this embodiment of the present invention, before writing a buffer message it has generated into the shared memory, the buffer message processor must first place the message into its designated send queue, as follows:
when the buffer message processor writes the buffer message in asynchronous mode, the buffer message is field-checked, stamped with a message ID, and then placed into the send queue;
when the buffer message processor writes the buffer message in synchronous mode, the buffer message is field-checked, stamped with a message ID, bound to a synchronizer, and then placed into the send queue, whereupon the writing thread blocks and waits to be woken.
In the embodiment of the present invention, the buffer message processor writes the buffer messages it generates into the shared memory in the following steps:
step S31, a buffer message is extracted from the send queue designated by the buffer message processor, and the corresponding node buffer is located in the message buffer region according to the source node ID and destination node ID recorded in the extracted buffer message; the located node buffer designates the source node (the node hosting the buffer message processor) to execute write commands and the destination node to execute read commands;
step S32, the remaining writable space in the located node buffer is determined from the read position and write position stored in that node buffer; the writable space runs from the write position to the memory position just before the read position, the read position being taken here to lie ahead of the write position;
step S33, the extracted buffer message is serialized into pending byte stream data, the pending byte stream data is copied in full into the writable space starting at the write position, and the write position is then updated, i.e. moved backward by the total length of the pending byte stream data; at the same time the extracted buffer message is released from the send queue.
In step S33, when the total length of the pending byte stream data exceeds the writable space, meaning the pending byte stream data cannot be copied into it, the buffer message processor writes a log entry and releases the extracted buffer message from the send queue.
It should be noted that since the send queue may in fact hold several buffer messages, steps S31 to S33 are executed in a loop until the send queue is empty.
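The send loop of steps S31 to S33 can be sketched as draining a queue into a per-destination ring, dropping (and logging) any message that exceeds the remaining writable space; all names and the dictionary ring representation are illustrative.

```python
from collections import deque

def free_space(read, write, capacity):
    """Writable bytes between the write position and just before the read position."""
    return (read - write - 1) % capacity

def drain_send_queue(queue, rings, log):
    """Steps S31-S33: pop each message, locate its ring by (src, dst),
    copy it at the write position if it fits, then advance the write index."""
    while queue:
        src, dst, payload = queue.popleft()       # S31: extract + locate node buffer
        ring = rings[(src, dst)]
        if len(payload) > free_space(ring["r"], ring["w"], len(ring["buf"])):
            log.append(f"drop {src}->{dst}")      # S33 overflow: log and release
            continue
        for b in payload:                          # S33: copy the byte stream data
            ring["buf"][ring["w"]] = b
            ring["w"] = (ring["w"] + 1) % len(ring["buf"])

rings = {(0, 1): {"buf": bytearray(8), "r": 0, "w": 0}}
log = []
queue = deque([(0, 1, b"abc"), (0, 1, b"toolongmsg")])
drain_send_queue(queue, rings, log)
```

The first message fits and is written; the second exceeds the four remaining writable bytes and is dropped with a log entry, matching the overflow rule of step S33.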
In this embodiment of the present invention, the operation process of the shared memory allocator applying for allocating a memory space for storing a shared variable to the shared memory includes the following steps:
step S41, querying a memory space starting position currently applied for allocation from the space allocation region, and determining a memory space ending position according to a size of the memory space currently applied for allocation, where the size of the memory space is actually a length of a shared variable to be written in by a node where the shared memory allocator is located;
step S42, when the memory space end position falls within the bounds of the space allocation region, obtaining the allocated memory space and updating the recorded start position through a CAS operation according to the end position, that is, the position at the current end position becomes the start position for the next allocation request; if the CAS operation fails, returning to re-execute step S41;
or, when the memory space end position does not fall within the bounds of the space allocation region, determining that the request to allocate memory space from the shared memory has failed; in this case the start position of the current request remains the start position for the next allocation request.
Before executing step S41, the shared memory allocator applies memory alignment: the space position following the end position of the most recently allocated memory space, as queried in the space allocation region, is aligned to obtain the start position of the current allocation request.
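Steps S41 and S42 together describe a CAS-based bump allocator over the space allocation region. A minimal single-file sketch follows; the names (`Region`, `shm_alloc`), the region size, and the 8-byte alignment are assumptions for illustration, since the patent does not fix an alignment width.

```c
#include <stdatomic.h>
#include <stdint.h>

#define REGION_SIZE 4096u    /* assumed size of the space allocation region */
#define ALIGNMENT   8u       /* assumed alignment; the patent leaves it open */

/* Allocation state kept at the head of the free allocation area. */
typedef struct {
    _Atomic uint32_t next;   /* start position for the next request (S41) */
    uint8_t data[REGION_SIZE];
} Region;

/* Bump allocation with CAS (steps S41/S42): returns the offset of the
 * allocated space, or -1 when the end position falls outside the region. */
static int32_t shm_alloc(Region *r, uint32_t size) {
    for (;;) {
        uint32_t start = atomic_load(&r->next);
        /* alignment step performed before S41 proper */
        uint32_t aligned = (start + ALIGNMENT - 1) & ~(ALIGNMENT - 1);
        uint32_t end = aligned + size;
        if (end > REGION_SIZE)
            return -1;       /* allocation fails; start position unchanged */
        /* CAS publishes the new start position; on contention, retry S41 */
        if (atomic_compare_exchange_weak(&r->next, &start, end))
            return (int32_t)aligned;
    }
}
```

The retry loop is exactly the "if the CAS operation fails, return to step S41" rule: a competing node may have moved the start position between the query and the update, so the failed node simply re-reads and recomputes.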
In the embodiment of the present invention, the case where node p reads a shared variable written by node k into the free allocation area is taken as an example: the buffer message processor of node p encapsulates the user-defined shared variable name associated with the shared variable into a first buffer message and writes it into the shared memory; the buffer message processor of node k reads the first buffer message from the shared memory, obtains the shared variable name from it, and looks up the first virtual address at which the shared variable is stored in node k's own address space; the shared memory manager of node k converts the first virtual address into the physical address at which the shared variable is stored in the shared memory; the buffer message processor of node k encapsulates the physical address into a second buffer message and writes it into the shared memory; the buffer message processor of node p reads the second buffer message from the shared memory and obtains the physical address from it; finally, the shared memory manager of node p converts the physical address into a second virtual address in node p's own address space, so that node p can read the shared variable directly through the second virtual address.
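The two address conversions in this exchange reduce to offset arithmetic once each node records where it mapped the shared segment: node k subtracts its own base to reach the segment-relative (physical) address, and node p adds its own base to reach its second virtual address. The sketch below assumes a simple linear mapping and hypothetical names (`ShmMapping`, `virt_to_phys`, `phys_to_virt`); the patent does not specify the translation mechanism.

```c
#include <stdint.h>

/* Per-node mapping of the shared memory segment. Each node maps the
 * same physical segment at a different base virtual address. */
typedef struct {
    uintptr_t virt_base;  /* where this node mapped the segment */
    uintptr_t phys_base;  /* physical base of the segment */
} ShmMapping;

/* Node k: first virtual address -> physical address of the variable. */
static uintptr_t virt_to_phys(const ShmMapping *m, uintptr_t virt) {
    return m->phys_base + (virt - m->virt_base);
}

/* Node p: physical address -> second virtual address in its own space. */
static uintptr_t phys_to_virt(const ShmMapping *m, uintptr_t phys) {
    return m->virt_base + (phys - m->phys_base);
}
```

The physical address is the only form both nodes agree on, which is why it, rather than either virtual address, travels in the second buffer message.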
While the present application has been described in considerable detail and with reference to several illustrated embodiments, it is not intended to be limited to any such detail or embodiment or to any particular embodiment; rather, the appended claims should be construed, in view of the prior art, to effectively cover the intended scope of the application. Further, the foregoing describes the application in terms of embodiments foreseen by the inventors for which an enabling description was available, notwithstanding that insubstantial modifications of the application, not presently foreseen, may nonetheless represent equivalents thereof.

Claims (2)

1. An inter-core communication system of a multi-core system, wherein the multi-core system comprises a plurality of cores, each core is provided with a plurality of nodes, and the inter-core communication system comprises a shared memory and a shared memory communication module arranged in each node, wherein the shared memory communication module comprises a cross-platform interface, a shared memory manager, a buffer message processor and a shared memory allocator;
the shared memory provides memory space for each node to transfer buffer messages and shared variables; the shared memory manager maps the shared memory into its own address space through the cross-platform interface and then performs initialization, and at the same time creates the buffer message processor and the shared memory allocator; the buffer message processor reads buffer messages transferred by other nodes from the shared memory and writes buffer messages generated by itself into the shared memory; the shared memory allocator applies to the shared memory for memory space for storing shared variables;
a message buffer area is divided in the shared memory, a plurality of node buffers are arranged in the message buffer area, and each node buffer adopts a read/write-separated circular queue for caching buffer messages transferred between the node executing the write command and another node executing the read command;
a public information area and a free allocation area are also divided in the shared memory;
the public information area is used for storing public information required by each node during initialization and state information of each node, wherein the public information comprises the number of nodes, node IDs, the start address of the message buffer area, the size of the message buffer area, the start address of the free allocation area and the size of the free allocation area;
the free allocation area is used for storing shared variables transferred by each node;
the operation process by which the shared memory manager performs initialization includes:
when the node where the shared memory manager is located is a master node, initializing the content of the public information area and taking the current node ID as the node ID of its own node;
or, when the node where the shared memory manager is located is a slave node, reading the content of the public information area and acquiring the current node ID as the node ID of its own node through a CAS operation;
a mapping table from node IDs to node names is arranged in the shared memory manager; the shared memory manager registers its own node information in the mapping table according to the node ID of its node, and then receives node information fed back by other nodes and registers it in the mapping table;
the message structure of a buffer message comprises a head magic number, a source node ID, a destination node ID, a message length, a message type, a synchronizer ID, a message ID, an original message ID, a function code, message data and a tail magic number;
a read operation of buffer messages is performed on each node buffer, the read operation comprising:
extracting, from a node buffer according to the read position and the write position stored in that node buffer, byte stream data that conforms to the message structure of a buffer message, reading the byte stream data into the address space of the node where the buffer message processor is located, and updating the read position;
deserializing the byte stream data to obtain a buffer message to be processed, and, after confirming that each field of the buffer message to be processed holds a valid value, calling the corresponding user handler according to the message type of the buffer message to be processed so as to process it;
the buffer message processor writes buffer messages generated by itself into the shared memory by:
extracting a buffer message from the sending queue appointed by the buffer message processor, and locating the corresponding node buffer in the message buffer area according to the source node ID and the destination node ID recorded in the extracted buffer message;
determining the remaining writable data space in the located node buffer according to the read position and the write position stored in that node buffer;
serializing the extracted buffer message to obtain byte stream data to be processed, copying the byte stream data in full into the writable space starting at the write position, and then updating the write position;
the operation process by which the shared memory allocator applies to the shared memory for memory space for storing shared variables includes:
querying the space allocation region for the start position of the memory space currently applied for, and then determining the end position of the memory space according to the size of the space applied for;
when the memory space end position falls within the bounds of the space allocation region, obtaining the allocated memory space and updating the start position according to the end position;
when the memory space end position does not fall within the bounds of the space allocation region, keeping the start position unchanged.
2. The inter-core communication system of the multi-core system according to claim 1, wherein the buffer message processor reads the buffer messages transferred by other nodes from the shared memory by:
screening, from the message buffer area, all node buffers for which the node where the buffer message processor is located is appointed to execute the read command, and reading buffer messages one by one from those node buffers in turn.
CN202210855849.1A 2022-07-21 2022-07-21 Inter-core communication system of multi-core system Active CN115080277B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210855849.1A CN115080277B (en) 2022-07-21 2022-07-21 Inter-core communication system of multi-core system

Publications (2)

Publication Number Publication Date
CN115080277A CN115080277A (en) 2022-09-20
CN115080277B true CN115080277B (en) 2022-12-06


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115840650B (en) * 2023-02-20 2023-06-02 麒麟软件有限公司 Method for realizing three-terminal system communication based on kvisor isolated real-time domain
CN117407356B (en) * 2023-12-14 2024-04-16 芯原科技(上海)有限公司 Inter-core communication method and device based on shared memory, storage medium and terminal

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102077181A (en) * 2008-04-28 2011-05-25 惠普开发有限公司 Method and system for generating and delivering inter-processor interrupts in a multi-core processor and in certain shared-memory multi-processor systems
CN113326149A (en) * 2021-05-27 2021-08-31 展讯通信(天津)有限公司 Inter-core communication method and device of heterogeneous multi-core system
CN114443322A (en) * 2022-01-20 2022-05-06 Oppo广东移动通信有限公司 Inter-core communication method, inter-core communication device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103559166A (en) * 2013-11-11 2014-02-05 厦门亿联网络技术股份有限公司 Method for high-speed data transmission between multiple cores




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant