CN112860458A - Inter-process communication method and system based on shared memory - Google Patents
Inter-process communication method and system based on shared memory
- Publication number
- CN112860458A CN112860458A CN202110196496.4A CN202110196496A CN112860458A CN 112860458 A CN112860458 A CN 112860458A CN 202110196496 A CN202110196496 A CN 202110196496A CN 112860458 A CN112860458 A CN 112860458A
- Authority
- CN
- China
- Prior art keywords
- shared memory
- node
- pointer
- head
- variable
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 230000015654 memory Effects 0.000 title claims abstract description 377
- 238000000034 method Methods 0.000 title claims abstract description 183
- 238000004891 communication Methods 0.000 title claims abstract description 30
- 230000008569 process Effects 0.000 claims abstract description 150
- 230000007717 exclusion Effects 0.000 claims description 21
- 239000002243 precursor Substances 0.000 claims description 13
- 238000004364 calculation method Methods 0.000 claims description 6
- 238000013507 mapping Methods 0.000 claims description 4
- 238000007726 management method Methods 0.000 description 14
- 230000007246 mechanism Effects 0.000 description 9
- 238000010586 diagram Methods 0.000 description 7
- 238000012545 processing Methods 0.000 description 5
- 230000005540 biological transmission Effects 0.000 description 2
- 230000002068 genetic effect Effects 0.000 description 2
- 230000002457 bidirectional effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000011218 segmentation Effects 0.000 description 1
- 230000008054 signal transmission Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/544—Buffers; Shared memory; Pipes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/543—User-generated data transfer, e.g. clipboards, dynamic data exchange [DDE], object linking and embedding [OLE]
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multi Processors (AREA)
Abstract
The invention discloses a shared memory-based interprocess communication method and a system, wherein the method comprises the following steps: s1: the management process determines the names of all shared memory modules to be created, and the block size and the block number of each shared memory module according to different service modules; s2: the management process initializes any shared memory module, wherein each shared memory module is a shared memory with a double-linked list structure; s3: repeating S2 until the creation of all the shared memory modules is completed; s4: the business process acquires the name of the corresponding shared memory module, then acquires the base address of the corresponding shared memory according to the name of the shared memory module, and applies for the shared memory; s5: the business process uses the applied shared memory; s6: and the business process acquires the name of the used shared memory module, then acquires the base address of the corresponding shared memory according to the name of the shared memory module, and returns the shared memory.
Description
Technical Field
The invention relates to the field of computer data processing, in particular to a shared memory-based interprocess communication method and system.
Background
Inter-Process Communication (IPC) has various modes, and the following modes are common:
- 1) pipe (pipe): a half-duplex communication mode in which data can only flow in one direction and which can only be used between related processes, where the relationship usually means a parent-child process relationship;
2) named pipe (FIFO): it is also a half-duplex communication mode, but it allows communication between unrelated processes;
- 3) message queue (MessageQueue): the message queue overcomes the shortcomings that a signal can carry only a small amount of information and that a pipe can only carry a plain byte stream with a limited buffer size;
- 4) shared memory (SharedMemory): shared memory is the fastest IPC mode; it is designed specifically to address the low efficiency of the other inter-process communication modes, and is often combined with other communication mechanisms, such as semaphores, to achieve synchronization and communication between processes;
- 5) semaphore (Semaphore): a counter that can be used to control access by multiple processes to a shared resource; it is often used as a locking mechanism to prevent other processes from accessing a shared resource while one process is accessing it, and is mainly used as a means of synchronization between processes and between different threads within the same process;
- 6) socket (Socket): an inter-process communication mechanism that, unlike the other communication mechanisms, can also be used for communication between processes on different hosts;
- 7) signal (signal): a signal is a relatively complex communication method used to notify a receiving process that an event has occurred.
Among the common inter-process communication methods above, the communication overhead of every mode other than shared memory is relatively large when transferring large blocks of data between processes. Shared memory allows multiple processes to access the same block of memory space and is the fastest available form of IPC. A shared memory segment is usually created by one process; other processes map the same segment into their own address spaces, and all of them can then access addresses within the shared memory. If one process writes data into the shared memory, the change is immediately visible to every other process accessing the same shared memory. Using shared memory greatly reduces memory consumption in large-scale data processing and increases the speed of data transfer.
However, shared memory itself does not provide a synchronization mechanism: there is no automatic mechanism preventing another process (e.g., a client process) from starting to read the shared memory before a service process has finished writing to it. In addition, most existing shared memory implementations target the sharing and use of contiguous memory and rely on a semaphore mechanism; implementations based on an inter-process mutual exclusion lock and a linked list are lacking, and the existing implementations are complex and therefore prone to causing program crashes.
Disclosure of Invention
In order to solve the above problems, the present invention provides a method and a system for inter-process communication based on a shared memory, in which the shared memory is divided into a plurality of data blocks, each data block is managed by a bidirectional linked list, and data synchronization of the linked list is realized between processes by using an inter-process mutual exclusion lock, so as to simply and conveniently realize data communication between multiple processes through the shared memory.
In order to achieve the above object, the present invention provides a method for inter-process communication based on a shared memory, which comprises the following steps:
s1: the management process determines the names of all shared memory modules to be created, and the block size and the block number of each shared memory module according to different service modules;
s2: the management process initializes any shared memory module, wherein each shared memory module is a shared memory with a double-linked list structure and comprises two head nodes and a plurality of memory nodes, each head node comprises a process mutual exclusion lock, a head pointer and a tail pointer, and each memory node comprises a predecessor pointer, a successor pointer, a node size and a data block;
s3: repeating S2 until the creation of all the shared memory modules is completed;
s4: the business process acquires the name of the corresponding shared memory module, then acquires the base address of the corresponding shared memory according to the name of the shared memory module, and applies for the shared memory;
s5: the business process uses the applied shared memory;
s6: and the business process acquires the name of the used shared memory module, then acquires the base address of the corresponding shared memory according to the name of the shared memory module, and returns the shared memory.
In an embodiment of the present invention, the specific process of initializing any shared memory module by the management process in S2 is as follows:
s201: the management process acquires a handle corresponding to the shared memory module according to the name of the shared memory module;
s202: calculating the size of an actual physical memory occupied by the shared memory module;
s203: distributing the actual physical memory according to the size of the actual physical memory, completing the address mapping from the shared memory module to the virtual space, and recording and storing the shared memory base address of the shared memory module;
s204: and carrying out initialization setting on the shared memory module.
In an embodiment of the present invention, a specific calculation process of the actual physical memory size in S202 is as follows:
S2021: calculating the total memory size M1 occupied by the effective data of the shared memory module in actual use as M1 = m1 × N1, wherein m1 is the block size of the shared memory module and N1 is the number of blocks of the shared memory module;
S2022: calculating the extra space generated by using a linked list, wherein the memory size M2 actually occupied by the linked list is M2 = P1 × N1 + 2 × P2 + 2 × P3, wherein P1 is the size of an offset variable of the shared memory module, P2 is the size of a head node offset variable of the linked list used, and P3 is the size of a process mutual exclusion lock in the linked list used;
S2023: the actual physical memory size is calculated as M1 + M2.
In an embodiment of the present invention, the specific process of initializing the shared memory module in S204 is as follows:
S2041: defining the head node of the double-linked list shared memory adjacent to the shared memory base address as head1 and the next head node as head2, and defining the memory node adjacent to the two head nodes as node1, followed in order by node2 and node3, until the last memory node nodeN1;
S2402: assigning values to any memory node, specifically: setting the x-th memory node as nodex, the previous memory node of nodex as node(x-1), the next memory node of nodex as node(x+1), and the offset of nodex relative to the shared memory base address as nodex_off; the value of the predecessor pointer variable of nodex is then node(x-1)_off and the value of the successor pointer variable of nodex is node(x+1)_off, wherein node(x-1)_off is the offset of node(x-1) relative to the shared memory base address and node(x+1)_off is the offset of node(x+1) relative to the shared memory base address; at the same time, the offset of the data block of nodex is set to nodex_data;
S2403: repeating S2402 to complete, for all memory nodes from node1 and node2 through nodeN1, the assignment of the offsets, the predecessor pointer variables, the successor pointer variables and the offset of each node's data block, wherein the value of the successor pointer of nodeN1 is separately set to the value of the head pointer of head1;
S2404: assigning values to the head nodes, specifically:
setting the value of the tail pointer of head1 to node1_off and the value of the head pointer of head1 to 0, so that the value of the successor pointer of nodeN1 is also 0;
setting head2 to be used only for initializing the process mutual exclusion lock, and assigning both the head pointer and the tail pointer of head2 the value head2_off, so that the linked list corresponding to head2 is empty;
taking the two head nodes as the head nodes of the overall structure, wherein the value of the head pointer of the head node of the overall structure is 0 and the value of the tail pointer of the head node of the overall structure is node1_off;
S2405: the initialization setting of the shared memory module is completed.
In an embodiment of the present invention, a specific process of the service process applying for the shared memory in S4 is as follows:
S401: locking the double-linked list shared memory through the process mutual exclusion lock of head2, and checking the values of the head pointer and the tail pointer of the head node of the overall structure;
if the value of the head pointer is not equal to the value of the tail pointer, the double-linked list shared memory has at least one available memory node; this memory node is determined to be node1, and the next step is executed to start applying for the memory node;
if the value of the head pointer is equal to the value of the tail pointer, the shared memory has no available memory node; a null pointer is returned and the process mutual exclusion lock is released;
S402: modifying the value of the tail pointer of the head node of the overall structure to node2_off, wherein node2_off is the offset of memory node node2 relative to the shared memory base address;
S403: modifying the value of the predecessor pointer variable of node2 to the value of the tail pointer of the head node of the overall structure, namely node2_off;
S404: releasing the process mutual exclusion lock;
S405: setting the values of the predecessor pointer variable and the successor pointer variable of node1 to -1 respectively;
S406: returning the actual effective address of the shared memory block available to the business process, which is the shared memory base address plus the offset of the data block of node1;
S407: the application for the shared memory is completed, and node1 is the shared memory block allocated to the business process.
In an embodiment of the present invention, a specific process of returning the shared memory by the business process in S6 is as follows:
S601: denoting the returned shared memory block as nodey, and obtaining the values of the predecessor pointer variable and the successor pointer variable of nodey according to the address of nodey;
S602: locking the double-linked list shared memory through the process mutual exclusion lock of head2, and modifying the value of the successor pointer variable of nodey to the value of the tail pointer of the head node of the overall structure;
S603: modifying the value of the predecessor pointer variable of nodey to the offset of head2 relative to the shared memory base address, namely head2_off;
S604: modifying the value of the predecessor pointer variable of node1 to the offset of nodey relative to the shared memory base address, defined as nodey_off;
S605: modifying the value of the tail pointer of the head node of the overall structure to nodey_off;
S606: releasing the process mutual exclusion lock.
In an embodiment of the present invention, the specific calculation method of the nodey_off value is as follows:
according to S2402 and S2403, the offset of the data block of nodey is nodey_data, and the value of nodey_off is obtained by subtracting from nodey_data the space occupied by the predecessor pointer, the successor pointer and the node size variables.
In order to achieve the above object, the present invention further provides an interprocess communication system based on shared memory, which includes:
the initialization module is used for initializing the shared memory, and the required parameters comprise the name of the shared memory module, the block size and the block number of each shared memory module;
the distribution module is used by the business process to apply for and use the shared memory module, and the required parameters comprise the name of the shared memory module;
and the release module is used by the business process to return the shared memory module, and the required parameters comprise the name of the shared memory module and the address corresponding to the shared memory module.
Compared with the prior art, the invention has the following advantages:
1) the data structures of the shared memory implementation system provided by the invention are all located in the shared memory, and the shared memory is realized by allocating it once and then configuring it according to the specific requirements, without allocating memory blocks of the same size multiple times;
2) the shared memory management method provided by the invention requires fewer parameters, and the obtained shared memory can be used directly, like ordinary memory, without additional processing, which facilitates application development and programming of the service modules;
3) the invention uses an inter-process mutual exclusion lock to realize the shared memory linked list, rather than the more complex semaphore mechanism, which is more convenient for application program developers.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flowchart illustrating a management process creating a shared memory according to an embodiment of the present invention;
FIG. 2 is a diagram of a shared memory structure based on a linked list according to an embodiment of the present invention;
FIG. 3 is a system architecture diagram of a shared memory implementation according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a producer-consumer mode shared memory communication among multiple processes according to another embodiment of the present invention.
Description of reference numerals: 10-an initialization module; 20-a distribution module; 30-a release module; 401-producer process; 402-consumer process; 403-shared memory initialization process; 4031-free shared memory linked list; 4032-shared memory linked list after data population.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
An embodiment of the present invention provides a shared memory-based interprocess communication method, which includes the following steps:
fig. 1 is a flowchart of creating a shared memory by a management process according to an embodiment of the present invention. As shown in fig. 1, only one large shared memory needs to be created according to the shared memory requirements of the actual service module, and the division of the shared memory into blocks and the initialization and management of the nodes of the linked list structure that manages the shared memory are then implemented by overlaying different data structures on it. Correspondingly, different service modules can be composed of several different shared memory modules, and each module has different requirements on the shared memory, so different parameters are provided for each to initialize the shared memory corresponding to the different application modules. The following steps 1 to 3 are the specific process of creating the shared memory in this embodiment:
step 1: the management process determines the names of all shared memory modules to be created, and the block size and the block number of each shared memory module according to different service modules;
step 2: the management process initializes any shared memory module, wherein each shared memory module is a shared memory with a double-linked list structure. Fig. 2 is a diagram of a linked-list-based shared memory structure according to an embodiment of the present invention; as shown in fig. 2, it includes two head nodes (head1 and head2) and a plurality of memory nodes (node1, node2, ... node(x-1), node(x+1), ... nodeN1), each head node includes a process mutual exclusion lock (mutex), a head pointer (head) and a tail pointer (tail), and each memory node includes a predecessor pointer (prev), a successor pointer (next), a node size (size) and a data block (data);
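For illustration, the two node layouts described in step 2 could be sketched in C roughly as follows. This is only a sketch: the type and field names (shm_head_t, shm_node_t and so on) are hypothetical, and it assumes that all links are stored as offsets relative to the shared memory base address (baseptr) rather than as raw pointers, which matches the offset variables (node_off, head2_off) used in the later steps.

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical layout of a head node: one process-shared mutex plus two
 * offsets relative to the shared memory base address (baseptr). */
typedef struct {
    pthread_mutex_t mutex;   /* process mutual exclusion lock (process-shared) */
    size_t          head;    /* head pointer: offset relative to baseptr       */
    size_t          tail;    /* tail pointer: offset relative to baseptr       */
} shm_head_t;

/* Hypothetical layout of a memory node: link offsets, node size, data block. */
typedef struct {
    size_t          prev;    /* predecessor pointer: offset of previous node   */
    size_t          next;    /* successor pointer: offset of next node         */
    size_t          size;    /* node size: usable bytes in the data block      */
    unsigned char   data[];  /* data block handed to the business process      */
} shm_node_t;
```

Storing offsets instead of raw pointers matters because each process may map the same shared memory at a different virtual address, yet all processes must interpret the same linked list.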
the specific process of initializing any shared memory module by the management process in step 2 is as follows:
s201: the management process acquires a handle corresponding to the shared memory module according to the name of the shared memory module;
s202: calculating the size of an actual physical memory occupied by the shared memory module;
the specific calculation process of the actual physical memory size in S202 is as follows:
S2021: calculating the total memory size M1 occupied by the effective data of the shared memory module in actual use as M1 = m1 × N1, wherein m1 is the block size of the shared memory module and N1 is the number of blocks of the shared memory module;
S2022: calculating the extra space generated by using a linked list, wherein the memory size M2 actually occupied by the linked list is M2 = P1 × N1 + 2 × P2 + 2 × P3, wherein P1 is the size of an offset variable of the shared memory module, P2 is the size of a head node offset variable of the linked list used, and P3 is the size of a process mutual exclusion lock in the linked list used;
S2023: the actual physical memory size is calculated as M1 + M2.
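As a simple illustration of S2021 to S2023, a C sketch of the size calculation might look like the following; the function and parameter names are hypothetical.

```c
#include <stddef.h>

/* m1 = block size, n1 = number of blocks, p1 = per-node offset overhead,
 * p2 = size of one head-node offset variable, p3 = size of one process mutex. */
static size_t shm_required_size(size_t m1, size_t n1,
                                size_t p1, size_t p2, size_t p3)
{
    size_t m1_total = m1 * n1;                    /* S2021: M1 = m1 x N1        */
    size_t m2_total = p1 * n1 + 2 * p2 + 2 * p3;  /* S2022: linked-list overhead */
    return m1_total + m2_total;                   /* S2023: M1 + M2             */
}
```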
S203: distributing the actual physical memory according to the size of the actual physical memory, completing the address mapping from the shared memory module to the virtual space, and recording and storing the shared memory base address (baseptr) of the shared memory module;
s204: and carrying out initialization setting on the shared memory module.
The specific process of initializing the shared memory module in S204 is as follows:
S2041: defining the head node of the double-linked list shared memory adjacent to the shared memory base address (baseptr) as head1 and the next head node as head2, and defining the memory node adjacent to the two head nodes as node1, namely the first memory node, followed in order by node2 and node3, and so on until the last memory node nodeN1;
S2402: assigning values to any memory node, specifically: setting the x-th memory node as nodex, the previous memory node of nodex as node(x-1), the next memory node of nodex as node(x+1), and the offset of nodex relative to the shared memory base address (baseptr) as nodex_off; the value of the predecessor pointer (prev) variable of nodex is then node(x-1)_off and the value of the successor pointer (next) variable of nodex is node(x+1)_off, wherein node(x-1)_off is the offset of node(x-1) relative to the shared memory base address (baseptr) and node(x+1)_off is the offset of node(x+1) relative to the shared memory base address (baseptr); at the same time, the offset of the data block (data) of nodex is set to nodex_data;
S2403: repeating S2402 to complete, for all memory nodes from node1 and node2 through nodeN1, the assignment of the offsets, the predecessor pointer (prev) variables, the successor pointer (next) variables and the offset of each node's data block (data), wherein the value of the successor pointer (next) of nodeN1 is set to the value of the head pointer (head) of head1; accordingly, the offset of the data block (data) of node1 is node1_data;
S2404: assigning values to the head nodes, specifically:
setting the value of the tail pointer (tail) of head1 to node1_off, namely the offset of the first memory node relative to the shared memory base address (baseptr), and the value of the head pointer (head) of head1 to 0; since head1 is the starting point of the whole linked list structure, the offset of the head pointer (head) of head1 relative to the shared memory base address (baseptr) is 0, so the value of the head pointer (head) of head1 is 0, and the value of the successor pointer (next) of nodeN1 is therefore also 0;
setting head2 to be used only for initializing the process mutual exclusion lock (mutex), and assigning both the head pointer (head) and the tail pointer (tail) of head2 the value head2_off, so that the linked list corresponding to head2 is empty;
taking the two head nodes (head1 and head2) as the head nodes of the overall structure, wherein the value of the head pointer (head) of the head node of the overall structure is 0 and the value of the tail pointer (tail) of the head node of the overall structure is node1_off;
S2405: the initialization setting of the shared memory module is completed.
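A minimal C sketch of the initialization in S2041 to S2405, reusing the hypothetical shm_head_t and shm_node_t layouts sketched earlier, might look like this; the node stride, the use of a POSIX process-shared mutex and the function name are assumptions made for illustration, not the exact implementation.

```c
#include <pthread.h>
#include <stddef.h>

/* node_stride = sizeof(shm_node_t) + block size (the distance between nodes). */
static void shm_init_layout(unsigned char *baseptr, size_t n1, size_t node_stride)
{
    shm_head_t *head1 = (shm_head_t *)baseptr;       /* S2041: head1 at offset 0      */
    shm_head_t *head2 = head1 + 1;                   /* head2 follows head1           */
    size_t head2_off  = sizeof(shm_head_t);
    size_t node1_off  = 2 * sizeof(shm_head_t);      /* node1 follows the two heads   */

    for (size_t i = 0; i < n1; i++) {                /* S2402/S2403: chain the nodes  */
        size_t off = node1_off + i * node_stride;
        shm_node_t *node = (shm_node_t *)(baseptr + off);
        node->prev = (i == 0)      ? 0 : off - node_stride; /* node1 points back to head1 (sketch) */
        node->next = (i == n1 - 1) ? 0 : off + node_stride; /* last next = head1's head = 0        */
        node->size = node_stride - sizeof(shm_node_t);
    }

    head1->head = 0;                                 /* S2404: head1 starts the list  */
    head1->tail = node1_off;
    head2->head = head2->tail = head2_off;           /* head2's own list stays empty  */

    pthread_mutexattr_t attr;                        /* process-shared mutex on head2 */
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&head2->mutex, &attr);
    pthread_mutexattr_destroy(&attr);
    /* head1->mutex is present in the layout but left unused in this sketch. */
}
```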
Step 3: repeating step 2 until the creation of all the shared memory modules is completed;
the above steps complete the allocation and initialization of the shared memory data structure. The initialization is mainly carried out in the shared memory allocation process; the other related processes in actual use obtain the shared memory base address by using the same shared memory name. In this embodiment, since the data in the shared memory has already been initialized, there is no need to initialize the shared memory again.
For the business process, the shared memory structure has already been initialized, so the shared memory can be used simply by completing the mapping of the shared memory base address according to the name of the corresponding shared memory module; the data structure actually used is consistent with the data structure used by the management process, namely the double-linked list structure, and the actual physical memory corresponding to the same module name is the same memory. When using the shared memory, the business process calls the shared memory application and return routines with the corresponding business name to complete the application, use and return of the shared memory. The following steps 4 to 6 are the specific process of applying for, using and returning the shared memory in this embodiment.
Step 4: the business process acquires the name of the corresponding shared memory module, then acquires the base address (baseptr) of the corresponding shared memory according to the name of the shared memory module, and applies for the shared memory; in this embodiment, according to the setting in step S2404, the shared memory base address (baseptr) is the position of the head nodes (head1 and head2) of the linked list, so the head nodes (head1 and head2) of the shared memory linked list are obtained.
The specific process of applying for the shared memory by the service process in step 4 is as follows:
S401: locking the double-linked list shared memory through the process mutual exclusion lock (mutex) of head2, and checking the values of the head pointer (head) and the tail pointer (tail) of the head node of the overall structure;
if the value of the head pointer (head) is not equal to the value of the tail pointer (tail), the double-linked list shared memory has at least one available memory node; this memory node is determined to be node1, and the next step is executed to start applying for the memory node;
if the value of the head pointer (head) is equal to the value of the tail pointer (tail), a null pointer is returned, namely there is no available memory node, and the process mutual exclusion lock is released;
S402: modifying the value of the tail pointer (tail) of the head node of the overall structure to node2_off, wherein node2_off is the offset of memory node node2 relative to the shared memory base address (baseptr);
S403: modifying the value of the predecessor pointer (prev) variable of node2 to the value of the tail pointer (tail) of the head node of the overall structure, namely node2_off;
S404: releasing the process mutual exclusion lock;
S405: setting the values of the predecessor pointer (prev) and successor pointer (next) variables of node1 to -1 respectively;
S406: returning the actual effective address of the shared memory block available to the business process, which is the shared memory base address (baseptr) plus the offset of the data block (data) of node1; in this embodiment, as described in step S2403, the actual effective address of the shared memory block (i.e. the shared memory block of node1) is baseptr + node1_data;
S407: the application for the shared memory is completed, and node1 is the shared memory block allocated to the business process.
In order to keep the method of obtaining a shared memory block simple, in this embodiment only the actual effective address is returned in S407, and no additional data structure is returned, so that the user can use it as conveniently as an address returned by malloc in the C language.
In another embodiment of the present invention, according to the initial assignment the successor pointer (next) variable of node1 is node2_off. If node1 is the only available memory node in the shared memory linked list structure, the successor pointer (next) of node1 is 0, that is, node2_off is 0. After the value of the tail pointer (tail) of the head node of the overall structure is modified to node2_off in S402, the value of the tail pointer (tail) and the value of the head pointer (head) of the head node of the overall structure are both 0, that is, the value of the head pointer (head) equals the value of the tail pointer (tail). If another business process applies for the shared memory at this moment, then according to the judgment in S401 the shared memory has no available memory node and a null pointer is returned.
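Continuing the same hypothetical sketch, the application path of S401 to S407, including the empty-list case discussed above, might be written as follows; error handling and alignment concerns are omitted, and the names remain illustrative.

```c
/* Sketch of S401-S407: take the first available node off the list and return
 * the address of its data block, or NULL when no node is available. */
static void *shm_alloc(unsigned char *baseptr)
{
    shm_head_t *overall = (shm_head_t *)baseptr;   /* head node of the overall structure */
    shm_head_t *head2   = overall + 1;
    void *data = NULL;

    pthread_mutex_lock(&head2->mutex);             /* S401: lock the linked list         */
    if (overall->head != overall->tail) {          /* at least one available node        */
        shm_node_t *node1 = (shm_node_t *)(baseptr + overall->tail);
        size_t node2_off  = node1->next;

        overall->tail = node2_off;                 /* S402: tail now points at node2     */
        if (node2_off != 0) {                      /* guard added for the empty case     */
            shm_node_t *node2 = (shm_node_t *)(baseptr + node2_off);
            node2->prev = overall->tail;           /* S403: node2's predecessor link     */
        }
        pthread_mutex_unlock(&head2->mutex);       /* S404: release the lock             */

        node1->prev = node1->next = (size_t)-1;    /* S405: mark node1 as in use         */
        data = node1->data;                        /* S406: baseptr + node1_data         */
    } else {
        pthread_mutex_unlock(&head2->mutex);       /* no available node: return NULL     */
    }
    return data;                                   /* S407 */
}
```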
Step 5: the business process uses the applied shared memory;
Step 6: the business process acquires the name of the used shared memory module, then acquires the base address (baseptr) of the corresponding shared memory according to the name of the shared memory module, and returns the shared memory. In this embodiment, according to the setting in step S2404, the shared memory base address (baseptr) is the position of the head nodes (head1 and head2) of the linked list, so the head nodes (head1 and head2) of the shared memory linked list are obtained.
The specific process of returning the shared memory by the service process in step 6 is as follows:
S601: denoting the returned shared memory block as nodey, and obtaining the values of the predecessor pointer (prev) variable and the successor pointer (next) variable of nodey according to the address of nodey;
in this embodiment, the offset of the actual address of nodey relative to the base address (baseptr) is nodey_data, that is, the difference between the actual address of the data block (data) of nodey and the base address (baseptr); the prev, next and size variables of nodey lie immediately before its data block, so their values can be read by stepping back from nodey_data by the memory they occupy, which is also how nodey_off is obtained below.
S602: locking the double-linked list shared memory through the process mutual exclusion lock (mutex) of head2, and modifying the value of the successor pointer (next) variable of nodey to the value of the tail pointer (tail) of the head node of the overall structure;
S603: modifying the value of the predecessor pointer (prev) variable of nodey to the offset of head2 relative to the shared memory base address (baseptr), namely head2_off;
S604: modifying the value of the predecessor pointer (prev) variable of node1 to the offset of nodey relative to the shared memory base address (baseptr), defined as nodey_off;
S605: modifying the value of the tail pointer (tail) of the head node of the overall structure to nodey_off;
S606: releasing the process mutual exclusion lock. At this point, nodey has been returned to the position between the head node of the linked list and node1.
The specific calculation method of the nodey_off value is as follows:
when the memory block is returned, according to S406 the actual input address of the returned shared memory block nodey is the shared memory base address (baseptr) plus the offset of the data block (data) of nodey, namely baseptr + nodey_data; for convenience of calculation, denote this actual input address as nodey_data_ptr, so the offset of the data block (data) of nodey can be calculated as nodey_data = nodey_data_ptr - baseptr;
referring again to fig. 2, the offset of the data block (data) within a node is a fixed value, so the value of nodey_off is calculated as nodey_data minus the space occupied by the predecessor pointer (prev), successor pointer (next) and node size (size) variables.
In this embodiment, when returning the shared memory to the corresponding memory linked list, only the address of the shared memory block needs to be passed in, which keeps the usage consistent with the free function provided by the C language.
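A matching sketch of the return path of S601 to S606, including the nodey_off arithmetic described above, might look like this; it reuses the hypothetical types from the earlier sketches and mirrors the free-style calling convention mentioned in the preceding paragraph.

```c
/* Sketch of S601-S606: recover the node header from the data-block address and
 * push the node back to the front of the list under the process-shared lock. */
static void shm_free(unsigned char *baseptr, void *data_ptr)
{
    shm_head_t *overall = (shm_head_t *)baseptr;
    shm_head_t *head2   = overall + 1;
    size_t head2_off    = sizeof(shm_head_t);

    size_t nodey_data = (size_t)((unsigned char *)data_ptr - baseptr);
    size_t nodey_off  = nodey_data - offsetof(shm_node_t, data);  /* minus prev+next+size */
    shm_node_t *nodey = (shm_node_t *)(baseptr + nodey_off);

    pthread_mutex_lock(&head2->mutex);             /* S602: lock the linked list         */
    nodey->next = overall->tail;                   /* S602: successor = current tail     */
    nodey->prev = head2_off;                       /* S603: predecessor = head2_off      */
    if (overall->tail != 0) {                      /* S604: old front node points back   */
        shm_node_t *node1 = (shm_node_t *)(baseptr + overall->tail);
        node1->prev = nodey_off;
    }
    overall->tail = nodey_off;                     /* S605: nodey is the new list front  */
    pthread_mutex_unlock(&head2->mutex);           /* S606: release the lock             */
}
```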
Fig. 3 is a system architecture diagram implemented by sharing a memory according to an embodiment of the present invention, and as shown in fig. 3, an embodiment of the present invention provides an interprocess communication system based on a shared memory, which includes:
the initialization module (10) is used for initializing the shared memory, and the required parameters comprise the name of the shared memory module, the block size and the block number of each shared memory module; to ensure efficient use of the initialization step, the name of the shared memory module, the block size and the number of blocks of each shared memory module need to be transmitted when the initialization module is called.
The distribution module (20) is used for applying and using the shared memory module in the business process, and the required parameters comprise the name of the shared memory module;
and the release module (30) is used for returning the shared memory module by the business process, and the required parameters comprise the name of the shared memory module and the address corresponding to the shared memory module.
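Expressed as C declarations, the three modules could be exposed to business processes through an interface along the following lines; these function names and signatures are hypothetical and only illustrate the parameters listed above.

```c
#include <stddef.h>

/* Initialization module (10): called by the management process; needs the
 * module name, the block size and the number of blocks. */
int shm_module_init(const char *module_name, size_t block_size, size_t block_count);

/* Allocation module (20): called by a business process; needs only the module
 * name and returns the usable block address (or NULL when none is available). */
void *shm_module_alloc(const char *module_name);

/* Release module (30): called by a business process; needs the module name
 * and the address of the block being returned. */
void shm_module_free(const char *module_name, void *block_addr);
```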
Fig. 4 is a schematic diagram of a multi-process shared memory communication in a producer-consumer mode according to another embodiment of the present invention, and in order to better describe the actual use method of the present invention between multiple processes, as shown in fig. 4, how to implement data communication between multiple processes by way of inter-process communication in a producer-consumer mode is described. In yet another embodiment of the present invention, there are a plurality of producer processes (401) (producer process 1, … … producer process N) and a plurality of consumer processes (402) (consumer process 1, … … consumer process N), and the specific implementation steps of the producer process (401) and the consumer process (402) include:
the first step is as follows: the shared memory initialization process (403) completes the initialization of the shared memory in the same manner as the foregoing steps 1 to 3, and the specific initialization process is not described herein again;
the second step: any producer process (401), for example producer process 1, obtains the actual effective address (data address) of the first idle shared memory block (for example, node1) from the idle shared memory linked list (4031) list1 by the same process as S401 to S407; the specific obtaining process is not described again here;
the third step: the same producer process (401), i.e. producer process 1, fills data into the actual effective address (i.e. the data address of node1) of the obtained shared memory block, and then returns the memory block (node1) filled with data to the data-filled shared memory linked list (4032) list2 by the same method as S601 to S606; the specific returning process is not described again here;
the fourth step: any consumer process (402), for example consumer process 1, obtains the actual effective address (data address) of the first shared memory block (for example, node1) from the data-filled shared memory linked list (4032) list2 by the same process as S401 to S407; the specific obtaining process is not described again here;
the fifth step: the same consumer process (402), i.e. consumer process 1, processes (for example, reads out) the data filled into the actual effective address (i.e. the data address of node1) of the obtained shared memory block, and then returns the shared memory data block (node1) to the idle shared memory linked list (4031) list1 by the same method as S601 to S606.
In the actual use of this embodiment, the foregoing steps 1 to 3 can only be executed in the shared memory initialization process, while the shared memory application (the foregoing S401 to S407) and the shared memory release (the foregoing S601 to S606) can be performed both in the shared memory initialization process and in other processes that use the shared memory. Through the second step to the fourth step, cyclic use of the shared memory by multiple processes can be realized, and multiple producer processes (401) and multiple consumer processes (402) are supported simultaneously; fewer parameters are required during use, the shared memory can be used in the same way as ordinary memory, no additional data structure is needed to store information such as the memory size and memory offset, and the usage is simple.
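Under the same assumptions, the producer side (second and third steps) and the consumer side (fourth and fifth steps) might be sketched with the hypothetical interface above; the list names "list1" and "list2" and the fixed-size memcpy are illustrative only.

```c
#include <string.h>

/* Hypothetical interface from the sketch above. */
void *shm_module_alloc(const char *module_name);
void  shm_module_free(const char *module_name, void *block_addr);

/* Producer: take an idle block from list1, fill it, hand it over via list2. */
void producer_publish(const void *payload, size_t len)
{
    void *block = shm_module_alloc("list1");
    if (block == NULL)
        return;                         /* no idle block available at the moment */
    memcpy(block, payload, len);        /* fill the data block                   */
    shm_module_free("list2", block);    /* return it to the data-filled list     */
}

/* Consumer: take a filled block from list2, read it, recycle it to list1. */
void consumer_consume(void *out, size_t len)
{
    void *block = shm_module_alloc("list2");
    if (block == NULL)
        return;                         /* nothing to consume yet                */
    memcpy(out, block, len);            /* read the data out of the block        */
    shm_module_free("list1", block);    /* recycle the block to the idle list    */
}
```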
The data structures of the shared memory implementation system provided by the invention are all located in the shared memory, and the shared memory is realized by allocating it once and then configuring it according to the specific requirements, without allocating memory blocks of the same size multiple times. The shared memory management method provided by the invention requires fewer parameters, and the obtained shared memory can be used directly, like ordinary memory, without additional processing, which facilitates application development and programming of the service modules. The invention uses an inter-process mutual exclusion lock to realize the shared memory linked list, rather than the more complex semaphore mechanism, which is more convenient for application program developers.
Those of ordinary skill in the art will understand that: the figures are merely schematic representations of one embodiment, and the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
Those of ordinary skill in the art will understand that: modules in the devices in the embodiments may be distributed in the devices in the embodiments according to the description of the embodiments, or may be located in one or more devices different from the embodiments with corresponding changes. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (8)
1. A method for interprocess communication based on shared memory is characterized by comprising the following steps:
s1: the management process determines the names of all shared memory modules to be created, and the block size and the block number of each shared memory module according to different service modules;
s2: the management process initializes any shared memory module, wherein each shared memory module is a shared memory with a double-linked list structure and comprises two head nodes and a plurality of memory nodes, each head node comprises a process mutual exclusion lock, a head pointer and a tail pointer, and each memory node comprises a predecessor pointer, a successor pointer, a node size and a data block;
s3: repeating S2 until the creation of all the shared memory modules is completed;
s4: the business process acquires the name of the corresponding shared memory module, then acquires the base address of the corresponding shared memory according to the name of the shared memory module, and applies for the shared memory;
s5: the business process uses the applied shared memory;
s6: and the business process acquires the name of the used shared memory module, then acquires the base address of the corresponding shared memory according to the name of the shared memory module, and returns the shared memory.
2. The method according to claim 1, wherein the specific process of initializing any shared memory module by the management process in S2 is as follows:
s201: the management process acquires a handle corresponding to the shared memory module according to the name of the shared memory module;
s202: calculating the size of an actual physical memory occupied by the shared memory module;
s203: distributing the actual physical memory according to the size of the actual physical memory, completing the address mapping from the shared memory module to the virtual space, and recording and storing the shared memory base address of the shared memory module;
s204: and carrying out initialization setting on the shared memory module.
3. The method according to claim 2, wherein the specific calculation process of the actual physical memory size in S202 is as follows:
S2021: calculating the total memory size M1 occupied by the effective data of the shared memory module in actual use as M1 = m1 × N1, wherein m1 is the block size of the shared memory module and N1 is the number of blocks of the shared memory module;
S2022: calculating the extra space generated by using a linked list, wherein the memory size M2 actually occupied by the linked list is M2 = P1 × N1 + 2 × P2 + 2 × P3, wherein P1 is the size of an offset variable of the shared memory module, P2 is the size of a head node offset variable of the linked list used, and P3 is the size of a process mutual exclusion lock in the linked list used;
S2023: the actual physical memory size is calculated as M1 + M2.
4. The method according to claim 2, wherein the specific process of initializing and setting the shared memory module in S204 is as follows:
S2041: defining the head node of the double-linked list shared memory adjacent to the shared memory base address as head1 and the next head node as head2, and defining the memory node adjacent to the two head nodes as node1, followed in order by node2 and node3, until the last memory node nodeN1;
S2402: assigning values to any memory node, specifically: setting the x-th memory node as nodex, the previous memory node of nodex as node(x-1), the next memory node of nodex as node(x+1), and the offset of nodex relative to the shared memory base address as nodex_off; the value of the predecessor pointer variable of nodex is then node(x-1)_off and the value of the successor pointer variable of nodex is node(x+1)_off, wherein node(x-1)_off is the offset of node(x-1) relative to the shared memory base address and node(x+1)_off is the offset of node(x+1) relative to the shared memory base address; at the same time, the offset of the data block of nodex is set to nodex_data;
S2403: repeating S2402 to complete, for all memory nodes from node1 and node2 through nodeN1, the assignment of the offsets, the predecessor pointer variables, the successor pointer variables and the offset of each node's data block, wherein the value of the successor pointer of nodeN1 is separately set to the value of the head pointer of head1;
S2404: assigning values to the head nodes, specifically:
setting the value of the tail pointer of head1 to node1_off and the value of the head pointer of head1 to 0, so that the value of the successor pointer of nodeN1 is also 0;
setting head2 to be used only for initializing the process mutual exclusion lock, and assigning both the head pointer and the tail pointer of head2 the value head2_off, so that the linked list corresponding to head2 is empty;
taking the two head nodes as the head nodes of the overall structure, wherein the value of the head pointer of the head node of the overall structure is 0 and the value of the tail pointer of the head node of the overall structure is node1_off;
S2405: the initialization setting of the shared memory module is completed.
5. The method according to claim 1, wherein the specific process of the business process applying for the shared memory in S4 is as follows:
S401: locking the double-linked list shared memory through the process mutual exclusion lock of head2, and checking the values of the head pointer and the tail pointer of the head node of the overall structure;
if the value of the head pointer is not equal to the value of the tail pointer, the double-linked list shared memory has at least one available memory node; this memory node is determined to be node1, and the next step is executed to start applying for the memory node;
if the value of the head pointer is equal to the value of the tail pointer, the shared memory has no available memory node; a null pointer is returned and the process mutual exclusion lock is released;
S402: modifying the value of the tail pointer of the head node of the overall structure to node2_off, wherein node2_off is the offset of memory node node2 relative to the shared memory base address;
S403: modifying the value of the predecessor pointer variable of node2 to the value of the tail pointer of the head node of the overall structure, namely node2_off;
S404: releasing the process mutual exclusion lock;
S405: setting the values of the predecessor pointer variable and the successor pointer variable of node1 to -1 respectively;
S406: returning the actual effective address of the shared memory block available to the business process, which is the shared memory base address plus the offset of the data block of node1;
S407: the application for the shared memory is completed, and node1 is the shared memory block allocated to the business process.
6. The method according to claim 1, wherein the specific process of returning the shared memory by the business process in S6 is as follows:
S601: denoting the returned shared memory block as nodey, and obtaining the values of the predecessor pointer variable and the successor pointer variable of nodey according to the address of nodey;
S602: locking the double-linked list shared memory through the process mutual exclusion lock of head2, and modifying the value of the successor pointer variable of nodey to the value of the tail pointer of the head node of the overall structure;
S603: modifying the value of the predecessor pointer variable of nodey to the offset of head2 relative to the shared memory base address, namely head2_off;
S604: modifying the value of the predecessor pointer variable of node1 to the offset of nodey relative to the shared memory base address, defined as nodey_off;
S605: modifying the value of the tail pointer of the head node of the overall structure to nodey_off;
S606: releasing the process mutual exclusion lock.
7. The method of claim 6, wherein the value of nodey_off is calculated as follows:
according to S2402 and S2403, the offset of the data block of nodey is nodey_data, and the value of nodey_off is obtained by subtracting from nodey_data the space occupied by the predecessor pointer, the successor pointer and the node size variables.
8. An interprocess communication system based on shared memory, which is characterized by comprising:
the initialization module is used for initializing the shared memory, and the required parameters comprise the name of the shared memory module, the block size and the block number of each shared memory module;
the distribution module is used by the business process to apply for and use the shared memory module, and the required parameters comprise the name of the shared memory module;
and the release module is used by the business process to return the shared memory module, and the required parameters comprise the name of the shared memory module and the address corresponding to the shared memory module.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110196496.4A CN112860458B (en) | 2021-02-22 | 2021-02-22 | Inter-process communication method and system based on shared memory |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110196496.4A CN112860458B (en) | 2021-02-22 | 2021-02-22 | Inter-process communication method and system based on shared memory |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112860458A true CN112860458A (en) | 2021-05-28 |
CN112860458B CN112860458B (en) | 2022-10-25 |
Family
ID=75988414
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110196496.4A Active CN112860458B (en) | 2021-02-22 | 2021-02-22 | Inter-process communication method and system based on shared memory |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112860458B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114911632A (en) * | 2022-07-11 | 2022-08-16 | 北京融为科技有限公司 | Method and system for controlling inter-process communication |
CN115460054A (en) * | 2022-08-26 | 2022-12-09 | 深圳技威时代科技有限公司 | Cloud service management and release method and system based on shared memory |
CN116055664A (en) * | 2023-03-28 | 2023-05-02 | 北京睿芯通量科技发展有限公司 | Method, device and storage medium for sharing memory for video processing process |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101470632A (en) * | 2007-12-24 | 2009-07-01 | 华为软件技术有限公司 | Sharing internal memory management method and apparatus |
US7844973B1 (en) * | 2004-12-09 | 2010-11-30 | Oracle America, Inc. | Methods and apparatus providing non-blocking access to a resource |
CN103514053A (en) * | 2013-09-22 | 2014-01-15 | 中国科学院信息工程研究所 | Shared-memory-based method for conducting communication among multiple processes |
CN111427707A (en) * | 2020-03-25 | 2020-07-17 | 北京左江科技股份有限公司 | IPC communication method based on shared memory pool |
-
2021
- 2021-02-22 CN CN202110196496.4A patent/CN112860458B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7844973B1 (en) * | 2004-12-09 | 2010-11-30 | Oracle America, Inc. | Methods and apparatus providing non-blocking access to a resource |
CN101470632A (en) * | 2007-12-24 | 2009-07-01 | 华为软件技术有限公司 | Sharing internal memory management method and apparatus |
CN103514053A (en) * | 2013-09-22 | 2014-01-15 | 中国科学院信息工程研究所 | Shared-memory-based method for conducting communication among multiple processes |
CN111427707A (en) * | 2020-03-25 | 2020-07-17 | 北京左江科技股份有限公司 | IPC communication method based on shared memory pool |
Non-Patent Citations (1)
Title |
---|
- Tian Lu (田鲁): "Analysis and Optimization of the IPC Mechanism and Media in the Android Operating System", Master of Engineering Thesis, University of Electronic Science and Technology of China *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114911632A (en) * | 2022-07-11 | 2022-08-16 | 北京融为科技有限公司 | Method and system for controlling inter-process communication |
CN114911632B (en) * | 2022-07-11 | 2022-09-13 | 北京融为科技有限公司 | Method and system for controlling interprocess communication |
CN115460054A (en) * | 2022-08-26 | 2022-12-09 | 深圳技威时代科技有限公司 | Cloud service management and release method and system based on shared memory |
CN115460054B (en) * | 2022-08-26 | 2024-04-19 | 深圳技威时代科技有限公司 | Cloud service management and release method and system based on shared memory |
CN116055664A (en) * | 2023-03-28 | 2023-05-02 | 北京睿芯通量科技发展有限公司 | Method, device and storage medium for sharing memory for video processing process |
CN116055664B (en) * | 2023-03-28 | 2023-06-02 | 北京睿芯通量科技发展有限公司 | Method, device and storage medium for sharing memory for video processing process |
Also Published As
Publication number | Publication date |
---|---|
CN112860458B (en) | 2022-10-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112860458B (en) | Inter-process communication method and system based on shared memory | |
US11487698B2 (en) | Parameter server and method for sharing distributed deep learning parameter using the same | |
US11010681B2 (en) | Distributed computing system, and data transmission method and apparatus in distributed computing system | |
WO2021051914A1 (en) | Gpu resource-based data processing method and system, and electronic device | |
US7124255B2 (en) | Message based inter-process for high volume data | |
Grünewald et al. | The GASPI API specification and its implementation GPI 2.0 | |
US20120324170A1 (en) | Read-Copy Update Implementation For Non-Cache-Coherent Systems | |
CN110888727A (en) | Method, device and storage medium for realizing concurrent lock-free queue | |
WO2020115330A1 (en) | Computing resource allocation | |
CN113535363A (en) | Task calling method and device, electronic equipment and storage medium | |
US20170344398A1 (en) | Accelerator control device, accelerator control method, and program storage medium | |
Simmendinger et al. | The GASPI API: A failure tolerant PGAS API for asynchronous dataflow on heterogeneous architectures | |
CN115525417A (en) | Data communication method, communication system, and computer-readable storage medium | |
CN116893899A (en) | Resource allocation method, device, computer equipment and storage medium | |
CN107451070B (en) | Data processing method and server | |
CN112486702B (en) | Global message queue implementation method based on multi-core multi-processor parallel system | |
CN116680042A (en) | Image processing method and related device and system | |
CN112368686A (en) | Heterogeneous computing system and memory management method | |
CN113326149A (en) | Inter-core communication method and device of heterogeneous multi-core system | |
CN104572483A (en) | Device and method for management of dynamic memory | |
CN111797497A (en) | Communication method and system for electromagnetic transient parallel simulation | |
EP4432087A1 (en) | Lock management method, apparatus and system | |
CN116775266A (en) | Techniques for scalable load balancing of thread groups in a processor | |
CN113282382B (en) | Task processing method, device, computer equipment and storage medium | |
CN112596889B (en) | Method for managing chained memory based on state machine |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CP01 | Change in the name or title of a patent holder |
Address after: Room 711c, 7 / F, block a, building 1, yard 19, Ronghua Middle Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing 102600 Patentee after: Beijing Zhongke Flux Technology Co.,Ltd. Address before: Room 711c, 7 / F, block a, building 1, yard 19, Ronghua Middle Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing 102600 Patentee before: Beijing Ruixin high throughput technology Co.,Ltd. |
|
CP01 | Change in the name or title of a patent holder |