CN110618883B - Method, device, equipment and storage medium for sharing memory linked list

Info

Publication number
CN110618883B
Authority
CN
China
Prior art keywords
shared memory
linked list
data
address
address offset
Prior art date
Legal status
Active
Application number
CN201910920795.0A
Other languages
Chinese (zh)
Other versions
CN110618883A (en)
Inventor
Wei Yang (魏阳)
Current Assignee
Maipu Communication Technology Co Ltd
Original Assignee
Maipu Communication Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Maipu Communication Technology Co Ltd filed Critical Maipu Communication Technology Co Ltd
Priority to CN201910920795.0A priority Critical patent/CN110618883B/en
Publication of CN110618883A publication Critical patent/CN110618883A/en
Application granted granted Critical
Publication of CN110618883B publication Critical patent/CN110618883B/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 - Details of database functions independent of the retrieved data types
    • G06F16/901 - Indexing; Data structures therefor; Storage structures
    • G06F16/9024 - Graphs; Linked lists
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/54 - Interprogram communication
    • G06F9/544 - Buffers; Shared memory; Pipes

Abstract

The application provides a method, a device, equipment and a storage medium for sharing a memory linked list. The method comprises the following steps: a first process creates a shared memory, wherein the shared memory comprises a shared memory management block and a shared memory data block, and the first address of the shared memory is used as the initial address of the shared memory management block; the first process configures a data structure of a head of a user data linked list based on a data structure of a predefined address offset linked list, the data structure of the address offset linked list including an address offset parameter pointing to a next node, the data structure of the head of the linked list including an address offset parameter pointing to the next node; the first process maps the data structure of the chaining header to the shared memory management block. The method and the device can improve the real-time performance and reliability of data access among multiple processes.

Description

Method, device, equipment and storage medium for sharing memory linked list
Technical Field
The present application relates to the technical field of data sharing among multiple processes, and in particular, to a method, an apparatus, a device, and a storage medium for sharing a memory linked list.
Background
In a software system, shared memory is the most efficient form of inter-process communication, designed to address the low efficiency of other communication mechanisms. However, most existing shared memory implementations share and use contiguous memory; implementations and applications of shared memory linked lists are scarce, so the convenience and communication efficiency of processes accessing the same data chain are not fully exploited.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method, an apparatus, a device, and a storage medium for sharing a memory linked list, so as to improve real-time performance and reliability of data access between multiple processes.
In a first aspect, a method for sharing a memory linked list provided in an embodiment of the present application includes: a first process creates a shared memory, wherein the shared memory comprises a shared memory management block and a shared memory data block, and the first address of the shared memory is used as the initial address of the shared memory management block; the first process configures a data structure of a head of a user data linked list based on a data structure of a predefined address offset linked list, the data structure of the address offset linked list including an address offset parameter pointing to a next node, the data structure of the head of the linked list including an address offset parameter pointing to the next node; the first process maps the data structure of the linked list head to the shared memory management block, so that a second process maps the shared memory of the first process into the process of the second process, and the second process finds the linked list head in the shared memory management block from the shared memory of the second process; and accessing data in a shared memory linked list corresponding to the user data linked list in the first process through an address offset parameter pointing to a next node in the linked list header and a first address existing in the shared memory of the first process mapped in the shared memory of the second process.
In the implementation process, a shared memory is created in the first process; the first process configures the data structure of the head of a user data linked list based on the data structure of a predefined address offset linked list, where the data structure of the address offset linked list includes an address offset parameter pointing to the next node, and the data structure of the linked list head likewise includes an address offset parameter pointing to the next node. The first process maps the data structure of the linked list head into the shared memory management block, so that after the second process maps the shared memory of the first process into itself, it can find the linked list head in the shared memory management block from its own shared memory, and can access the data in the shared memory linked list corresponding to the user data linked list of the first process through the address offset parameter pointing to the next node in the linked list head and the first address of the first process's shared memory as mapped in the shared memory of the second process. This effectively improves the real-time performance and reliability of data access among multiple processes, improves system performance, and brings more value in terms of development convenience.
With reference to the first aspect, an embodiment of the present application provides a first possible implementation manner of the first aspect, where the method further includes: the first process divides the shared memory data block into a plurality of data blocks with different sizes, organizes the data blocks with the same size in a pointer linked list manner, and generates a plurality of memory linked lists with different sizes, wherein each data block comprises a starting address and an ending address; the first process applies for data blocks of corresponding sizes from the plurality of memory linked lists with different sizes according to the size of the user data; the first process determines the address offset of each data block relative to the first address of the shared memory according to the starting address of the applied data block and the first address of the shared memory; and the first process links the user data to the linked list head to generate a user data chain, wherein the address offset parameter pointing to the next node at each node on the user data chain is set according to the address offset.
In the implementation process, the first process creates the user data linked list in an address-offset-based manner, so that when the user data is shared, the second process can accurately access the data of the first process in the same address-offset-based manner, thereby implementing access to the shared memory linked list among processes. This effectively improves the communication efficiency of the whole system, improves system performance, and brings more value in terms of development convenience.
In a second aspect, a method for sharing a memory linked list provided in an embodiment of the present application includes: the second process maps the shared memory of the first process into the shared memory of the second process; the second process finds the linked list head of the user data linked list in the first process from the shared memory of the second process; and the second process accesses data in the shared memory linked list corresponding to the user data linked list in the first process according to the address offset parameter pointing to the next node in the linked list head and the first address of the shared memory of the first process as mapped in the shared memory of the second process.
In the implementation process, the second process maps the shared memory of the first process into its own shared memory, so that it can find the head of the user data linked list of the first process from its own shared memory; the second process can then rapidly access the data in the shared memory linked list corresponding to the user data linked list of the first process according to the address offset parameter pointing to the next node in the linked list head and the first address of the first process's shared memory as mapped in the shared memory of the second process. This realizes the implementation and access of a shared memory linked list among multiple processes, effectively improves the real-time performance and reliability of data access among multiple processes, improves system performance, and brings more value in terms of development convenience.
With reference to the second aspect, an embodiment of the present application provides a first possible implementation manner of the second aspect, where the finding, by the second process, the list head of the user data list in the first process from the shared memory of the second process includes: and the second process finds the linked list head of the user data linked list in the first process from the shared memory management block mapped by the first process in the shared memory of the second process.
In a third aspect, an apparatus for sharing a memory linked list provided in an embodiment of the present application includes: the system comprises a creating module, a first processing module and a second processing module, wherein the creating module is used for creating a shared memory through a first process, the shared memory comprises a shared memory management block and a shared memory data block, and the first address of the shared memory is used as the initial address of the shared memory management block; a configuration module configured to configure, by the first process, a data structure of a head of a user data linked list based on a data structure of a predefined address offset linked list, the data structure of the address offset linked list including an address offset parameter pointing to a next node, the data structure of the head of the linked list including an address offset parameter pointing to the next node; a first processing module, configured to map a data structure of the linked list header to the shared memory management block through the first process, so that a second process maps the shared memory of the first process into its own process, so that the second process finds the linked list header in the shared memory management block from its own shared memory; and accessing data in a shared memory linked list corresponding to the user data linked list in the first process through an address offset parameter pointing to a next node in the linked list header and a first address in the shared memory of the first process mapped in the shared memory of the second process.
With reference to the third aspect, an embodiment of the present application provides a first possible implementation manner of the third aspect, where the apparatus further includes: a second processing module, configured to divide, through the first process, the shared memory data block into a plurality of data blocks with different sizes, organize the data blocks with the same size in a pointer linked list manner, and generate a plurality of memory linked lists with different sizes, wherein each data block comprises a starting address and an ending address; a memory application module, configured to apply, through the first process, for data blocks of corresponding sizes from the plurality of memory linked lists with different sizes according to the size of the user data; a third processing module, configured to determine, through the first process, the address offset of each data block relative to the first address of the shared memory according to the starting address of the applied data block and the first address of the shared memory; and a data chain generation module, configured to link, through the first process, the user data to the linked list head and generate a user data chain, wherein the address offset parameter pointing to the next node at each node in the user data chain is set according to the address offset.
In a fourth aspect, an apparatus for sharing a memory linked list according to an embodiment of the present application includes: a mapping module, configured to map, through the second process, the shared memory of the first process into the shared memory of the second process; a query module, configured to find the linked list head of the user data linked list in the first process from the shared memory of the second process; and an access module, configured to access, through the second process, the data in the shared memory linked list corresponding to the user data linked list in the first process according to the address offset parameter pointing to the next node in the linked list head and the first address of the shared memory of the first process as mapped in the shared memory of the second process.
With reference to the fourth aspect, an embodiment of the present application provides a first possible implementation manner of the fourth aspect, where the query module is further configured to: and the second process finds the linked list head of the user data linked list in the first process from the shared memory management block mapped by the first process in the shared memory of the second process.
In a fifth aspect, an electronic device provided in an embodiment of the present application includes: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps for sharing the linked list of memories according to any one of the first aspect when executing the computer program; alternatively, the processor implements the steps for sharing the memory linked list according to the second aspect when executing the computer program.
In a sixth aspect, a storage medium provided in an embodiment of the present application is configured to store instructions that, when executed on a computer, cause the computer to perform the steps for sharing a memory linked list according to any one of the first aspect; alternatively, the instructions, when executed on a computer, cause the computer to perform the steps for sharing a memory linked list as described in the second aspect.
In a seventh aspect, a computer program product provided in an embodiment of the present application, when running on a computer, causes the computer to execute the steps for sharing a memory linked list according to any one of the first aspect; alternatively, the computer is caused to perform the steps for sharing a memory linked list according to the second aspect.
Additional features and advantages of the disclosure will be set forth in the description which follows, or in part may be learned by the practice of the above-described techniques of the disclosure, or may be learned by practice of the disclosure.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is a flowchart of a method for sharing a memory linked list according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating access between multiple processes according to an embodiment of the present disclosure;
fig. 3 is a flowchart of a method for sharing a memory linked list according to an embodiment of the present disclosure;
fig. 4 is a flowchart of another method for sharing a linked list of memories according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an apparatus for sharing a memory linked list according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an apparatus for sharing a memory linked list according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Referring to fig. 1, fig. 1 is a flowchart of a method for sharing a memory linked list according to an embodiment of the present disclosure, it should be understood that the method may be executed by an apparatus for sharing a memory linked list shown in fig. 5 below, where the apparatus corresponds to an electronic device shown in fig. 7 below, and the electronic device may be various devices capable of executing the method, for example, the electronic device may be a computer or a server, and the embodiment of the present disclosure is not limited to this, and specifically includes the following steps:
step S101, the first process creates a shared memory.
Optionally, the first process is a process that needs to share data. That is, in actual use, the first process is a process to share data among multiple processes. For example, if there are 4 processes, namely process 1, process 2, process 3, and process 4, and if process 2, process 3, and process 4 need to share and access user data in process 1, process 1 may be referred to as a first process in this application.
It is to be understood that the above description is intended to be illustrative, and not restrictive.
Optionally, the shared memory includes a shared memory management block and a shared memory data block, and a first address of the shared memory is used as a starting address of the shared memory management block.
Optionally, the size of the shared memory management block is a fixed size, for example, the size may be 1024 bytes, 2048 bytes, or 512 bytes, and this is not limited specifically. In other words, the shared memory management block can be understood as a fixed size storage space with the first address of the shared memory as the starting address.
As an embodiment, after applying for the shared memory, the first process uses a fixed-size block in the shared memory as a shared memory management block, and uses the remaining memory as a shared memory data block.
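The split described in this embodiment can be sketched as follows. This is a minimal illustration, not the patent's actual code: the 1024-byte management block size and all identifiers (`shm_layout_t`, `shm_layout_init`) are assumptions.

```c
/* A minimal sketch (not the patent's actual code) of the split described
 * above: a fixed-size management block at the first address of the shared
 * memory, with the remainder used as the shared memory data block.
 * MGMT_BLOCK_SIZE and all identifiers are illustrative assumptions. */
#include <stddef.h>
#include <stdint.h>

#define MGMT_BLOCK_SIZE 1024  /* fixed size chosen by the first process */

typedef struct {
    uint8_t *base;       /* first address of the shared memory */
    uint8_t *mgmt;       /* management block: starts at the first address */
    uint8_t *data;       /* data block: everything after the management block */
    size_t   data_size;  /* size of the data block region */
} shm_layout_t;

/* Split a shared memory region into a management block and a data block. */
int shm_layout_init(shm_layout_t *l, uint8_t *base, size_t total) {
    if (total <= MGMT_BLOCK_SIZE)
        return -1;                      /* too small to hold both regions */
    l->base = base;
    l->mgmt = base;                     /* first address = mgmt start address */
    l->data = base + MGMT_BLOCK_SIZE;   /* data block follows the mgmt block */
    l->data_size = total - MGMT_BLOCK_SIZE;
    return 0;
}
```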
Optionally, after skipping over the shared memory management block, the remaining shared memory data block is split into a plurality of blocks of different sizes.
Optionally, the segmented blocks with the same size are organized in a pointer linked list manner, so as to generate a plurality of memory block chains with different sizes. Wherein the block information is tagged with the start address of the block and the size of the block.
Optionally, the segmented blocks with the same size may be organized in a doubly linked list manner, so as to generate a plurality of memory block chains with different sizes.
In actual use, of course, the segmented blocks with the same size may also be organized as a singly linked list or a circular linked list to generate a plurality of memory block chains with different sizes. This is not specifically limited here.
For example, assuming that the shared memory is 4 KB (4096 bytes) and the management block occupies 1024 bytes, the management block spans offsets 0 through 1023 from the first address of the shared memory, and the shared memory data block starts at offset 1024.
As another example, a shared memory data block is divided into 6 blocks, such as 4 blocks of 512 bytes (Block 1, Block 2, Block 3, and Block 4) and 2 blocks of 1024 bytes (Block 11 and Block 22), respectively. Then chain 1 in the formed memory block chain is: block 1, block 2, block 3 and block 4. Chain 2 in the memory block chain is a chain formed by block 11 and block 22.
It is to be understood that the above description is intended to be illustrative, and not restrictive.
Optionally, when space is requested from a memory block chain to store data (i.e., a certain node in the chain is applied for), the requested node is removed from the memory block chain. When the data in the node is released, the node can be returned to the memory block chain.
Continuing with the above example, assuming that the applied node is block 2, block 2 is removed from chain 1; after the data stored in block 2 is released, block 2 may be returned to chain 1, and generally, when it is returned, it is appended to chain 1 as the tail node.
Of course, block 2 may also be reinserted into chain 1 at its historical position. This is not specifically limited here.
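The apply/release cycle on a per-size chain can be sketched as below. This is a simplified illustration, not the patent's code: the names are assumptions, and for brevity it uses head insertion on release rather than appending as the tail node.

```c
/* Illustrative sketch of organizing equal-sized data blocks into a
 * per-size chain, with apply (pop) and release (push back) operations
 * as described above. Struct and function names are assumptions, and
 * release uses head insertion here for simplicity. */
#include <stddef.h>

typedef struct block {
    struct block *next;  /* pointer linked list within one size class */
} block_t;

typedef struct {
    block_t *head;       /* chain of free blocks of one size */
    size_t block_size;   /* size of every block on this chain */
} size_chain_t;

/* Release: return a block to its size chain (here, head insertion). */
void chain_push(size_chain_t *c, block_t *b) {
    b->next = c->head;
    c->head = b;
}

/* Apply: take a block off the chain to store data; NULL if exhausted. */
block_t *chain_pop(size_chain_t *c) {
    block_t *b = c->head;
    if (b != NULL)
        c->head = b->next;
    return b;
}
```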
Step S102, the first process configures the data structure of the head of the user data linked list based on the data structure of the predefined address offset linked list.
Optionally, the data structure of the address offset linked list includes an address offset parameter pointing to a next node.
Optionally, a user data linked list is used to manage the user's data. For example, student data such as student number, student name, age, etc. may be used.
It is to be understood that the above description is intended to be illustrative, and not restrictive.
As an embodiment, step S102 includes: the first process configures a data structure of a head of a user data linked list and an entire user data linked list through a data structure of a predefined address offset linked list.
Optionally, the data structure of the head of the user data linked list includes an address offset parameter pointing to the next node.
Optionally, the data structure of the head of the user data link list further includes description information for describing the user data link and a shared mutual exclusion semaphore.
Optionally, the data structure of the address offset linked list is preconfigured in the header file. The definition of the data structure of the address offset linked list is automatically realized when the program is executed.
Optionally, operations of querying, adding and deleting by using the address offset linked list are also configured in the header file. For example, the query, addition and deletion operations based on the address offset linked list can be realized through an inline function and macro definition, so that the query, addition and deletion operations can be performed by directly using the address offset linked list in the self process among multiple processes.
Alternatively, the address offset linked list may be a doubly linked list based on address offsets.
Of course, in practical use, the address offset linked list may also be a singly linked list or a circular linked list based on address offsets. This is not specifically limited here.
It should be understood that when the address offset linked list is a bi-directional linked list or other linked list, the data structure of the head of the user data linked list also changes with the data structure of the address offset linked list.
For example, when the address offset linked list is a doubly linked list, its data structure includes: an address offset parameter pointing to the next node and an address offset parameter pointing to the previous node. The data structure of the head of the user data linked list then also includes an address offset parameter pointing to the next node and an address offset parameter pointing to the previous node.
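A minimal sketch of such an offset-based doubly linked node, together with the offset/pointer conversions each process performs against its own mapping base, could look as follows. All names (`off_list_node_t`, `off_to_ptr`, `ptr_to_off`) are assumptions for illustration, not from the patent.

```c
/* Sketch (assumed names) of a doubly linked, offset-based node: instead
 * of raw pointers, each node stores the offset of its neighbors relative
 * to the first address of the shared memory, so the same values remain
 * valid in every process regardless of where the memory is mapped. */
#include <stddef.h>
#include <stdint.h>

#define OFF_NULL ((size_t)0)  /* offset 0 is the management block start, so it can mark "no node" */

typedef struct {
    size_t next_off;  /* address offset of the next node     */
    size_t prev_off;  /* address offset of the previous node */
} off_list_node_t;

/* Resolve an offset against this process's own mapping base address. */
static inline void *off_to_ptr(void *base, size_t off) {
    return off == OFF_NULL ? NULL : (void *)((uint8_t *)base + off);
}

/* Compute a node's offset relative to the shared memory first address. */
static inline size_t ptr_to_off(void *base, void *p) {
    return p == NULL ? OFF_NULL : (size_t)((uint8_t *)p - (uint8_t *)base);
}
```

Because only offsets are stored in the shared region, each process can translate them using its own base address after mapping.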
As an embodiment, step S102 includes: and the first process configures the address offset parameter pointing to the next node in the data structure of the linked list head of the user data linked list according to the address offset parameter pointing to the next node in the data structure of the predefined address offset linked list.
Optionally, the specific value of the address offset parameter pointing to the next node at each node in the user data linked list is determined by the difference between the address of the next node and the first address of the shared memory.
It should be understood that a node refers to a member (also referred to as a node) on a linked list of user data.
Step S103, the first process maps the data structure of the head of the linked list to the shared memory management block.
Alternatively, the shared memory management block may store multiple linked list heads.
Alternatively, the multiple linked list heads may be heads for different user data. For example, when the shared memory management block stores multiple linked list heads and the second process needs to share certain data, it only needs to find the corresponding linked list head, so as to access the corresponding user data linked list through that head.
Optionally, the first process maps the data structure of the linked list head into the shared memory management block, so that the second process maps the shared memory of the first process into its own process and finds the linked list head in the shared memory management block from its own shared memory; the second process then accesses the data in the shared memory linked list corresponding to the user data linked list in the first process through the address offset parameter pointing to the next node in the linked list head and the first address of the first process's shared memory as mapped in the shared memory of the second process.
It will be understood by those skilled in the art that the second process refers to a process that requires access to data in the first process.
Alternatively, there may be multiple second processes. This is not specifically limited here.
In the implementation process, the second process maps the shared memory of the first process, so that the data in the first process can be accurately accessed.
In a possible embodiment, the method further comprises: the first process divides the shared memory data block into a plurality of data blocks with different sizes and organizes the data blocks with the same size in a pointer linked list manner to generate a plurality of memory linked lists with different sizes, where each data block comprises a starting address and an ending address; the first process applies for data blocks of corresponding sizes from the plurality of memory linked lists with different sizes according to the size of the user data; the first process determines the address offset of each data block relative to the first address of the shared memory according to the starting address of the applied data block and the first address of the shared memory; and the first process links the user data to the linked list head to generate a user data chain, wherein the address offset parameter pointing to the next node at each node on the user data chain is set according to the address offset.
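The linking step above can be sketched as follows. The node layout and names are assumptions for illustration; the address offset is computed as described, as the difference between the applied block's start address and the first address of the shared memory.

```c
/* Hedged sketch of the linking step: the first process inserts a newly
 * applied user data block into the chain by storing its address offset
 * (block start address minus the shared memory first address), never a
 * raw pointer. Node layout and names are illustrative assumptions. */
#include <stddef.h>
#include <stdint.h>

typedef struct {
    size_t next_off;  /* offset of next node from the shared memory first address; 0 = end */
} node_t;

/* Link a newly applied data block (at `block`) after `tail`, recording
 * its offset relative to the shared memory first address `base`. */
void link_after(uint8_t *base, node_t *tail, node_t *block) {
    block->next_off = tail->next_off;                    /* inherit old successor */
    tail->next_off = (size_t)((uint8_t *)block - base);  /* store the address offset */
}
```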
It should be understood that the first process may delete or query the data in the shared memory based on a delete function or a query function defined by the address offset linked list, which is not specifically limited herein.
Optionally, the first process creates the user data linked list in an address-offset-based manner, so that when the user data is shared, the second process can accurately access the data of the first process in the same address-offset-based manner, implementing access to the shared memory linked list among processes. This effectively improves the communication efficiency of the whole system, improves system performance, and brings more value in terms of development convenience.
As an application scenario, as shown in fig. 2, it is assumed that process 1 is a process that needs to share data out, i.e. a first process in this application, and process 2, process 3, process 4, and process N are second processes in this application. Wherein N is an integer greater than or equal to 4. Because the process 1 needs to share data, a shared memory pool (the shared memory pool is the same as the shared memory in the present application) is created in the process 1, and then the shared memory pool is divided into a shared memory management block region (i.e., the shared memory management block in the present application) and a shared memory data block region (i.e., the shared memory data block in the present application); the method comprises the steps of dividing a shared memory data block into a plurality of data blocks with different sizes, organizing the data blocks with the same size in a pointer linked list mode, and generating a plurality of memory linked lists with different sizes, wherein each data block comprises a starting address and an ending address. After creation is complete, a user data chain, such as user data block 1, user data block 2, etc., is created using an address offset based linked list operation.
It should be noted that the size of the data block in the shared memory data block area in fig. 2 can be represented by the size of a box, and a smaller box indicates that the data block is smaller, and vice versa.
According to the method for sharing the memory linked list, a shared memory is created in the first process; the first process configures the data structure of the head of a user data linked list based on the data structure of a predefined address offset linked list, where the data structure of the address offset linked list includes an address offset parameter pointing to the next node, and the data structure of the linked list head likewise includes an address offset parameter pointing to the next node. The first process maps the data structure of the linked list head into the shared memory management block, so that after the second process maps the shared memory of the first process into itself, it can find the linked list head in the shared memory management block from its own shared memory, and can access the data in the shared memory linked list corresponding to the user data linked list of the first process through the address offset parameter pointing to the next node in the linked list head and the first address of the first process's shared memory as mapped in the shared memory of the second process. This effectively improves the real-time performance and reliability of data access among multiple processes, improves system performance, and brings more value in terms of development convenience.
Referring to fig. 3, fig. 3 is a flowchart of a method for sharing a memory linked list according to an embodiment of the present application. It should be understood that the method may be performed by the apparatus for sharing a memory linked list shown in fig. 6, which corresponds to the electronic device shown in fig. 7. The electronic device may be any device capable of executing the method, for example a computer or a server; the embodiment of the present application is not limited in this respect. The method specifically includes the following steps:
in step S201, the second process maps the shared memory of the first process to its own shared memory.
Optionally, the second process maps the shared memory of the first process into its own shared memory based on a shared memory mechanism.
It should be understood that each process has a separate memory space.
Step S202, the second process finds the head of the user data linked list in the first process from its shared memory.
As an implementation manner, the second process finds, from its own shared memory, the first address of the mapped shared memory of the first process, and starts querying from that first address to find the header of the user data linked list.
For example, assume that the first address of the shared memory in the first process is 1000 and its end address is 2000, while the address space of the second process runs from a first address of 4000 to an end address of 10000. When the second process maps the shared memory of the first process into its own space, it allocates a continuous region of its own storage space for the mapped memory; for example, a continuous region of size 1000 starting at 6000 may be allocated, so that the shared memory of the first process is mapped into the second process and stored from 6000 to 7000. In this case, the first address of the mapped shared memory of the first process within the second process is 6000.
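Using the numbers from this example, the translation between the two processes is plain arithmetic. The patent gives no code; this is a hedged sketch of the offset calculation:

```python
# Address translation for the example above: the pool starts at 1000 in
# process 1 and is mapped at 6000 in process 2. A node's offset from the
# pool's first address is identical in both processes, so each process
# recovers the node's local address by adding its own base address.

P1_BASE = 1000   # first address of the pool in process 1
P2_BASE = 6000   # first address of the mapped pool in process 2

def offset_of(addr_in_p1):
    """Offset of an object relative to the pool's first address."""
    return addr_in_p1 - P1_BASE

def addr_in_p2(offset):
    """Local address of the same object inside process 2."""
    return P2_BASE + offset

node_addr_p1 = 1500                 # some node inside process 1's pool
off = offset_of(node_addr_p1)       # 500, valid in every process
```

A raw pointer value such as 1500 would be meaningless in process 2, whereas the offset 500 is valid everywhere; this is precisely why the linked list stores offsets rather than pointers.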
As another embodiment, step S202 includes: and the second process finds the linked list head of the user data linked list in the first process from the shared memory management block mapped by the first process in the shared memory of the second process.
Step S203, the second process accesses the data in the shared memory linked list corresponding to the user data linked list in the first process according to the address offset parameter pointing to the next node in the linked list header and the first address in the shared memory of the first process mapped in the shared memory of the second process.
As an implementation manner, the second process finds the start address of the next node by adding the address offset parameter pointing to the next node in the linked list header to the first address, and thereby accesses the data located in that node. Once a node has been accessed, the second process again adds the address offset parameter pointing to the next node, stored in the current node, to the first address, and so accesses the following node; in this way all nodes on the linked list are accessed in sequence.
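The traversal just described can be sketched as follows, building on the idea of offset-linked nodes. This is a simulation with assumed node layout and sentinel, not the patent's C code:

```python
import struct

NEXT = struct.Struct("<i")       # per-node offset of the next node; 0 = end

def build_list(pool, offsets):
    """Link the nodes at the given pool offsets into a singly linked chain."""
    for here, nxt in zip(offsets, offsets[1:] + [0]):
        NEXT.pack_into(pool, here, nxt)

def walk(pool, head_off):
    """Visit every node the way the second process would: start from the
    offset taken from the list header, and at each node add the base of
    the mapping (index 0 of this bytearray) to the stored next-offset."""
    visited = []
    off = head_off
    while off != 0:              # offset 0 is the end-of-list sentinel
        visited.append(off)
        (off,) = NEXT.unpack_from(pool, off)
    return visited

pool = bytearray(512)
build_list(pool, [32, 160, 288])
order = walk(pool, 32)
```

Each iteration performs exactly one "base + offset" addition, so the cost of a lookup in the second process is the same as walking an ordinary pointer linked list in the first process.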
Optionally, the access generally ends once the data that the second process needs has been found; if the required data is not found, the access ends automatically after the complete linked list has been traversed. The present application is not particularly limited in this respect.
The method for sharing a memory linked list in the embodiment of the present application has been described in detail above with reference to fig. 3; the method is further described below, by way of example and without limitation, with reference to fig. 4. The method shown in fig. 4 includes:
In step S301, the first process defines an address offset linked list.
Optionally, the first process obtains a data structure defined by the address offset doubly linked list in the header file.
In step S302, the first process creates a shared memory.
Step S303, the first process partitions and organizes the shared memory data block.
The specific implementation of step S302 and step S303 can refer to the above description, and is not repeated herein.
In step S304, the first process applies for and releases the shared memory block.
Optionally, an application function and a release (recycle) function are configured in the first process for each size of memory block (or data block) in the shared memory, so that memory can be applied for or released.
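One way to realize such application and recycle functions is a free list per block size, sketched below. The sizes, offsets, and smallest-fit policy are illustrative assumptions:

```python
# Sketch of the apply/release functions: one free list per block size.
# "Apply" pops a free block from the smallest size class that fits the
# request; "release" pushes the block back. All values are illustrative.

free_lists = {
    32:  [64, 96, 128],      # offsets of free 32-byte blocks
    64:  [256, 320],
    128: [512],
}

def shm_apply(size):
    """Return (offset, block_size) of a free block large enough for
    `size`, or None if no size class can satisfy the request."""
    for block_size in sorted(free_lists):
        if block_size >= size and free_lists[block_size]:
            return free_lists[block_size].pop(), block_size
    return None

def shm_release(offset, block_size):
    """Return a block to the free list of its size class."""
    free_lists[block_size].append(offset)

blk = shm_apply(50)     # needs at least a 64-byte block
```

Because the free lists themselves hold pool-relative offsets, the allocator state could itself live inside the shared memory and be read by every mapping process.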
In step S305, the first process defines the user data linked list header.
Optionally, the first process defines the user data link header based on the address offset doubly linked list, and the specific implementation manner may refer to the above, which is not described herein again.
In step S306, the first process maps the linked list header data to the shared memory management block.
Optionally, the specific implementation of step S306 may refer to the foregoing, and is not described herein again.
In step S307, the first process creates a shared mutual exclusion semaphore.
Optionally, a shared mutual exclusion semaphore is created in the head of the user data linked list in the first process, so as to prevent multiple processes from accessing the data in the first process at the same time.
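A comparable cross-process guard can be sketched in Python with `multiprocessing` primitives. This stands in for the patent's shared mutual exclusion semaphore; in the patent's scheme the semaphore lives inside the shared list header, whereas here a `multiprocessing.Lock` and an ordinary list are used purely for illustration:

```python
from multiprocessing import Lock

# Sketch of the shared mutual exclusion semaphore guarding the user
# data linked list: every process takes the lock before touching the
# nodes, so a reader never observes a half-updated chain.

list_lock = Lock()
shared_list = []          # stands in for the shared memory linked list

def locked_append(item):
    """Writer path: append a node while holding the lock."""
    with list_lock:
        shared_list.append(item)

def locked_snapshot():
    """Reader path: copy the list while holding the lock."""
    with list_lock:
        return list(shared_list)

locked_append("node-1")
locked_append("node-2")
```

In a C implementation the equivalent would be a process-shared mutex or POSIX semaphore placed in the management block, so that the lock itself is visible to every process that maps the pool.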
In step S308, the first process creates a user data linked list based on the address offset doubly linked list.
In step S309, the second process maps the shared memory of the first process.
In step S310, the second process obtains and saves the data in the shared memory management block.
In step S311, the second process accesses and queries the data of the first process through the address offset doubly linked list operation.
Optionally, please refer to the above for the specific implementation of steps S308 to S311, which is not described herein again.
It should be noted that the order of method steps S301 to S311 is not limited in the specific implementation of the present application.
According to the method for sharing a memory linked list, the second process maps the shared memory of the first process to its own shared memory, so that it can find the header of the user data linked list in the first process from its own shared memory. The second process can then rapidly access the data in the shared memory linked list corresponding to the user data linked list in the first process, according to the address offset parameter pointing to the next node in the linked list header and the first address of the mapped shared memory of the first process in the shared memory of the second process. The shared memory linked list is thereby realized and accessed among multiple processes, which effectively improves the real-time performance and reliability of data access among the processes, improves system performance, and brings greater convenience to research and development.
Based on the same inventive concept, as shown in fig. 5, an embodiment of the present application also provides an apparatus for sharing a memory linked list in one-to-one correspondence with the method for sharing a memory linked list shown in fig. 1. It should be understood that the apparatus 400 can perform the steps of the above method embodiment; for the specific functions of the apparatus 400, reference may be made to the description above, and detailed descriptions are omitted here as appropriate to avoid repetition. Specifically, the apparatus 400 includes:
a creating module 410, configured to create, by a first process, a shared memory, where the shared memory includes a shared memory management block and a shared memory data block, and a first address of the shared memory is used as a starting address of the shared memory management block.
A configuration module 420 configured to configure, by the first process, a data structure of a head of a user data linked list based on a data structure of a predefined address offset linked list, the data structure of the address offset linked list including an address offset parameter pointing to a next node, the data structure of the head of the linked list including an address offset parameter pointing to the next node.
A first processing module 430, configured to map, by the first process, the data structure of the linked list header to the shared memory management block, so that a second process maps the shared memory of the first process into its own process and finds the linked list header in the shared memory management block from its own shared memory; and to access data in the shared memory linked list corresponding to the user data linked list in the first process through the address offset parameter pointing to the next node in the linked list header and the first address of the mapped shared memory of the first process in the shared memory of the second process.
In a possible embodiment, the apparatus 400 comprises: a second processing module, configured to divide the shared memory data block into a plurality of data blocks of different sizes through the first process, and organize data blocks of the same size in the form of a pointer linked list, generating a plurality of memory linked lists of different sizes, wherein each data block comprises a starting address and an ending address; a memory application module, configured to apply, through the first process, for data blocks of corresponding sizes from the plurality of memory linked lists of different sizes according to the size of the user data; a third processing module, configured to determine, through the first process, the address offset of each data block relative to the first address of the shared memory according to the starting address of the applied data block and the first address of the shared memory; and a data chain generation module, configured to link the user data to the linked list header through the first process and generate a user data chain, wherein the address offset parameter pointing to the next node in each node of the user data chain is set according to the address offset.
Based on the same inventive concept, as shown in fig. 6, an embodiment of the present application also provides an apparatus for sharing a memory linked list in one-to-one correspondence with the method for sharing a memory linked list shown in fig. 3. It should be understood that the apparatus 500 can perform the steps of the above method embodiment; for the specific functions of the apparatus 500, reference may be made to the description above, and detailed descriptions are omitted here as appropriate to avoid repetition. Specifically, the apparatus 500 includes:
and a mapping module 510, configured to map, by the second process, the shared memory of the first process to a storage space of the second process.
The query module 520 is configured to find the head of the user data linked list in the first process from the shared memory of the second process.
An accessing module 530, configured to access, by the second process, data in the shared memory linked list corresponding to the user data linked list in the first process according to the address offset parameter pointing to the next node in the linked list header and the first address existing in the shared memory of the first process mapped in the shared memory of the second process.
Optionally, the query module is further configured to: and the second process finds the linked list head of the user data linked list in the first process from the shared memory management block mapped by the first process in the shared memory of the second process.
Based on the same inventive concept, the present application further provides an electronic device; fig. 7 is a block diagram of the electronic device 600 in an embodiment of the present application. The electronic device 600 may include a processor 610, a communication interface 620, a memory 630, and at least one communication bus 640, wherein the communication bus 640 is used to enable direct communication among these components. The communication interface 620 of the device in this embodiment is used to perform signaling or data communication with other node devices. The processor 610 may be an integrated circuit chip having signal processing capabilities.
The memory 630 stores computer readable instructions, which when executed by the processor 610, the electronic device 600 may perform the steps involved in the method embodiments of fig. 1 or fig. 3 described above.
Optionally, the electronic device 600 may further comprise a memory controller.
The memory 630, the memory controller, and the processor 610 are electrically connected to each other, directly or indirectly, to realize data transmission or interaction. For example, these elements may be electrically coupled to each other via one or more communication buses 640. The processor 610 is configured to execute the executable modules stored in the memory 630, such as the software functional modules or computer programs included in the apparatus 400. The apparatus 400 is configured to perform the following method: a first process creates a shared memory, wherein the shared memory comprises a shared memory management block and a shared memory data block, and the first address of the shared memory is used as the starting address of the shared memory management block; the first process configures a data structure of a header of a user data linked list based on a data structure of a predefined address offset linked list, the data structure of the address offset linked list including an address offset parameter pointing to a next node, and the data structure of the linked list header including an address offset parameter pointing to the next node; the first process maps the data structure of the linked list header to the shared memory management block, so that a second process maps the shared memory of the first process into its own process and finds the linked list header in the shared memory management block from its own shared memory; and data in the shared memory linked list corresponding to the user data linked list in the first process is accessed through the address offset parameter pointing to the next node in the linked list header and the first address of the mapped shared memory of the first process in the shared memory of the second process. Likewise, the apparatus 500 comprises software functional modules or computer programs.
Also, the apparatus 500 is configured to perform the method of: the second process maps the shared memory of the first process to the storage space of the second process; the second process finds out the linked list head of the user data linked list in the first process from the shared memory of the second process; and the second process accesses data in the shared memory linked list corresponding to the user data linked list in the first process according to the address offset parameter of the next node in the linked list header and the first address in the shared memory of the first process mapped by the first process.
It is to be understood that the configuration shown in fig. 7 is merely exemplary, and that the electronic device 600 may include more or fewer components than shown in fig. 7, or have a different configuration than shown in fig. 7. The components shown in fig. 7 may be implemented in hardware, software, or a combination thereof.
An embodiment of the present application further provides a storage medium storing instructions which, when executed on a computer, cause the computer to perform the method of the method embodiments; details are not repeated here to avoid repetition.
The present application also provides a computer program product which, when run on a computer, causes the computer to perform the method of the method embodiments.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A method for sharing a linked list of memory, the method comprising:
a first process creates a shared memory pool, wherein the shared memory pool comprises a shared memory management block and a shared memory data block, and the first address of the shared memory pool is used as the initial address of the shared memory management block;
the first process configures a data structure of a head of a user data linked list based on a data structure of a predefined address offset linked list, the data structure of the address offset linked list including an address offset parameter pointing to a next node, the data structure of the head of the linked list including an address offset parameter pointing to the next node;
the first process maps the data structure of the linked list header to the shared memory management block, so that a second process maps the shared memory pool of the first process into the process of the second process, and the second process finds the linked list header in the shared memory management block from the shared memory pool of the second process; and accessing data in the shared memory linked list corresponding to the user data linked list in the first process through the address offset parameter pointing to the next node in the linked list header and the first address of the mapped shared memory pool of the first process in the shared memory pool of the second process.
2. The method of claim 1, further comprising:
the first process divides a shared memory data block into a plurality of data blocks with different sizes, organizes the data blocks with the same size in a pointer linked list mode to generate a plurality of memory linked lists with different sizes, and each data block comprises a starting address and an ending address;
the first process applies for data blocks with corresponding sizes from a plurality of memory linked lists with different sizes according to the size of user data;
the first process determines the address offset of each data block relative to the first address of the shared memory pool according to the applied initial address of the data block and the first address of the shared memory pool;
and the first process links the user data to the linked list header to generate a user data chain, wherein the address offset parameter pointing to the next node in each node on the user data chain is set according to the address offset.
3. A method for sharing a linked list of memory, the method comprising:
the second process maps the shared memory pool of the first process to the shared memory pool of the second process; the shared memory pool of the first process is created by the first process, the shared memory pool of the first process comprises a shared memory management block and a shared memory data block, and the first address of the shared memory pool of the first process is used as the initial address of the shared memory management block; the first process configures a data structure of a head of a user data linked list based on a data structure of a predefined address offset linked list, wherein the data structure of the address offset linked list comprises an address offset parameter pointing to a next node, and the data structure of the head of the linked list comprises an address offset parameter pointing to the next node;
the second process finds out the linked list head of the user data linked list in the first process from a shared memory pool of the second process;
and the second process accesses the data in the shared memory linked list corresponding to the user data linked list in the first process according to the address offset parameter of the next node in the linked list head and the first address of the shared memory pool of the first process in the shared memory pool mapped by the second process.
4. The method of claim 3, wherein the second process finds the head of the user data linked list in the first process from its shared memory, and the finding comprises:
and the second process finds the linked list head of the user data linked list in the first process from the shared memory management block mapped by the first process in the shared memory pool of the second process.
5. An apparatus for sharing a linked list of memories, the apparatus comprising:
the mapping module is used for creating a shared memory pool through a first process, wherein the shared memory pool comprises a shared memory management block and a shared memory data block, and a first address of the shared memory pool is used as an initial address of the shared memory management block;
a configuration module configured to configure, by the first process, a data structure of a head of a user data linked list based on a data structure of a predefined address offset linked list, the data structure of the address offset linked list including an address offset parameter pointing to a next node, the data structure of the head of the linked list including an address offset parameter pointing to the next node;
a first processing module, configured to map, by the first process, the data structure of the linked list header to the shared memory management block, so that a second process maps the shared memory pool of the first process into its own process, so that the second process finds the linked list header in the shared memory management block from its own shared memory pool; and accessing data in the shared memory linked list corresponding to the user data linked list in the first process through the address offset parameter pointing to the next node in the linked list header and the first address of the mapped shared memory pool of the first process in the shared memory pool of the second process.
6. The apparatus of claim 5, further comprising:
the second processing module is used for dividing the shared memory data block into a plurality of data blocks with different sizes through the first process, organizing the data blocks with the same size in a pointer linked list mode, and generating a plurality of memory linked lists with different sizes, wherein each data block comprises a starting address and an ending address;
the memory application module is used for applying data blocks with corresponding sizes from a plurality of memory linked lists with different sizes according to the size of user data through the first process;
a third processing module, configured to determine, by the first process, an address offset of each data block with respect to a first address of the shared memory pool according to the initial address of the applied data block and the first address of the shared memory pool;
and a data chain generation module, configured to link the user data to the linked list header through the first process and generate a user data chain, wherein the address offset parameter pointing to the next node in each node of the user data chain is set according to the address offset.
7. An apparatus for sharing a linked list of memories, the apparatus comprising:
the mapping module is used for mapping the shared memory pool of the first process to the shared memory pool of the second process; the shared memory pool of the first process is created by the first process, the shared memory pool of the first process comprises a shared memory management block and a shared memory data block, and the first address of the shared memory pool of the first process is used as the initial address of the shared memory management block; the first process configures a data structure of a head of a user data linked list based on a data structure of a predefined address offset linked list, wherein the data structure of the address offset linked list comprises an address offset parameter pointing to a next node, and the data structure of the head of the linked list comprises an address offset parameter pointing to the next node;
the query module is used for finding the linked list head of the user data linked list in the first process from a shared memory pool of the query module through the second process;
and the access module is used for accessing the data in the shared memory linked list corresponding to the user data linked list in the first process through the second process according to the address offset parameter pointing to the next node in the linked list header and the first address, in the shared memory pool, of the first process mapped in the shared memory pool of the second process.
8. The apparatus of claim 7, wherein the query module is further configured to:
and the second process finds the linked list head of the user data linked list in the first process from the shared memory management block mapped by the first process in the shared memory pool of the second process.
9. An electronic device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method for sharing a memory linked list according to any one of claims 1 to 2; or the processor, when executing the computer program, implements the method for sharing a memory linked list according to any one of claims 3 to 4.
10. A storage medium for storing instructions which, when executed on a computer, cause the computer to perform the method for sharing a memory linked list according to any one of claims 1 to 2; or cause the computer to perform the method for sharing a memory linked list according to any one of claims 3 to 4.
CN201910920795.0A 2019-09-26 2019-09-26 Method, device, equipment and storage medium for sharing memory linked list Active CN110618883B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910920795.0A CN110618883B (en) 2019-09-26 2019-09-26 Method, device, equipment and storage medium for sharing memory linked list


Publications (2)

Publication Number Publication Date
CN110618883A CN110618883A (en) 2019-12-27
CN110618883B true CN110618883B (en) 2022-09-13

Family

ID=68924648

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910920795.0A Active CN110618883B (en) 2019-09-26 2019-09-26 Method, device, equipment and storage medium for sharing memory linked list

Country Status (1)

Country Link
CN (1) CN110618883B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113760560A (en) * 2020-06-05 2021-12-07 华为技术有限公司 Inter-process communication method and inter-process communication device
CN113342805B (en) * 2021-04-21 2023-04-11 湖北微源卓越科技有限公司 System and method for sharing data by multiple processes
CN113453276B (en) * 2021-05-18 2024-01-16 翱捷科技股份有限公司 Method and device for improving uplink and downlink memory utilization rate of LTE terminal

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6952722B1 (en) * 2002-01-22 2005-10-04 Cisco Technology, Inc. Method and system using peer mapping system call to map changes in shared memory to all users of the shared memory
CN1740978A (en) * 2004-08-23 2006-03-01 华为技术有限公司 Method for realizing shared memory database and memory database system
CN103197979A (en) * 2012-01-04 2013-07-10 阿里巴巴集团控股有限公司 Method and device for realizing data interaction access among processes
CN106681842A (en) * 2017-01-18 2017-05-17 迈普通信技术股份有限公司 Management method and device for sharing memory in multi-process system
CN107102900A (en) * 2016-02-22 2017-08-29 上海大唐移动通信设备有限公司 A kind of management method of shared memory space
CN107391285A (en) * 2017-08-23 2017-11-24 美的智慧家居科技有限公司 Internal memory sharing method and system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101676906B (en) * 2008-09-18 2013-06-05 中兴通讯股份有限公司 Method of managing memory database space by using bitmap
CN102004675A (en) * 2010-11-11 2011-04-06 福建星网锐捷网络有限公司 Cross-process data transmission method, device and network equipment
CN103034544B (en) * 2012-12-04 2015-08-05 杭州迪普科技有限公司 The management method of a kind of User space and kernel state shared drive and device
CN107402891B (en) * 2012-12-25 2020-12-22 华为技术有限公司 Method for determining page management mode of shared virtual memory and related equipment
CN106155933B (en) * 2016-07-06 2019-02-05 乾云众创(北京)信息科技研究院有限公司 A kind of virutal machine memory sharing method combined based on KSM and Pass-through
WO2019028682A1 (en) * 2017-08-08 2019-02-14 深圳前海达闼云端智能科技有限公司 Multi-system shared memory management method and device
CN110109763A (en) * 2019-04-12 2019-08-09 厦门亿联网络技术股份有限公司 A kind of shared-memory management method and device


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
multi process share memory map address;Xiao-hui Cheng et al.;《Proceedings of 2011 International Conference on Computer Science and Network Technology》;20120412;pp. 111-114 *
this is share memory impl multi process for rust;tickbh;《https://github.com/tickbh/ShareMemory》;20180423;pp. 1-3 *
Inter-process data transmission based on memory-mapped files;Duan Jihua et al.;《Radio Engineering》;20071105(No. 11);pp. 46-47, 51 *
An efficient shared-memory file system for co-resident virtual machines;Sha Xingmian et al.;《Chinese Journal of Computers》;20180515;Vol. 42(No. 04);pp. 800-819 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant