CN106681842A - Management method and device for sharing memory in multi-process system - Google Patents

Management method and device for sharing memory in a multi-process system

Info

Publication number
CN106681842A
Authority
CN
China
Prior art keywords
shared memory
large block
linked list
small block
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710038314.4A
Other languages
Chinese (zh)
Inventor
林友义
范恒英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Maipu Communication Technology Co Ltd
Original Assignee
Maipu Communication Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Maipu Communication Technology Co Ltd filed Critical Maipu Communication Technology Co Ltd
Priority to CN201710038314.4A priority Critical patent/CN106681842A/en
Publication of CN106681842A publication Critical patent/CN106681842A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/544Buffers; Shared memory; Pipes

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

The embodiment of the invention discloses a management method and device for shared memory in a multi-process system, relates to the field of computers, and aims to solve the prior-art problem that allocating memory blocks to a process takes a long time. The method comprises the steps that a process creates a shared memory based on a preset key value; the process creates a shared memory control block in the shared memory, wherein the shared memory control block includes a first linked list composed of the linked indexes of first shared memory large blocks, a second linked list composed of the linked indexes of second shared memory large blocks, and a third linked list composed of the linked indexes of third shared memory large blocks; the process creates a mapping table in its private memory; the process looks up, in the second linked list and/or the third linked list, the index of a target shared memory large block to be allocated to the process, and looks up the starting address of the target shared memory large block in the mapping table, so as to write data into consecutive unoccupied shared memory small blocks in the target shared memory large block.

Description

Management method and device for shared memory in a multi-process system
Technical field
The present invention relates to the field of computers, and in particular to a management method and device for shared memory in a multi-process system.
Background art
Multi-process systems are generally based on virtual memory, such as Linux and newer Windows systems, in which the address spaces of processes are independent of one another. Processes exchange their private data through various communication mechanisms, such as message queues, pipes, sockets and files. These inter-process communication mechanisms essentially require multiple data copies before data sharing is achieved, so their overhead is excessive.
To avoid the excessive overhead incurred when processes exchange their private data, the prior art proposes data sharing based on a memory pool. Its basic principle is: a shared memory pool is first created, and the created pool is then allocated to each process. The management idea of the shared memory pool is to first link memory blocks of various sizes together; when memory needs to be allocated to a process, each free memory block is traversed and checked in turn until a block meeting the requirement is found, which is then split and allocated; after release, adjacent memory blocks are merged again. In the above scheme, when a shared memory block is allocated to a process, every memory block in the memory pool must be traversed before a block meeting the requirement is found, so allocating a shared memory block to a process ultimately takes a long time.
Summary of the invention
Embodiments of the present invention provide a management method and device for shared memory in a multi-process system, to solve the prior-art problem that allocating memory blocks to a process takes a long time.
To achieve the above purpose, embodiments of the present invention adopt the following technical solutions:
A first aspect of the embodiments of the present invention provides a management method for shared memory in a multi-process system, the method comprising:
a process creates a shared memory based on a preset key value, the shared memory comprising M shared memory large blocks of identical size, each shared memory large block comprising N shared memory small blocks; both M and N are greater than or equal to 2;
the process creates a shared memory control block in the shared memory, the shared memory control block comprising a first linked list composed of the linked indexes of first shared memory large blocks, a second linked list composed of the linked indexes of second shared memory large blocks, and a third linked list composed of the linked indexes of third shared memory large blocks, wherein all shared memory small blocks in a first shared memory large block are occupied, part of the shared memory small blocks in a second shared memory large block are occupied, and all shared memory small blocks in a third shared memory large block are free;
the process creates a mapping table in its private memory; the mapping table is used to indicate the mapping relationship between the index of a shared memory large block and the starting address of that shared memory large block;
the process looks up, in the second linked list and/or the third linked list, the index of a target shared memory large block to be allocated to the process, and looks up the starting address of the target shared memory large block in the mapping table, so as to write data into consecutive free shared memory small blocks in the target shared memory large block.
A second aspect of the embodiments of the present invention provides a management device for shared memory in a multi-process system, the device comprising:
a first creation module, configured to create a shared memory based on a preset key value, the shared memory comprising M shared memory large blocks of identical size, each shared memory large block comprising N shared memory small blocks; both M and N are greater than or equal to 2;
the first creation module is further configured to create a shared memory control block in the shared memory, the shared memory control block comprising a first linked list composed of the linked indexes of first shared memory large blocks, a second linked list composed of the linked indexes of second shared memory large blocks, and a third linked list composed of the linked indexes of third shared memory large blocks, wherein all shared memory small blocks in a first shared memory large block are occupied, part of the shared memory small blocks in a second shared memory large block are occupied, and all shared memory small blocks in a third shared memory large block are free;
a second creation module, configured to create a mapping table in the private memory of the process; the mapping table is used to indicate the mapping relationship between the index of a shared memory large block and the starting address of that shared memory large block;
an allocation module, configured to look up, in the second linked list and/or the third linked list, the index of a target shared memory large block to be allocated to the process, and to look up the starting address of the target shared memory large block in the mapping table, so that data is written into consecutive free shared memory small blocks in the target shared memory large block.
Compared with the prior art, in the management method and device for shared memory in a multi-process system provided by the embodiments of the present invention, the process first creates a shared memory control block in the shared memory, the shared memory control block comprising linked lists of three different states, wherein: all shared memory small blocks in the shared memory large blocks on the first linked list are occupied, part of the shared memory small blocks in the shared memory large blocks on the second linked list are occupied, and all shared memory small blocks in the shared memory large blocks on the third linked list are free. Secondly, the process creates a mapping table, which stores the mapping relationship between the indexes of shared memory large blocks and their starting addresses. When allocating a shared memory small block for itself, the process looks up the index of a target shared memory large block directly in the second linked list and/or the third linked list, and looks up the starting address of the target shared memory large block in the mapping table, so as to write data into consecutive free shared memory small blocks in the target shared memory large block. This avoids the prior-art need to traverse and check every free memory block in turn, saving the time spent when allocating memory blocks to a process and thus improving the operating speed of the system.
Description of the drawings
In order to illustrate the technical solutions of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art may obtain other drawings from these drawings without creative labor.
Fig. 1 is a flow chart of a management method for shared memory in a multi-process system provided by an embodiment of the present invention;
Fig. 2 is a management schematic diagram of a shared memory provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of shared memory access between a management process and service processes provided by an embodiment of the present invention;
Fig. 4 is a structural schematic diagram of a management device for shared memory in a multi-process system provided by an embodiment of the present invention;
Fig. 5 is a structural schematic diagram of a management device for shared memory in a multi-process system provided by another embodiment of the present invention.
Detailed description of embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative labor fall within the protection scope of the present invention.
For ease of clearly describing the technical solutions of the embodiments of the present invention, the words "first", "second" and the like are used in the embodiments to distinguish between items of essentially the same function or effect. Those skilled in the art will understand that words such as "first" and "second" do not limit the quantity or the execution order.
The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A exists alone, A and B both exist, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
Definition of terms:
Process: in the context of the invention, a process is an application entity running in a computer; it is assigned to a processor of the computer device and executed by that processor.
Service process: in the context of the invention, a service process is a process (application entity) that runs a specific protocol service, for example a Network Time Protocol (NTP) service process.
Management process: in the context of the invention, a management process is a process that centrally manages and displays the internal private data of the service processes. For example, through the command-line interface (CLI) that the management process provides to the user, information about a specific service process can be obtained and displayed.
Shared memory: a form of inter-process communication, and also the fastest inter-process communication (IPC) mechanism. Two different processes A and B sharing memory means that the same physical memory is mapped into the respective virtual address spaces of processes A and B. Process A can immediately see updates that process B makes to the data in the shared memory, and vice versa.
Shared memory large block: a large block of shared memory allocated through a system call interface. For example, under a Linux system, a 4096-byte shared memory large block is mapped via the mmap system call interface.
Shared memory small block: a shared memory large block is split into shared memory small blocks of fixed size. For example, after mmap allocates 4096 bytes of memory, it is split into 64 shared memory small blocks of 64 bytes each. The actual sizes of the shared memory large blocks and shared memory small blocks can be configured according to the actual usage scenario.
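The splitting of a large block into fixed-size small blocks is purely an addressing convention. The following minimal C sketch (not code from the patent; the 4096-byte/64-byte sizes follow the example above, and the anonymous mapping is a simplification of a real shared mapping) illustrates it:

```c
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define NODE_SIZE   4096                       /* one shared memory large block (assumed) */
#define SMALL_SIZE  64                         /* one shared memory small block (assumed) */
#define SMALL_COUNT (NODE_SIZE / SMALL_SIZE)   /* 64 small blocks per large block         */

int main(void)
{
    /* Map one large block.  A real implementation would use a file-backed
       shared mapping so several processes see the same physical memory;
       an anonymous mapping is enough to show the block layout. */
    char *node = mmap(NULL, NODE_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (node == MAP_FAILED)
        return 1;

    /* Small block i simply lives at a fixed offset inside the large block. */
    for (int i = 0; i < 3; i++)
        printf("small block %d starts at offset %d\n", i, i * SMALL_SIZE);
    printf("(%d small blocks in total)\n", SMALL_COUNT);

    munmap(node, NODE_SIZE);
    return 0;
}
```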
An embodiment of the present invention provides a management method for shared memory in a multi-process system. As shown in Fig. 1, the method includes:
101. A process creates a shared memory based on a preset key value, the shared memory comprising M shared memory large blocks of identical size, each shared memory large block comprising N shared memory small blocks; both M and N are greater than or equal to 2.
Optionally, the above preset key value may be agreed in advance by all processes, or may be a preset key value generated from a specified file. For example, each process creates this shared memory based on a specified file (such as /shm/shm_cache; under a Linux system a key value can be generated from this file, and every process creates the same shared memory because it uses the same file), and saves the mapped address in the process-local variable shmCacheAddr.
Exemplarily, the above large memory blocks may be created in one go, for example 2048 of them, or their number may grow dynamically. Dynamic growth is not described here; refer to the prior art. A creation sketch is given below.
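As an illustration of how every process can end up mapping the same control-block memory from a well-known file, a minimal C sketch follows. The path /dev/shm/shm_cache, the control-block size and the one-shot creation of 2048 node entries are assumptions of the sketch, not values fixed by the patent:

```c
#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define CACHE_FILE  "/dev/shm/shm_cache"  /* well-known file; illustrative path */
#define NODE_COUNT  2048                  /* number of managed large blocks     */
#define CACHE_SIZE  (64 * 1024)           /* assumed size of the control block  */

static void *shmCacheAddr;  /* per-process variable holding the mapped address */

/* Every process opens the same file and maps it shared, so all processes end
   up with the same physical memory behind (possibly different) virtual addresses. */
static void *map_shm_cache(void)
{
    int fd = open(CACHE_FILE, O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, CACHE_SIZE) < 0) {   /* make sure the backing file is big enough */
        close(fd);
        return NULL;
    }
    void *addr = mmap(NULL, CACHE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                             /* the mapping stays valid after close */
    return addr == MAP_FAILED ? NULL : addr;
}

int main(void)
{
    shmCacheAddr = map_shm_cache();
    printf("shm_cache mapped at %p (%d nodes)\n", shmCacheAddr, NODE_COUNT);
    return shmCacheAddr == NULL;
}
```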
102. The process creates a shared memory control block in the shared memory, the shared memory control block comprising a first linked list composed of the linked indexes of first shared memory large blocks, a second linked list composed of the linked indexes of second shared memory large blocks, and a third linked list composed of the linked indexes of third shared memory large blocks.
Here, all shared memory small blocks in the above first shared memory large blocks are occupied, part of the shared memory small blocks in the second shared memory large blocks are occupied, and all shared memory small blocks in the third shared memory large blocks are free.
It should be noted that the above first, second and third linked lists may be doubly circular linked lists or singly circular linked lists.
Exemplarily, with reference to the schematic diagram shown in Fig. 2, the three linked lists are Full (0), Partial (1) and Free (2). Taking the Free list as an example (see label 2 in Fig. 2), the preNode value of the free list head is 6, indicating that its previous node is array index 6, and its nxtNode value is 5, indicating that its next node is array index 5; the preNode value of the shm_cache_node at array index 5 is 2, indicating that its previous node is array index 2, and its nxtNode value is 6, indicating that its next node is array index 6; and so on. In other words, blocks in the same state are linked to one another by array index values, forming a doubly circular linked list. In each process, the address of a given shm_cache_node block can be obtained from the mapped shared memory pointer value shmCacheAddr plus these indexes, so that its member values can be accessed.
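A minimal sketch of such an index-linked structure follows. The struct name shm_cache_node and the fields preNode and nxtNode follow Fig. 2; the array size, the fixed list-head indexes and the helper functions are assumptions made only for illustration. Because the links are array indexes rather than pointers, they remain valid even though each process maps the array at a different virtual address:

```c
#include <stdio.h>

#define NODE_COUNT 8   /* small array just for the illustration */

/* One entry per shared memory large block; links are array indexes, not pointers. */
struct shm_cache_node {
    int preNode;   /* index of the previous node in the circular list */
    int nxtNode;   /* index of the next node in the circular list     */
};

static struct shm_cache_node nodes[NODE_COUNT];

/* Initialise index h as a circular list containing only itself (a list head). */
static void list_init(int h) { nodes[h].preNode = nodes[h].nxtNode = h; }

/* Insert node n right after head h. */
static void list_insert(int h, int n)
{
    nodes[n].preNode = h;
    nodes[n].nxtNode = nodes[h].nxtNode;
    nodes[nodes[h].nxtNode].preNode = n;
    nodes[h].nxtNode = n;
}

/* Unlink node n from whatever list it is currently on. */
static void list_remove(int n)
{
    nodes[nodes[n].preNode].nxtNode = nodes[n].nxtNode;
    nodes[nodes[n].nxtNode].preNode = nodes[n].preNode;
    nodes[n].preNode = nodes[n].nxtNode = n;
}

int main(void)
{
    enum { FULL = 0, PARTIAL = 1, FREE = 2 };   /* list heads at fixed indexes */
    for (int i = 0; i < NODE_COUNT; i++) list_init(i);

    list_insert(FREE, 5);     /* blocks 5 and 6 start out completely free */
    list_insert(FREE, 6);
    list_remove(5);           /* block 5 gets its first small block allocated ... */
    list_insert(PARTIAL, 5);  /* ... so it moves from the free list to the partial list */

    printf("first partial node: %d, first free node: %d\n",
           nodes[PARTIAL].nxtNode, nodes[FREE].nxtNode);
    return 0;
}
```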
103. The process creates a mapping table in its private memory.
The mapping table in the embodiment of the present invention is used to indicate the mapping relationship between the index of a shared memory large block and the starting address of that shared memory large block.
Exemplarily, the above private memory is the virtual address space of the process. Normally, when a process starts, a Linux system allocates a 4 GB virtual address space to each process for its operation.
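As an illustration of this per-process mapping table, the sketch below uses a plain array indexed by the large-block index instead of a hash table (an assumption; the embodiment described later uses a hash keyed by shmKey):

```c
#include <stdio.h>
#include <stdlib.h>

#define MAX_NODES 2048   /* one slot per possible large-block index (assumed) */

/* Private, per-process table: for each large-block index remember the address at
   which THIS process has mapped that block.  Different processes may map the same
   block at different virtual addresses, which is exactly why this table lives in
   private memory rather than in the shared memory itself. */
static void *shm_map[MAX_NODES];

static void shm_map_record(int shmKey, void *shmNodeAddr) { shm_map[shmKey] = shmNodeAddr; }
static void *shm_map_lookup(int shmKey)                   { return shm_map[shmKey]; }

int main(void)
{
    void *addr = malloc(4096);          /* stand-in for a mapped large block */
    shm_map_record(7, addr);
    printf("large block 7 is mapped at %p in this process\n", shm_map_lookup(7));
    free(addr);
    return 0;
}
```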
104. The process looks up, in the second linked list and/or the third linked list, the index of a target shared memory large block to be allocated to the process, and looks up the starting address of the target shared memory large block in the mapping table, so as to write data into consecutive free shared memory small blocks in the target shared memory large block.
Preferably, in order to reduce the amount of memory fragmentation occurring in the shared memory, the step in which the process looks up, in the second linked list and/or the third linked list, the index of the target shared memory large block to be allocated to the process in step 104 specifically includes the following:
104a. The process searches the second linked list, in the order of its linked indexes, for the index of a target shared memory large block to be allocated to the process, until one is found.
104b. If the second linked list contains no index of a target shared memory large block that can be allocated to the process, or the second linked list contains no index of any shared memory large block at all, at least one index in the third linked list is determined, in the order of the linked indexes in the third linked list, as the index of the target shared memory large block to be allocated to the process.
Exemplarily, "the second linked list contains no index of any shared memory large block" covers two cases. On the one hand, when the process runs for the first time, all shared memory small blocks in the shared memory large blocks are free, so the second linked list is empty, i.e. it stores no index. On the other hand, the second linked list may store only some other marker, for example the marker 0, which does not represent the index of any shared memory large block.
Exemplarily, determining at least one index in the third linked list in the order of its linked indexes means: owing to the process's own requirements, a single shared memory large block may not hold all the data the process wants to write, and two or even more shared memory large blocks may be needed; therefore at least one index needs to be determined in the third linked list here.
Exemplarily, the step in which the process looks up, in the second linked list and/or the third linked list, the index of the target shared memory large block to be allocated to the process in step 104 may also specifically include the following:
104c. The process searches the third linked list directly, in the order of its linked indexes, for the index of the target shared memory large block to be allocated to the process.
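The lookup order, partial list first and free list as a fallback, can be sketched as follows on top of the index-linked lists shown earlier (the list layout and the value -1 meaning "a new block must be created" are assumptions of the sketch):

```c
#include <stdio.h>

enum { FULL = 0, PARTIAL = 1, FREE = 2, NODE_COUNT = 8 };

struct shm_cache_node { int preNode, nxtNode; };
static struct shm_cache_node nodes[NODE_COUNT];

/* Pick a target large block: prefer a partially used block (to keep fragmentation
   down), otherwise take a completely free one; -1 means a new block must be created. */
static int pick_target_block(void)
{
    if (nodes[PARTIAL].nxtNode != PARTIAL)   /* partial list is not empty */
        return nodes[PARTIAL].nxtNode;
    if (nodes[FREE].nxtNode != FREE)         /* free list is not empty */
        return nodes[FREE].nxtNode;
    return -1;
}

int main(void)
{
    for (int i = 0; i < NODE_COUNT; i++) { nodes[i].preNode = nodes[i].nxtNode = i; }
    nodes[FREE].nxtNode = 4; nodes[4].preNode = FREE;   /* hand-built: free list holds block 4 */
    nodes[4].nxtNode = FREE; nodes[FREE].preNode = 4;
    printf("target large block: %d\n", pick_target_block());
    return 0;
}
```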
Compared with the prior art, in the management method for shared memory in a multi-process system provided by the embodiment of the present invention, the process first creates a shared memory control block in the shared memory, the shared memory control block comprising linked lists of three different states, wherein: all shared memory small blocks in the shared memory large blocks on the first linked list are occupied, part of the shared memory small blocks in the shared memory large blocks on the second linked list are occupied, and all shared memory small blocks in the shared memory large blocks on the third linked list are free. Secondly, the process creates a mapping table, which stores the mapping relationship between the indexes of shared memory large blocks and their starting addresses. When allocating a shared memory small block for itself, the process looks up the index of a target shared memory large block directly in the second linked list and/or the third linked list, and looks up the starting address of the target shared memory large block in the mapping table, so as to write data into consecutive free shared memory small blocks in the target shared memory large block. This avoids the prior-art need to traverse and check every free memory block in turn, saving the time spent when allocating memory blocks to a process and thus improving the operating speed of the system.
Optionally, if the process finds the index of the target shared memory large block to be allocated to the process in the second linked list, data now needs to be written into the shared memory small blocks of that target shared memory large block on the second linked list. Specifically, the method further includes the following:
A1. The process determines the starting address of a free shared memory small block according to the starting address of the target shared memory large block found in the mapping table, the size of the free-block management area in the target shared memory large block, and the number of shared memory small blocks already occupied in the target shared memory large block.
A2. The process starts writing data from the starting address of the free shared memory small block, and increases the value of the counter in the shared memory large block by x.
Here, x is the number of free shared memory small blocks that become occupied. For example, if the process writes data starting from the starting address of the free shared memory small block and the data occupies 3 free shared memory small blocks, the value of the counter in the shared memory large block now needs to be increased by 3.
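The address computation and the counter update described above can be sketched as follows; the management-area size, small-block size and in-memory layout are assumptions chosen only to make the example self-contained:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SMALL_SIZE   64    /* small block size (assumed)                      */
#define SMALL_COUNT  64    /* small blocks per large block (assumed)          */
#define MGMT_SIZE    256   /* size of the free-block management area (assumed) */

struct shm_node_hdr {
    int inUseCnt;          /* counter: how many small blocks are in use */
};

/* Address formula from the text:
   shmNodeAddr + management-area size + index * small-block size. */
static void *small_block_addr(void *shmNodeAddr, int index)
{
    return (char *)shmNodeAddr + MGMT_SIZE + (size_t)index * SMALL_SIZE;
}

int main(void)
{
    /* Stand-in for one mapped large block: management area followed by data area;
       the header is assumed to sit at the start of the management area. */
    void *shmNodeAddr = calloc(1, MGMT_SIZE + SMALL_COUNT * SMALL_SIZE);
    struct shm_node_hdr *hdr = shmNodeAddr;

    int first_free = 5;                          /* index found in the free area   */
    int x = 3;                                   /* consecutive small blocks taken */
    void *dst = small_block_addr(shmNodeAddr, first_free);
    memset(dst, 0xAB, (size_t)x * SMALL_SIZE);   /* "write data" into them         */
    hdr->inUseCnt += x;                          /* counter grows by x             */

    printf("wrote %d small blocks starting at offset %ld, inUseCnt=%d\n",
           x, (long)((char *)dst - (char *)shmNodeAddr), hdr->inUseCnt);
    free(shmNodeAddr);
    return 0;
}
```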
After the process has allocated shared memory small blocks as above, the state of the shared memory large block in which those small blocks are located changes, so the shared memory large block indexes linked on the three linked lists need to be updated. Optionally, the method further includes:
B1. If all shared memory small blocks in a second shared memory large block on the second linked list become occupied, the process links the index of that second shared memory large block onto the first linked list; or
B2. If all shared memory small blocks in a third shared memory large block on the third linked list become occupied, the process links the index of that third shared memory large block onto the first linked list; or
B3. If part of the shared memory small blocks in a third shared memory large block on the third linked list become occupied, the process links the index of that third shared memory large block onto the second linked list.
In order to satisfy the memory demands of other processes, the shared memory large blocks that have been allocated also need to be returned to the system. Optionally, the method further includes the following:
C1. The process releases shared memory small blocks in a target shared memory large block allocated to the process, and decreases the value of the counter in that shared memory large block by y.
C2. If the value of the counter in the shared memory large block drops to 0, the process deletes the shared memory large block within this process.
Here, y is the number of shared memory small blocks released. For example, if the process releases 2 shared memory small blocks in the target shared memory large block, the value of the counter in the shared memory large block now needs to be decreased by 2.
Preferably, in order to ensure normal operation of the device, the shared memory small blocks in the above target shared memory large block need to be released quickly. In step C1, the step in which the process releases the shared memory small blocks in the target shared memory large block allocated to the process specifically includes the following:
D1. The starting address of the target shared memory large block is obtained by locating it according to the shared memory large block index recorded in the shared memory small block.
D2. Starting from the starting address of the target shared memory large block, the shared memory small blocks in the target shared memory large block that were allocated to the process are marked as free.
Exemplarily, releasing a shared memory small block in the target shared memory large block as above can be done by directly adding the index of the shared memory small block to the free-block management area in the target shared memory large block. Since the free-block management area stores exactly the indexes of the free shared memory small blocks, only the control information in the free-block management area needs to be modified on release, to indicate that the shared memory small block is free and available again after release.
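A minimal sketch of this release path follows; the free-block management area is modelled here as a simple stack of free indexes, which is a simplification of the linked free area described in the embodiment below:

```c
#include <stdio.h>

#define SMALL_COUNT 64   /* small blocks per large block (assumed) */

/* Free-block management area modelled as a stack of free small-block indexes;
   the real layout (a linked nxtFree array in the patent) is simplified here. */
struct shm_node_mgmt {
    int inUseCnt;                /* small blocks currently in use       */
    int freeTop;                 /* number of entries on the free stack */
    int freeIdx[SMALL_COUNT];    /* indexes of the free small blocks    */
};

/* Release y small blocks back to their large block. */
static void release_small_blocks(struct shm_node_mgmt *node,
                                 const int *indexes, int y)
{
    for (int i = 0; i < y; i++)
        node->freeIdx[node->freeTop++] = indexes[i];  /* mark block free again */
    node->inUseCnt -= y;                              /* counter shrinks by y  */
    if (node->inUseCnt == 0) {
        /* All small blocks are free again: the process can unmap this large
           block locally and, if nobody uses it, ask for it to be destroyed. */
        printf("large block is completely free, can be deleted\n");
    }
}

int main(void)
{
    struct shm_node_mgmt node = { .inUseCnt = 2, .freeTop = 0 };
    int released[] = { 5, 6 };
    release_small_blocks(&node, released, 2);
    printf("inUseCnt=%d, free entries=%d\n", node.inUseCnt, node.freeTop);
    return 0;
}
```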
After the process has released shared memory small blocks in the target shared memory large block as above, the state of the target shared memory large block in which those small blocks are located changes, so the shared memory large block indexes linked on the three linked lists need to be updated. Optionally, the method further includes:
E1. If all shared memory small blocks in the target shared memory large block have been released, the process links the index of the target shared memory large block onto the third linked list.
E2. If part of the shared memory small blocks in the target shared memory large block have been released, the process links the index of the target shared memory large block onto the second linked list.
Based on the shared memory management method in the multi-process system shown in Fig. 1, and with reference to the shared memory management schematic diagram shown in Fig. 2, the concrete management process of this solution is described below.
The basic idea of the unified shared memory management includes the following. First, the shared memory cache shm_cache maintains the state of all shared memory large blocks (shm_node) in the process, covering three states: full (the list head is at array index 0; all shared memory small blocks of the shm_nodes managed on this list have been allocated), partial (the list head is at array index 1; part of the shared memory small blocks of the shm_nodes managed on this list are still unallocated), and free (the list head is at array index 2; none of the shared memory small blocks of the shm_nodes managed on this list have been allocated). When a service process needs to allocate a shared memory small block, it preferentially allocates from the partial list.
Secondly, the shared memory small blocks are what actually get allocated to the service processes. The details are as follows (the shared memory creation interfaces use the system's native interfaces, such as mmap or shmget on a Linux system; the systems of any multi-process environment that support a shared memory mechanism fall within the protection scope of this invention; a Linux system is used here for illustration):
(1) Creation of the shared memory control block (hereinafter shm_cache): after starting, every service process creates shm_cache by default, and each process creates this shared memory based on a specified file (for example /shm/shm_cache under a Linux system; a key value can be generated from this file, and every service process creates the same shared memory because it uses the same file) and saves the mapped address in the service-process-local variable shmCacheAddr (label 1 in Fig. 2). The number of shared memory large blocks (hereinafter shm_node) managed by shm_cache (corresponding to the shared memory large block nodes in the figure, hereinafter shm_cache_node; each shm_cache_node corresponds to one shm_node) is created in one go, for example 2048 of them (of course, the number of managed shm_nodes may also grow dynamically; for ease of explanation dynamic growth is not described here, but it also belongs to this solution).
(2) How shm_cache manages shm_node: as shown by label 2 in Fig. 2, taking the free list as an example, the preNode value of the free list head is 6, indicating that the previous of the two adjacent nodes is array index 6, and its nxtNode value is 5, indicating that the next of the two adjacent nodes is array index 5; the preNode value of the shm_cache_node at array index 5 is 2, indicating that the previous of the two adjacent nodes is array index 2, and its nxtNode value is 6, indicating that the next of the two adjacent nodes is array index 6; and so on. That is, blocks in the same state are linked to one another by array index values, forming a doubly circular linked list. In each service process, the address of a specified shm_cache_node can be obtained from the mapped shared memory pointer value shmCacheAddr plus these indexes, so that its member values can be accessed.
(3) Creation of the mapping table (hereinafter shm_map): shm_map is stored in memory allocated separately by each service process (not in shared memory); its main purpose is to associate shm_cache_node with shm_node. The shmKey in shm_cache_node (example values 0 to 2047) serves two purposes: first, it is the key value of the shm_map hash table; second, it is used to generate a file name (such as /shm/shm_node_shmKey; this file is used for the key value of the shared memory creation, and every process can create the same shared memory based on this file). The service process creates a shared memory with this file, and this shared memory is used to store the free-block management area and the data area of the shm_node. The service process stores the mapped shared memory address as shmNodeAddr in shm_map, as shown by label 6 in Fig. 2.
(4) Free-block management area: it manages all unallocated shared memory small blocks. nxtFree in the shm_node points at the index of the next available block, and the control area corresponding to that next available block in turn points at the index of the block available after it, as shown by label 7 in Fig. 2. That is, the mechanism is similar to the way shm_cache manages shm_node, except that here it is a one-way array linked list.
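A minimal sketch of this one-way array free list follows (the field name nxtFree follows the text; the tiny block count and the value -1 for "no free block" are assumptions made for illustration):

```c
#include <stdio.h>

#define SMALL_COUNT 8    /* small blocks per large block (kept tiny for the demo) */
#define IDX_NONE   -1

/* Per-large-block free area: nxtFree points at the first free small block, and
   each entry of next[] points at the following free one (a one-way array list). */
struct free_area {
    int nxtFree;
    int next[SMALL_COUNT];
};

static void free_area_init(struct free_area *fa)
{
    for (int i = 0; i < SMALL_COUNT; i++)
        fa->next[i] = (i + 1 < SMALL_COUNT) ? i + 1 : IDX_NONE;
    fa->nxtFree = 0;
}

/* Pop one free small-block index, or IDX_NONE if the block is full. */
static int alloc_small(struct free_area *fa)
{
    int idx = fa->nxtFree;
    if (idx != IDX_NONE)
        fa->nxtFree = fa->next[idx];
    return idx;
}

/* Push a small-block index back onto the free list. */
static void free_small(struct free_area *fa, int idx)
{
    fa->next[idx] = fa->nxtFree;
    fa->nxtFree = idx;
}

int main(void)
{
    struct free_area fa;
    free_area_init(&fa);
    int a = alloc_small(&fa), b = alloc_small(&fa);
    free_small(&fa, a);
    printf("allocated %d and %d, next free is now %d\n", a, b, fa.nxtFree);
    return 0;
}
```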
(5) Shared memory small block area: the shared memory large block is divided into fixed-size pieces according to the needs of the service process, for example to hold memory allocation information of each service process (process identifier, user information of the memory user, number of allocations, total size allocated; there are multiple such records, and the private information of each kind of service process can correspond to one shm_cache). In this way, the management process can display the memory usage information of a specified service process by traversing this table, which makes it easier to localize memory leak problems.
This is only one display method, illustrated with an example; there can be other methods for displaying this information, and those other methods also fall within the protection scope of the present invention. For example, the index information in shm_map can be bound to each service process, while a given shm_node is restricted to use by only one service process; in this way, when the management process wants to display the private data of a specific service process, it can obtain the shm_node by index in one step and then display the data stored in all allocated shared memory small blocks on that shm_node.
(6) Shared memory small block allocation flow: ① find, through shm_cache, a shm_cache_node from which a shared memory small block can be allocated, obtain its shmKey value, obtain the shm_node starting address by hashing shmKey, find the index of a free shared memory small block in the shm_node, and compute the address to be returned from the starting address of the shm_node (shmNodeAddr + free management area size + index * memory block size); inUseCnt in the shm_node is increased by 1. ② If the lookup by shmKey in step ① fails, this means that this shm_node was created by another service process, or that no service process has created this shm_node yet; in this case the current service process needs to perform the mapping action to obtain the address (the key value being /shm/shm_node_shmKey) and then record this address in the mapping table shm_map; the rest of the flow is the same as ①. ③ When no shared memory large block node from which a shared memory small block can be allocated can be obtained through shm_cache, a new shared shm_node and data area need to be created and added to shm_cache for management, and at the same time the shared memory address is obtained by mapping in this service process; the rest of the flow is the same as ①. After a shared memory small block is successfully allocated, the value of the in-use counter in the shm_node is increased; the specific increase is the number of shared memory small blocks allocated.
(7) Shared memory small block release flow: there are two ways. ① According to the shared memory small block address passed in by the user, traverse shm_map and use shmNodeAddr together with the shared memory large block size (which can be preset) as a range to find the shared memory large block node in the specified mapping table, obtain from the mapping table the shmNodeAddr value of the shared memory large block in which the shared memory small block address is located, and set the control area of the corresponding shared memory small block in the shm_node to free. ② To speed up the lookup, the shmKey value is recorded in the shared memory small block, and shmNodeAddr is obtained by locating via shmKey, trading space for time. After a successful release, the value of the in-use counter in the shm_node is decreased; the specific decrease is the number of shared memory small blocks released. If the count drops to 0, the shm_node and its data area are deleted within this service process. If no process is using any of the shared memory small blocks managed by this shm_node, i.e. inUseCnt in the shm_node is 0, the management process is notified to delete the specified shm_node and data area.
(8) Operation protection principle: operations on shm_cache and shm_node require mutual exclusion among the service processes, and operations on the shared memory small blocks can also be protected; for this protection, the native semget mechanism of the Linux system can be considered. Operations on shm_map need exclusive access among the threads inside a process, for which the native thread mutex mechanism pthread_mutex_t can be considered.
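As an illustration of the semget-based protection mentioned above, a binary System V semaphore can serve as the inter-process mutex; the sketch below is simplified (arbitrary key, no error handling, and the initialization race between the first two processes is glossed over):

```c
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/sem.h>

/* glibc requires the caller to define union semun. */
union semun { int val; struct semid_ds *buf; unsigned short *array; };

static void sem_lock(int id)
{
    struct sembuf op = { .sem_num = 0, .sem_op = -1, .sem_flg = SEM_UNDO };
    semop(id, &op, 1);
}

static void sem_unlock(int id)
{
    struct sembuf op = { .sem_num = 0, .sem_op = +1, .sem_flg = SEM_UNDO };
    semop(id, &op, 1);
}

int main(void)
{
    /* One System V semaphore used as a mutex around shm_cache / shm_node updates. */
    int id = semget((key_t)0x4D5055, 1, IPC_CREAT | 0600);
    if (id < 0)
        return 1;
    union semun arg = { .val = 1 };          /* value 1 => binary semaphore */
    semctl(id, 0, SETVAL, arg);

    sem_lock(id);
    /* ... move a shm_cache_node between the full / partial / free lists ... */
    sem_unlock(id);

    semctl(id, 0, IPC_RMID);                 /* remove the demo semaphore */
    printf("critical section protected by the semaphore is done\n");
    return 0;
}
```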
With reference to Fig. 3, which shows shared memory access between the management process and the service processes, a shared memory management procedure (the procedure of Fig. 2 above) exists in every service process and in the management process. When a service process needs to write data into the shared memory, it must first invoke this shared memory management procedure and then write the data into the shared memory. Likewise, when the management process reads the data in the shared memory, it must also first invoke the shared memory management procedure and then read the data in the shared memory. In this way, multiple service processes share their private data with the management process by means of the shared memory.
Based on the related description in the embodiment of the shared memory management method in the multi-process system shown in Fig. 1, a shared memory management device in a multi-process system provided by an embodiment of the present invention is introduced below. For the explanation of technical terms and concepts related to the above embodiment, refer to the above embodiment; it is not repeated here.
A management device for shared memory in a multi-process system provided by an embodiment of the present invention is shown in Fig. 4. The device 2 includes a first creation module 21, a second creation module 22 and an allocation module 23, wherein:
the first creation module 21 is configured to create a shared memory based on a preset key value, the shared memory comprising M shared memory large blocks of identical size, each shared memory large block comprising N shared memory small blocks; both M and N are greater than or equal to 2;
the first creation module 21 is further configured to create a shared memory control block in the shared memory, the shared memory control block comprising a first linked list composed of the linked indexes of first shared memory large blocks, a second linked list composed of the linked indexes of second shared memory large blocks, and a third linked list composed of the linked indexes of third shared memory large blocks, wherein all shared memory small blocks in a first shared memory large block are occupied, part of the shared memory small blocks in a second shared memory large block are occupied, and all shared memory small blocks in a third shared memory large block are free;
the second creation module 22 is configured to create a mapping table in the private memory of the process; the mapping table is used to indicate the mapping relationship between the index of a shared memory large block and the starting address of that shared memory large block;
the allocation module 23 is configured to look up, in the second linked list and/or the third linked list, the index of a target shared memory large block to be allocated to the process, and to look up the starting address of the target shared memory large block in the mapping table, so that data is written into consecutive free shared memory small blocks in the target shared memory large block.
Preferably, in order to reduce the amount of memory fragmentation occurring in the shared memory, when looking up, in the second linked list and/or the third linked list in the shared memory control block, the index of the target shared memory large block to be allocated to the process, the above allocation module 23 is specifically configured to:
search the second linked list, in the order of its linked indexes, for the index of the target shared memory large block to be allocated to the process, until one is found;
if the second linked list contains no index of a target shared memory large block that can be allocated to the process, or the second linked list contains no index of any shared memory large block, determine at least one index in the third linked list, in the order of the linked indexes in the third linked list, as the index of the target shared memory large block to be allocated to the process.
Optionally, if the process finds the index of the target shared memory large block to be allocated to the process in the second linked list, data now needs to be written into the shared memory small blocks of that target shared memory large block on the second linked list. As shown in Fig. 5, the device 2 further includes a determination module 24 and a storage module 25, wherein:
the determination module 24 is configured to determine the starting address of a free shared memory small block according to the starting address of the target shared memory large block found in the mapping table, the size of the free-block management area in the target shared memory large block, and the number of shared memory small blocks already occupied in the target shared memory large block;
the storage module 25 is configured to start writing data from the starting address of the free shared memory small block and to increase the value of the counter in the shared memory large block by x, x being the number of free shared memory small blocks that become occupied.
After the process has allocated shared memory small blocks as above, the state of the shared memory large block in which those small blocks are located changes, so the shared memory large block indexes linked on the three linked lists need to be updated. Optionally, as shown in Fig. 5, the device 2 further includes an update module 26, wherein:
the update module 26 is configured to:
link the index of a second shared memory large block onto the first linked list if all shared memory small blocks in that second shared memory large block on the second linked list become occupied; or
link the index of a third shared memory large block onto the first linked list if all shared memory small blocks in that third shared memory large block on the third linked list become occupied; or
link the index of a third shared memory large block onto the second linked list if part of the shared memory small blocks in that third shared memory large block on the third linked list become occupied.
In order to satisfy the memory demands of other processes, the shared memory large blocks that have been allocated also need to be returned to the system. Optionally, as shown in Fig. 5, the device 2 further includes a release module 27, wherein:
the release module 27 is configured to release shared memory small blocks in a target shared memory large block allocated to the process and to decrease the value of the counter in the shared memory large block by y, y being the number of shared memory small blocks released; and is further configured to delete the shared memory large block within this process if the value of the counter in the shared memory large block drops to 0.
Exemplarily, when releasing shared memory small blocks in the target shared memory large block allocated to the process, the above release module 27 is specifically configured to:
obtain the starting address of the target shared memory large block by locating it according to the shared memory large block index recorded in the shared memory small block;
starting from the starting address of the target shared memory large block, mark the shared memory small blocks in the target shared memory large block that were allocated to the process as free.
After the process has released shared memory small blocks in the target shared memory large block as above, the state of the target shared memory large block in which those small blocks are located changes, so the shared memory large block indexes linked on the three linked lists need to be updated. Exemplarily, the above update module 26 is further configured to:
link the index of the target shared memory large block onto the third linked list if all shared memory small blocks in the target shared memory large block have been released;
link the index of the target shared memory large block onto the second linked list if part of the shared memory small blocks in the target shared memory large block have been released.
Compared with the prior art, in the management device for shared memory in a multi-process system provided by the embodiment of the present invention, the process first creates a shared memory control block in the shared memory, the shared memory control block comprising linked lists of three different states, wherein: all shared memory small blocks in the shared memory large blocks on the first linked list are occupied, part of the shared memory small blocks in the shared memory large blocks on the second linked list are occupied, and all shared memory small blocks in the shared memory large blocks on the third linked list are free. Secondly, the process creates a mapping table, which stores the mapping relationship between the indexes of shared memory large blocks and their starting addresses. When allocating a shared memory small block for itself, the process looks up the index of a target shared memory large block directly in the second linked list and/or the third linked list, and looks up the starting address of the target shared memory large block in the mapping table, so as to write data into consecutive free shared memory small blocks in the target shared memory large block. This avoids the prior-art need to traverse and check every free memory block in turn, saving the time spent when allocating memory blocks to a process and thus improving the operating speed of the system.
From the above description of the embodiments, those skilled in the art can clearly understand that, for convenience and brevity of description, only the division into the above functional modules is used as an example. In practical applications, the above functions may be assigned to different functional modules as needed, i.e. the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. For the specific working process of the system, device and units described above, reference may be made to the corresponding process in the foregoing method embodiment, which is not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed shared memory management device in a multi-process system may be implemented in other ways. For example, the device embodiment described above is only schematic; the division of the modules or units is merely a division by logical function, and there may be other ways of division in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field could readily think of changes or substitutions within the technical scope disclosed by the present invention, and these should all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the scope of the claims.

Claims (14)

1. A management method for shared memory in a multi-process system, characterized in that the method comprises:
a process creates a shared memory based on a preset key value, the shared memory comprising M shared memory large blocks of identical size, each shared memory large block comprising N shared memory small blocks; both M and N are greater than or equal to 2;
the process creates a shared memory control block in the shared memory, the shared memory control block comprising a first linked list composed of the linked indexes of first shared memory large blocks, a second linked list composed of the linked indexes of second shared memory large blocks, and a third linked list composed of the linked indexes of third shared memory large blocks, wherein all shared memory small blocks in a first shared memory large block are occupied, part of the shared memory small blocks in a second shared memory large block are occupied, and all shared memory small blocks in a third shared memory large block are free;
the process creates a mapping table in its private memory; the mapping table is used to indicate the mapping relationship between the index of a shared memory large block and the starting address of that shared memory large block;
the process looks up, in the second linked list and/or the third linked list, the index of a target shared memory large block to be allocated to the process, and looks up the starting address of the target shared memory large block in the mapping table, so as to write data into consecutive free shared memory small blocks in the target shared memory large block.
2. The method according to claim 1, characterized in that the step in which the process looks up, in the second linked list and/or the third linked list in the shared memory control block, the index of the target shared memory large block to be allocated to the process specifically comprises:
the process searches the second linked list, in the order of its linked indexes, for the index of the target shared memory large block to be allocated to the process, until one is found;
if the second linked list contains no index of a target shared memory large block that can be allocated to the process, or the second linked list contains no index of any shared memory large block, at least one index in the third linked list is determined, in the order of the linked indexes in the third linked list, as the index of the target shared memory large block to be allocated to the process.
3. The method according to claim 2, characterized in that, if the process finds the index of the target shared memory large block to be allocated to the process in the second linked list, the method further comprises:
the process determines the starting address of a free shared memory small block according to the starting address of the target shared memory large block found in the mapping table, the size of the free-block management area in the target shared memory large block, and the number of shared memory small blocks already occupied in the target shared memory large block;
the process starts writing data from the starting address of the free shared memory small block, and increases the value of the counter in the shared memory large block by x, x being the number of free shared memory small blocks that become occupied.
4. The method according to claim 1 or 2, characterized in that the method further comprises:
if all shared memory small blocks in a second shared memory large block on the second linked list become occupied, the process links the index of that second shared memory large block onto the first linked list; or
if all shared memory small blocks in a third shared memory large block on the third linked list become occupied, the process links the index of that third shared memory large block onto the first linked list; or
if part of the shared memory small blocks in a third shared memory large block on the third linked list become occupied, the process links the index of that third shared memory large block onto the second linked list.
5. The method according to claim 1, characterized in that it further comprises:
the process releases shared memory small blocks in a target shared memory large block allocated to the process, and decreases the value of the counter in the shared memory large block by y, y being the number of shared memory small blocks released;
if the value of the counter in the shared memory large block drops to 0, the process deletes the shared memory large block within this process.
6. The method according to claim 5, characterized in that the step in which the process releases shared memory small blocks in the target shared memory large block allocated to the process specifically comprises:
obtaining the starting address of the target shared memory large block by locating it according to the shared memory large block index recorded in the shared memory small block;
starting from the starting address of the target shared memory large block, marking the shared memory small blocks in the target shared memory large block that were allocated to the process as free.
7. The method according to claim 5 or 6, characterized in that the method further comprises:
if all shared memory small blocks in the target shared memory large block have been released, the process links the index of the target shared memory large block onto the third linked list;
if part of the shared memory small blocks in the target shared memory large block have been released, the process links the index of the target shared memory large block onto the second linked list.
8. A shared memory management device in a multi-process system, characterised in that the device comprises:
A first creation module, configured to create a shared memory based on a preset key value, wherein the shared memory comprises M large shared memory blocks of identical size, each large shared memory block comprises N small shared memory blocks, and M and N are both greater than or equal to 2;
The first creation module is further configured to create a shared memory control block in the shared memory, wherein the shared memory control block comprises a first linked list formed by indexes of first large shared memory blocks, a second linked list formed by indexes of second large shared memory blocks, and a third linked list formed by indexes of third large shared memory blocks, wherein all small shared memory blocks in a first large shared memory block are occupied, part of the small shared memory blocks in a second large shared memory block are occupied, and all small shared memory blocks in a third large shared memory block are idle;
A second creation module, configured to create a mapping relation table in the private memory of the process, wherein the mapping relation table indicates the mapping relation between the index of a large shared memory block and the start address of the large shared memory block;
An allocation module, configured to search the second linked list and/or the third linked list for the index of the target large shared memory block allocated to the process, and to look up the start address of the target large shared memory block in the mapping relation table, so that data is written into continuous idle small shared memory blocks in the target large shared memory block.
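To make the two creation modules concrete, the sketch below sets up one shared memory region from a preset key value, a control block carrying the three index-linked lists, and a per-process mapping relation table in private memory. The use of System V shmget()/shmat(), the key value, and all sizes and names are assumptions of the sketch, not requirements of the claim.

```c
#include <stddef.h>
#include <stdint.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define M 64u                             /* number of large blocks (assumed)       */
#define N 128u                            /* small blocks per large block (assumed) */
#define SMALL_BLOCK_SIZE 256u             /* size of one small block (assumed)      */
#define IDX_NONE 0xFFFFFFFFu
#define SHM_KEY 0x4D505531                /* preset key value (assumed)             */

typedef struct {                          /* shared memory control block            */
    uint32_t full_head;                   /* first linked list: fully occupied      */
    uint32_t partial_head;                /* second linked list: partly occupied    */
    uint32_t free_head;                   /* third linked list: fully idle          */
    uint32_t next[M];                     /* index links between large blocks       */
} shm_ctrl_block;

typedef struct {                          /* private, per-process mapping table     */
    void *start_addr[M];                  /* large-block index -> local address     */
} mapping_table_t;

/* Create or attach the shared memory from the preset key and fill the
 * per-process mapping relation table from the local attachment address. */
static void *attach_shared_memory(mapping_table_t *map)
{
    size_t big_block_size = (size_t)N * SMALL_BLOCK_SIZE;
    size_t total = sizeof(shm_ctrl_block) + (size_t)M * big_block_size;

    int id = shmget((key_t)SHM_KEY, total, IPC_CREAT | 0666);
    if (id < 0)
        return NULL;

    uint8_t *base = (uint8_t *)shmat(id, NULL, 0);
    if (base == (void *)-1)
        return NULL;

    for (uint32_t i = 0; i < M; i++)      /* same index, per-process address        */
        map->start_addr[i] = base + sizeof(shm_ctrl_block) + i * big_block_size;

    return base;
}
```

Because the linked lists hold indexes rather than pointers, the control block stays meaningful regardless of the address at which each process happens to attach the region; only the private mapping table is per-process.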
9. The device according to claim 8, characterised in that, when searching the second linked list and/or the third linked list of the shared memory control block for the index of the target large shared memory block allocated to the process, the allocation module is specifically configured to:
Search the second linked list, in the order of the indexes linked in the second linked list, for the index of the target large shared memory block allocated to the process, until the index is found;
If the second linked list contains no index of a target large shared memory block that can be allocated to the process, or contains no index of any large shared memory block, determine at least one index in the third linked list, in the order of the indexes linked in the third linked list, as the index of the target large shared memory block allocated to the process.
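The search order of this claim can be sketched as follows: walk the second linked list in link order first, and fall back to the third linked list only when the second holds no suitable index. The control block here extends the earlier illustrative one with a per-block count of remaining idle small blocks, an assumption used only for the suitability check.

```c
#include <stdint.h>

#define IDX_NONE   0xFFFFFFFFu
#define MAX_BLOCKS 64u

typedef struct {
    uint32_t full_head;                   /* first linked list  */
    uint32_t partial_head;                /* second linked list */
    uint32_t free_head;                   /* third linked list  */
    uint32_t next[MAX_BLOCKS];            /* index links        */
    uint32_t idle_small[MAX_BLOCKS];      /* idle small blocks left per large block */
} shm_ctrl_block;

/* Index of a target large block able to hold `want` consecutive small blocks:
 * the second linked list is searched first, in the order of its index links,
 * then the head of the third linked list is taken as a fallback. */
static uint32_t find_target_block(const shm_ctrl_block *cb, uint32_t want)
{
    for (uint32_t i = cb->partial_head; i != IDX_NONE; i = cb->next[i])
        if (cb->idle_small[i] >= want)
            return i;

    return cb->free_head;                 /* IDX_NONE when no block is available */
}
```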
10. The device according to claim 9, characterised in that, if the index of the target large shared memory block allocated to the process is found in the second linked list, the device further comprises:
A determining module, configured to determine the start address of the idle small shared memory block according to the start address of the target large shared memory block found in the mapping relation table, the size of the free management area in the target large shared memory block, and the number of occupied small shared memory blocks in the target large shared memory block;
A storage module, configured to start writing data from the start address of the idle small shared memory block, and to increase the value of the counter in the large shared memory block by x, where x is the number of idle small shared memory blocks that become occupied.
11. The device according to claim 8 or 9, characterised in that the device further comprises:
An update module, wherein the update module is configured to:
If all small shared memory blocks in a second large shared memory block in the second linked list become occupied, link the index of that second large shared memory block into the first linked list; or
If all small shared memory blocks in a third large shared memory block in the third linked list become occupied, link the index of that third large shared memory block into the first linked list; or
If part of the small shared memory blocks in a third large shared memory block in the third linked list become occupied, link the index of that third large shared memory block into the second linked list.
12. The device according to claim 8, characterised in that the device further comprises:
A release module, configured to release small shared memory blocks in the target large shared memory block allocated to the process and decrease the value of the counter in the large shared memory block by y, where y is the number of small shared memory blocks that are released; and further configured to, if the value of the counter in the large shared memory block drops to 0, delete the large shared memory block from this process.
13. The device according to claim 12, characterised in that, when releasing the small shared memory blocks in the target large shared memory block allocated to the process, the release module is specifically configured to:
Locate the start address of the target large shared memory block according to the index of the large shared memory block recorded in the small shared memory block;
Starting from the start address of the target large shared memory block, set the small shared memory blocks allocated to the process in the target large shared memory block to idle.
14. The device according to claim 12 or 13, characterised in that the update module is further configured to:
If all small shared memory blocks in the target large shared memory block are released, link the index of the target large shared memory block into the third linked list;
If part of the small shared memory blocks in the target large shared memory block are released, link the index of the target large shared memory block into the second linked list.
CN201710038314.4A 2017-01-18 2017-01-18 Management method and device for sharing memory in multi-process system Pending CN106681842A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710038314.4A CN106681842A (en) 2017-01-18 2017-01-18 Management method and device for sharing memory in multi-process system

Publications (1)

Publication Number Publication Date
CN106681842A 2017-05-17

Family

ID=58859663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710038314.4A Pending CN106681842A (en) 2017-01-18 2017-01-18 Management method and device for sharing memory in multi-process system

Country Status (1)

Country Link
CN (1) CN106681842A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101630992A (en) * 2008-07-14 2010-01-20 中兴通讯股份有限公司 Method for managing shared memory
CN102004675A (en) * 2010-11-11 2011-04-06 福建星网锐捷网络有限公司 Cross-process data transmission method, device and network equipment
CN105893269A (en) * 2016-03-31 2016-08-24 武汉虹信技术服务有限责任公司 Memory management method used in Linux system
CN106201719A (en) * 2016-07-05 2016-12-07 西北工业大学 The method and apparatus of management distributed task scheduling RapidIO shared drive

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107329833A (en) * 2017-07-03 2017-11-07 郑州云海信息技术有限公司 One kind realizes the continuous method and apparatus of internal memory using chained list
CN107329833B (en) * 2017-07-03 2021-02-19 苏州浪潮智能科技有限公司 Method and device for realizing memory continuity by using linked list
CN107656993A (en) * 2017-09-15 2018-02-02 上海斐讯数据通信技术有限公司 A kind of method and system for realizing that Adelson-Velskii-Landis tree uses between process
CN108038002A (en) * 2017-12-15 2018-05-15 天津津航计算技术研究所 A kind of embedded software EMS memory management process
CN108132842A (en) * 2017-12-15 2018-06-08 天津津航计算技术研究所 A kind of embedded software internal storage management system
CN108132842B (en) * 2017-12-15 2021-11-02 天津津航计算技术研究所 Embedded software memory management system
CN108038002B (en) * 2017-12-15 2021-11-02 天津津航计算技术研究所 Embedded software memory management method
CN108491278B (en) * 2018-03-13 2020-09-18 网宿科技股份有限公司 Method and network device for processing service data
CN108491278A (en) * 2018-03-13 2018-09-04 网宿科技股份有限公司 A kind of method and the network equipment of processing business data
CN110858162A (en) * 2018-08-24 2020-03-03 华为技术有限公司 Memory management method and device and server
CN110109763A (en) * 2019-04-12 2019-08-09 厦门亿联网络技术股份有限公司 A kind of shared-memory management method and device
CN110618883A (en) * 2019-09-26 2019-12-27 迈普通信技术股份有限公司 Method, device, equipment and storage medium for sharing memory linked list
CN110618883B (en) * 2019-09-26 2022-09-13 迈普通信技术股份有限公司 Method, device, equipment and storage medium for sharing memory linked list
CN110928680A (en) * 2019-11-09 2020-03-27 上交所技术有限责任公司 Order memory allocation method suitable for security trading system
CN110928680B (en) * 2019-11-09 2023-09-12 上交所技术有限责任公司 Order memory allocation method suitable for securities trading system
CN111092865A (en) * 2019-12-04 2020-05-01 全球能源互联网研究院有限公司 Security event analysis method and system
CN111078408A (en) * 2019-12-10 2020-04-28 Oppo(重庆)智能科技有限公司 Memory allocation method and device, storage medium and electronic equipment
CN113138859A (en) * 2020-01-17 2021-07-20 北京中软万维网络技术有限公司 General data storage method based on shared memory pool
CN111506436A (en) * 2020-03-25 2020-08-07 炬星科技(深圳)有限公司 Method for realizing memory sharing, electronic equipment and shared memory data management library
CN111506436B (en) * 2020-03-25 2024-05-14 炬星科技(深圳)有限公司 Method for realizing memory sharing, electronic equipment and shared memory data management library
CN111638976A (en) * 2020-05-16 2020-09-08 中信银行股份有限公司 Data transmission method and system based on shared memory
CN112328435A (en) * 2020-12-07 2021-02-05 武汉绿色网络信息服务有限责任公司 Method, device, equipment and storage medium for backing up and recovering target data
CN112328435B (en) * 2020-12-07 2023-09-12 武汉绿色网络信息服务有限责任公司 Method, device, equipment and storage medium for backing up and recovering target data
CN112668000A (en) * 2021-01-04 2021-04-16 新华三信息安全技术有限公司 Configuration data processing method and device
CN112668000B (en) * 2021-01-04 2023-06-13 新华三信息安全技术有限公司 Configuration data processing method and device
CN116225745A (en) * 2023-05-04 2023-06-06 北京万维盈创科技发展有限公司 Linux-based multi-process communication method and system

Similar Documents

Publication Publication Date Title
CN106681842A (en) Management method and device for sharing memory in multi-process system
US10409781B2 (en) Multi-regime caching in a virtual file system for cloud-based shared content
CN104915151B (en) A kind of memory excess distribution method that active is shared in multi-dummy machine system
JP4738038B2 (en) Memory card
CN103197979B (en) Method and device for realizing data interaction access among processes
CN103577345A (en) Methods and structure for improved flexibility in shared storage caching by multiple systems
CN106168885B (en) A kind of method and system of the logical volume dynamic capacity-expanding based on LVM
CN100517335C (en) Distributed file system file writing system and method
CN103761053B (en) A kind of data processing method and device
KR20170008153A (en) A heuristic interface for enabling a computer device to utilize data property-based data placement inside a nonvolatile memory device
CN107256196A (en) The caching system and method for support zero-copy based on flash array
TWI488118B (en) Signaling, ordering, and execution of dynamically generated tasks in a processing system
CN103678160A (en) Data storage method and device
CN104471524B (en) Storage system and storage controlling method
US20140032854A1 (en) Coherence Management Using a Coherent Domain Table
CN106527963A (en) Memory system and host apparatus
CN103218305B (en) The distribution method of memory space
WO2024078429A1 (en) Memory management method and apparatus, computer device, and storage medium
US20140244943A1 (en) Affinity group access to global data
CN113485832B (en) Method and device for carrying out distribution management on physical memory pool and physical memory pool
CN106104477A (en) Method and system for the object memory block of expanded application virtual machine
CN104111896B (en) Virtual memory management method and its device in big data processing
CN104317734A (en) Memory allocation method and device applicable to SLAB
CN109308269A (en) A kind of EMS memory management process and device
CN102567225A (en) Method and device for managing system memory

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 610041 15-24 floors, No. 288 Tianfu Third Street, Chengdu High-tech Zone, Sichuan Province

Applicant after: Maipu Communication Technologies Co., Ltd.

Address before: 610041 No. 16 Jiuxing Avenue, Chengdu High-tech Development Zone, Sichuan Province

Applicant before: Maipu Communication Technologies Co., Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20170517