KR20130003855A - Device and method for providing memory reclamation - Google Patents

Device and method for providing memory reclamation Download PDF

Info

Publication number
KR20130003855A
Authority
KR
South Korea
Prior art keywords
memory
shared data
accessor
block
active block
Prior art date
Application number
KR1020110065465A
Other languages
Korean (ko)
Inventor
신은환
김인혁
김태형
이동우
김정훈
엄영익
Original Assignee
성균관대학교산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 성균관대학교산학협력단 filed Critical 성균관대학교산학협력단
Priority to KR1020110065465A priority Critical patent/KR20130003855A/en
Publication of KR20130003855A publication Critical patent/KR20130003855A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • G06F12/0238Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F12/0246Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/064Management of blocks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multi Processors (AREA)

Abstract

PURPOSE: A memory reuse device and method are provided to minimize the waiting time for data processing and to efficiently release used memory areas when shared data is accessed by a plurality of memory accessors. CONSTITUTION: When a second memory accessor accesses the shared data while a first memory accessor is using the shared data through some of the objects belonging to a first active block, a memory allocation unit generates a duplicate of the shared data and allocates it to an unused object belonging to the first active block. When all of the objects are in use and another memory accessor accesses the shared data, an active block setting unit(120) generates a second active block, sets it as the active block, and sets the first active block as an inactive block. [Reference numerals] (110) Memory allocation unit; (120) Active block setting unit; (130) Memory accessor management unit; (140) Memory release unit; (200) Memory area; (210) Block header; (230) Block N; (AA) Block 1; (BB) Block 2

Description

METHOD AND DEVICE FOR MEMORY RECYCLING {DEVICE AND METHOD FOR PROVIDING MEMORY RECLAMATION}

The present invention relates to a memory reuse method and apparatus.

Recently, various attempts have been made to improve microprocessor performance. Representative among them are multi-processor and multi-threaded data processing, and various studies have been conducted on how to efficiently control access to shared data by a plurality of processors.

In this regard, a prior art document relating to a dynamic memory management method (Application No. 10-2004-0008397) discloses a technique for sequentially allocating objects to memory blocks and reusing released memory blocks.

With regard to the control of shared data, if one of a plurality of processors accesses a shared memory area to read or write data while another processor accesses the same shared memory area and writes to it, the data stored in that memory area may become incorrect, and an error may occur in the processing or in the processing results of the processors.

The most common and widely used technique for solving this problem is locking. Locking prevents conflicts over shared data by granting access only to the thread that first acquires the shared data, while the remaining threads wait until that thread finishes its work.

However, controlling shared data with locking has the problem that overall performance is degraded by the long waiting time until the locked data is unlocked.

Some embodiments of the present invention aim to provide a memory reuse apparatus and method that minimize the waiting time for data processing and efficiently release used memory areas when shared data is accessed by a plurality of memory accessors.

As a technical means for achieving the above-described technical objective, a memory reuse apparatus according to a first aspect of the present invention includes a memory allocation unit that, when a second memory accessor accesses the shared data while a first memory accessor is using the shared data through some of a plurality of objects belonging to a first active block, generates duplicate shared data for the shared data and allocates it to an unused object among the plurality of objects belonging to the first active block; and an active block setting unit that, when all of the objects belonging to the first active block are in use by memory accessors and another memory accessor accesses the shared data, generates a second active block, sets it as the active block, and sets the first active block as an inactive block.

In addition, a memory reuse method according to a second aspect of the present invention includes receiving an access request for the shared data from a second memory accessor while a first memory accessor is using the shared data through some of a plurality of objects belonging to a first active block, generating duplicate shared data for the shared data and allocating it to an unused object among the plurality of objects belonging to the first active block, and returning the address of that object to the second memory accessor. If the second memory accessor accesses the shared data when all of the objects belonging to the first active block are in use, a second active block may be generated and set as the active block, and the first active block may be set as an inactive block.

According to any one of the above-described solutions of the present invention, a memory reuse apparatus and method can be provided that minimize the waiting time for data processing and effectively release used memory areas when shared data is accessed by a plurality of memory accessors.

FIG. 1 is a block diagram illustrating a memory reuse apparatus according to an embodiment of the present invention.
FIG. 2 is a block diagram illustrating a memory access method of a memory accessor based on a memory area in a memory reuse apparatus according to an embodiment of the present invention.
FIG. 3 is a block diagram illustrating a block header in a memory reuse apparatus according to an embodiment of the present invention.
FIG. 4 is an exemplary diagram for explaining a shared data request of a memory accessor in a memory reuse apparatus according to an embodiment of the present invention.
FIG. 5 is an exemplary diagram for explaining the process of generating a new active block in a memory reuse apparatus according to an embodiment of the present invention.
FIG. 6 is an exemplary diagram for explaining the process by which a memory accessor references a new active block in a memory reuse apparatus according to an embodiment of the present invention.
FIG. 7 is an exemplary diagram for explaining the process of releasing memory in a batch in a memory reuse apparatus according to an embodiment of the present invention.
FIG. 8 is a flowchart illustrating a memory reuse method according to an embodiment of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art may easily practice the invention. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. In the drawings, parts irrelevant to the description are omitted in order to clearly describe the present invention, and like reference numerals designate like parts throughout the specification.

Throughout the specification, when a part is said to be "connected" to another part, this includes not only being "directly connected" but also being "electrically connected" with another element in between. Also, when a part is said to "comprise" an element, this means that it may further include other elements rather than excluding them, unless specifically stated otherwise.

In a conventional memory control apparatus, when a plurality of memory accessors 300 request access to shared data at the same time, only one processor is granted access to the shared data; another processor is granted access only after the first processor completes its work on the shared data, and in this process the memory access latency becomes long.

The memory reuse apparatus according to an embodiment of the present invention, when a plurality of memory accessors 300 request access to shared data at the same time, uses an algorithm that copies the shared data and provides the copies to the plurality of memory accessors 300 in order to minimize the memory access waiting time.

FIG. 1 is a block diagram illustrating a memory reuse apparatus according to an embodiment of the present invention.

The memory reuse apparatus according to an embodiment of the present invention may include a memory control unit 100, which includes a memory allocation unit 110, an active block setting unit 120, a memory accessor management unit 130, and a memory release unit 140, and a memory region 200, which includes a block header 210, an inactive block 220, and an active block 230.

The memory control unit 100 manages the shared data, the memory accessors referring to the shared data, and control information so as to minimize memory access latency when a plurality of memory accessors 300 access the shared data simultaneously. To this end, the memory control unit 100 may include the memory allocation unit 110, the active block setting unit 120, the memory accessor management unit 130, and the memory release unit 140.

When another memory accessor accesses the shared data while the shared data is being used by a memory accessor 300 through some of the plurality of objects belonging to the first active block, the memory allocation unit 110 may generate duplicate shared data for the shared data and allocate it to an unused object among the plurality of objects.

When all of the plurality of objects belonging to the first active block are in use by memory accessors 300 and another memory accessor 300 accesses the shared data, the active block setting unit 120 may generate a second active block, set it as the active block, and set the first active block as an inactive block.

The memory accessor management unit 130 may manage registration and release of the memory accessors 300 accessing the shared data.

After the plurality of memory accessors 300 have completed their access to the shared data, the memory release unit 140 may perform a memory release operation on the blocks in which the shared data was stored and used.

The memory area 200 is a storage space in which shared data is stored, and may include a block header in which information for controlling allocation and release of memory is stored and a plurality of blocks in which actual shared data is stored.

The block header 210 may store control information for controlling allocation and release of memory. The control information may include the address of the uppermost block of the memory area 200, the address of the active block, information on the memory accessors registered to refer to the shared data in the memory area 200, information on the memory accessors referring to the active block, the index of the object last accessed in the active block, and the index of the object last updated in the active block.

Specifically, the control information may include an allocation index, a replication index, a reference map, a management map, a header pointer, and a tail pointer.

The memory accessor 300 may perform operations such as reading, writing, and updating by accessing the shared data stored in the memory area 200. Examples of the memory accessor 300 include a process and a thread. In addition, depending on the type of operation to be performed, the memory accessor 300 may be a reader that performs a read operation by referring to the shared data or an updater that performs an update operation by referring to the shared data.

FIG. 2 is a block diagram illustrating a memory access method of a memory accessor based on a memory area in a memory reuse apparatus according to an embodiment of the present invention.

The memory area 200 may include an inactive block 220 and an active block 230. Here, the active block 230 refers to the block to which the object most recently referenced by a memory accessor 300 belongs, and an inactive block 220 refers to a block that is not the active block.

The blocks included in the memory area 200 are composed of a plurality of objects, and the plurality of blocks may be linked in a linked list structure. In the embodiment illustrated in FIG. 2, one block is composed of four objects. The configuration of four objects per block is merely an example, and the number of objects may vary according to the characteristics of the memory reuse apparatus and the hardware requirements.

Memory allocation is performed in chronological order. When the second memory accessor accesses the shared data while the first memory accessor is using it, the memory allocation unit 110 may create duplicate shared data for the shared data. The duplicated shared data may be allocated, by chronological memory allocation, to an unused object among the plurality of objects belonging to the first active block of the memory area 200, as shown in FIG. 2. The second memory accessor then receives the address of the object to which the duplicate was allocated and performs a read, write, update, or similar operation according to its operation type.
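To make the copy-on-access step concrete, the following C sketch shows one plausible layout of blocks and objects together with a duplicate-allocation helper. It assumes a fixed payload size and four objects per block (matching the FIG. 2 example); the names `object_t`, `block_t`, and `alloc_duplicate` are illustrative and do not come from the patent.

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

#define OBJECTS_PER_BLOCK 4      /* four objects per block, as in the FIG. 2 example */
#define OBJECT_SIZE       64     /* assumed fixed payload size for this sketch */

/* One object slot; it may hold the original shared data or a duplicate of it. */
typedef struct object {
    uint8_t data[OBJECT_SIZE];
    int     in_use;              /* non-zero once handed to a memory accessor */
} object_t;

/* Blocks hold several objects and are chained in a linked list (FIG. 2). */
typedef struct block {
    object_t      objects[OBJECTS_PER_BLOCK];
    struct block *next;
} block_t;

/*
 * Copy-on-access allocation: duplicate the shared data into the next unused
 * object of the active block and return that object's address.  Returns NULL
 * when every object in the active block is already in use, in which case a
 * new active block must be created (see the FIG. 5 discussion below).
 */
object_t *alloc_duplicate(block_t *active, const void *shared, size_t len)
{
    for (int i = 0; i < OBJECTS_PER_BLOCK; i++) {
        if (!active->objects[i].in_use) {
            memcpy(active->objects[i].data, shared,
                   len < OBJECT_SIZE ? len : OBJECT_SIZE);
            active->objects[i].in_use = 1;
            return &active->objects[i];   /* address returned to the accessor */
        }
    }
    return NULL;                          /* active block exhausted */
}
```

Because unused objects are filled front to back, allocation naturally proceeds in chronological order; an implementation could consult the allocation index kept in the block header (described below) instead of the linear scan.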

According to an embodiment of the present invention, if the first memory accessor is a reader and the second memory accessor is also a reader, no problem such as a collision due to a change of the shared data can occur. Therefore, when the second memory accessor requests a reference to the shared data referenced by the first memory accessor (e.g., located in the first object of the active block), the address of the object where the shared data is located may simply be returned and the shared data may be shared.

However, when the first memory accessor is a reader and the second memory accessor is an updater, the shared data may be changed by the second memory accessor, so the shared data should be duplicated to avoid problems that may arise from the data being changed.
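The decision of whether a duplicate is needed can be expressed as a small branch. The sketch below assumes the `alloc_duplicate` helper from the previous fragment (declared here as a prototype) and an `accessor_kind` enum; both are illustrative assumptions rather than the patent's own interface.

```c
#include <stddef.h>

struct block;                                /* data block, as sketched earlier */
typedef struct object object_t;

/* Duplicate-allocation helper from the previous sketch (prototype only). */
object_t *alloc_duplicate(struct block *active, const void *shared, size_t len);

typedef enum { ACCESSOR_READER, ACCESSOR_UPDATER } accessor_kind;

/*
 * Decide what the second accessor gets back:
 *  - reader joining a reader: share the existing object and just return its address;
 *  - an updater is involved: duplicate the data so a reader never observes a
 *    partially written object.
 */
const void *resolve_access(accessor_kind first, accessor_kind second,
                           struct block *active, object_t *current,
                           const void *shared, size_t len)
{
    if (first == ACCESSOR_READER && second == ACCESSOR_READER)
        return current;                          /* no conflict: share the object */
    return alloc_duplicate(active, shared, len); /* copy to avoid update conflicts */
}
```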

FIG. 3 is a block diagram illustrating a block header in a memory reuse apparatus according to an embodiment of the present invention.

The block header 210 is a data structure that stores control information for controlling allocation and deallocation of memory. The control information may include an allocation index (aloc_idx), a replication index (rep_idx), a reference map (ref_map), a management map (mng_map), a header pointer (*head), and a tail pointer (*tail).

The allocation index stores the index of the object last accessed in the active block, so that it can be determined whether the maximum allowed block size has been reached.

The replication index may store index information of an object last updated in the active block.

The reference map is a storage space for recording and managing whether memory accessors 300 refer to the active block. When a memory accessor refers to the active block, its reference information is recorded in the reference map. For example, a reference map managing 32 memory accessors may use an array structure with 32 entries.

The management map is a storage space for recording and managing the memory accessors 300 that access the shared data. For example, when a 32-bit bitmap data structure is used, as shown in FIG. 3, 32 memory accessors can be managed; a bit set to 1 means that the corresponding memory accessor 300 is registered and permitted to refer to the shared data.

The header pointer stores the address of the uppermost block of the memory area, and the tail pointer stores the address of the active block.
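A plausible C rendering of this block header is shown below. It assumes at most 32 registered accessors so that the reference map and the management map each fit in a 32-bit word, matching the FIG. 3 example; the field names follow aloc_idx, rep_idx, ref_map, mng_map, head, and tail as listed above, but the concrete layout and the helper functions are assumptions.

```c
#include <stdint.h>

struct block;                /* the data blocks sketched earlier */

/* Block header (210): control information for memory allocation and release. */
typedef struct block_header {
    uint32_t      aloc_idx;  /* allocation index: object last accessed in the active block */
    uint32_t      rep_idx;   /* replication index: object last updated in the active block */
    uint32_t      ref_map;   /* bit i = 1: accessor i currently references the active block */
    uint32_t      mng_map;   /* bit i = 1: accessor i is registered for the shared data */
    struct block *head;      /* header pointer: address of the uppermost block */
    struct block *tail;      /* tail pointer: address of the active block */
} block_header_t;

/* Register an accessor by setting the first zero bit of the management map. */
int register_accessor(block_header_t *hdr)
{
    for (int i = 0; i < 32; i++) {
        if (!(hdr->mng_map & (1u << i))) {
            hdr->mng_map |= 1u << i;
            return i;        /* the bit position serves as the accessor id */
        }
    }
    return -1;               /* all 32 slots are taken */
}

/* Record that accessor `id` now references the active block. */
void mark_active_reference(block_header_t *hdr, int id)
{
    hdr->ref_map |= 1u << id;
}
```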

Referring to FIG. 3, "0, 1, 1, 0, 0, ..., 0" is recorded in the reference map and "1, 1, 1, 0, ..., 0" is recorded in the management map. From the control information recorded in the management map, it can be seen that three memory accessors are registered, and from the reference map, that the first memory accessor is referring to an inactive block 220 while the other two memory accessors are referring to the active block 230.

FIG. 4 is an exemplary diagram for explaining a shared data request of a memory accessor in a memory reuse apparatus according to an embodiment of the present invention.

FIG. 4 assumes a situation in which five memory accessors A, B, C, D, and E are registered and four of them, B, C, D, and E, refer to the active block. Here, the information that five memory accessors are registered to reference the shared data is recorded in the management map ("1, 1, 1, 1, 1"), and the information that four of the five memory accessors are referring to the active block is recorded in the reference map ("0, 1, 1, 1, 1"). In addition, as shown in FIG. 4, all objects of the active block are referenced by memory accessors B, C, D, and E, so if a shared data reference request is received from another memory accessor, a new block will have to be allocated.

FIG. 5 is an exemplary diagram for explaining the process of generating a new active block in a memory reuse apparatus according to an embodiment of the present invention.

If a memory accessor F (not shown) additionally attempts to access the shared data in the example illustrated in FIG. 4, the shared data must be duplicated into an object for F to reference; however, since every object of the active block, including the last, is already referenced by memory accessors B, C, D, and E, a new block must be allocated and used.

In this case, the memory control unit 100 allocates a new active block, resets the bitmap information of the reference map, and records "0, 0, 0, 0, 0".
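Creating the new active block and resetting the reference map might look like the following sketch. It uses minimal stand-ins for the block and header structures sketched earlier, and the `calloc`-based allocation is an assumption, not something the patent specifies.

```c
#include <stdint.h>
#include <stdlib.h>

/* Minimal stand-ins for the block and header layouts sketched earlier. */
typedef struct block {
    struct block *next;
    /* object array omitted for brevity */
} block_t;

typedef struct {
    uint32_t ref_map;   /* who references the active block */
    uint32_t mng_map;   /* who is registered at all */
    block_t *head;      /* uppermost block */
    block_t *tail;      /* current active block */
} block_header_t;

/*
 * All objects of the current active block are referenced, so a fresh block is
 * appended and becomes the new active block; the reference map is cleared
 * because no registered accessor references the new block yet ("0,0,0,0,0").
 */
block_t *make_new_active_block(block_header_t *hdr)
{
    block_t *fresh = calloc(1, sizeof *fresh);
    if (fresh == NULL)
        return NULL;

    hdr->tail->next = fresh;   /* link behind the old active block */
    hdr->tail       = fresh;   /* the old active block becomes an inactive block */
    hdr->ref_map    = 0;       /* reset: nobody references the new active block yet */
    return fresh;
}
```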

FIG. 6 is an exemplary diagram for explaining the process by which a memory accessor references a new active block in a memory reuse apparatus according to an embodiment of the present invention.

Memory accessor A references the shared data through an inactive memory block and, after that reference is completed, references the new active block to access the shared data for new work. Accordingly, the active block reference information of memory accessor A is recorded in the reference map, while the remaining memory accessors B, C, D, and E continue to refer to the inactive block 220.

FIG. 7 is an exemplary diagram for explaining the process of releasing memory in a batch in a memory reuse apparatus according to an embodiment of the present invention.

As shown in FIG. 7, all registered memory accessors refer to the active block. In this case, the memory control unit 100 may release the inactive block portion of the memory in a batch and reuse it. In an actual implementation according to an embodiment of the present invention, since all memory accessors refer to the active block, the bits of the reference map corresponding to the registered accessors are all '1', and the bitmap information of the reference map becomes identical to that of the management map. Therefore, when the bitmap information of the reference map and the management map is the same, the inactive blocks of the memory can be released in a batch.
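The batch-release condition therefore reduces to a single bitmap comparison. The sketch below follows the same assumptions as the earlier fragments (32-bit maps, blocks chained from the uppermost block to the active block); `free()` stands in for whatever release path a real implementation would use.

```c
#include <stdint.h>
#include <stdlib.h>

typedef struct block {
    struct block *next;
} block_t;

typedef struct {
    uint32_t ref_map;   /* who references the active block */
    uint32_t mng_map;   /* who is registered for the shared data */
    block_t *head;      /* uppermost block */
    block_t *tail;      /* active block */
} block_header_t;

/*
 * When every registered accessor references the active block, the reference
 * map equals the management map, so no inactive block can still be in use and
 * everything before the active block is released in one batch (FIG. 7).
 */
void release_inactive_blocks(block_header_t *hdr)
{
    if (hdr->ref_map != hdr->mng_map)
        return;                    /* some accessor may still use an old block */

    block_t *cur = hdr->head;
    while (cur != hdr->tail) {     /* free every block up to the active block */
        block_t *next = cur->next;
        free(cur);
        cur = next;
    }
    hdr->head = hdr->tail;         /* only the active block remains */
}
```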

FIG. 8 is a flowchart illustrating a memory reuse method according to an embodiment of the present invention.

According to the memory reuse method according to an embodiment of the present invention, first, while the first memory accessor is using the shared data through some of the plurality of objects belonging to the first active block, an access request for the shared data is received from the second memory accessor (S1100).

Next, duplicate shared data for the shared data is generated and allocated to an unused object among the plurality of objects belonging to the first active block (S1200).

Here, the first active block means a block to which the object most recently referenced by the first memory accessor belongs.

Next, the address of the allocated object is returned to the second memory accessor (S1300).

In steps S1100 to S1300, if the second memory accessor accesses the shared data when all of the plurality of objects belonging to the first active block are in use, a second active block is generated and set as the active block, and the first active block is set as an inactive block; in this way, memory may be allocated sequentially.
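Taken together, steps S1100 to S1300 resemble the request handler sketched below. It merely glues together the hypothetical helpers from the earlier fragments, declared here as simplified prototypes; it is not a complete or authoritative implementation of the claimed method.

```c
#include <stddef.h>

/* Simplified prototypes for the helpers sketched in the earlier fragments. */
struct block;
struct block_header;
void         *alloc_duplicate(struct block *active, const void *shared, size_t len);
struct block *make_new_active_block(struct block_header *hdr);
struct block *active_block_of(struct block_header *hdr);  /* assumed accessor for the tail pointer */

/*
 * S1100: an access request for the shared data arrives from the second accessor.
 * S1200: duplicate the shared data into an unused object of the first active block.
 * S1300: return the object's address; if the first active block is full, a
 *        second active block is created first and the first one becomes inactive.
 */
const void *handle_access_request(struct block_header *hdr,
                                  const void *shared, size_t len)
{
    void *obj = alloc_duplicate(active_block_of(hdr), shared, len);   /* S1200 */
    if (obj == NULL) {                        /* first active block fully used */
        struct block *fresh = make_new_active_block(hdr);
        if (fresh == NULL)
            return NULL;
        obj = alloc_duplicate(fresh, shared, len);
    }
    return obj;                               /* S1300: address handed back */
}
```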

The foregoing description of the present invention is intended to be illustrative, and those skilled in the art will understand that the present invention may be easily modified into other specific forms without changing its technical spirit or essential features. It is therefore to be understood that the above-described embodiments are illustrative in all respects and not restrictive. For example, each component described as a single entity may be implemented in a distributed manner, and components described as distributed may also be implemented in a combined form.

The scope of the present invention is defined by the following claims rather than by the above description, and all changes or modifications derived from the meaning and scope of the claims and their equivalents should be construed as being included in the scope of the present invention.

100: memory controller
110: memory allocation unit 120: active block setting unit
130: memory accessor management unit 140: memory release unit
200: memory area 210: block header
220: inactive block 230: active block
300: memory accessor

Claims (14)

In the memory reuse device,
A memory allocation unit configured to, when there is access to shared data by a second memory accessor while a first memory accessor uses the shared data through some of a plurality of objects belonging to a first active block, generate duplicate shared data for the shared data and allocate the duplicate shared data to an unused object among the plurality of objects belonging to the first active block; And
An active block setting unit configured to, when there is access to the shared data by the second memory accessor in a state in which the plurality of objects belonging to the first active block are all used, generate a second active block, set the second active block as the active block, and set the first active block as an inactive block
Memory Reuse Device.
The device of claim 1,
The memory allocation unit,
When there is access to the shared data by the second memory accessor in a state in which the plurality of objects belonging to the first active block are all used, generates duplicate shared data for the shared data and allocates it to the first object among the plurality of objects belonging to the second active block generated by the active block setting unit
Memory Reuse Device.
The device of claim 1,
Further comprising a block header for storing control information for controlling the allocation and release of memory
Memory Reuse Device.
The device of claim 1,
Further comprising a memory accessor management unit configured to manage registration and release of memory accessors accessing the shared data
Memory Reuse Device.
The device of claim 3,
Further comprising a memory release unit for performing, after the memory accessors have completed their access to the shared data, a memory release operation on the block in which the shared data is stored and used,
Wherein the control information includes a management map and a reference map, and
The memory release unit compares the management map and the reference map to determine whether to release the memory
Memory Reuse Device.
The device of claim 3, wherein
The control information includes one or more of an allocation index, a replication index, a reference map, a management map, a header pointer, and a tail pointer
Memory Reuse Device.
The device of claim 1, wherein
The memory allocation unit sequentially allocates objects containing the shared data to memory accessors according to the chronological order in which the memory accessors access the shared data.
Memory Reuse Device.
The device of claim 1, wherein
Each block is composed of a plurality of objects, and the plurality of blocks are connected in a linked list structure
Memory Reuse Device.
The device of claim 1, wherein
The first memory accessor and the second memory accessor
Are each a reader that performs a read operation by referring to the shared data or an updater that performs an update operation by referring to the shared data.
Memory Reuse Device.
The device of claim 4, wherein
The memory accessor management unit
Reads the management map, updates the first zero bit of the management map to 1 to register a memory accessor, and updates all the bits of the management map to 0 to release the memory accessors.
Memory Reuse Device.
The device of claim 5, wherein
When the bitmap information included in the management map and the reference map is the same, the memory release unit performs a memory release operation on the inactive block.
The management map records and manages the memory accessor that accesses the shared data,
The reference map records and manages whether the memory accessors reference the active block
Memory Reuse Device.
The device of claim 1, wherein
When the first memory accessor is a reader and the second memory accessor is also a reader, the memory allocation unit does not generate duplicate shared data for the shared data and returns, to the second memory accessor, the address of the object storing the shared data referenced by the first memory accessor.
Memory Reuse Device.
In the memory reuse method,
In a state in which the first memory accessor uses shared data through some of the plurality of objects belonging to the first active block,
(a) receiving a request for access to the shared data by a second memory accessor;
(b) generating duplicate shared data for the shared data and allocating the duplicate shared data to an unused object among the plurality of objects belonging to the first active block; And
(c) returning the address of the object to the second memory accessor,
Wherein, if there is access to the shared data by the second memory accessor when the plurality of objects belonging to the first active block are all used, a second active block is generated and set as the active block, and the first active block is set as an inactive block
Memory reuse method.
The method of claim 13,
The step (b) comprises:
When there is access to the shared data by the second memory accessor in a state in which the plurality of objects belonging to the first active block are all used, generating the duplicate shared data for the shared data and allocating it to the first object among the plurality of objects belonging to the second active block
Memory reuse method.
KR1020110065465A 2011-07-01 2011-07-01 Device and method for providing memory reclamation KR20130003855A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020110065465A KR20130003855A (en) 2011-07-01 2011-07-01 Device and method for providing memory reclamation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020110065465A KR20130003855A (en) 2011-07-01 2011-07-01 Device and method for providing memory reclamation

Publications (1)

Publication Number Publication Date
KR20130003855A (en) 2013-01-09

Family

ID=47835933

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020110065465A KR20130003855A (en) 2011-07-01 2011-07-01 Device and method for providing memory reclamation

Country Status (1)

Country Link
KR (1) KR20130003855A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160106999A (en) * 2015-03-03 2016-09-13 한국전자통신연구원 Memory Management Apparatus and Method for Supporting Partial Release of Memory Allocation
WO2020021551A1 (en) * 2018-07-24 2020-01-30 Jerusalem College Of Technology System for implementing shared lock free memory implementing composite assignment
US11646063B2 (en) 2018-07-24 2023-05-09 Jerusalem College Of Technology System for implementing shared lock free memory implementing composite assignment


Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E601 Decision to refuse application