WO2006131167A2 - Verfahren zur speicherverwaltung von digitalen recheneinrichtungen - Google Patents
Verfahren zur speicherverwaltung von digitalen recheneinrichtungen
- Publication number
- WO2006131167A2 WO2006131167A2 PCT/EP2006/003393 EP2006003393W WO2006131167A2 WO 2006131167 A2 WO2006131167 A2 WO 2006131167A2 EP 2006003393 W EP2006003393 W EP 2006003393W WO 2006131167 A2 WO2006131167 A2 WO 2006131167A2
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- memory
- stack
- bytes
- memory object
- stacks
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
Definitions
- The invention relates to a method for managing the memory of digital computing devices.
- Modern computing devices allow the use of complex programs thanks to their large memories and enormous computing power. These programs run as processes on the computing device, within which several so-called threads are processed concurrently. Because many of these threads are not synchronized with one another in time, multiple threads may simultaneously attempt to access the memory management and, in particular, the same block of available memory. Such concurrent access can lead to system instability. The operating system can, however, intervene to prevent simultaneous access to a particular memory block.
- Preventing access to a memory block that is already being accessed by another thread is described in DE 679 15 532 T2. There, simultaneous access is prevented only if it concerns the same memory block.
- The object is achieved by the method according to claim 1.
- Stack-based management is used for the available memory.
- At least one such stack is initially created in the available memory area.
- The taking and returning of a memory object by a thread is then carried out in a single atomic operation.
- No further blocking of the remaining threads is required. The atomic operation already ensures that access to the memory object takes place in a single step, so that an overlap with parallel steps of other threads cannot occur.
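As a sketch only: the single-step take and return described above corresponds to a lock-free stack built on an atomic compare-and-swap. The names (`Node`, `LockFreeStack`, `push`, `pop`) are illustrative and not taken from the patent.

```cpp
#include <atomic>

// One list node per free memory object; the link corresponds to the
// single forward field of the singly linked stack.
struct Node {
    Node* next;
};

struct LockFreeStack {
    std::atomic<Node*> top{nullptr};

    // Return a memory object: one atomic compare-and-swap publishes it.
    void push(Node* n) {
        Node* old = top.load();
        do {
            n->next = old;
        } while (!top.compare_exchange_weak(old, n));
    }

    // Take a memory object: nullptr (the "zero vector") means the stack
    // is empty and the slower system path must be used instead.
    Node* pop() {
        Node* old = top.load();
        while (old && !top.compare_exchange_weak(old, old->next)) {}
        return old;
    }
};
```

A production allocator would additionally have to guard the pop against the ABA problem, for example with tagged pointers or a double-width compare-and-swap; this sketch omits that.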
- Figure 1 is a schematic representation of a known memory management using doubly linked lists;
- Fig. 2 is a schematic representation of a memory management according to the invention, using stacks with atomic take and return functions;
- Fig. 3 is a schematic representation of the process sequence of the method according to the invention.
- The memory is divided into a plurality of memory objects 1, 2, 3 and 4, which are shown schematically in Fig. 1.
- In each memory object, a first field 1a and a second field 1b are provided.
- The first field 1a of the first memory object 1 refers to the position of the second memory object 2.
- The first field 2a of the second memory object 2 refers to the position of the third memory object 3, and so on.
- The position of the next memory object is not only indicated in the forward direction; the second fields 2b, 3b and 4b of the memory objects 2, 3 and 4 also indicate the position of the respective preceding memory object 1, 2 or 3. In this way, a memory object located between two other memory objects can be taken out while the fields of the adjacent memory objects are updated at the same time.
- The first memory object 1 in a list can be reached via a special pointer 5 and is further characterized in that a zero vector is stored in its second field 1b instead of the position of a preceding memory object. Correspondingly, the memory object 4 is identified as the last one by a zero vector being stored in its first field 4a instead of the position of a further memory object.
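A minimal sketch of this known structure, with `fwd` and `bwd` standing in for the first and second fields (1a/1b etc.); the `unlink` function is an assumption about how such a list is typically maintained. Removing an interior object must update both neighbours, which is why concurrent threads have to be serialized, e.g. by a mutex.

```cpp
// Prior-art doubly linked free list (Fig. 1): each block carries a
// forward link (first field) and a backward link (second field).
struct Block {
    Block* fwd;  // position of the next memory object (nullptr in the last)
    Block* bwd;  // position of the preceding object (nullptr in the first)
};

// Take out an interior block; both adjacent blocks are updated, so this
// sequence is not atomic and needs external locking under concurrency.
void unlink(Block*& head, Block* b) {
    if (b->bwd) b->bwd->fwd = b->fwd; else head = b->fwd;
    if (b->fwd) b->fwd->bwd = b->bwd;
}
```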
- FIG. 2 shows an example of a memory management according to the invention.
- During initialization, a plurality of stacks is preferably created. These stacks are a special form of singly linked lists. In Fig. 2, four such stacks are shown, designated by the reference numerals 6, 7, 8 and 9.
- Each of these stacks 6 to 9 comprises a plurality of memory objects, with a different object size per stack:
- in the first stack 6, objects up to a size of 16 bytes,
- in the second stack 7, objects up to a size of 32 bytes,
- in the third stack 8, objects up to a size of 64 bytes, and
- in the fourth stack 9, objects up to a size of 128 bytes can be stored.
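The mapping from a requested size to one of the four size classes above can be sketched as follows; the function name `select_stack` and the convention of returning an index 0..3 (or -1 for oversized requests) are illustrative assumptions, not taken from the patent.

```cpp
#include <cstddef>

// Pick the smallest size class that fits the request; -1 means the
// request exceeds all classes and must go to the system allocator.
int select_stack(std::size_t bytes) {
    const std::size_t limits[] = {16, 32, 64, 128};  // stacks 6, 7, 8, 9
    for (int i = 0; i < 4; ++i)
        if (bytes <= limits[i]) return i;
    return -1;
}
```

With this convention, the 75-byte request discussed below lands in class index 3, i.e. the fourth stack 9.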
- The fourth stack 9 consists of a series of memory objects 10.1, 10.2, 10.3 ... 10.k, which are singly linked to one another.
- The last memory object 10.k of the fourth stack 9 is shown slightly offset in Fig. 2. On all stacks 6 to 9, access to the individual memory objects is possible only at the lowest memory object of the respective stack, for example on stack 9 only at the memory object 10.k. In Fig. 2, a memory request is therefore served, for example, by the last memory object 10.k of the fourth stack 9. If the memory object 10.k is freed again because it is no longer needed by a thread, it is returned accordingly to the end of the fourth stack 9.
- This is represented schematically by a number of different threads 11, each of which issues a memory request.
- In the example, a process requests memory volumes of the same size in multiple threads 12, 13 and 14.
- The size of the requested memory results from the data to be stored.
- The fourth stack 9 is selected as soon as a memory requirement of more than 64 bytes and up to a maximum of 128 bytes arises. If a memory volume of, for example, 75 bytes is required by the first thread 12, that one of the stacks 6 to 9 is selected which contains a free memory object of suitable size. In the illustrated embodiment, this is the fourth stack 9.
- In the fourth stack 9, memory objects 10.i with a size of 128 bytes are made available. Since the memory object 10.k is the last memory object in the fourth stack 9, a so-called "pop" operation is executed in response to the memory request of the first thread 12, and the memory object 10.k is thus made available to the thread 12.
- Such a pop routine is atomic, i.e. indivisible: the memory object 10.k is removed from the fourth stack 9 for the thread 12 in a single processing step.
- This atomic operation, which assigns the memory object 10.k to the thread 12, prevents another thread, such as the thread 13, from accessing the same memory object 10.k at the same time. That is, by the time the system can perform a new processing step, the handling of the memory object 10.k is already completed and the memory object 10.k is no longer part of the fourth stack 9. In the case of a further memory request by the thread 13, the last memory object of the fourth stack 9 is then the memory object 10.k-1. Again, an atomic pop operation is performed to transfer the memory object 10.k-1 to the thread 13.
- Such atomic operations require appropriate hardware support and cannot be formulated directly in ordinary high-level programming languages; they require the use of machine language.
- These hardware-implemented, so-called lock-free pop and lock-free push calls, which are usually not used for memory management, are employed here to manage memory.
- A singly linked list is used, in which memory objects can be taken or returned only at one end of the created stacks.
- Fig. 2 further shows, for a number of threads 15, how after a delete call by a thread the released memory object is returned to the appropriate stack.
- Each of the memory objects 10.i has a header 10.i-head, as shown for the memory object 10.k in Fig. 2, in which the assignment to a particular stack is encoded. For example, the header 10.k-head contains the assignment to the fourth stack 9.
- If a delete function is now called by a thread 16 that had been assigned the memory object 10.k by a corresponding lock-free pop operation, the memory object 10.k is returned by a corresponding, likewise atomic, lock-free push operation.
- The memory object 10.k is appended after the last memory object belonging to the fourth stack 9.
- In this way, the order of the memory objects 10.i in the fourth stack 9 may change.
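A sketch of how the header lookup on a delete call might look. The concrete layout (a one-word stack index stored immediately in front of the payload) is an assumption; the patent only states that the stack assignment is encoded in the header.

```cpp
#include <cstdint>

// Assumed header layout: one word in front of the payload that stores
// the index of the stack the object belongs to (e.g. 3 for stack 9).
struct ObjectHeader {
    std::uint32_t stack_index;
};

// On delete, the thread only holds the payload pointer; stepping back
// one header yields the home stack for the subsequent lock-free push.
int home_stack(void* payload) {
    const ObjectHeader* h = static_cast<const ObjectHeader*>(payload) - 1;
    return static_cast<int>(h->stack_index);
}
```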
- At the start of a process, the stacks 6 to 9 required for the process flow are merely initialized; at this point in time they contain no memory objects 10. If a memory object of a certain size is needed for the first time, for example a memory object of the third stack 8 for a 50-byte element to be stored, this first memory request is handled by the slower system memory management, which provides the memory object. In the illustrated example, the system memory management uses doubly linked lists, in which concurrent access is prevented by a slow mutex operation.
- After a delete call, the memory object made available to a first thread in this way is not returned via the slower system memory management; instead, it is placed on the corresponding stack, in the described embodiment the third stack 8, by a lock-free push operation. The next request for a memory object of this size can therefore be served by a very fast lock-free pop operation.
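The lazy-fill path just described, combined with the lock-free return, can be sketched as a single size class. `SizeClass`, `allocate` and `deallocate` are illustrative names, and falling back to `malloc` stands in for the slower, mutex-protected system memory management.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdlib>

struct FreeNode { FreeNode* next; };

// One size class (one stack); object_size must be >= sizeof(FreeNode)
// so a freed object can carry the stack link.
struct SizeClass {
    std::atomic<FreeNode*> top{nullptr};
    std::size_t object_size;

    void* allocate() {
        FreeNode* n = top.load();
        while (n && !top.compare_exchange_weak(n, n->next)) {}
        if (n) return n;                  // fast lock-free pop
        return std::malloc(object_size);  // first request: slow system path
    }

    void deallocate(void* p) {            // lock-free push, no free():
        FreeNode* n = static_cast<FreeNode*>(p);
        FreeNode* old = top.load();
        do { n->next = old; } while (!top.compare_exchange_weak(old, n));
    }
};
```

Note how `deallocate` never calls `free`: once an object has been obtained from the system, it stays in the stack and serves subsequent requests via the fast path.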
- This procedure has the advantage that it is not necessary to assign a fixed number of memory objects to the individual stacks 6, 7, 8 and 9 at the start of a process. Instead, the memory usage is adapted dynamically to the running process and its threads. For example, if a process runs in the background with few concurrent threads and needs few memory objects, this approach significantly reduces the resources consumed.
- In step 19, a program is started, thereby creating a process on, for example, a computer.
- Then multiple stacks 6 to 9 are initialized.
- The initialization of the stacks 6 to 9 is shown in step 20.
- In the embodiment of the process flow illustrated in Fig. 3, initially only the individual stacks 6 to 9 are created, without filling them with a predefined number of memory objects.
- When a thread requests memory, a corresponding stack is first selected on the basis of the object size defined by the thread.
- In the stack selection of Fig. 3, the second stack 7 is selected. Subsequently, in step 23, the atomic pop operation is performed. Part of this atomic operation is a query 26 as to whether a memory object is available in the second stack 7.
- If no memory object is available, a null vector ("NULL") is returned, and in step 24 a memory object of size 32 bytes is provided via a system call through the slower system memory management.
- The size of the provided memory object is thus not determined directly by the thread in step 21, but via the selection of a particular object size in step 22, taking the initialized stacks into account.
- The memory request is thereby changed into a request for a memory object of size 32 bytes.
- If a memory object is available in the second stack 7, query 26 is answered with "yes" and a memory object is delivered immediately.
- In the further course of the method, the return of a memory object on the basis of a delete call is shown, both for a memory object made available by a lock-free pop call and for one provided via the system memory management.
- The procedure following a thread's delete call is the same in both situations; that is, no account is taken of the way in which the memory object was made available.
- In Fig. 3, this is represented schematically by the two parallel paths, primed reference numerals being used on the right-hand side.
- A thread issues a delete call.
- The corresponding memory object is assigned to a specific stack by evaluating the information in the header of the memory object. In the described embodiment, the memory object of size 32 bytes is consequently assigned to the second stack 7.
- The return of the memory object to the second stack 7 takes place in both cases via a lock-free push operation 29 or 29'.
- The last method step 30 indicates that the memory object returned to the second stack 7 in this way is available for the next request. This next request can then, as already explained, be served to a thread by a lock-free pop operation.
- A reduction in memory waste can be achieved by recording frequency distributions of the requested object sizes. These can also be determined per process during the execution of different processes. If such a process with its concurrent threads is started again, the frequency distribution determined in the previous run is used to allow an adapted size distribution of the stacks 6 to 9.
- The system can thus be self-learning: with each new run, the knowledge gained about the size distribution of the memory requests is updated, and the updated data are used each time the process is invoked.
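The frequency-distribution idea can be sketched as a per-size-class counter consulted on the next run; the persistence between runs is omitted, and the names (`SizeHistogram`, `record`, `suggested_prefill`) are illustrative assumptions.

```cpp
#include <array>
#include <cstddef>

// Counts memory requests per size class during a run; on the next run
// of the same process, the counts suggest how to pre-size the stacks.
struct SizeHistogram {
    std::array<std::size_t, 4> counts{};  // stacks for 16/32/64/128 bytes

    void record(std::size_t bytes) {
        const std::size_t limits[] = {16, 32, 64, 128};
        for (int i = 0; i < 4; ++i)
            if (bytes <= limits[i]) { ++counts[i]; return; }
        // larger requests are not tracked; they use the system allocator
    }

    // Suggested number of objects to pre-allocate for a stack next run.
    std::size_t suggested_prefill(int stack) const { return counts[stack]; }
};
```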
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Executing Machine-Instructions (AREA)
- Memory System (AREA)
Abstract
Description
Claims
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP06742576A EP1889159A2 (de) | 2005-06-09 | 2006-04-12 | Verfahren zur speicherverwaltung von digitalen recheneinrichtungen |
JP2008515063A JP2008542933A (ja) | 2005-06-09 | 2006-04-12 | ディジタル計算装置のメモリを管理する方法 |
US11/916,805 US20080209140A1 (en) | 2005-06-09 | 2006-04-12 | Method for Managing Memories of Digital Computing Devices |
CA002610738A CA2610738A1 (en) | 2005-06-09 | 2006-04-12 | Method for managing memories of digital computing devices |
CN2006800162391A CN101208663B (zh) | 2005-06-09 | 2006-04-12 | 对数字计算设备的存储器进行管理的方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102005026721.1 | 2005-06-09 | ||
DE102005026721A DE102005026721A1 (de) | 2005-06-09 | 2005-06-09 | Verfahren zur Speicherverwaltung von digitalen Recheneinrichtungen |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2006131167A2 true WO2006131167A2 (de) | 2006-12-14 |
WO2006131167A3 WO2006131167A3 (de) | 2007-03-08 |
Family
ID=37103066
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2006/003393 WO2006131167A2 (de) | 2005-06-09 | 2006-04-12 | Verfahren zur speicherverwaltung von digitalen recheneinrichtungen |
Country Status (8)
Country | Link |
---|---|
US (1) | US20080209140A1 (de) |
EP (1) | EP1889159A2 (de) |
JP (1) | JP2008542933A (de) |
KR (1) | KR20080012901A (de) |
CN (1) | CN101208663B (de) |
CA (1) | CA2610738A1 (de) |
DE (1) | DE102005026721A1 (de) |
WO (1) | WO2006131167A2 (de) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB0808576D0 (en) * | 2008-05-12 | 2008-06-18 | Xmos Ltd | Compiling and linking |
US11243769B2 (en) | 2020-03-28 | 2022-02-08 | Intel Corporation | Shadow stack ISA extensions to support fast return and event delivery (FRED) architecture |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5784698A (en) * | 1995-12-05 | 1998-07-21 | International Business Machines Corporation | Dynamic memory allocation that enalbes efficient use of buffer pool memory segments |
US6065019A (en) * | 1997-10-20 | 2000-05-16 | International Business Machines Corporation | Method and apparatus for allocating and freeing storage utilizing multiple tiers of storage organization |
WO2001050247A2 (en) * | 2000-01-05 | 2001-07-12 | Intel Corporation | Memory shared between processing threads |
WO2001061471A2 (en) * | 2000-02-16 | 2001-08-23 | Sun Microsystems, Inc. | An implementation for nonblocking memory allocation |
US6539464B1 (en) * | 2000-04-08 | 2003-03-25 | Radoslav Nenkov Getov | Memory allocator for multithread environment |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS6391755A (ja) * | 1986-10-06 | 1988-04-22 | Fujitsu Ltd | スタツク使用量予測に基づくメモリ分割方式 |
JPH0713852A (ja) * | 1993-06-23 | 1995-01-17 | Matsushita Electric Ind Co Ltd | 領域管理装置 |
US5978893A (en) * | 1996-06-19 | 1999-11-02 | Apple Computer, Inc. | Method and system for memory management |
GB9717715D0 (en) * | 1997-08-22 | 1997-10-29 | Philips Electronics Nv | Data processor with localised memory reclamation |
US6275916B1 (en) * | 1997-12-18 | 2001-08-14 | Alcatel Usa Sourcing, L.P. | Object oriented program memory management system and method using fixed sized memory pools |
US6449709B1 (en) * | 1998-06-02 | 2002-09-10 | Adaptec, Inc. | Fast stack save and restore system and method |
-
2005
- 2005-06-09 DE DE102005026721A patent/DE102005026721A1/de not_active Ceased
-
2006
- 2006-04-12 WO PCT/EP2006/003393 patent/WO2006131167A2/de active Application Filing
- 2006-04-12 CN CN2006800162391A patent/CN101208663B/zh active Active
- 2006-04-12 KR KR1020077027590A patent/KR20080012901A/ko not_active Application Discontinuation
- 2006-04-12 JP JP2008515063A patent/JP2008542933A/ja active Pending
- 2006-04-12 EP EP06742576A patent/EP1889159A2/de not_active Withdrawn
- 2006-04-12 CA CA002610738A patent/CA2610738A1/en not_active Abandoned
- 2006-04-12 US US11/916,805 patent/US20080209140A1/en not_active Abandoned
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5784698A (en) * | 1995-12-05 | 1998-07-21 | International Business Machines Corporation | Dynamic memory allocation that enalbes efficient use of buffer pool memory segments |
US6065019A (en) * | 1997-10-20 | 2000-05-16 | International Business Machines Corporation | Method and apparatus for allocating and freeing storage utilizing multiple tiers of storage organization |
WO2001050247A2 (en) * | 2000-01-05 | 2001-07-12 | Intel Corporation | Memory shared between processing threads |
WO2001061471A2 (en) * | 2000-02-16 | 2001-08-23 | Sun Microsystems, Inc. | An implementation for nonblocking memory allocation |
US6539464B1 (en) * | 2000-04-08 | 2003-03-25 | Radoslav Nenkov Getov | Memory allocator for multithread environment |
Also Published As
Publication number | Publication date |
---|---|
WO2006131167A3 (de) | 2007-03-08 |
KR20080012901A (ko) | 2008-02-12 |
EP1889159A2 (de) | 2008-02-20 |
CA2610738A1 (en) | 2006-12-14 |
US20080209140A1 (en) | 2008-08-28 |
JP2008542933A (ja) | 2008-11-27 |
DE102005026721A1 (de) | 2007-01-11 |
CN101208663B (zh) | 2012-04-25 |
CN101208663A (zh) | 2008-06-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE2224537C2 (de) | Einrichtung und Verfahren zur Instruktionsauswahl in einem Fließbandprozessor | |
DE2645537C2 (de) | ||
EP0048767B1 (de) | Prioritätsstufengesteuerte Unterbrechungseinrichtung | |
DE2423194C2 (de) | Vorrichtung zum Berechnen einer absoluten Hauptspeicheradresse in einer Datenverarbeitungsanlage | |
DE69425554T2 (de) | System zur dynamischen zuordnung von speicher registern zum herstellen von pseudowarteschlangen | |
DE2354521C2 (de) | Verfahren und Einrichtung zum gleichzeitigen Zugriff zu verschiedenen Speichermoduln | |
EP1228432B1 (de) | Verfahren zur dynamischen speicherverwaltung | |
DE1499182C3 (de) | Datenspeichersystem | |
DE2346525B2 (de) | Virtuelle Speichereinrichtung | |
DE69936257T2 (de) | Erzeugen und uberprüfen von referenz-adresszeigern | |
EP0635792A2 (de) | Verfahren zur Koordination von parallelen Zugriffen mehrerer Prozessoren auf Resourcenkonfigurationen | |
DE2556617C2 (de) | Schiebe- und Rotierschaltung | |
DE2031040B2 (de) | Verfahren zur festlegung des zugangs von mehreren benutzern zu einer einheit einer datenverarbeitungsanlage und anordnung zur durchfuehrung des verfahrens | |
EP0010570A2 (de) | Verfahren und Einrichtung zur selbstadaptiven Zuordnung der Arbeitslast einer Datenverarbeitungsanlage | |
DE2101949A1 (de) | Verfahren zum Schutz von Datengruppen in einer Multiprocessing-Datenverarbeitungsanlage | |
DE3507584C2 (de) | ||
DE2617485C3 (de) | Schaltungsanordnung für Datenverarbeitungsanlagen zur Abarbeitung von Mikrobefehlsfolgen | |
WO2001040931A2 (de) | Verfahren zum synchronisieren von programmabschnitten eines computerprogramms | |
DE69831282T2 (de) | Verwaltung von umbenannten Register in einem superskalaren Rechnersystem | |
DE2456710A1 (de) | Einrichtung zum packen von seitenrahmen eines hauptspeichers mit datensegmenten | |
WO2006131167A2 (de) | Verfahren zur speicherverwaltung von digitalen recheneinrichtungen | |
EP0655688B1 (de) | Programmspeichererweiterung für einen Mikroprozessor | |
EP2102766A1 (de) | Verfahren zum auslesen von daten aus einem speichermedium | |
DE69815656T2 (de) | Rechnersystem mit einem mehrfach Sprungbefehlzeiger und -Verfahren | |
DE2419522A1 (de) | Verfahren und anordnung zur unterteilung eines oder mehrerer nicht benutzter bereiche eines mit einem rechner verbundenen speichers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
DPE2 | Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101) | ||
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2006742576 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 200680016239.1 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020077027590 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2610738 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2008515063 Country of ref document: JP |
|
WWP | Wipo information: published in national office |
Ref document number: 2006742576 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11916805 Country of ref document: US |