AU2014268230A1 - Cyclic allocation buffers - Google Patents


Info

Publication number
AU2014268230A1
Authority
AU
Australia
Prior art keywords
memory
allocation
buffer
buffers
allocations
Prior art date
Legal status
Abandoned
Application number
AU2014268230A
Inventor
Antony Louis Grech
Gregory John Marr
Current Assignee
Canon Inc
Original Assignee
Canon Inc
Priority date
Filing date
Publication date
Application filed by Canon Inc
Priority to AU2014268230A
Publication of AU2014268230A1
Legal status: Abandoned


Abstract

CYCLIC ALLOCATION BUFFERS A memory allocation method 410, the method comprising: establishing a plurality of allocation buffers based on short-lifetime allocations; distributing allocations of portions of memory 400 by accessing the plurality of buffers to reduce the likelihood that a memory management command would cause fragmentation of at least one of the plurality of buffers; detecting a memory management command referencing a portion of memory allocated from an allocation buffer; in response to detecting the memory management command, examining 510 the plurality of allocation buffers to determine whether the detected memory management command can be performed without fragmentation of the allocation buffer; and if the memory management command can be performed without fragmentation of the allocation buffer, performing the memory management command.

Description

CYCLIC ALLOCATION BUFFERS
TECHNICAL FIELD
[0001] The present invention relates to cyclic allocation buffers. In particular, the present invention relates to a memory allocation method, a memory manager, a method of building a display list, a raster image processing system and a raster image processing method.
BACKGROUND
[0002] Memory management is an important and integral component of most software. Most commonly, a generic memory manager performs memory management. A memory manager typically provides ways to dynamically allocate portions of memory to programs at their request and free that memory for reuse when no longer needed.
[0003] Requests for memory are satisfied by allocating portions from within a large pool of memory called the heap. A request for memory can be an allocation request, or a reallocation request where an existing allocation is resized to a larger size. Requests to release memory must be handled by the memory manager so as to organise the free memory to best satisfy current and future requests. A request to release memory can be a free request, or a reallocation request where an existing allocation is resized to a smaller size (also called a truncation request).
[0004] Many different techniques exist for memory management and key goals such as speed of performance and minimising memory overhead are often conflicting and must be balanced by the memory manager. The potential to meet key memory management goals depends on the allocation pattern of memory requests made by the software. The term “allocation pattern” refers to both the sequence and properties (such as size) of all memory calls made to the allocator. Although general purpose allocators typically provide a reasonable trade-off in performance and memory overhead for a wide range of possible allocation patterns, general purpose allocators typically do not achieve the highest possible performance for any given allocation pattern.
[0005] A major challenge faced by allocators is fragmentation. Fragmentation occurs when available memory becomes broken into small, non-contiguous blocks, making it difficult or impossible to reuse memory that is free. Fragmentation is exacerbated by long-lived allocations scattered throughout the heap. This type of fragmentation is called external fragmentation. In addition to increasing memory use, external fragmentation may degrade performance due to extra time spent coalescing adjacent free memory blocks and an overall increase in time spent managing free memory blocks. Fragmentation may also degrade performance in other ways, such as decreased cache utilisation when locality of reference decreases. That is, when memory address references are positioned closer together, the average speed of performing operations using the data in those memory locations increases due to improved cache utilisation. Conversely, when memory address references are positioned further apart, the speed of performing operations using the data in those memory locations decreases.
[0006] One common memory management technique that is used to improve performance is a memory pool. Also called a fixed size allocator, this technique uses lists of fixed size blocks of memory. Blocks may be allocated efficiently by pre-allocating numerous blocks before those blocks are required. When a block is no longer required, the block is returned to an appropriate list for the given block’s size. This method helps avoid fragmentation, is very fast and is well suited to small allocations.
[0007] Several other memory management methods specifically attempt to separate short and long life allocations as a way of reducing fragmentation. Many studies have highlighted the predominance of small-sized allocations amongst short-lived allocations and as such, an allocation’s size has become one predictor of the lifetime of the allocation. Attributes such as an allocation’s size may be further combined with information that has been previously collected, such as the call stack at the time of allocation, to provide a more reliable indication of allocation lifetime. Whilst separating allocations based on lifetime can improve performance and reduce fragmentation, collecting the required information may be inconvenient for users. Other methods propose sampling a subset of allocations at runtime. However, this incurs a runtime overhead and a delay before the application can use the information. In addition, lifetime prediction methods in general only provide an indication of the allocation’s lifetime and may not be entirely accurate.
[0008] It will be understood that the terms “short-lived”, “short-life”, “long-lived”, “long-life” and the like refer to the relative lifetime of an allocation in terms of how many operations are performed by the memory manager during the lifetime of the allocation.
[0009] Software developers sometimes exploit particular patterns within applications by using a special purpose allocator in place of a general purpose allocator. For example, when stack-like behaviour is exhibited, e.g. where the majority of operations are last in first out (LIFO), it is advantageous to free entire regions as one operation during the unwinding of the stack. Allocations can be performed quickly, fragmentation can be eliminated and memory can be reclaimed quickly. An extension to this caters for short life allocations that are not stack-like by marking freed allocations as free, so that when all memory allocated after any point is marked free, that entire region can be reused by the memory allocator. These methods, while always performing quickly, exhibit poor memory use when mixed with any long life allocation requests.
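The stack-like (mark/release) pattern can be sketched as follows. This is a generic illustration of the technique, not the patent's method; the names and the 4 KiB capacity are assumptions made for the example. A "mark" records the current top of the stack, and releasing to that mark frees everything allocated after it in one operation.

```c
#include <stddef.h>

/* Minimal stack-like (LIFO) allocator sketch: allocations bump a single
 * offset, and releasing to a saved mark frees an entire region at once. */
typedef struct {
    unsigned char buf[4096];
    size_t top;                     /* offset of the next free byte */
} stack_alloc_t;

void *stack_push(stack_alloc_t *s, size_t n)
{
    n = (n + 7u) & ~(size_t)7u;     /* round up to 8-byte alignment */
    if (s->top + n > sizeof s->buf)
        return NULL;                /* out of space */
    void *p = s->buf + s->top;
    s->top += n;
    return p;
}

size_t stack_mark(const stack_alloc_t *s)
{
    return s->top;                  /* remember this point in the stack */
}

void stack_release(stack_alloc_t *s, size_t mark)
{
    s->top = mark;                  /* everything after the mark is freed */
}
```

No per-allocation bookkeeping is needed, which is why unwinding is so fast; the weakness, as noted above, is that a single long-lived allocation above the mark pins the whole region.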
[0010] Another technique that is often utilised because of the speed advantages associated with freeing entire regions in one operation is region-based memory management. Each allocation is assigned to a region. A region, also called an arena, is a collection of allocations that can be efficiently de-allocated all at once. However the inability to free individual allocations within regions can lead to an increase in memory consumption. To reduce this problem, arenas can be separated based on lifetime. This partitioning helps facilitate fast and efficient freeing of the entire arena all at once using other techniques such as reference counting active allocations within the arena. However, this method is still limited by the user’s ability to separate short and long life time allocations.
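The reference-counting variant mentioned above can be sketched as follows. This is an illustrative region (arena) in C, not taken from the patent; the names and sizes are ours. Individual frees only decrement a count of live allocations, and the whole region is recycled in one step when the count reaches zero.

```c
#include <stddef.h>

/* Minimal region (arena) sketch with a count of active allocations:
 * memory is never reclaimed per allocation, only all at once. */
typedef struct {
    unsigned char buf[4096];
    size_t top;                     /* bump pointer */
    int live;                       /* allocations not yet freed */
} arena_t;

void *arena_alloc(arena_t *a, size_t n)
{
    n = (n + 7u) & ~(size_t)7u;     /* 8-byte alignment */
    if (a->top + n > sizeof a->buf)
        return NULL;
    void *p = a->buf + a->top;
    a->top += n;
    a->live++;
    return p;
}

void arena_free(arena_t *a, void *ptr)
{
    (void)ptr;                      /* the block itself is not reclaimed */
    if (--a->live == 0)
        a->top = 0;                 /* whole region recycled in one step */
}
```

This illustrates the trade-off described in the paragraph above: freeing is trivially fast, but memory freed mid-region stays unusable until every allocation in the region has been released.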
[0011] Short-life allocation patterns where a significant proportion of allocations are resized shortly after being created are poorly addressed by current methods. The term “shortly” is intended to mean within a few or several memory manager operations. When an allocation is truncated to a smaller size, existing methods have deficiencies in either speed or memory use. For example, in this situation, the truncation causes memory pool techniques to introduce fragmentation by creating free memory of a new size that may not be a common allocation size. A fixed size allocator is forced to either waste part or all of the freed memory or to spend time copying the memory being resized to a new memory block. A similar problem is faced by other allocators where the problem can be further compounded by having a large number of free blocks to manage. Similarly when an allocation is resized to a larger size, an allocator is typically forced to spend time copying the memory being resized to a new memory block.
SUMMARY
[0012] The present disclosure provides a memory manager and method that reduce the likelihood that a memory management command will cause fragmentation of memory blocks within a plurality of memory buffers. The likelihood of fragmentation is reduced by defining a sequence in which the memory buffers are accessed and/or by examining the buffers to determine whether the command can be performed without fragmentation occurring.
[0013] It is an object of the present invention to substantially overcome, or at least ameliorate, one or more disadvantages of existing arrangements.
[0014] According to a first aspect of the present disclosure, there is provided a memory allocation method, the method comprising: establishing a plurality of allocation buffers based on short-lifetime allocations; distributing allocations of portions of memory by accessing the plurality of buffers in a sequence for reducing the likelihood that a memory management command would cause fragmentation of at least one of the plurality of buffers; detecting a memory management command referencing a portion of memory allocated from an allocation buffer; in response to detecting the memory management command, examining the plurality of allocation buffers to determine whether the detected memory management command can be performed without fragmentation of the allocation buffer; and if the memory management command can be performed without fragmentation of the allocation buffer, performing the memory management command to allocate the portions of memory.
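One possible reading of the first aspect can be sketched in C. This is a hedged illustration of the idea as we understand it from the claim language, not the patented implementation: allocations rotate through a plurality of bump-pointer buffers so that sequential requests land in different buffers, and a free is performed in place only when the buffers are examined and the referenced portion is found to be the most recent allocation in its buffer, so that no hole (fragmentation) is created. All names (`cyclic_t`, `cbuf_t`), the buffer count and sizes are assumptions for the example.

```c
#include <stddef.h>

#define NBUF 4                      /* assumed number of allocation buffers */
#define BUF_SIZE 1024

typedef struct {
    unsigned char mem[BUF_SIZE];
    size_t top;                     /* bump pointer */
    void *last;                     /* most recent allocation, or NULL */
} cbuf_t;

typedef struct {
    cbuf_t buf[NBUF];
    int next;                       /* next buffer in the defined sequence */
} cyclic_t;

void *cyclic_alloc(cyclic_t *c, size_t n)
{
    cbuf_t *b = &c->buf[c->next];
    c->next = (c->next + 1) % NBUF; /* sequential requests use different buffers */
    n = (n + 7u) & ~(size_t)7u;
    if (b->top + n > BUF_SIZE)
        return NULL;
    void *p = b->mem + b->top;
    b->top += n;
    b->last = p;
    return p;
}

/* Returns 1 if the free was performed without fragmenting a buffer,
 * 0 if the caller must fall back to another strategy. */
int cyclic_free(cyclic_t *c, void *p)
{
    for (int i = 0; i < NBUF; i++) {        /* examine the buffers for p */
        cbuf_t *b = &c->buf[i];
        if (b->last == p) {                 /* only the newest allocation can be
                                               rolled back without leaving a hole */
            b->top = (size_t)((unsigned char *)p - b->mem);
            b->last = NULL;
            return 1;
        }
    }
    return 0;
}
```

The point of the rotation is visible in the sketch: because two consecutive allocations land in different buffers, each is still the newest allocation in its own buffer, so both can later be freed or truncated without fragmentation.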
[0015] According to a second aspect of the present disclosure, there is provided a memory manager for controlling short lifetime allocations to prevent memory fragmentation, the memory manager configured to: establish a plurality of designated allocation buffers adapted for a short lifetime allocation pattern; detect a command associated with a portion of memory allocated from a designated allocation buffer; and alternate allocations of portions of memory between the plurality of allocation buffers in accordance with the short lifetime allocation pattern to allow the detected command to be performed substantially without fragmentation of the allocation buffer.
[0016] According to a third aspect of the present disclosure, there is provided a method of building a display list for use by a raster image processing system when producing an object, the method comprising: generating a set of control points for a plurality of stroked paths, adding the generated control points to a forwards direction array and a backwards direction array, and upon completion of generating the control points for the object, allocating the control points in the forwards direction array and the backwards direction array to portions of memory in a plurality of buffers by accessing the plurality of buffers in a defined sequence for allocating the control points so that sequential memory management commands access a different allocation buffer.
[0017] According to a fourth aspect of the present disclosure, there is provided a raster image processing system for building a display list when producing an object, the processing system comprising a stroking module and a memory allocator: wherein the stroking module is arranged to generate a set of control points for a plurality of stroked paths for an object, and add the generated control points to a forwards direction array and a backwards direction array, and upon completion of generating the control points for the object; and the memory allocator is arranged to allocate the control points in the forwards direction array and the backwards direction array to portions of memory in a plurality of buffers by accessing the plurality of buffers in a defined sequence so that sequential memory management commands access a different allocation buffer.
[0018] According to a fifth aspect of the present disclosure, there is provided a raster image processing method for building a display list when producing an object, the processing method comprising the steps of: generating a set of control points for a plurality of stroked paths for an object, and adding the generated control points to a forwards direction array and a backwards direction array, and upon completion of generating the control points for the object, allocating the control points in the forwards direction array and the backwards direction array to portions of memory in a plurality of buffers by accessing the plurality of buffers in a defined sequence so that sequential memory management commands access a different allocation buffer.
[0019] According to a sixth aspect of the present disclosure, there is provided a memory manager configured to: establish a plurality of allocation buffers based on short-lifetime allocations; distribute allocations of portions of memory by accessing the plurality of buffers in a sequence for reducing the likelihood that a memory management command would cause fragmentation of at least one of the plurality of buffers; detect a memory management command referencing a portion of memory allocated from an allocation buffer; in response to detecting the memory management command, examine the plurality of allocation buffers to determine whether the detected memory management command can be performed without fragmentation of the allocation buffer; and if the memory management command can be performed without fragmentation of the allocation buffer, perform the memory management command to allocate the portions of memory.
BRIEF DESCRIPTION OF THE DRAWINGS
[0020] One or more aspects of the disclosure are described hereinafter with reference to the following drawings, in which:
[0021] Figs. 1A and 1B depict a general-purpose computer system upon which the various arrangements described can be practiced according to one aspect of the disclosure;
[0022] Fig. 1C is a schematic block diagram illustrating the processing of requests by a memory manager according to one aspect of the disclosure;
[0023] Fig. 2 is a schematic flow diagram illustrating a method of allocating memory according to one aspect of the disclosure;
[0024] Fig. 3 is a schematic flow diagram illustrating a method of allocating memory from the current active buffer according to one aspect of the disclosure;
[0025] Fig. 4 is a schematic flow diagram illustrating a method of freeing memory according to one aspect of the disclosure;
[0026] Fig. 5 is a schematic flow diagram illustrating a method of finding the active buffer where the previous-pointer matches the user-pointer according to one aspect of the disclosure;
[0027] Figs. 6A and 6B show schematic flow diagrams illustrating a method of resizing memory according to one aspect of the disclosure;
[0028] Figs. 7A and 7B show a configuration of memory having four buffers being allocated according to one aspect of the disclosure;
[0029] Figs. 8A, 8B, 8C and 8D show a configuration of memory having four buffers being freed and resized according to one aspect of the disclosure;
[0030] Fig. 9 shows the commencement of stroking a path for a printer according to one aspect of the disclosure;
[0031] Fig. 10 shows Fig. 9 when the entire path has been stroked but not closed;
[0032] Fig. 11 shows Fig. 10 after the stroked path has been closed and the data structures truncated;
[0033] Fig. 12 shows a buffer according to one aspect of the disclosure; and
[0034] Fig. 13 illustrates an allocation pattern from which a number of buffers can be determined according to one aspect of the disclosure.
DETAILED DESCRIPTION
[0035] Various aspects of this disclosure apply to software that requires a memory manager to process requests to allocate, free and resize memory. Various aspects of this disclosure are most useful when memory allocations predominately have a short lifetime before being resized or freed.
[0036] Figs. 1A and 1B depict a general-purpose computer system 1300, upon which the various arrangements described can be practiced.
[0037] As seen in Fig. 1A, the computer system 1300 includes: a computer module 1301; various input devices such as, for example, a keyboard 1302, a mouse pointer device 1303, a scanner 1326, a camera 1327, and a microphone 1380; and output devices including, for example, a printer 1315, a display device 1314 and loudspeakers 1317. An external Modulator-Demodulator (Modem) transceiver device 1316 may be used by the computer module 1301 for communicating to and from a communications network 1320 via a connection 1321. The communications network 1320 may be a wide-area network (WAN), such as the Internet, a cellular telecommunications network, or a private WAN. Where the connection 1321 is a telephone line, the modem 1316 may be a traditional “dial-up” modem. Alternatively, where the connection 1321 is a high capacity (e.g., cable) connection, the modem 1316 may be a broadband modem. A wireless modem may also be used for wireless connection to the communications network 1320.
[0038] The computer module 1301 typically includes at least one processor unit 1305, and a memory unit 1306. For example, the memory unit 1306 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The computer module 1301 also includes a number of input/output (I/O) interfaces including: an audio-video interface 1307 that couples to the video display 1314, loudspeakers 1317 and microphone 1380; an I/O interface 1313 that couples to the keyboard 1302, mouse 1303, scanner 1326, camera 1327 and optionally a joystick or other human interface device (not illustrated); and an interface 1308 for the external modem 1316 and printer 1315. In some implementations, the modem 1316 may be incorporated within the computer module 1301, for example within the interface 1308. The computer module 1301 also has a local network interface 1311, which permits coupling of the computer system 1300 via a connection 1323 to a local-area communications network 1322, known as a Local Area Network (LAN). As illustrated in Fig. 1A, the local communications network 1322 may also couple to the wide network 1320 via a connection 1324, which would typically include a so-called “firewall” device or device of similar functionality. The local network interface 1311 may comprise an Ethernet circuit card, a Bluetooth® wireless arrangement or an IEEE 802.11 wireless arrangement; however, numerous other types of interfaces may be practiced for the interface 1311.
[0039] The I/O interfaces 1308 and 1313 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 1309 are provided and typically include a hard disk drive (HDD) 1310. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 1312 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the system 1300.
[0040] The components 1305 to 1313 of the computer module 1301 typically communicate via an interconnected bus 1304 and in a manner that results in a conventional mode of operation of the computer system 1300 known to those in the relevant art. For example, the processor 1305 is coupled to the system bus 1304 using a connection 1318. Likewise, the memory 1306 and optical disk drive 1312 are coupled to the system bus 1304 by connections 1319. Examples of computers on which the described arrangements can be practised include IBM-PCs and compatibles, Sun Sparcstations, Apple Mac™ or like computer systems.
[0041] The method of allocating memory may be implemented within and using the computer system 1300 wherein the processes of Figs. 2 through to 8D, to be described, may be implemented as one or more software application programs 1333 executable within the computer system 1300. In particular, the steps of the method of memory allocation may be effected by instructions 1331 (see Fig. 1B) in the software 1333 that are carried out within the computer system 1300. The software instructions 1331 may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules perform the memory allocation processes and a second part and the corresponding code modules manage a user interface between the first part and the user.
[0042] The software may be stored in a computer readable medium, including the storage devices described below, for example. The software is loaded into the computer system 1300 from the computer readable medium, and then executed by the computer system 1300. A computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product. The use of the computer program product in the computer system 1300 preferably effects an advantageous apparatus for allocating memory in a memory buffer.
[0043] The software 1333 is typically stored in the HDD 1310 or the memory 1306. The software is loaded into the computer system 1300 from a computer readable medium, and executed by the computer system 1300. Thus, for example, the software 1333 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 1325 that is read by the optical disk drive 1312. A computer readable medium having such software or computer program recorded on it is a computer program product. The use of the computer program product in the computer system 1300 preferably effects an apparatus for allocating memory in a memory buffer.
[0044] In some instances, the application programs 1333 may be supplied to the user encoded on one or more CD-ROMs 1325 and read via the corresponding drive 1312, or alternatively may be read by the user from the networks 1320 or 1322. Still further, the software can also be loaded into the computer system 1300 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the computer system 1300 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-rayTM Disc, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 1301. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 1301 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including email transmissions and information recorded on Websites and the like.
[0045] The second part of the application programs 1333 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 1314. Through manipulation of typically the keyboard 1302 and the mouse 1303, a user of the computer system 1300 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 1317 and user voice commands input via the microphone 1380.
[0046] Fig. 1B is a detailed schematic block diagram of the processor 1305 and a “memory” 1334. The memory 1334 represents a logical aggregation of all the memory modules (including the HDD 1309 and semiconductor memory 1306) that can be accessed by the computer module 1301 in Fig. 1A.
[0047] When the computer module 1301 is initially powered up, a power-on self-test (POST) program 1350 executes. The POST program 1350 is typically stored in a ROM 1349 of the semiconductor memory 1306 of Fig. 1A. A hardware device such as the ROM 1349 storing software is sometimes referred to as firmware. The POST program 1350 examines hardware within the computer module 1301 to ensure proper functioning and typically checks the processor 1305, the memory 1334 (1309, 1306), and a basic input-output systems software (BIOS) module 1351, also typically stored in the ROM 1349, for correct operation. Once the POST program 1350 has run successfully, the BIOS 1351 activates the hard disk drive 1310 of Fig. 1A. Activation of the hard disk drive 1310 causes a bootstrap loader program 1352 that is resident on the hard disk drive 1310 to execute via the processor 1305. This loads an operating system 1353 into the RAM memory 1306, upon which the operating system 1353 commences operation. The operating system 1353 is a system level application, executable by the processor 1305, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface.
[0048] The operating system 1353 manages the memory 1334 (1309, 1306) under operation of the software 1333 to ensure that each process or application running on the computer module 1301 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 1300 of Fig. 1A must be used properly so that each process can run effectively. Accordingly, the aggregated memory 1334 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 1300 and how such is used.
[0049] As shown in Fig. 1B, the processor 1305 includes a number of functional modules including a control unit 1339, an arithmetic logic unit (ALU) 1340, and a local or internal memory 1348, sometimes called a cache memory. The cache memory 1348 typically includes a number of storage registers 1344 - 1346 in a register section. One or more internal busses 1341 functionally interconnect these functional modules. The processor 1305 typically also has one or more interfaces 1342 for communicating with external devices via the system bus 1304, using a connection 1318. The memory 1334 is coupled to the bus 1304 using a connection 1319.
[0050] The application program 1333 includes a sequence of instructions 1331 that may include conditional branch and loop instructions. The program 1333 may also include data 1332 which is used in execution of the program 1333. The instructions 1331 and the data 1332 are stored in memory locations 1328, 1329, 1330 and 1335, 1336, 1337, respectively. Depending upon the relative size of the instructions 1331 and the memory locations 1328-1330, a particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 1330. Alternately, an instruction may be segmented into a number of parts each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 1328 and 1329.
[0051] In general, the processor 1305 is given a set of instructions which are executed therein. The processor 1305 waits for a subsequent input, to which the processor 1305 reacts by executing another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 1302, 1303, data received from an external source across one of the networks 1320, 1322, data retrieved from one of the storage devices 1306, 1309 or data retrieved from a storage medium 1325 inserted into the corresponding reader 1312, all depicted in Fig. 1A. The execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 1334.
[0052] The disclosed memory allocation arrangements use input variables 1354, which are stored in the memory 1334 in corresponding memory locations 1355, 1356, 1357. The memory allocation arrangements produce output variables 1361, which are stored in the memory 1334 in corresponding memory locations 1362, 1363, 1364. Intermediate variables 1358 may be stored in memory locations 1359, 1360, 1366 and 1367.
[0053] Referring to the processor 1305 of Fig. 1B, the registers 1344, 1345, 1346, the arithmetic logic unit (ALU) 1340, and the control unit 1339 work together to perform sequences of micro-operations needed to perform “fetch, decode, and execute” cycles for every instruction in the instruction set making up the program 1333. Each fetch, decode, and execute cycle comprises:
[0054] a fetch operation, which fetches or reads an instruction 1331 from a memory location 1328, 1329, 1330;
[0055] a decode operation in which the control unit 1339 determines which instruction has been fetched; and
[0056] an execute operation in which the control unit 1339 and/or the ALU 1340 execute the instruction.
[0057] Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 1339 stores or writes a value to a memory location 1332.
[0058] Each step or sub-process in the processes of Figs. 2 through to 8D is associated with one or more segments of the program 1333 and is performed by the register section 1344, 1345, 1346, the ALU 1340, and the control unit 1339 in the processor 1305 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 1333.
[0059] The method of memory allocation may alternatively be implemented in dedicated hardware such as one or more integrated circuits performing the functions or sub-functions of memory allocation for a memory manager device. Such dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories.
[0060] Fig. 1C shows a schematic block diagram illustrating the processing of requests by a memory manager 110. An application 105 issues memory requests to the memory manager 110. The application 105 may be software code that operates in an embedded device, such as in a printer, scanner, photocopier, camera, mobile telephone or the like. Alternatively, the application 105 may be software code that operates in a computing device. The memory manager, when necessary, issues a further request to the operating system 115. Requests to the operating system 115 normally comprise requests to increase the memory available to the application 105. This memory allocated by the operating system 115 to the application 105 is from the system heap memory available within the computer or other device. The memory manager 110 manages and distributes available heap memory to the application 105 and attempts to minimise fragmentation, while also maximising the speed at which the memory manager 110 operates.
[0061] Memory allocation libraries that are part of the standard C library for the C programming language conform to a standard “malloc” interface. This interface supports the functionality of allocating memory, freeing memory and reallocating memory. These basic functions are performed with functions named respectively: malloc (allocate a memory block), free (deallocate a memory block) and realloc (reallocate a memory block). The memory management commands free and realloc can be called fragmentation commands because these commands are the commands that cause external fragmentation. An additional function, calloc (allocate and zero-initialise an array), behaves identically to malloc except that calloc also guarantees all memory allocated is initialised to zero. While many variations on this interface exist, such variations are non-standard and thus affect software portability. Many memory allocation libraries exist that do not conform to a standard C library “malloc” interface; however, the primary functions of allocating, freeing and resizing remain as fundamental components.
[0062] Various aspects of the disclosure improve both processing time and the handling of fragmentation when a memory allocator performs a large number of short-life allocations. These short-life allocations may optionally be intermixed with some longer life allocations.
[0063] Various aspects of the disclosure operate by allocating memory from several different buffers of memory. Each allocation attempts to use a different buffer than the previous allocation and in this way, the allocator cycles through each buffer. Therefore, a plurality of allocation buffers is established based on short-lifetime allocations. Fig. 7B shows a configuration of memory that has four memory buffers 7100, 7200, 7300 and 7400 that reside within the memory space 700. Each buffer keeps track of a next-pointer from which new memory for that buffer will be allocated. Therefore, a memory management command is detected which references a portion of memory allocated from an allocation buffer. In response to detecting the memory management command, the plurality of allocation buffers are examined to determine whether the detected memory management command can be performed without fragmentation of the allocation buffer. If the memory management command can be performed without fragmentation of the allocation buffer, the memory management command is performed. When a new memory allocation is made from a particular buffer, the current next-pointer is stored as the current previous-pointer for that buffer and the current next-pointer for that buffer is updated to the next-pointer plus the allocation size of the new memory allocation. After the memory block 7110 is allocated, the buffer 7100 has a previous-pointer 7101 with the value 7111 and a next-pointer 7102 with the value 7112. When a free operation occurs for a particular allocation and that allocation was the latest allocation from any of the buffers, the memory can be quickly reclaimed with a “rewind” operation that assigns the value of the previous-pointer 7101 to the next-pointer 7102, as shown in Fig. 7A.
Similarly when a resize operation occurs, the operation can also be completed quickly if the allocation was the latest allocation from a particular buffer by setting the value of the next-pointer to the value of the previous-pointer plus the allocation size required for the resize operation. Therefore, allocations of portions of memory are distributed by accessing the plurality of buffers in a sequence that reduces the likelihood that a memory management command would cause fragmentation of at least one of the plurality of buffers. Further, the plurality of buffers may be accessed in a defined sequence so that each sequential memory management command accesses a different allocation buffer.
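The cycling and rewind behaviour described above can be sketched in C as follows. This is a minimal illustration: the identifiers (cyc_buffer, cyc_alloc, cyc_free), the buffer count and size, and the omission of allocation headers, alignment, buffer replacement and the current-active-buffer update on free are all assumptions for the example, not details taken from the specification.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define NUM_BUFFERS 4
#define BUFFER_SIZE 4096

/* One allocation buffer: a region of memory plus the two pointers
   described in the text. */
typedef struct {
    uint8_t mem[BUFFER_SIZE];
    uint8_t *prev;   /* previous-pointer: start of the latest allocation */
    uint8_t *next;   /* next-pointer: where the next allocation begins */
} cyc_buffer;

static cyc_buffer bufs[NUM_BUFFERS];
static int current = 0;

static void cyc_init(void) {
    for (int i = 0; i < NUM_BUFFERS; i++) {
        bufs[i].prev = NULL;
        bufs[i].next = bufs[i].mem;
    }
}

/* Allocate from the current buffer, then cycle to the next buffer. */
static void *cyc_alloc(size_t size) {
    cyc_buffer *b = &bufs[current];
    if ((size_t)(b->mem + BUFFER_SIZE - b->next) < size)
        return NULL;                     /* buffer full (replacement omitted) */
    b->prev = b->next;                   /* remember this allocation */
    b->next += size;                     /* advance past it */
    current = (current + 1) % NUM_BUFFERS;
    return b->prev;
}

/* Free by "rewind": only succeeds if ptr was the latest allocation
   from one of the buffers. Returns 1 on success, 0 otherwise. */
static int cyc_free(void *ptr) {
    for (int i = 0; i < NUM_BUFFERS; i++) {
        if (bufs[i].prev == ptr) {
            bufs[i].next = bufs[i].prev; /* reclaim instantly */
            bufs[i].prev = NULL;
            return 1;
        }
    }
    return 0;   /* not the latest allocation in any buffer */
}
```

Because each of the four most recent allocations lands in a different buffer, each can still be rewound even if freed out of order.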
[0064] The effect of cycling allocations through multiple buffers is that, if N buffers exist, the most recent N allocations can be freed or resized by a fast rewind operation that wastes no memory and causes no fragmentation. This is especially advantageous for memory allocation patterns that have a short-life and resize or free memory allocations before many other allocations are performed. For these allocation patterns, the number of buffers can be chosen to best suit the allocation pattern and maximise the benefit of this technique.
[0065] According to a first memory allocation example, a custom memory allocator is implemented that is fast and well suited to a memory allocation pattern where allocations are predominately short-life but may still vary in size.
[0066] The system is initialised by first creating multiple (i.e. two or more) buffers and selecting the first buffer as the current buffer.
[0067] One way to determine a number of buffers to use in the allocator is to base the number on an expected or previously logged allocation pattern. A suggested number of buffers N is determined such that 90% of the free and resize requests in the logged allocation pattern occur within N memory operations of the initial allocation. This can be expressed using the expression:

N = L[p](E)

[0068] where N is the suggested number of buffers, L[p] is the lifetime of the pth percentile allocation (where lifetime is the number of subsequent memory operations prior to a free or resize request on that allocation), E is a set of allocations (from an expected allocation pattern, ranked in ascending order by lifetime), and p has the value 90%. Memory operations that free remaining memory at the end of the allocator’s life are ignored in this calculation, i.e. these allocations are excluded from the set E. Calculating the number of buffers in this way means that 90% of allocations can be expected to be freed or resized using a fast rewind operation.
[0069] Therefore, a plurality of allocation buffers is established by determining the number of allocation buffers based on a defined algorithm, where the defined algorithm is based on the equation above.
[0070] An example memory operation sequence is shown in table 13000 in Fig. 13. The “lifetime” column shows the number of subsequent memory operations prior to a free or resize (realloc) request on that allocation. Note that allocations that exist for the life of the allocator are ignored. This set of lifetimes is then ordered in ascending order to form the sequence [2, 2, 2, 2, 3, 3, 3, 3, 3, 4]. Taking the 90th percentile of this sequence indicates a suitable number of buffers would be 3, thereby allowing 90% of free or resize operations to be freed or resized using a fast rewind operation.
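The percentile calculation in the example above can be sketched as a small C function. The name suggested_buffers is illustrative; the function assumes the lifetimes are already sorted in ascending order, as in the sequence from table 13000.

```c
#include <assert.h>
#include <stddef.h>

/* Suggested number of buffers N = L[p](E): the lifetime of the
   pth-percentile allocation, given lifetimes sorted ascending. */
static int suggested_buffers(const int *lifetimes, size_t n, int pct) {
    size_t rank = (n * (size_t)pct + 99) / 100;   /* ceil(n * pct / 100) */
    if (rank == 0) rank = 1;                      /* guard tiny inputs */
    return lifetimes[rank - 1];
}
```

For the lifetimes [2, 2, 2, 2, 3, 3, 3, 3, 3, 4] with p = 90%, the 9th-ranked lifetime is 3, matching the worked example.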
[0071] Each buffer is then initialised so that each buffer has a previous-pointer set to NULL and a next-pointer set to the start of the current buffer’s available memory. These buffers are allocated by retrieving memory of the appropriate size (based on the application’s requirements, such as its typical allocation size) from the underlying operating system. If the buffer size is too small, each buffer fills up too quickly and the memory manager makes too many requests to the operating system for new replacement buffers. If the buffer size is too large, unnecessary space may be left unused in each buffer and total allocated memory may be excessive. An appropriate size for the buffers can be determined by balancing these tradeoffs. One method for determining this balance is through empirical statistical observation. For example, an application whose primary requirements include minimising operating system calls may choose a buffer size of 8 kB, even though this size results in more total allocated memory than a smaller buffer size. In another example, an application that usually allocates 6 kB memory blocks will suffer fragmentation with 8 kB buffers. In this example, a buffer size of 1 MB will be more appropriate. In a further example, an application that usually allocates 32 byte memory blocks may waste the majority of the space if 1 MB buffers are used. In this example a smaller buffer size of 1 kB will be more appropriate.
[0072] The allocated buffers that are still in use are termed active buffers. When the allocated buffers fill and are no longer in use, those buffers become inactive buffers. One of the active buffers is denoted as the current active buffer. The current active buffer is the active buffer from which the next memory allocation is applied. After each allocation, the memory manager changes the active buffer that is denoted as the current active buffer. Further, the memory manager alternates between all of the active buffers when denoting the current active buffer.
[0073] Fig. 12 shows an example configuration of an active buffer 1200. The active buffer 1200 comprises an individual memory buffer 1210, which is selected from the multiple buffers, a previous-pointer 1290 and a next-pointer 1295. According to this configuration, an allocation consists of a first allocation header 1220, user data 1230, and unused space 1240. The unused space 1240 is present so that the next allocation is correctly aligned to meet any alignment requirements that each user-pointer must conform to. A user-pointer refers to the pointer that is returned to the user from the memory manager indicating the location where the user can store user data 1230. For example, if a computer’s word size is 4 bytes, all user-pointers may need to be 4 byte aligned so that user operations on the user-pointer can be read and written to memory more efficiently. Fig. 12 shows that the most recently allocated memory 1260 is preceded by a second allocation header 1250 and followed by unused space 1270 that ensures correct alignment. The previous-pointer 1290 references the start memory address of the user data 1260 of the most recent allocation. The location of a third allocation header 1280 for the next allocation is also shown. The next-pointer 1295 references the memory address after the location of the header 1280, which is the start memory address for the next allocation and so the location for user data of that next allocation.
[0074] Fig. 2 shows a process 200 for allocating memory. The process starts with receiving a new memory allocation request in step 205. In this example process, an allocation header indicates the size of the memory allocation in the allocation request. The current active buffer is examined in step 210 to see if sufficient space is available for the request and allocation header. If sufficient space is available, memory is allocated in process 215 from the current active buffer as discussed below and shown in Fig. 3. When step 210 determines that the memory requested cannot fit in the current active buffer, a new buffer must be allocated by step 220 before process 215 can proceed. Step 220 selects a buffer to retire from the set of active buffers and become a new inactive buffer. To minimise wasted memory, the preferred buffer to retire is the buffer that is determined to be the most full. Step 220 then allocates a new empty active buffer from the operating system, so that the number of active buffers remains constant. The new buffer is then initialised such that the new buffer has a previous-pointer set to NULL and a next-pointer set to the first location suitable for storing user data, taking into account alignment and the header for the first allocation. The memory manager then makes this new buffer the current active buffer. Process 215 is then able to allocate memory from the current active buffer and returns an allocation pointer. When process 215 has completed, the pointer to the new allocation, i.e. the user-pointer returned by process 215, is returned in step 225 and the process 200 finishes.
[0075] The memory is allocated quickly by treating the buffer like a stack. The buffer’s previous-pointer is assigned the value of the buffer’s next-pointer in step 305. The buffer’s next-pointer is incremented by the size of the user allocation, plus unused space to ensure correct alignment of the next allocation, plus the number of bytes required to store the header of the next allocation in step 310. For example, with reference to Fig. 12, current next-pointer 1295 is equal to the former next-pointer (now the current previous-pointer) 1290 plus the size of user allocation 1260, unused space for alignment 1270, plus size reserved for the next header 1280. The size of the allocation is then stored in the allocation header in step 315. In step 320, the current active buffer is updated to the next buffer in a cyclic sequence of the active buffers. A pointer (a memory address) to the user data of the allocated memory, called a user-pointer, is returned in step 325.
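Steps 305 to 325 above amount to a bump (stack-like) allocation. The following is a hedged sketch, assuming a 4-byte alignment requirement and a size_t-sized allocation header immediately before each user-pointer; the names buf_state, bump_alloc and align_up are illustrative, not from the specification.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define ALIGN 4                 /* assumed word alignment for user-pointers */
#define HEADER sizeof(size_t)   /* assumed allocation header size */

typedef struct {
    uint8_t *prev;   /* previous-pointer */
    uint8_t *next;   /* next-pointer: user-data address of the next allocation */
} buf_state;

static uintptr_t align_up(uintptr_t v) {
    return (v + ALIGN - 1) & ~(uintptr_t)(ALIGN - 1);
}

/* Bump-allocate `size` user bytes; `b->next` already points at an
   aligned user-data address with room for a header just before it. */
static void *bump_alloc(buf_state *b, size_t size) {
    b->prev = b->next;                               /* step 305 */
    memcpy(b->prev - HEADER, &size, HEADER);         /* step 315: store size */
    uintptr_t end = (uintptr_t)b->next + size;       /* end of user data */
    b->next = (uint8_t *)(align_up(end) + HEADER);   /* step 310 */
    return b->prev;                                  /* step 325: user-pointer */
}
```

The cyclic step 320 (advancing the current active buffer) is omitted here; it is shown in the earlier cycling sketch.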
[0076] An allocation process example 200 is shown in Figs. 7A and 7B. Figs. 7A and 7B show four memory buffers 7100, 7200, 7300 and 7400 each of size 4096 kB that reside within the memory space 700. Fig. 7A shows an initial state where a former next-pointer 7102 is positioned and a former previous-pointer (not shown) is NULL. Fig. 7B shows a current state where the values of the former previous-pointer 7101 and former next-pointer 7102 are updated according to the memory allocation to reposition the current previous-pointer 7101 and current next-pointer 7102. When these buffers are allocated to the memory manager 110 by the operating system 115, they are initialised so that each buffer has a previous-pointer set to NULL and next-pointers 7102, 7202, 7302 and 7402 respectively set to the start of the buffer plus the header size which in this example is 8 bytes. For example, the next-pointer 7102 associated with the first buffer 7100 is initialised with the value 7111 as shown in Fig. 7A. In this allocation process example no adjustments are made for alignment.
[0077] The first buffer 7100 is selected as the current active buffer. In this scenario, 200 bytes of memory is requested by the application 105 in step 205. The allocator will attempt to allocate the 200 bytes with an 8 byte header, i.e. 208 bytes. Step 210 determines that 208 bytes fit within the current active buffer 7100, which is so far wholly unused. Within process 215, the memory block 7110 is allocated from within the current active buffer 7100. As shown in Fig. 7A, in step 305 the former previous-pointer of the buffer 7100 had a NULL value and the former next-pointer 7102 had the memory address 7111 assigned to it. The current previous-pointer 7101 is assigned the memory address value 7111 of that buffer’s former next-pointer 7102. As shown in Fig. 7B, in step 310, the buffer 7100’s current next-pointer 7102 is advanced from memory address 7111 to a memory address 7112 at the end of the new memory allocation, and the allocation size (200 bytes) is stored in the allocation header in step 315. In step 320, the next buffer 7200 is then assigned by the memory manager to be the current active buffer and a memory address 7111 to the memory block 7110 is returned by process 215 as the user-pointer in step 325. Step 225 then returns the memory address 7111 to the calling application 105.
[0078] Similarly, when the calling application 105 requests the next memory allocation, the current active buffer 7200 is used to allocate the memory, the memory address 7211 to memory block 7210 is returned and buffer 7300 is selected as the new current active buffer.
[0079] In this manner, the memory manager allocates memory by cycling through each available memory buffer in the memory space 700. The process of cycling through the buffers in this manner increases the likelihood that contiguous allocation requests are not allocated from the same buffer in the memory space, i.e. allocation requests (allocations) that are contiguous in a sequence of allocation requests are allocated non-contiguously in memory (i.e. the allocation buffers). Therefore, when reclaiming a portion of short-lived memory from the buffers, there is a reduced risk of fragmentation occurring.
[0080] A free request, i.e. a request to free or reclaim a portion of memory, is handled according to process 400 as shown in Fig. 4. When the application 105 requests the memory manager 110 to free a previously allocated user-pointer, step 405 receives the request and process 410 attempts to find an active buffer where the previous-pointer equals that user-pointer. Process 410 is shown in detail in Fig. 5, and is described below. When process 410 completes, process 400 checks the returned value in step 415 and if this returned value is not NO_BUFFER (i.e. a reference to the matching active buffer was found and returned), the memory allocated to the user-pointer is reclaimed quickly and without fragmentation by assigning the value of the current previous-pointer to the current next-pointer in the matched buffer in step 420. The current previous-pointer is then set to NULL and the current active buffer is set to the matching active buffer. When process 410 returns NO_BUFFER, process 400 ignores the free memory request. By ignoring the free memory request, the process is trading off increased memory overhead for increased performance.
[0081] As shown in Fig. 5, process 410 begins at step 505 by selecting an initial active buffer from the set of active buffers to perform a pointer comparison process as described below.
The most suitable active buffer to select as the initial active buffer for this comparison process will depend on the allocation pattern being used. It will often be most appropriate to select the buffer from which the last allocation was made to be the initial active buffer. Once the initial active buffer is selected to perform the comparison process, step 510 compares the previous-pointer address value for that buffer with the user-pointer address value. If the user-pointer address value is identical to the previous-pointer address value in the initial active buffer, a reference to this initial active buffer is returned by step 515. This reference uniquely identifies the buffer memory address and associated previous-pointer and next-pointer. If step 510 finds the two pointers are not identical (because the address values do not match), an iterative process begins. Step 520 checks if any active buffers exist which have not yet been checked by this process for matching pointers, and if so, the next active buffer is selected by process 525. For example, process 525 can select the most recently used active buffer that has not yet been checked. Then step 510 checks again if the selected active buffer has a previous-pointer address value that matches the user-pointer address value. If the process 410 determines that all buffers have been checked at step 520, then at step 530 the process 410 returns NO_BUFFER to the calling process 400, signifying that no active buffer has a previous-pointer equal to the user-pointer.
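Process 410 reduces to a linear scan of the active buffers' previous-pointers. A minimal sketch follows, assuming the previous-pointers are held in an array and using -1 as the NO_BUFFER value; the name find_matching_buffer and the array layout are illustrative.

```c
#include <assert.h>
#include <stddef.h>

#define NO_BUFFER (-1)
#define NUM_BUFFERS 4

/* Process 410 sketch: scan the active buffers' previous-pointers for
   one equal to the user-pointer; return its index or NO_BUFFER. */
static int find_matching_buffer(void *const prev_ptrs[], void *user_ptr) {
    for (int i = 0; i < NUM_BUFFERS; i++)         /* steps 505, 520, 525 */
        if (prev_ptrs[i] != NULL && prev_ptrs[i] == user_ptr)
            return i;                             /* step 515: match found */
    return NO_BUFFER;                             /* step 530: no match */
}
```

A production version would scan buffers in most-recently-used order, as the text suggests, rather than by ascending index.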
[0082] Therefore, the plurality of buffers are examined by determining if a buffer in the plurality of buffers has a previous-pointer with a memory address equal to the memory address of a user-pointer associated with the memory management command.
[0083] Therefore, it can be seen that the process 410 will, depending on the memory allocation sequence, typically only match a previous-pointer address value with the user-pointer address value if the memory allocation associated with the user-pointer was one of the four most recent memory allocations in the active buffers. If the memory allocation associated with the user-pointer was not one of the four most recent memory allocations, then it is likely that a further memory allocation took place in the relevant active buffer changing the value of the previous-pointer address. According to this example, no records are kept for any former previous-pointer addresses in the active buffers. Only the current previous-pointer address value (or a NULL value) and the current next-pointer address value are stored for each active buffer. These values are re-assigned during each allocation process for an active buffer in which the allocation is being applied.
[0084] It can be seen that with short-life memory allocation, there is a greater chance that the memory will be freed using the process described above. This therefore reduces the chance that memory fragmentation will occur. In contrast, if a memory block is allocated with a longer life, there is an increased chance that the active memory buffer in which the long-life memory block has been allocated will have a further memory allocation applied to it, thus causing memory fragmentation to occur when the long-life memory block is eventually freed.
[0085] By providing multiple active buffers in which memory allocations can be made in a defined pattern or sequence, the chance of matching a previous-pointer address value with the user-pointer address value is increased in a short-life memory system, as there is a greater chance that the current previous-pointer value would not have changed since the previous operation.
[0086] An example of this “freeing” process 400 is shown with reference to Figs. 8A and 8B. Fig. 8A shows the four memory buffers 7100, 7200, 7300 and 7400 from Figs. 7A and 7B after a number of operations have been completed by the memory manager 110. The allocations 7110, 8120, 7210, 8220, 8230, 8310, 8320, 8410 and 8420 are blocks of memory allocated by the memory manager that may still be in use by the application 105. In this example process, no adjustments are made for alignment. When the memory manager 110 receives a free request in step 405 to free up a memory block with a user-pointer memory address 0x20400, the memory manager begins step 410 by checking each of the buffers 7100, 7200, 7300 and 7400 to find the active buffer that has a current previous-pointer with a memory address that matches the user-pointer memory address. Process 410 is shown in Fig. 5.
[0087] In step 505 of Fig. 5, the process selects buffer 7100 (the initial buffer). The process at step 510 compares the memory address 0x10200 of the buffer’s previous-pointer 7101 with the memory address 0x20400 associated with the user-pointer. The process determines at step 510 that the memory address associated with the user-pointer does not match the memory address associated with the current previous-pointer 7101 in the initial buffer 7100. After this determination, the process continues to step 520. The process then determines that not all of the buffers have been checked in step 520, and proceeds to select a further active buffer 7200 in step 525 according to a desired buffer selection sequence. The user-pointer memory address 0x20400 is compared with the address of the current previous-pointer 7201 for memory buffer 7200 in step 510. If it is determined in step 510, as in this example, that the address of the current previous-pointer 7201 matches the address of the user-pointer, the process moves to step 515.
[0088] At step 515, the process returns a reference to the buffer 7200 with the matching current previous-pointer address to process 410 of Fig. 4. Step 415 in Fig. 4 then proceeds to step 420 where the memory for block 8230 in Fig. 8A is reclaimed as shown with reference to Fig. 8B.
[0089] The memory address value of the current next-pointer 7202 for buffer 7200 in Fig. 8A is assigned the memory address value 8231 as shown in Fig. 8B, which is the memory address value of the current previous-pointer 7201 in Fig. 8A. The memory address value of the current previous-pointer 7201 in Fig. 8A for buffer 7200 is set to NULL. The current active buffer is assigned as the matching buffer 7200. The next allocation from buffer 7200 is then able to reuse the memory made available by freeing block 8230. That is, the next allocation will be applied using the address value associated with the current next-pointer 7202 in Fig. 8B.
[0090] In the previous “freeing” process example, only two buffers were checked for matching address values before a match occurred in step 410. The memory manager has been initialised by the user with four buffers so as to improve the likelihood of a match in step 410 for the overall allocation pattern of the system. For example, if four memory allocations are made such that a single allocation is made to each of the four active buffers, and those four memory allocations are freed or resized in any order, this will always result in four user-pointer address matches in step 410. If these free or resize operations were interspersed with extra allocations, as is likely in a standard system, then a match in step 410 may not always occur. The number of buffers required to ensure a match in step 410 occurs will vary with the order of allocation, free and resize requests. For example, if three allocations were followed by a free operation and then two more allocations, the next free may not result in a match in step 410 if four buffers are used. Whereas, if eight buffers are used instead of four, the proportion of user requests that are matched may increase over time in the user-pointer address matching process of step 410. However, the process may be slower as the process would need to search through up to eight buffers for each request resulting in an increased number of process steps. Alternatively, if two buffers were used instead of four, the matching process of step 410 would be faster but many opportunities to match and reuse the last allocation in a buffer may be lost. In any given system, the final choice regarding the number of buffers may involve experimentation and timing analysis to determine the optimum number of active buffers for the process described.
[0091] When the application 105 makes a request to the memory manager 110 for a memory block to be resized to a smaller size (i.e. a truncation request) based on a previously allocated user-pointer, process 600 of Figs. 6A and 6B is performed. First, the process at step 610 receives the memory resize request. At step 615, the process determines whether the resize operation would result in a new memory block size that is sufficiently different from the previous memory block size to make the resize operation worthwhile. If a truncation is deemed of negligible difference or if alignment considerations render the resize unnecessary, step 680 may return the user-pointer and process 600 ends without further action. Even though the new size associated with the memory allocation request may be different to the original size of the memory block, it may be determined that the difference in memory block sizes is not sufficiently different.
[0092] For example, if a previously allocated user-pointer from a request to allocate a memory block having a size of 100 bytes is being resized to a new size of 97 bytes, the memory manager 110 will compare this size change (i.e. 3 bytes) with a threshold value associated with the memory manager. In this example process, the threshold value may be set at 4 bytes so that the resize request incorporating a size change of 3 bytes is determined to be less than 4 bytes and so is not sufficiently different. The user-pointer is then returned quickly and efficiently with no further action. It will be understood that any other suitable threshold value may be set for a particular memory manager, such as 8 bytes, 16 bytes, etc.
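The threshold test of step 615 can be sketched as follows, assuming the 4-byte threshold from the example; the name resize_worthwhile and the threshold constant are illustrative.

```c
#include <assert.h>
#include <stddef.h>

#define RESIZE_THRESHOLD 4   /* assumed threshold from the example */

/* Step 615 sketch: a resize is only worth performing when the size
   change is at least the threshold. */
static int resize_worthwhile(size_t old_size, size_t new_size) {
    size_t change = old_size > new_size ? old_size - new_size
                                        : new_size - old_size;
    return change >= RESIZE_THRESHOLD;
}
```

With these assumptions, resizing 100 bytes to 97 bytes (a 3-byte change) is rejected, matching the worked example.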
[0093] If the memory manager 110 performing the process at step 615 determines that the new size is sufficiently different, process 410 then operates as previously described to attempt to find an active buffer with a previous-pointer address value that matches the user-pointer address value. Decision step 620 examines the results of process 410. If at step 620 it is determined that there is no active buffer with a previous-pointer that has an address value that matches the address value of the user-pointer, then the process proceeds to step 670. The process at step 670 examines the size in the allocation header and determines if the resize operation is a truncation (that is, the allocation is not being grown in size). That is, the header associated with the memory allocation contains at least an allocation size associated with the memory allocation. Upon a determination that the resize operation is a truncation, the process proceeds to step 680. At step 680, the process returns the user-pointer without further action and the process ends. In this case, although the application 105 is now using only part of its original allocation, the unused portion is not re-used in order to achieve higher performance.
[0094] If no matching buffer was found at step 620 and the process determines at step 670 that the resize operation is not a truncation, process 200 allocates new memory within the next cyclic buffer as previously described. That is, a new memory block is allocated in the next active buffer in the sequence to enable the resize operation to be executed. The process then copies the contents of the memory at the memory address pointed to by the user-pointer to the newly allocated memory location at step 675. The allocation size, which is stored in the allocation header, is used by the process to determine the amount of data to copy from the original allocation to the resized allocation. The process at step 685 then returns the new user-pointer for the newly allocated memory and the process ends.
[0095] When process 410 is able to find an active buffer that has a previous-pointer with a memory address value that matches the user-pointer memory address value, process 410 returns the matching buffer and step 620 proceeds to step 630. In this case, the memory to resize is the last memory block that was allocated in the identified active buffer. If the process determines at step 630 that the resize operation is not a truncation, the process proceeds to step 640. At step 640, the process determines if the matching buffer has sufficient space available to increase the size of the memory referenced by the user-pointer. When the process determines that sufficient space is available, the process proceeds to step 635. At step 635, the process updates the next-pointer memory address value to match the memory address of the end of the resized memory allocation after any alignment requirements have been met and the size of an allocation header has been added. Similarly, when the process determines at step 630 that the resize operation is a truncation, the process proceeds to step 635, which again updates the memory address value of the next-pointer for the matching buffer as above and then updates the size in the header for that allocation. After step 635, the resize operation process is completed by returning the original user-pointer at step 680 and the process ends.
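When step 620 finds the matching buffer, the step 635 update is essentially a single pointer assignment. A simplified sketch that ignores alignment and header bookkeeping follows; the names buf_ptrs and resize_in_place are illustrative, not from the specification.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint8_t *prev;   /* previous-pointer: start of the latest allocation */
    uint8_t *next;   /* next-pointer */
} buf_ptrs;

/* Step 635 sketch: when the block being resized is the latest
   allocation in its buffer, resizing is just repositioning the
   next-pointer; the original user-pointer is returned (step 680). */
static void *resize_in_place(buf_ptrs *b, size_t new_size) {
    b->next = b->prev + new_size;   /* works for growth or truncation */
    return b->prev;
}
```

In the full process, growth also requires the step 640 check that the buffer has room for the enlarged block.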
[0096] When the process at step 640 determines that the matching buffer has insufficient space available to increase the size of the memory referenced by the user-pointer, a new active buffer must be allocated in step 645. Step 645 allocates a new active buffer and makes this new active buffer the current active buffer using process step 220 of process 200.
Process step 220 includes retiring the active buffer that is determined to be the most full, i.e. is closest to its maximum capacity. Process step 215 then allocates memory from this new active buffer of a size equal to the size that has been requested for the resize operation. The process at step 655 then copies the memory at the location pointed to by the user-pointer to the newly allocated memory where the size of the memory to copy is obtained from the allocation header. The process at step 660 then updates the next-pointer of the previous active buffer to be equal to the previous-pointer to remove the portion of memory that has now been moved to the new active buffer. The process at step 685 returns the new memory pointer and the process ends.
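The fast path described in steps 630 to 640 reduces to pointer comparisons and a single pointer reassignment. The following C sketch illustrates that path; the structure and names (`cyc_buffer`, `resize_last`, an 8-byte header) are assumptions for illustration and do not appear in the specification, and alignment handling is omitted:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative cyclic-buffer bookkeeping; the structure and field names
 * are assumptions, not taken from the specification. */
typedef struct {
    char  *base;      /* start of the buffer's memory */
    char  *next;      /* next-pointer: where the next allocation begins */
    char  *prev;      /* previous-pointer: start of the most recent allocation */
    size_t capacity;  /* total usable bytes in the buffer */
} cyc_buffer;

#define HEADER_SIZE 8  /* fixed-size allocation header, as in the text */

/* Fast in-place resize of the most recent allocation in a buffer (the
 * case where the user-pointer matches the buffer's previous-pointer).
 * Returns the original user pointer on success, or NULL when the slow
 * path (copy into a new active buffer) is required. */
static void *resize_last(cyc_buffer *buf, void *user_ptr, size_t new_size)
{
    char *user = (char *)user_ptr;
    if (user != buf->prev)
        return NULL;                          /* not the last allocation */
    if (new_size > (size_t)(buf->base + buf->capacity - user))
        return NULL;                          /* insufficient space left */
    buf->next = user + new_size;              /* move next-pointer only */
    memcpy(user - HEADER_SIZE, &new_size, sizeof new_size);  /* update header */
    return user_ptr;
}
```

When `resize_last` returns NULL, control would fall through to the slow path of steps 645 to 685: allocate in a new active buffer, copy the contents, and return a new user-pointer.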
[0097] An example resizing process is shown with reference to Figs. 8B, 8C and 8D. In this example process, no adjustments are made for alignment. An example of a truncate operation is first described. In this truncation example, the application 105 requests the memory contents referenced by a user-pointer of address 0x40200 to be resized to 100 bytes. When the memory manager receives the user-pointer and a request to resize that memory block in step 610, the memory manager 110 first determines in step 615 if the new size for the memory block is sufficiently different to the original size of the memory block prior to resizing. In other words, the process determines whether the change in memory block size from the original size to the truncated size is greater than a threshold value. This is performed by retrieving the size of the allocation from the allocation header, and comparing the size of the original memory block with the requested new allocation size. If the difference in sizes is greater than a threshold difference value, the process determines that the memory block size change is sufficiently different and proceeds with the truncation process. For example, the threshold value may be 4 bytes. It will be understood that any other suitable threshold value may be set for a particular memory manager, such as 8 bytes, 16 bytes, etc.
[0098] Although the location of the user allocation is not yet known by process 600, the header is always a fixed number of bytes prior to the user-pointer, and so it can be retrieved by stepping backwards in memory from the user-pointer a number of bytes equal to the header size (8 bytes). In this case, the size of the allocation is 300 bytes, and the requested new size is 100 bytes. This is determined by the memory manager to be sufficiently different based on a threshold value of 10 bytes. The memory manager calculates that the difference in size is 200 bytes, which is greater than the threshold value of 10 bytes.
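The header lookup and threshold test of steps 610 to 615 can be sketched in C as follows; `alloc_size` and `worth_resizing` are illustrative names, and the 8-byte header layout and 10-byte threshold mirror the example above:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define HEADER_SIZE 8        /* fixed header size from the text      */
#define RESIZE_THRESHOLD 10  /* example threshold value from the text */

/* Read the allocation size stored in the header immediately preceding
 * the user pointer (layout assumed: an 8-byte size field). */
static uint64_t alloc_size(const void *user_ptr)
{
    uint64_t size;
    memcpy(&size, (const char *)user_ptr - HEADER_SIZE, sizeof size);
    return size;
}

/* Decide whether a resize request differs enough from the current size
 * to be worth performing (step 615). */
static int worth_resizing(const void *user_ptr, uint64_t new_size)
{
    uint64_t old_size = alloc_size(user_ptr);
    uint64_t diff = old_size > new_size ? old_size - new_size
                                        : new_size - old_size;
    return diff > RESIZE_THRESHOLD;
}
```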
[0099] The memory manager then checks each of the buffers 7100, 7200, 7300 and 7400 in process 410 to find a previous-pointer memory address value that matches the user-pointer memory address value. The previous-pointer memory address of each buffer is compared with the address 0x40200 of the user-pointer. Steps 510, 520, and 525 are repeated for buffers 7100, 7200, and 7300, which have previous-pointers 7101, 7201, and 7301 with addresses 0x10200, 0x20400, 0x30200 respectively. The process determines that none of the previous-pointers in buffers 7100, 7200 and 7300 match the user-pointer address 0x40200. When the memory buffer 7400 is checked in step 510, the previous-pointer 7401 for memory block 8420 with address 0x40200 8401 matches the memory address associated with the user-pointer. Process 410 returns a reference to the buffer 7400 in step 515. The process at step 620 determines that a matching buffer was found and proceeds to step 630.
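The buffer-matching loop of process 410 (steps 510, 520, and 525) amounts to a linear scan of the active buffers' previous-pointers. A minimal C sketch, with an illustrative structure and function name not taken from the specification:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative active-buffer bookkeeping. */
typedef struct {
    char *prev;   /* previous-pointer: start of the last allocation */
    char *next;   /* next-pointer: first unused byte                */
} cyc_buffer;

/* Scan the active buffers for one whose previous-pointer equals the
 * user pointer, i.e. whose most recent allocation is the block being
 * freed or resized. Returns NULL when no buffer matches. */
static cyc_buffer *find_matching_buffer(cyc_buffer *bufs, size_t n,
                                        const char *user_ptr)
{
    for (size_t i = 0; i < n; i++)
        if (bufs[i].prev == user_ptr)
            return &bufs[i];
    return NULL;   /* no active buffer owns this as its last allocation */
}
```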
[00100] The process at step 630 compares the requested size (100 bytes) with the allocation size (300 bytes), and determines that the memory block 8420 is being truncated. That is, the process determines that the requested size is less than the currently allocated size. The block 8420 is then resized in step 635 by changing the address of the former next-pointer 7402 of buffer 7400 as shown in Fig. 8B to the memory address value 8403. Therefore the current next-pointer 7402 is associated with the memory address value 8403 as shown in Fig. 8C and memory block 8420 is truncated. The previous-pointer 7401 for the buffer 7400 is left unchanged.
[00101] As can be seen in Figs. 8A-8D, each subsequent allocation within an allocation buffer may abut a previous allocation in that buffer. Further, a header associated with a memory allocation may precede the memory allocation within an allocation buffer.
[00102] Another example resizing process is now described, where a memory block is resized to a larger size and the new memory block size fits within the allocation’s existing buffer. In this resizing process example, the application 105 requires the contents referenced by the user-pointer of address 0x40200 to be resized to 500 bytes. Process 600 operates in the same manner as the previously described resizing process example, again identifying buffer 7400 as the matching buffer as shown in Fig. 8C, until step 630 is reached. Step 630 compares the requested size (500 bytes) with the allocation size (300 bytes) and determines that the request is for the memory block 8420 to be increased and proceeds to step 640. Step 640 then checks if the amount of memory from the previous-pointer 7401 to the end of the buffer is greater than or equal to the new request size (500 bytes). This check succeeds and memory block 8420 can be resized in step 635 by assigning the next-pointer 7402 for the buffer 7400 to the value 8404 as shown in Fig. 8D and leaving the previous-pointer 7401 for the buffer 7400 unchanged.
[00103] A further example resizing process is described where a memory block is resized to an increased size, where the new memory block size and the existing memory blocks do not fit within the currently active buffer associated with the resizing allocation. In this resizing process example, the application 105 requires the contents of address 0x40200 referenced by the user-pointer to be resized to 800 bytes. The active buffers 7100, 7200, 7300 and 7400 have 2000 bytes, 300 bytes, 1500 bytes and 400 bytes remaining respectively. In this resizing process example, the memory manager has selected buffer 7300 as the current active buffer. Process 600 operates in the same manner as the previously described resizing process example, again identifying buffer 7400 as the matching buffer. The process at step 640 determines whether the memory associated with the user-pointer plus the new memory request size (800 bytes) will fit in the buffer 7400. In this case, the process determines that the new memory block size will not fit into the buffer 7400. Therefore, at step 640 the process proceeds to step 645. The process at step 645 allocates a new 4096 byte buffer from the operating system. The new buffer is initialised such that the new buffer has a previous-pointer set to NULL and a next-pointer address value set to the start of the buffer’s available memory plus the allocation header.
[00104] Also in step 645, the memory manager selects the best active buffer to retire to become a non-active buffer and then makes this new buffer the current active buffer. To do this, the memory manager checks each of the active buffers 7100, 7200, 7300 and 7400 to determine which of the active buffers has the least amount of space remaining. The memory manager selects buffer 7200 as it only has 300 bytes remaining. The process at step 645 selects buffer 7200 and retires the buffer. The newly created buffer then replaces buffer 7200 as an active buffer. The process at step 645 proceeds to process 215 where memory is allocated from the current active buffer to store the resized user data. In step 655, data from the original user allocation 8420 is copied to the new allocation. In step 660, the newly created buffer’s previous-pointer is assigned a memory address associated with the start of the new allocation, and the newly created buffer’s next-pointer is assigned a memory address associated with the end of the new allocation while also incorporating space for an allocation header. Finally, the process in step 685 returns the pointer address value for the new allocation of the user data to the caller and the process ends.
[00105] The resizing process examples above describe a very fast custom memory allocator that is well suited to a memory allocation pattern where allocations are predominantly freed or resized shortly after allocation and the remaining allocations have a similar lifetime. Process 215 is a very fast operation that in the majority of cases only involves pointer address reassignment. Similarly, process step 635 that resizes the memory of a recent allocation and process step 420 that frees the memory of a recent allocation are both very fast operations only involving pointer address reassignment. In addition, process steps 635 and 420 cause no fragmentation.
[00106] According to a second memory allocation example, a separate allocator to the one described above is used for allocations of memory blocks greater than a threshold size.
Larger memory block allocations tend to correspond to long-life allocation patterns. This second memory allocation example is used to provide a coarse separation between short-life and long-life allocations. In process 200, an additional check is performed after step 205 in which the size of the memory request is compared against a memory threshold size value. For example, 128 bytes, 512 bytes, or half the block size may be used as memory threshold size values. If the requested memory block size is equal to or larger than the threshold size value, then instead of proceeding to step 210, the process allocates the memory using the separate allocator, such as a general purpose allocator. Whereas, if the requested memory block size is smaller than the threshold size value, then the process proceeds to step 210. Therefore, allocations of portions of memory greater than a predetermined size are handled by a standard allocator and other allocations of portions of memory are handled according to the methods described herein.
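The size-based routing of the second memory allocation example can be sketched as follows. `route_alloc` and `cyclic_alloc` are hypothetical names, the 512-byte threshold is one of the example values mentioned above, and `malloc` stands in both for the general purpose allocator and (as a stub) for the cyclic-buffer path:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative size threshold; the text suggests values such as
 * 128 bytes, 512 bytes, or half the block size. */
#define LARGE_THRESHOLD 512

static void *cyclic_alloc(size_t n);  /* assumed cyclic-buffer allocator */

/* Route an allocation request: sizes at or above the threshold go to a
 * general purpose allocator (likely long-life); smaller requests go to
 * the cyclic-buffer allocator (likely short-life). The used_general
 * out-parameter records which path was taken, for illustration. */
static void *route_alloc(size_t n, int *used_general)
{
    if (n >= LARGE_THRESHOLD) {
        *used_general = 1;
        return malloc(n);      /* separate, general purpose allocator */
    }
    *used_general = 0;
    return cyclic_alloc(n);    /* short-life cyclic-buffer path */
}

/* Minimal stub so the sketch is self-contained. */
static void *cyclic_alloc(size_t n) { return malloc(n); }
```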
[00107] This technique has the advantage of increasing the likelihood that the process 400 using the memory allocator of the herein described system will be able to find a matching previous-pointer address value to the address value of the user-pointer. This is because it is generally more likely that small memory block allocations will have a shorter lifetime than large memory block allocations, and removing at least a portion of the large memory block allocations from the process 400 causes the process 400 to mainly handle small memory block allocations.
[00108] When an operation occurs that frees or resizes a memory block, the memory manager ensures the operation is handled by the allocator that performed the original memory block allocation. That is, a determination is made as to which allocator returned the user-pointer during a “free” or “realloc” command. This process is implemented through the provision of a flag in the allocation header, which is arranged to indicate whether the allocation was made using memory allocator 110 as described above, or a separate allocator. To enable this, the free request process 400 has an extra step incorporated therein after step 405. This extra process step examines the flag inside the allocation header corresponding to the user-pointer and, upon determining that the flag indicates that the allocation was made by the separate allocator (i.e. that the allocated size was equal to or larger than the threshold value), passes the request to the separate allocator. The flag is typically a single bit, where 0 denotes that the threshold has not been reached and 1 denotes that the threshold has been reached. As before, the header is accessed based on the address supplied in the request made to free a memory block. Similarly, when process 600 receives a request to resize a memory block, process step 610 incorporates an extra process step. This extra process step examines the flag inside the allocation header corresponding to the user-pointer and makes a determination as to whether the original allocation was made by the separate allocator. Upon a determination that the original allocation was made by the separate allocator, the request is passed to the separate allocator for handling. Whereas, upon a determination that the original allocation was made by the allocator incorporating the memory manager 110, the request is passed to the allocator incorporating the memory manager 110 for handling.
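One possible encoding of the single-bit flag is to pack it into the low bit of the size field in the allocation header; this layout is an assumption for illustration only, and the helper names are invented:

```c
#include <assert.h>
#include <stdint.h>

/* Low bit of the header word records which allocator produced the
 * block: 0 = cyclic-buffer allocator (threshold not reached),
 * 1 = separate general purpose allocator (threshold reached). */
#define FLAG_SEPARATE 0x1u

static uint64_t make_header(uint64_t size, int from_separate)
{
    /* size occupies the high bits; the flag occupies the low bit */
    return (size << 1) | (from_separate ? FLAG_SEPARATE : 0);
}

static int header_is_separate(uint64_t header) { return (int)(header & FLAG_SEPARATE); }
static uint64_t header_size(uint64_t header)   { return header >> 1; }
```

On a free or realloc request, the memory manager would read this header word (a fixed offset before the user-pointer) and dispatch on `header_is_separate`.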
[00109] According to an alternative to the second memory allocation example described above, the process ensures both allocators (i.e. the separate allocator and the allocator incorporating the memory manager 110) store the size of the memory block allocation at the same position in the corresponding headers that they create. This is required as different memory allocators may use headers of different sizes and so the positioning of the relevant size information within the header may be different. This process step uses the size of the allocation to make a determination as to which of the allocators (i.e. the general purpose allocator and the allocator incorporating the memory manager 110) is to be used based on a size threshold value. With this technique, when step 405 of process 400 receives an operation from the application 105 to free a memory block, the size indicated by the allocation header that corresponds with the user-pointer is determined. If the process determines that the size of the allocation associated with the block being freed is equal to or larger than the threshold value, the request to free the memory block is passed to the separate allocator. Whereas, if the process determines that the size of the allocation associated with the block being freed is smaller than the threshold value, the request to free the memory block is passed to the allocator incorporating the memory manager 110.
[00110] Similarly, when process 600 receives a request to resize a memory block, at step 610 the process includes an extra process step that examines the current size of the allocation. If the current size of the allocation is determined to be equal to or larger than the threshold value, the process passes the request to resize to the separate allocator. Whereas, if the current size of the allocation is smaller than the threshold value, the process continues with step 615.
[00111] If process 600 receives a request to resize a memory block in step 610 that would truncate a memory allocation from a size that is greater than the threshold value to a size that is smaller than the threshold value, then the process automatically increases the requested size of the memory block so that the resize operation maintains a memory block size associated with the user-pointer that is equal to or larger than the threshold value. This therefore maintains a relationship between the allocator that performed the operations and the memory block pointers.
[00112] When process 410, which is invoked by process 400 to free a memory block and by process 600 to resize a memory block, cannot find an active buffer with a previous-pointer address value that matches the address value of the user-pointer, memory space remains allocated for use even though that space will not be reused. According to a third memory allocation example, rather than the memory manager 110 ignoring the un-used memory space, this unused memory space may be added to a “free list”. That is, allocations of portions of memory may be distributed by using a free list to identify the portions of memory to be allocated. The “free list” is a data structure that stores un-used free memory blocks that the memory manager 110 keeps track of. It will be understood that the “free list” may be implemented in the system as a list or a more complex data structure such as a tree. Various techniques exist to coalesce adjacent free blocks in “free lists”, either as entries that are added to the “free list”, or when a memory block is required from the “free list”. For example, the “free list” might include a binary search tree that sorts free memory blocks by memory address. When a new memory block is inserted into the binary search tree, adjacent nodes are examined to identify whether the inserted memory block forms a sequence of contiguous memory blocks. If two contiguous memory blocks are identified, then these memory blocks can be coalesced into a single memory block by combining the two free list entries into a single entry and removing one entry in the binary search tree.
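A minimal sketch of such a “free list” with coalescing of contiguous blocks, using an address-ordered in-place linked list rather than the binary search tree mentioned above (all names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Address-ordered free list with coalescing. The node lives at the
 * start of the free block itself. */
typedef struct free_node {
    struct free_node *next;   /* next free block, by ascending address     */
    size_t size;              /* total bytes of the block, node included   */
} free_node;

/* Insert a block into the list (kept sorted by address) and merge it
 * with any neighbour that is exactly contiguous in memory. Returns the
 * possibly-updated list head. */
static free_node *free_list_insert(free_node *head, void *block, size_t size)
{
    free_node *node = (free_node *)block;
    node->size = size;

    free_node **link = &head;
    while (*link && (char *)*link < (char *)node)
        link = &(*link)->next;
    node->next = *link;
    *link = node;

    /* coalesce forward: node + following block */
    if (node->next && (char *)node + node->size == (char *)node->next) {
        node->size += node->next->size;
        node->next = node->next->next;
    }
    /* coalesce backward: preceding block + node (link points at the
     * predecessor's next field, which is the struct's first member) */
    if (link != &head) {
        free_node *prev = (free_node *)((char *)link - offsetof(free_node, next));
        if ((char *)prev + prev->size == (char *)node) {
            prev->size += node->size;
            prev->next = node->next;
        }
    }
    return head;
}
```

Freeing the gap between two already-free neighbours merges all three entries into one, which is how the medium-term memory described below can be coalesced into larger blocks.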
[00113] This coalesced memory may be used by the memory manager 110 when memory for a new buffer is required or when allocations larger than a certain size are made. An advantage of this process is that when an allocation pattern exists that is not only short term, but short and medium term, the medium term memory can be coalesced to make available larger blocks of memory. An advantage of using a “free list” is that overall memory use declines and, given an appropriate allocation pattern, the memory savings will outweigh the extra time taken to perform the coalescing of memory.
[00114] According to a fourth memory allocation example, allocation headers are not used. The advantage of this is that the usable size of the buffers is not reduced by the allocation headers. However, if allocation headers are not used, process 600 of Fig. 6 will not always be able to determine the current size of the data that must be resized in step 670. This is because the size of an allocation would not be stored in a header that can be accessed based on the supplied memory allocation request pointer. According to this memory allocation example, process step 615 does not determine if the memory block resize request is sufficiently different from the existing memory block; instead the process proceeds directly to steps 410 and 620.
[00115] If the process at step 620 determines a buffer is found where the memory address of a user-pointer matches the memory address of a previous-pointer, the process determines the current size of the memory allocation being resized (prior to resizing) by subtracting the address value of the previous-pointer, that is associated with the buffer in which the previous-pointer was matched to the user-pointer, from the buffer’s next-pointer address value. Once the current size of the memory block being resized is determined, step 630 is bypassed and the process automatically proceeds to step 640 where the process 600 continues to completion.
[00116] If the process does not determine a buffer where the memory address of a user-pointer matches the memory address of a previous-pointer in step 620, the process bypasses step 670 and proceeds directly to step 200. The process at step 675 is adjusted to determine how much data is to be copied from the old buffer to the new buffer. When too much data is copied, there is a performance penalty; however, some over-copying is necessary to ensure that at least enough data is copied from the old buffer to the new buffer for the resizing operation to succeed. The size of the data to copy must be no larger than the new size of the memory allocation. Therefore, the process at step 675 is modified by assigning the new allocation size to the size of the data to copy prior to copying the old contents to the new contents. Additionally, the user-pointer address value plus the size of the data to copy must not extend past the address value of the next-pointer for the buffer associated with the user-pointer. To find the buffer that is associated with the user-pointer, the process examines all active and inactive buffers until a buffer is found where the former user-pointer address value lies between the start address of the buffer and the last memory location address in that buffer. When the user-pointer address value plus the size of the data to copy is greater than the memory address of the last memory location in that buffer, the size of the data to copy is reassigned to the last memory location in that buffer minus the address of the user-pointer.
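The headerless size recovery and the clamping of the copy size described above can be sketched as follows; the function names are illustrative:

```c
#include <assert.h>
#include <stddef.h>

/* Headerless variant: the current size of the most recent allocation in
 * a buffer is the span from the buffer's previous-pointer to its
 * next-pointer. */
static size_t last_alloc_size(const char *prev_ptr, const char *next_ptr)
{
    return (size_t)(next_ptr - prev_ptr);
}

/* When the exact old size is unknown, clamp the number of bytes to
 * copy: never more than the new allocation size, and never past the end
 * of the buffer that contains the old allocation. */
static size_t clamp_copy_size(const char *user_ptr, size_t new_size,
                              const char *buffer_end)
{
    size_t n = new_size;
    if (n > (size_t)(buffer_end - user_ptr))
        n = (size_t)(buffer_end - user_ptr);
    return n;
}
```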
[00117] According to a fifth memory allocation example, the process may apply varying algorithms to select the next buffer in a defined sequence of active buffers. According to one implementation, the next active buffer may be chosen at random or in a partially random manner. Similarly, a deterministic but non-sequential algorithm may be used, such as the pattern A, B, A, C, A, D, A, B, etc. In some implementations, particular allocation patterns will be best suited to different algorithms to select the next buffer. For example, if the previous buffer selection pattern was being used and the allocation pattern was 6 allocations followed by a free of the second allocation, then memory will always be reclaimed for the free operation. However if the pattern of buffer selection was A, B, C, D, A, etc. then 6 allocations followed by a free operation of the second allocation will not reclaim memory.
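The two selection policies mentioned above (sequential A, B, C, D and the interleaved A, B, A, C, A, D pattern) can be expressed as simple index functions over four buffers; the encoding A=0 through D=3 and the function names are illustrative:

```c
#include <assert.h>

/* Simple round-robin over four buffers: A, B, C, D, A, ... */
static int next_round_robin(int step) { return step % 4; }

/* The non-sequential pattern A, B, A, C, A, D, A, B, ... from the
 * text: every even step returns A; odd steps cycle through B, C, D. */
static int next_interleaved(int step)
{
    if (step % 2 == 0)
        return 0;                 /* A */
    return 1 + (step / 2) % 3;    /* B, C, D in turn */
}
```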
[00118] According to a sixth memory allocation example, an application 105 supplies information for each allocation to the memory manager 110 that is associated with whether an allocation will have a short lifetime or long lifetime. This enables the process to send longer life allocations to a separate allocator in a similar manner to the second memory allocation example. That is, while the second memory allocation example described herein relied upon determining the size of the memory block allocation, according to this particular example the memory manager 110 obtains an input from the application 105. For example, each allocation call includes a Boolean flag which the caller sets to true to mark the allocation as a long-life allocation. While this presents a deviation from a standard malloc interface, this is an effective mechanism to ensure the process is used entirely for short-life allocations even though long-life allocations may be present. In this example, few, if any, matches fail in process step 410, which ensures that almost all memory that is no longer in use can be reused.
[00119] According to a seventh memory allocation example, in addition to the process utilising a previous-pointer for each buffer, the process may also utilise a previous-pointer for each allocation made within a buffer. According to this example, if the allocation pattern is stack-like, memory no longer used can always be reused rather than just the last allocation in each buffer. When the allocation pattern is near stack-like, memory used in this memory allocation example can also improve on the memory use of the first memory allocation example.
[00120] In this memory allocation example, buffers may be allocated memory and resized as in the first memory allocation example. Additionally, when the memory block 8230 is allocated in Fig. 8A, the memory address value 8221 of the former previous-pointer 7201 of the buffer 7200 is also stored in the header of the new allocation 8230. In other aspects the allocation process of 200 is unaltered. When memory block 8230 is freed and it is still the last allocation associated with the buffer 7200, the memory manager 110 operates as before. However, instead of assigning NULL to the current previous-pointer 7201 for the buffer 7200, the current previous-pointer 7201 of buffer 7200 is set to the address 8221 stored in the header of the memory block allocation 8230 that is being freed. That is, the previous-pointer memory address value is changed from a current memory address value to a memory address value that is stored in a header of a memory block currently being freed. By using the same process as in the first memory allocation example, the next-pointer for buffer 7200 is left pointing at the start of memory block 8230 so that the memory previously occupied by the allocation 8230 can be reused.
[00121] Similarly, if a later request is made to free memory block 8220 and it is still the last allocation from the buffer 7200, the memory occupied by memory block 8220 can be reclaimed by using the previous-pointer memory address stored in the header of memory block 8220.
After memory block 8220 is freed, the next-pointer for buffer 7200 is left pointing at the start of memory block 8220 so that the memory block allocation 8220 can be reused. That is, the header associated with the memory allocation within the allocation buffer may contain a pointer to the previous allocation within the allocation buffer.
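The per-allocation previous-pointer variant of this seventh example can be sketched as follows; the structure and function names (`push_alloc`, `pop_free`) are illustrative, the header is assumed to hold only the saved pointer, and alignment handling is omitted:

```c
#include <assert.h>
#include <stddef.h>

/* Each allocation's header records where the buffer's previous-pointer
 * stood before that allocation, so stack-like frees can unwind more
 * than one level. */
typedef struct {
    char *saved_prev;   /* previous-pointer value before this allocation */
} alloc_header;

typedef struct {
    char *prev;   /* start of the most recent allocation (user data) */
    char *next;   /* first unused byte in the buffer                 */
} cyc_buffer;

/* Allocate from the top of the buffer, saving the old previous-pointer
 * in the new allocation's header. */
static char *push_alloc(cyc_buffer *buf, size_t size)
{
    alloc_header *h = (alloc_header *)buf->next;
    h->saved_prev = buf->prev;
    buf->prev = buf->next + sizeof *h;
    buf->next = buf->prev + size;
    return buf->prev;
}

/* Free the most recent allocation, restoring the previous-pointer from
 * its header so the allocation before it becomes reclaimable in turn. */
static int pop_free(cyc_buffer *buf, char *user_ptr)
{
    if (user_ptr != buf->prev)
        return 0;                           /* not the last allocation */
    alloc_header *h = (alloc_header *)(user_ptr - sizeof *h);
    buf->next = user_ptr - sizeof *h;       /* reclaim the space   */
    buf->prev = h->saved_prev;              /* unwind one level    */
    return 1;
}
```

With a strictly stack-like free pattern, repeated `pop_free` calls return the buffer all the way to its initial empty state, which is the improvement over the single-level previous-pointer of the first example.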
Example Use Case

[00122] An example use case that demonstrates the benefits of the herein described process is the building of a display list for use by raster image processing software or system, to define or produce an output image. For example, this may be desired within a rendering device that requires instructions to produce each object being rendered. The processes described herein are particularly well suited to this example and are able to be incorporated within an allocator as described herein that is significantly more efficient than other standard allocators. This is due to a large proportion of allocations being short-life allocations which are reallocated or freed soon after allocation.
[00123] One component of the display list is a stroking module that produces or generates a set of control points for each of a number of stroked paths. A stroked line is generated by following a straight or curved path. The stroked line is centred on the path with sides parallel to the path. Stroked lines are described by stroking parameters, such as line width, dash patterns, cap style, etc. The stroking module generates control points for the stroked line using the provided path and stroking parameters. The process of generating control points for a stroked path is known as stroking. Prior to generating the control points for a stroked path, the total number of points is unknown. A common data structure for storing the control points for a stroked path is a list of arrays. The control points for a stroked path are simultaneously generated both in a forwards and backwards direction for each side of the path outline.
[00124] A common method for stroking curved lines is to vectorise the curve into a series of straight lines, commonly referred to as straight line segments. The straight line segments, taking into account a specified flatness tolerance, form a path that is an approximation of the centre line of the stroked curve. The vectorisation is performed to the specified flatness tolerance which defines the maximum distance by which any part of the vectorised curve might deviate from the curve. Using the stroke width and end cap type, an outline path, representing the stroked curve, is created which is then filled by a renderer to generate the stroked curve.
[00125] The outline of the centre line path is generated by considering each straight line segment of the vectorised curve in turn. Points perpendicular to the segment, and half stroke width distance from the start and end of each straight line segment, are calculated for both sides of the segment. Further points may be added between the segment outline points in order to maintain the representation of a smooth outline curve. This is typically done by using a round join. Additional outline points are generated at the start and end of the curve to represent the end cap type. Note that the outline path representing the stroked curve must be within flatness tolerance of the ideal stroked curve. A renderer will then fill the outline of the stroked curve to generate the output for printing or display on a target device.
[00126] An intermediate state 900 of the stroking process is shown in Fig. 9. Stroking the path 910 generates control points 1A-3A along a first edge 920, control points 1B-2B along a second edge 930, and control points 1A-1B along a third edge 925 (the stroke cap). Storage is reserved for these control points by means of allocating two arrays using memory allocator 110. The sequences of control points (1A, 2A, 3A) and (1A, 1B, 2B) are added to a first array in a forwards direction array 950 and a first array in a backwards direction array 955 respectively.
[00127] After all control points have been generated, a second intermediate state 1000 is shown in Fig. 10. The stroked outline of path 910 is described by control points 1A-13A along a first edge 1020, and control points 1B-12B along a second edge 1030, and control points 1A-1B along a third edge 925. Subsequent arrays are allocated sequentially by memory allocator 110 for the forwards and backwards lists (forwards direction array and backwards direction array) as they become necessary. The sequence of control points 1A-13A, 12B-1B, 1A describes the outline of the stroked path. The sequence of control points 1A-13A are stored in or added to the forwards list of arrays in elements 950, 1060 and 1070. The sequence of control points 12B-1B, 1A are stored in or added to the backwards list of arrays in elements 1075, 1065 and 955. Note that the control points along the backwards list of arrays are stored in reverse order so that when combined with the forwards list of arrays the outline forms a closed path.
[00128] When the path stroking has completed, as shown in the final state 1100 in Fig. 11, the final number of control points is known and last array 1170 for the forward direction 1120 and the last array 1175 for the reverse direction 1130 are combined and memory can then be reclaimed using the processes described herein. The most recently allocated array 1170 for the forwards direction must be resized to a smaller size and the most recently allocated array 1175 for the backwards direction must be freed using the processes described herein. The forwards and backwards lists of arrays are then combined by appending the backwards direction list to the forwards direction list as shown in Fig. 11, thus creating a single list of arrays containing the desired sequence of control points 1A-13A, 12B-1B, 1A for the stroked outline of the path 910. Therefore, the control points in the forwards direction array and the backwards direction array are allocated to portions of memory in a plurality of buffers by accessing the plurality of buffers in a defined sequence for allocating the control points so that sequential memory management commands access a different allocation buffer. Typically, a display list generator will perform this set of operations extremely frequently and often up to tens of thousands of times per page. The herein described processes are ideally suited to this allocation pattern both in terms of performance and memory overhead, since the truncated and freed memory can be quickly reclaimed for both the forwards and backwards array allocations.
[00129] Determining the number of buffers the memory manager 110 is to initialise depends in large part on the allocation pattern. Fast memory reuse is maximised when all operations to free and resize memory blocks on the final forwards and reverse point arrays find a match in step 410. In the case where a dedicated memory manager 110 is used for point array storage, fast memory reuse can be maximised by ensuring that when the last array for the forwards direction 1170 is resized and the last array for the reverse direction 1175 is freed, a match occurs in each case in step 410. This requires a minimum of two buffers (for example, 7400 and 7200) where in one, the last allocation pointer 7401 points to the array 1070 to be truncated and in the other, the last allocation pointer 7201 points to the array 1075 to be freed. If the number of buffers was greater than two, the matching process of step 410 would be slower, and the number of matches found would be essentially unchanged. Alternatively, if the allocator was initialised with only one buffer, the matching process of step 410 would be faster but opportunities to match and reuse the last allocation in a buffer would be lost.
[00130] According to another use scenario, the stroking module may share the memory manager 110 with other memory users in a multi-threaded arrangement. In this case, other users can make allocations in between the allocation of the final point arrays 1070 and 1075 and the corresponding operations to resize and free memory blocks. In such a scenario, it is generally not possible to guarantee that all operations to free and resize memory blocks find a match in step 410, even if the number of buffers is very large. Here, the number of buffers the memory manager 110 initialises depends not only on the allocation pattern, but also on user priorities regarding performance versus memory reuse. The more buffers used, the slower the matching process of step 410 becomes, but the greater the chance of a match; the fewer buffers used, the faster the matching process becomes, but the smaller the chance of a match. Therefore, it will be understood that the optimum number of buffers may be determined as described herein and may also be modified according to specific user needs.
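The buffer-count trade-off can be made concrete with a toy model, under the assumption of a strictly cyclic distribution: each additional buffer lets a last-allocation record survive one more intervening allocation by another user, while adding one comparison to the worst-case scan of step 410. The function names below are illustrative, not from the patent.

```python
def match_survives(num_buffers, intervening):
    """True if a buffer's last-allocation record is still intact after
    `intervening` allocations by other users."""
    # A cyclic distribution revisits a buffer only after every other
    # buffer has received one allocation, so the record survives
    # exactly num_buffers - 1 intervening allocations.
    return intervening < num_buffers

def scan_cost(num_buffers):
    """Worst-case comparisons performed by the step-410 match."""
    return num_buffers

# Two buffers tolerate one intervening allocation; eight tolerate
# seven, but the matching scan is four times as long.
assert match_survives(2, 1) and not match_survives(2, 2)
assert match_survives(8, 7) and not match_survives(8, 8)
assert scan_cost(8) == 4 * scan_cost(2)
```

Under random or non-uniform distributions the survival condition becomes probabilistic rather than exact, which is why the optimum buffer count is presented above as tunable to user priorities.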
[00131] The arrangements described are applicable to the computer, embedded device and data processing industries, and particularly to the buffer allocation processes of those industries.
[00132] The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.
[00133] In the context of this specification, the word “comprising” means “including principally but not necessarily solely” or “having” or “including”, and not “consisting only of”. Variations of the word “comprising”, such as “comprise” and “comprises”, have correspondingly varied meanings.

Claims (23)

CLAIMS:
1. A memory allocation method, the method comprising: establishing a plurality of allocation buffers based on short-lifetime allocations; distributing allocations of portions of memory by accessing the plurality of buffers in a sequence for reducing the likelihood that a memory management command would cause fragmentation of at least one of the plurality of buffers; detecting a memory management command referencing a portion of memory allocated from an allocation buffer; in response to detecting the memory management command, examining the plurality of allocation buffers to determine whether the detected memory management command can be performed without fragmentation of the allocation buffer; and if the memory management command can be performed without fragmentation of the allocation buffer, performing the memory management command to allocate the portions of memory.
2. The method of claim 1, wherein the step of distributing allocations comprises the step of accessing the plurality of buffers in a defined sequence so that each sequential memory management command accesses a different allocation buffer.
3. The method of claim 1, wherein the step of examining the plurality of buffers comprises the step of determining if a buffer in the plurality of buffers has a previous-pointer with a memory address equal to the memory address of a user-pointer associated with the memory management command.
4. The method of claim 1, wherein the defined sequence is determined based on allocation requests that are contiguous in a sequence of allocation requests being allocated non-contiguously in the allocation buffers.
5. The method of claim 1, wherein the step of establishing a plurality of allocation buffers comprises determining the number of allocation buffers based on a defined algorithm.
6. The method of claim 5, wherein the defined algorithm is based on the equation N = L[p](E).
7. The method of claim 1 further comprising the step of: identifying long lifetime allocations and handling the identified long lifetime allocations separately from short lifetime allocations by using a standard allocator to allocate the portions of memory.
8. The method of claim 1, wherein allocations of portions of memory greater than a predetermined size are handled by a standard allocator and other allocations of portions of memory are handled according to the method of claim 1.
9. The method of claim 1, further comprising the step of distributing allocations of portions of memory by using a free list to identify the portions of memory to be allocated.
10. The method of claim 1, wherein each subsequent allocation within an allocation buffer abuts a previous allocation in that buffer.
11. The method of claim 1, wherein a header associated with a memory allocation precedes the memory allocation within an allocation buffer.
12. The method of claim 11, wherein the header associated with the memory allocation contains at least an allocation size associated with the memory allocation.
13. The method of claim 11, wherein the header associated with the memory allocation within the allocation buffer contains a pointer to the previous allocation within the allocation buffer.
14. A memory manager for controlling short lifetime allocations to prevent memory fragmentation, the memory manager configured to: establish a plurality of designated allocation buffers adapted for a short lifetime allocation pattern; detect a command associated with a portion of memory allocated from a designated allocation buffer; and alternate allocations of portions of memory between the plurality of allocation buffers in accordance with the short lifetime allocation pattern to allow the detected command to be performed substantially without fragmentation of the allocation buffer.
15. The memory manager of claim 14, wherein the memory manager is further configured to perform the detected command by changing the memory address associated with one or more pointers in the allocation buffer to avoid fragmentation.
16. The memory manager of claim 14, wherein the memory manager is further configured to alternate allocations by distributing allocations in a manner where allocation requests that are contiguous in a sequence of allocation requests are allocated non-contiguously in memory.
17. The memory manager of claim 14, wherein the memory manager is further configured to alternate allocations by distributing allocations uniformly between the allocation buffers according to a deterministic cyclical pattern.
18. The memory manager of claim 14, wherein the memory manager is further configured to alternate allocations by distributing allocations between the allocation buffers according to a uniform random function.
19. The memory manager of claim 14, wherein the memory manager is further configured to alternate allocations by distributing allocations according to a non-uniform random function.
20. A method of building a display list for use by a raster image processing system when producing an object, the method comprising: generating a set of control points for a plurality of stroked paths, adding the generated control points to a forwards direction array and a backwards direction array, and upon completion of generating the control points for the object, allocating the control points in the forwards direction array and the backwards direction array to portions of memory in a plurality of buffers by accessing the plurality of buffers in a defined sequence for allocating the control points so that sequential memory management commands access a different allocation buffer.
21. A raster image processing system for building a display list when producing an object, the processing system comprising a stroking module and a memory allocator: wherein the stroking module is arranged to generate a set of control points for a plurality of stroked paths for an object, and add the generated control points to a forwards direction array and a backwards direction array, and upon completion of generating the control points for the object; and the memory allocator is arranged to allocate the control points in the forwards direction array and the backwards direction array to portions of memory in a plurality of buffers by accessing the plurality of buffers in a defined sequence so that sequential memory management commands access a different allocation buffer.
22. A raster image processing method for building a display list when producing an object, the processing method comprising the steps of: generating a set of control points for a plurality of stroked paths for an object, and adding the generated control points to a forwards direction array and a backwards direction array, and upon completion of generating the control points for the object, allocating the control points in the forwards direction array and the backwards direction array to portions of memory in a plurality of buffers by accessing the plurality of buffers in a defined sequence so that sequential memory management commands access a different allocation buffer.
23. A memory manager configured to: establish a plurality of allocation buffers based on short-lifetime allocations; distribute allocations of portions of memory by accessing the plurality of buffers in a sequence for reducing the likelihood that a memory management command would cause fragmentation of at least one of the plurality of buffers; detect a memory management command referencing a portion of memory allocated from an allocation buffer; in response to detecting the memory management command, examine the plurality of allocation buffers to determine whether the detected memory management command can be performed without fragmentation of the allocation buffer; and if the memory management command can be performed without fragmentation of the allocation buffer, perform the memory management command to allocate the portions of memory.
AU2014268230A 2014-11-27 2014-11-27 Cyclic allocation buffers Abandoned AU2014268230A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2014268230A AU2014268230A1 (en) 2014-11-27 2014-11-27 Cyclic allocation buffers

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2014268230A AU2014268230A1 (en) 2014-11-27 2014-11-27 Cyclic allocation buffers

Publications (1)

Publication Number Publication Date
AU2014268230A1 true AU2014268230A1 (en) 2016-06-16

Family

ID=56109777

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2014268230A Abandoned AU2014268230A1 (en) 2014-11-27 2014-11-27 Cyclic allocation buffers

Country Status (1)

Country Link
AU (1) AU2014268230A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2602837B (en) * 2021-01-19 2023-09-13 Picocom Tech Limited Methods and controllers for controlling memory operations field


Similar Documents

Publication Publication Date Title
US6505283B1 (en) Efficient memory allocator utilizing a dual free-list structure
US6175900B1 (en) Hierarchical bitmap-based memory manager
US20040098724A1 (en) Associating a native resource with an application
US6643753B2 (en) Methods and systems for managing heap creation and allocation
US10338842B2 (en) Namespace/stream management
US20080209154A1 (en) Page oriented memory management
JP2014504768A (en) Method, computer program product, and apparatus for progressively unloading classes using a region-based garbage collector
CN106557427B (en) Memory management method and device for shared memory database
US20050188164A1 (en) System and method for memory management
US6804761B1 (en) Memory allocation system and method
US20110307677A1 (en) Device for managing data buffers in a memory space divided into a plurality of memory elements
US6985976B1 (en) System, method, and computer program product for memory management for defining class lists and node lists for allocation and deallocation of memory blocks
EP3304317B1 (en) Method and apparatus for managing memory
US20070203959A1 (en) Apparatus and method for managing resources using virtual ID in multiple Java application environment
WO2022120522A1 (en) Memory space allocation method and device, and storage medium
CN111367671A (en) Memory allocation method, device, equipment and readable storage medium
CN114327917A (en) Memory management method, computing device and readable storage medium
US20100325083A1 (en) Skip list generation
US20240111669A1 (en) Allocation of memory within a data type-specific memory heap
US10860472B2 (en) Dynamically deallocating memory pool subinstances
CN105677481A (en) Method and system for processing data and electronic equipment
US20100299672A1 (en) Memory management device, computer system, and memory management method
AU2014268230A1 (en) Cyclic allocation buffers
US20060230242A1 (en) Memory for multi-threaded applications on architectures with multiple locality domains
CN115729702A (en) Application program memory configuration method, electronic device and computer storage medium

Legal Events

Date Code Title Description
MK5 Application lapsed section 142(2)(e) - patent request and compl. specification not accepted