WO2024087875A1 - Memory management method, apparatus, medium and electronic device - Google Patents

Memory management method, apparatus, medium and electronic device

Info

Publication number
WO2024087875A1
WO2024087875A1 PCT/CN2023/115923
Authority
WO
WIPO (PCT)
Prior art keywords
memory
target
target object
allocation
strategy
Prior art date
Application number
PCT/CN2023/115923
Other languages
English (en)
French (fr)
Inventor
张逸飞
陆传胜
季向东
王德宇
顾天晓
Original Assignee
北京火山引擎科技有限公司
脸萌有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京火山引擎科技有限公司 and 脸萌有限公司
Publication of WO2024087875A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/06Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
    • G06F12/0646Configuration or reconfiguration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0893Caches characterised by their organisation or structure
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604Improving or facilitating administration, e.g. storage management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629Configuration or reconfiguration of storage systems
    • G06F3/0631Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656Data buffering arrangements

Definitions

  • the present disclosure relates to the field of computer technology, and in particular to a memory management method, device, medium and electronic device.
  • Memory management is an important means to ensure efficient and stable operation of computers.
  • Memory management can include multiple parts such as memory resource allocation and recycling.
  • the memory management method in the related art has the problem of low memory allocation speed.
  • the present disclosure provides a memory management method, the method comprising:
  • the memory allocation strategy comprising a first strategy and a second strategy, the first strategy indicating that memory of a corresponding size is allocated to the target object in an allocation buffer of a heap memory, and the second strategy indicating that memory of a corresponding size is allocated to the target object in the heap memory;
  • according to the memory allocation strategy corresponding to the target object, memory is allocated for the target object in a target allocation buffer corresponding to the target thread, or memory is allocated for the target object in the heap memory, wherein the target allocation buffer is used to allocate memory for multiple objects of the target thread corresponding to the first strategy.
  • the present disclosure provides a memory management device, the device comprising:
  • a first acquisition module is used to acquire the target object to which memory is currently to be allocated in the target thread;
  • a first determination module is used to determine a memory allocation strategy corresponding to the target object according to the references of the target object to other objects, wherein the memory allocation strategy includes a first strategy and a second strategy, wherein the first strategy indicates that a memory of a corresponding size is allocated to the target object in an allocation buffer of a heap memory, and the second strategy indicates that a memory of a corresponding size is allocated to the target object in the heap memory;
  • An allocation module is used to allocate memory for the target object in a target allocation buffer corresponding to the target thread or to allocate memory for the target object in the heap memory according to a memory allocation strategy corresponding to the target object, wherein the target allocation buffer is used to allocate memory for multiple objects of the target thread corresponding to the first strategy.
  • the present disclosure provides a computer-readable medium having a computer program stored thereon, which implements the steps of the method described in the first aspect when the program is executed by a processing device.
  • an electronic device including:
  • a processing device is used to execute the computer program in the storage device to implement the steps of the method in the first aspect.
  • the memory allocation strategy corresponding to the target object is determined, and according to the memory allocation strategy corresponding to the target object, memory is allocated to the target object in the target allocation buffer corresponding to the target thread, or in the heap memory. Therefore, under the framework of the memory management algorithm, in addition to directly allocating memory of the corresponding size to the target object in the heap memory, a larger block of memory, the target allocation buffer, can be applied for in the heap memory and used to allocate memory for multiple objects of the target thread corresponding to the first strategy.
  • In this way, the process of allocating memory for multiple objects of the target thread corresponding to the first strategy can be completed within a single buffer, thereby reducing the number of times memory is requested from the operating system or the next-layer memory management system, reducing the CPU usage of memory-management code, and thus increasing the speed of memory allocation and improving memory management efficiency.
  • Since the first strategy allocates memory in an allocation buffer in the heap memory, the first strategy can be regarded as a fast path for the second strategy, which can further improve the object memory allocation speed.
  • FIG. 1 is a flowchart showing a memory management method according to an exemplary embodiment of the present disclosure.
  • Fig. 2 is a schematic diagram of the structure of an object allocated in an allocation buffer according to an exemplary embodiment of the present disclosure.
  • Fig. 3 is a block diagram showing a memory management device according to an exemplary embodiment of the present disclosure.
  • Fig. 4 is a schematic diagram showing the structure of an electronic device according to an exemplary embodiment of the present disclosure.
  • a prompt message is sent to the user to clearly prompt the user that the operation requested to be performed will require obtaining and using the user's personal information.
  • the user can autonomously choose whether to provide personal information to software or hardware such as an electronic device, application, server, or storage medium that performs the operation of the technical solution of the present disclosure according to the prompt message.
  • In response to receiving the user's active request, the prompt information may be sent to the user in the form of a pop-up window, in which the prompt information may be presented in text form.
  • The pop-up window may also carry options for the user to choose "agree" or "disagree"; choosing "agree" provides personal information to the electronic device.
  • an object is an abstraction of the objective world and a representation of the objective world in a computer.
  • a person or an object can be represented by an object in a computer.
  • An object occupies memory and contains many attributes. The values of the attributes are stored in the memory occupied by the object.
  • the attribute of an object can be the address of another object, that is, a pointer to other objects. Other objects can be referenced through pointers.
  • An object is alive if it can be referenced by other alive objects. If an object cannot be referenced by any other alive object, then the object cannot be accessed, and the memory space it occupies can be recycled and released; if such memory is not recycled, memory leaks and memory exhaustion will result, and the program will crash and be unable to continue running.
  • the embodiments of the present disclosure provide a memory management method, device, medium and electronic device to at least increase the memory allocation speed, thereby improving the memory management efficiency to a certain extent.
  • FIG1 is a flowchart of a memory management method according to an exemplary embodiment of the present disclosure.
  • the memory management method can be applied to electronic devices, where the electronic devices can be, for example, mobile phones, tablet computers, and notebooks.
  • the memory management method includes the following steps:
  • The memory allocation strategy corresponding to the target object is determined based on the target object's references to other objects. The memory allocation strategy includes a first strategy and a second strategy: the first strategy indicates allocating memory of a corresponding size for the target object in an allocation buffer of the heap memory, and the second strategy indicates allocating memory of a corresponding size for the target object in the heap memory.
  • The target thread can be the thread currently being executed in the electronic device. It can be understood that during the execution of the target thread, multiple objects may need memory, and these objects can be allocated memory in a preset order. The object for which memory is currently about to be allocated in the target thread is therefore the target object.
  • the heap memory is another memory area different from the stack area, global data area and code area.
  • the heap allows threads to dynamically apply for a certain size of memory space during execution. Therefore, the target allocation buffer can be understood as a large memory space applied by the target thread in the heap memory.
  • the memory allocation strategy corresponding to the target object can be determined according to the references of the target object to other objects, that is, according to the references of the target object to other objects, it is determined whether the memory allocation strategy corresponding to the target object is the first strategy or the second strategy. If the memory allocation strategy is determined to be the first strategy, a corresponding size of free memory is allocated to the target object in the target allocation buffer corresponding to the target thread; if the memory allocation strategy is determined to be the second strategy, a corresponding size of free memory is allocated to the target object in the heap memory.
  • When free memory of a corresponding size is allocated to the target object in the heap memory, the allocation can be performed using any memory allocation method from existing memory management methods.
  • the memory allocation strategy corresponding to the target object is determined, and according to the memory allocation strategy corresponding to the target object, memory is allocated to the target object in the target allocation buffer corresponding to the target thread, or memory is allocated to the target object in the heap memory.
  • a larger memory can also be applied for in the heap memory, and memory can be allocated to multiple objects corresponding to the first strategy of the target thread through the memory.
  • the process of allocating memory to multiple objects corresponding to the first strategy of the target thread can be completed, thereby reducing the number of times of applying for memory to the operating system or the memory management system of the next layer, reducing the CPU occupancy of the memory management related code, thereby increasing the speed of memory allocation and improving the efficiency of memory management.
  • Since the first strategy allocates memory in an allocation buffer in the heap memory, the first strategy can be regarded as a fast path for the second strategy, which can further improve the object memory allocation speed.
  • Using a memory allocation method different from the second strategy of the related art is thus an optimization of the allocation method in existing memory management methods.
  • step S120 may include the following steps:
  • In response to the target object referencing other objects, it is determined that the memory allocation strategy corresponding to the target object is the second strategy; in response to the target object not referencing other objects, it is determined that the memory allocation strategy corresponding to the target object is the first strategy.
  • Whether the target object references other objects can be determined by whether the target object holds pointers to other objects: when the target object has no pointers to other objects, it is determined that the target object does not reference other objects.
  • When determining whether the target object uses the first strategy or the second strategy to allocate memory, it can be determined whether the target object references other objects. If the target object references other objects, the memory allocation strategy corresponding to the target object is determined to be the second strategy, and free memory of a corresponding size is allocated to the target object in the heap memory. If the target object does not reference other objects, the memory allocation strategy corresponding to the target object is determined to be the first strategy.
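The strategy decision above can be sketched as follows. This is a minimal illustration, not the patented implementation; the `Obj` class and the strategy names are assumptions made for the example.

```python
# Minimal sketch of the two-strategy decision: pointer-free objects take the
# fast path (allocation buffer), objects holding references take the heap path.
FIRST_STRATEGY = "allocation_buffer"   # first strategy: allocate in the buffer
SECOND_STRATEGY = "heap"               # second strategy: allocate in the heap

class Obj:
    """Illustrative object: a size plus the objects it references, if any."""
    def __init__(self, size, refs=()):
        self.size = size
        self.refs = tuple(refs)  # pointers to other objects

def choose_strategy(obj):
    """Second strategy if the object references other objects, else first."""
    return SECOND_STRATEGY if obj.refs else FIRST_STRATEGY
```

A pointer-free object (no `refs`) is routed to the allocation buffer; anything holding a reference falls back to regular heap allocation.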
  • all target objects may be allocated memory using the first strategy.
  • the first strategy or the second strategy may be randomly selected to allocate memory to the target objects.
  • the method of the embodiment of the present disclosure may further include the following steps:
  • When it is determined that no target allocation buffer has been allocated for the target thread, a free memory space of a preset size is applied for in the heap memory as the target allocation buffer.
  • the target thread may be associated with the address of an allocation buffer.
  • the information saved by the target thread may be set to include a field corresponding to the address of the target allocation buffer, so that the address of the allocation buffer associated with the target thread may be obtained from the corresponding field, and then based on the address of the allocation buffer associated with the target thread, it may be determined whether the target allocation buffer is allocated to the target thread.
  • If the address of the associated allocation buffer is a null value, it can be determined that no target allocation buffer is allocated for the target thread, so that a preset size of free memory space can be applied for in the heap memory as the target allocation buffer. If the address of the associated allocation buffer is not a null value, it can be determined that a target allocation buffer has been allocated for the target thread, so that memory is directly allocated for the target object in the target allocation buffer corresponding to the target thread.
  • the address of the associated allocation buffer may be any address in the allocation buffer, such as a start address, a currently used address, or an end address.
  • the preset size of free memory space applied for in the heap memory may be, for example, a space area of 1KB, 10KB, 100KB, etc., which is set according to actual needs.
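The lazy, per-thread creation of the allocation buffer described above can be sketched like this. `Thread`, `heap_alloc`, and the 1KB `BUFFER_SIZE` are illustrative assumptions; the patent text only requires a preset size and a per-thread field holding the buffer's address.

```python
BUFFER_SIZE = 1024  # assumed preset size, e.g. 1 KB

class Thread:
    """Illustrative thread record with a field for its buffer's address."""
    def __init__(self):
        self.buffer = None  # None models the null address: no buffer yet

def get_or_create_buffer(thread, heap_alloc):
    """Return the thread's buffer, applying for one in the heap if absent."""
    if thread.buffer is None:                 # null address: none allocated
        thread.buffer = heap_alloc(BUFFER_SIZE)
    return thread.buffer
```

On the first allocation the thread applies to the heap once; every later first-strategy allocation reuses the same buffer without touching the heap allocator.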
  • allocating memory for the target object in the target allocation buffer corresponding to the target thread in step S130 may include the following steps:
  • When it is determined that the free memory in the target allocation buffer does not support allocating memory for the target object, a preset size of free memory space is applied for in the heap memory as a new target allocation buffer, and memory is allocated for the target object in the new target allocation buffer.
  • memory can be allocated to each target object in the target allocation buffer by means of pointer collision. In this case, it can be determined whether the sum of the currently used position in the target allocation buffer and the actual size of the target object is greater than the end position of the target allocation buffer to determine whether the free memory in the target allocation buffer supports the memory allocation for the target object.
  • allocating memory for objects by using pointer collision can increase the speed of object allocation. It also utilizes the principle of locality of the program to place objects that may be accessed in the near future in adjacent locations in space, increasing the hit rate of the central processing unit cache (CPU cache) and thus improving program performance.
  • the following is an example to illustrate how to allocate memory for a target object in a target allocation buffer corresponding to a target thread in an embodiment of the present disclosure.
  • the address of the allocation buffer associated with thread t can be obtained. Further assuming that the address obtained is a null value, it is determined that no target allocation buffer is allocated for thread t. At this time, a 1KB memory space can be applied from the heap memory as the target allocation buffer corresponding to thread t. At this time, the target allocation buffer can be marked with three pointers, which are base, end, and top. Base represents the starting address of the target allocation buffer, end represents the ending address of the target allocation buffer, and top represents the current address of the target allocation buffer. At the same time, the value of any pointer can be assigned to the address of the allocation buffer associated with thread t to indicate that the target allocation buffer has been allocated for the target thread.
  • The actual size S′ of the object can be calculated based on the size of the object and the memory alignment rule, and the value of S′+top is compared with end to determine whether the free memory in the target allocation buffer supports the memory allocation for the target object. If S′+top is less than or equal to end, it is determined that the free memory in the target allocation buffer supports the allocation, and memory is allocated to the target object in the target allocation buffer. If S′+top is greater than end, it is determined that the free memory in the target allocation buffer does not support the allocation; 1KB of free memory space is then applied for again in the heap memory as a new target allocation buffer, and memory is allocated to each target object in the new target allocation buffer in the same way.
  • The first time memory is allocated in the target allocation buffer, the top value is equal to the base value, that is, memory allocation starts from the starting address of the allocation buffer. After the first object's memory allocation is completed, top is moved to the position S′+base. The second time memory is allocated in the target allocation buffer, allocation starts from the moved top position, and so on, until the value of S′+top is greater than the value of end, at which point 1KB of free memory space is applied for again in the heap memory as the new target allocation buffer.
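The base/top/end pointer-collision ("bump") allocation in the example above can be sketched as follows, with addresses modeled as plain integers. The 8-byte `ALIGN` constant is an assumption standing in for the platform's memory alignment rule.

```python
ALIGN = 8        # assumed alignment rule
BUF_SIZE = 1024  # 1 KB buffer, as in the example above

def aligned(size):
    """Round an object size up to the alignment boundary (the 'actual size' S')."""
    return (size + ALIGN - 1) // ALIGN * ALIGN

class AllocBuffer:
    def __init__(self, base):
        self.base = base            # starting address of the buffer
        self.top = base             # current allocation position (top == base at first)
        self.end = base + BUF_SIZE  # end address of the buffer

def bump_alloc(buf, size, new_buffer):
    """Allocate `size` bytes at top; if S'+top would exceed end, switch buffers."""
    s = aligned(size)
    if buf.top + s > buf.end:       # free memory does not support this object
        buf = new_buffer()          # apply for a fresh buffer in the heap
        # (a single object is assumed to fit in an empty buffer here)
    addr, buf.top = buf.top, buf.top + s
    return addr, buf
```

Each successful allocation just advances `top` by the aligned size, so consecutive objects of the thread land at adjacent addresses, which is what gives the CPU-cache locality benefit mentioned above.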
  • memory for each target object in the target allocation buffer may also be allocated by means of randomly acquiring free memory.
  • the memory allocation strategy may include a first strategy and a second strategy.
  • the memory allocation method of the embodiment of the present disclosure may further include the following steps:
  • a surviving first object and a surviving second object in the heap memory are marked, the first object being an object to which memory is allocated by applying the first strategy, and the second object being an object to which memory is allocated by applying the second strategy;
  • the unmarked second objects and the candidate allocation buffers in the heap memory are recovered, where a candidate allocation buffer is an allocation buffer in which no first object is marked.
  • satisfying a preset recycling condition may be that a memory occupancy rate reaches a set threshold, or reaches a certain memory recycling cycle, etc.
  • whether the first object or the second object is alive can be determined by determining whether the first object or the second object is referenced by other alive objects as mentioned above.
  • After marking, the unmarked second objects in the heap memory and the candidate allocation buffers, that is, allocation buffers containing no marked first object, can be recycled.
  • the recycling logic since the candidate allocation buffer can be recycled as a whole, the recycling logic only needs to be executed once to complete the recycling of multiple objects, thereby reducing the number of times the recycling and cleaning logic is executed, increasing the memory recycling and cleaning speed, and thereby improving memory management efficiency.
  • In one case, the first objects in a candidate allocation buffer are all dead at recycling time, so none of the first objects in the buffer is marked.
  • In another case, at recycling time part of the first objects in the allocation buffer survive and part do not; the surviving part is moved to other memory space in the heap memory, so that no marked first object remains in the candidate allocation buffer.
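The whole-buffer recycling step can be sketched as follows. Representing buffers as a mapping from buffer to the objects allocated in it is an assumption made purely for illustration.

```python
def find_candidate_buffers(buffers, marked):
    """Return buffers containing no marked (live) first object.

    buffers: mapping buffer_id -> list of objects allocated in that buffer
    marked:  set of objects marked live during the marking phase
    """
    return [b for b, objs in buffers.items()
            if not any(o in marked for o in objs)]
```

A candidate buffer is reclaimed as a single unit, so the recycling logic runs once per buffer rather than once per object, which is the speed-up described above.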
  • the memory allocation method of the embodiment of the present disclosure may further include the following steps:
  • the first object marked in the heap memory is moved to the target memory space to obtain a candidate allocation buffer.
  • the total memory usage of each marked first object may be calculated by accumulation.
  • a target memory space of the same size as the total memory usage can be applied for in the heap memory. Then, each marked first object in the heap memory can be moved to the target memory space. In this way, the marked first object no longer exists in the candidate allocation buffer.
  • An object forwarding table and a pointer queue can be used to help move each marked first object.
  • The pointer queue stores the addresses of pointers that simultaneously meet the following two conditions, for example the address &x.f, where x.f represents a pointer. Assuming that the pointer x.f points to the object o, the first condition is that the object x, which references the object o, is a live object, and the second condition is that the object o pointed to by x.f is in the target allocation buffer.
  • The forwarding table records the movement of objects. For example, if the forwarding table records that o has been moved to y, it means that the object o has moved to the new address y.
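The interplay of the pointer queue and the forwarding table can be sketched as follows. Pointer slots are modeled as a list of integer addresses, and `move` stands in for copying an object into the target memory space; all names here are illustrative, not from the patent.

```python
def process_pointer_queue(queue, slots, forwarding, move):
    """Move each referenced object once and rewrite every pointer to it.

    queue:      indices into `slots` whose pointers target objects to be moved
    slots:      mutable list of pointer values (old object addresses)
    forwarding: table mapping old address -> new address for moved objects
    move:       move(old) copies the object and returns its new address
    """
    for i in queue:
        old = slots[i]
        if old not in forwarding:        # first visit: move the object once
            forwarding[old] = move(old)
        slots[i] = forwarding[old]       # later visits just follow the table
```

Because the forwarding table is consulted before moving, an object referenced from several pointers (like o1 via x1.f1 and x2.f2 in the example below) is copied exactly once, and every pointer to it is still rewritten.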
  • the following describes a process of moving a first object marked in a heap memory to a target memory space by taking an example.
  • the address &x2.f2 is dereferenced to get the pointer x2.f2, which also points to the object o1.
  • According to the forwarding table, the object o1 has already been moved to the address y when the address &x1.f1 was processed, which means that the object o1 pointed to by the pointer x2.f2 has been moved and does not need to be moved again.
  • Next, the address &x3.f3 is taken out of the pointer queue to determine whether the dereferenced object o2 needs to be moved, and so on, until all the addresses in the pointer queue are processed and every marked first object in the heap memory has been moved to the target memory space.
  • the process of moving o1 to the target memory space is as follows according to the actual size of the object o1:
  • The starting address p of object o1 is obtained from the object starting address recorded when the object was allocated. Since the objects in the allocation buffer are allocated contiguously, the size q of object o1 can be obtained from the starting address p′ of the object immediately following o1.
  • Note that the pointer x.f does not necessarily point to the beginning of object o1; it may point to a position inside object o1 at an offset from its start, denoted offset.
  • multiple moving threads may be used to move in parallel to speed up the recycling and cleaning process.
  • the memory allocated for the target object in the target allocation buffer is used to sequentially store metadata of the target object and content of the target object, wherein when the target object is referenced, the pointer referencing the target object points to the starting address of the content of the target object.
  • Metadata may be added to objects in the allocation buffer, for example to record the starting address of the object, the size of the object, the forwarding table and other information.
  • the metadata may be further set before the starting position of the content of the corresponding target object, and the pointer pointing to the target object may be set to point to the starting address of the content of the target object.
  • the target object includes metadata and content corresponding to the object.
  • the metadata and content corresponding to the object occupy memory space respectively, and the pointer pointing to the target object points to the starting address of the content corresponding to the object.
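The layout described above, metadata placed immediately before the content with references targeting the content's start, can be sketched as follows. The fixed `META_SIZE` is an assumption made for the example; in practice the recorded metadata (starting address, size, and so on) would determine it.

```python
META_SIZE = 16  # assumed fixed-size metadata header

def lay_out_object(buffer_top, content_size):
    """Place metadata then content at buffer_top.

    Returns (metadata_addr, content_addr, new_top); pointers that reference
    the object point at content_addr, not metadata_addr.
    """
    meta = buffer_top
    content = meta + META_SIZE          # the reference target
    return meta, content, content + content_size

def metadata_of(content_ptr):
    """Recover the metadata address from a pointer to the object's content."""
    return content_ptr - META_SIZE
```

Because references point at the content, first-strategy objects look identical to second-strategy objects from the program's point of view, while the runtime can still step back `META_SIZE` bytes to reach the metadata.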
  • From the perspective of the heap memory, the allocation buffer is a single large object, and the execution of the program does not need to know the detailed implementation of the allocation buffer; the first objects inside it are transparent to the heap memory. Therefore, when any existing memory allocation and memory recovery method is used for allocation and recovery, the existence of the allocation buffer is not perceived: the allocation buffer can be treated as a whole object and marked and recovered normally, so no existing memory management method needs to be changed.
  • the marking and recycling process of some memory management methods does not require additional special processing, so that the allocation and movement process of the first object can be embedded in any existing memory management method.
  • the objects allocated by the first strategy and the second strategy can have different object layouts.
  • the objects allocated by the second strategy may not contain metadata, and the objects allocated by the first strategy may include metadata, but the objects allocated by the two strategies are consistent at the memory management level. Therefore, it can be achieved that: objects allocated by the first strategy can be managed by the first strategy and the second strategy at the same time, and objects allocated by different strategies can reference each other.
  • FIG. 3 is a block diagram of a memory management device according to an exemplary embodiment of the present disclosure.
  • the memory management device 300 includes:
  • a first acquisition module 310 is used to acquire a target object to be currently allocated memory in a target thread
  • a first determination module 320 is used to determine a memory allocation strategy corresponding to the target object according to the references of the target object to other objects, wherein the memory allocation strategy includes a first strategy and a second strategy, wherein the first strategy indicates that a memory of a corresponding size is allocated to the target object in an allocation buffer of a heap memory, and the second strategy indicates that a memory of a corresponding size is allocated to the target object in the heap memory;
  • the allocation module 330 is used to allocate memory for the target object in the target allocation buffer corresponding to the target thread, or to allocate memory for the target object in the heap memory according to the memory allocation strategy corresponding to the target object.
  • the target allocation buffer is used to allocate memory for multiple objects of the target thread corresponding to the first strategy.
  • the first determination module 320 is further used to determine that the memory allocation policy corresponding to the target object is the second policy in response to the target object referencing other objects; and to determine that the memory allocation policy corresponding to the target object is the first policy in response to the target object not referencing other objects.
  • the apparatus 300 further includes:
  • a second determination module configured to determine whether a target allocation buffer is allocated for the target thread based on an address of an allocation buffer associated with the target thread
  • the target allocation buffer application module is used to apply for a preset size of free memory space in the heap memory as the target allocation buffer when it is determined that the target allocation buffer is not allocated to the target thread.
  • the allocation module 330 includes:
  • a determination submodule used to determine whether the free memory in the target allocation buffer supports allocating memory for the target object
  • a first allocation submodule configured to allocate memory to the target object in the target allocation buffer when it is determined that free memory in the target allocation buffer supports allocation of memory to the target object;
  • the second allocation submodule is used to apply for free memory space of a preset size in the heap memory as a new target allocation buffer and allocate memory for the target object in the new target allocation buffer when it is determined that the free memory in the target allocation buffer does not support the memory allocation for the target object.
  • the apparatus 300 further includes:
  • a marking module used for marking a surviving first object and a surviving second object in the heap memory when a preset memory recycling condition is met, wherein the first object is an object to which memory is allocated by applying the first policy, and the second object is an object to which memory is allocated by applying the second policy;
  • the recycling module is used to recycle the unmarked second objects in the heap memory and the candidate allocation buffer, where the candidate allocation buffer is an allocation buffer in which no marked first object exists.
  • the apparatus 300 further includes:
  • a second acquisition module is used to acquire the total memory usage of the first object marked in the heap memory
  • a target memory space application module used for applying for a target memory space of the size of the total memory occupancy in the heap memory
  • a moving module is used to move the first object marked in the heap memory to the target memory space to obtain the candidate allocation buffer.
  • the memory allocated for the target object in the target allocation buffer is used to store, in sequence, the metadata of the target object and the content of the target object, wherein, when the target object is referenced, the pointer referencing the target object points to the starting address of the content of the target object.
  • the terminal device in the embodiment of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), vehicle-mounted terminals (such as vehicle-mounted navigation terminals), etc., and fixed terminals such as digital TVs, desktop computers, etc.
  • the electronic device shown in FIG. 4 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
  • the electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processing unit, etc.) 401, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 to a random access memory (RAM) 403.
  • the RAM 403 also stores various programs and data required for the operation of the electronic device 400.
  • the processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404.
  • An input/output (I/O) interface 405 is also connected to the bus 404.
  • the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 408 including, for example, a magnetic tape, a hard disk, etc.; and communication devices 409.
  • the communication devices 409 may allow the electronic device 400 to communicate wirelessly or wired with other devices to exchange data.
  • although FIG. 4 shows an electronic device 400 with various devices, it should be understood that implementing or providing all of the devices shown is not required; more or fewer devices may alternatively be implemented or provided.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the process described above with reference to the flowchart may be implemented as a computer software program.
  • the computer program may be downloaded and installed from a network via the communication device 409, or installed from the storage device 408, or installed from the ROM 402.
  • when the computer program is executed by the processing device 401, the above functions defined in the method of the embodiment of the present disclosure are executed.
  • the computer-readable medium disclosed above may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
  • Computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium containing or storing a program that may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which a computer-readable program code is carried.
  • This propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above.
  • the computer readable signal medium may also be any computer readable medium other than a computer readable storage medium, which may send, propagate or transmit a program for use by or in conjunction with an instruction execution system, apparatus or device.
  • the program code contained on the computer readable medium may be transmitted using any suitable medium, including but not limited to: wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • the electronic devices may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network).
  • Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internet (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future developed network.
  • the computer-readable medium may be included in the electronic device, or may exist independently without being installed in the electronic device.
  • the computer-readable medium carries one or more programs.
  • when the one or more programs are executed by the electronic device, the electronic device is caused to: obtain the target object to which memory is currently to be allocated in the target thread; determine a memory allocation strategy corresponding to the target object, wherein the memory allocation strategy includes a first strategy for allocating memory to the target object in an allocation buffer of a heap memory; and when it is determined that the memory allocation strategy is the first strategy, allocate memory to the target object in a target allocation buffer corresponding to the target thread, wherein the target allocation buffer is used to allocate memory to multiple objects of the target thread corresponding to the first strategy.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including, but not limited to, object-oriented programming languages, such as Java, Smalltalk, C++, and conventional procedural programming languages, such as "C" or similar programming languages.
  • the program code may be executed entirely on the user's computer, partially on the user's computer, as a separate software package, partially on the user's computer and partially on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., via the Internet using an Internet service provider).
  • each box in the flowchart or block diagram may represent a module, a program segment, or a portion of a code, which contains one or more executable instructions for implementing a specified logical function.
  • the functions marked in the boxes may also occur in an order different from that marked in the accompanying drawings. For example, two boxes represented in succession may actually be executed substantially in parallel, and they may sometimes be executed in the opposite order, depending on the functions involved.
  • each block in the block diagram and/or flow chart, and combinations of blocks in the block diagram and/or flow chart can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or can be implemented by a combination of dedicated hardware and computer instructions.
  • modules involved in the embodiments described in the present disclosure may be implemented by software or hardware, wherein the name of a module does not, in some cases, limit the module itself.
  • exemplary types of hardware logic components include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and the like.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, device, or equipment.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or equipment, or any suitable combination of the foregoing.
  • a more specific example of a machine-readable storage medium may include an electrical connection based on one or more lines, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • Example 1 provides a memory management method, the method comprising:
  • a target object to which memory is currently to be allocated in a target thread is acquired;
  • a memory allocation strategy corresponding to the target object is determined according to the references of the target object to other objects, wherein the memory allocation strategy includes a first strategy and a second strategy, wherein the first strategy indicates that a memory of a corresponding size is allocated for the target object in an allocation buffer of a heap memory, and the second strategy indicates that a memory of a corresponding size is allocated for the target object in the heap memory;
  • memory is allocated for the target object in a target allocation buffer corresponding to the target thread, or memory is allocated for the target object in the heap memory, and the target allocation buffer is used to allocate memory for multiple objects of the target thread corresponding to the first strategy.
  • Example 2 provides the method of Example 1, wherein determining the memory allocation strategy corresponding to the target object includes:
  • in response to the target object referencing other objects, it is determined that the memory allocation strategy corresponding to the target object is the second strategy; and in response to the target object not referencing other objects, it is determined that the memory allocation strategy corresponding to the target object is the first strategy.
  • Example 3 provides the method of Example 1, wherein the method further includes:
  • based on an address of an allocation buffer associated with the target thread, it is determined whether a target allocation buffer is allocated for the target thread; and when it is determined that no target allocation buffer is allocated for the target thread, a free memory space of a preset size is applied for in the heap memory as the target allocation buffer.
  • Example 4 provides the method of Example 1, wherein allocating memory for the target object in a target allocation buffer corresponding to the target thread includes:
  • it is determined whether the free memory in the target allocation buffer supports allocating memory for the target object; when it is determined that the free memory supports the allocation, memory is allocated for the target object in the target allocation buffer; and when it is determined that the free memory does not support the allocation, a free memory space of a preset size is applied for in the heap memory as a new target allocation buffer, and memory is allocated for the target object in the new target allocation buffer.
  • Example 5 provides the method of Example 1, wherein the method further includes:
  • when a preset memory recycling condition is met, a surviving first object and a surviving second object in the heap memory are marked, wherein the first object is an object to which memory is allocated by applying the first strategy and the second object is an object to which memory is allocated by applying the second strategy; and the unmarked second objects in the heap memory and a candidate allocation buffer are recovered, where the candidate allocation buffer is an allocation buffer in which no marked first object exists.
  • Example 6 provides the method of Example 5, wherein the method further includes:
  • a total memory occupancy of the first objects marked in the heap memory is acquired; a target memory space of the size of the total memory occupancy is applied for in the heap memory; and the first objects marked in the heap memory are moved to the target memory space to obtain the candidate allocation buffer.
  • Example 7 provides the method of Example 1, wherein the memory allocated for the target object in the target allocation buffer is used to store the metadata of the target object and the content of the target object in sequence, wherein, when the target object is referenced, the pointer referencing the target object points to the starting address of the content of the target object.
  • Example 8 provides a memory management device, the device comprising:
  • the first acquisition module is used to acquire the target object to which memory is currently to be allocated in the target thread;
  • a first determination module is used to determine a memory allocation strategy corresponding to the target object according to the references of the target object to other objects, wherein the memory allocation strategy includes a first strategy and a second strategy, wherein the first strategy indicates that a memory of a corresponding size is allocated to the target object in an allocation buffer of a heap memory, and the second strategy indicates that a memory of a corresponding size is allocated to the target object in the heap memory;
  • the allocation module is used to, according to the memory allocation strategy corresponding to the target object, allocate memory for the target object in a target allocation buffer corresponding to the target thread, or allocate memory for the target object in the heap memory, wherein the target allocation buffer is used to allocate memory for multiple objects of the target thread corresponding to the first strategy.
  • Example 9 provides a computer-readable medium having a computer program stored thereon, which implements the steps of any of the methods described in Examples 1-7 when executed by a processing device.
  • Example 10 provides an electronic device, including:
  • a storage device having a computer program stored thereon;
  • a processing device for executing the computer program in the storage device to implement the steps of any one of the methods described in Examples 1-7.


Abstract

Provided are a memory management method and apparatus, a medium, and an electronic device. The method includes: acquiring a target object to which memory is currently to be allocated in a target thread; determining, according to references of the target object to other objects, a memory allocation strategy corresponding to the target object, the memory allocation strategy including a first strategy and a second strategy, the first strategy indicating that memory of a corresponding size is allocated for the target object in an allocation buffer of a heap memory, and the second strategy indicating that memory of a corresponding size is allocated for the target object in the heap memory; and, according to the memory allocation strategy corresponding to the target object, allocating memory for the target object in a target allocation buffer corresponding to the target thread, or allocating memory for the target object in the heap memory, the target allocation buffer being used to allocate memory for multiple objects of the target thread corresponding to the first strategy. The method can increase the speed of memory allocation and thus improve memory management efficiency.

Description

Memory management method and apparatus, medium, and electronic device
This application claims priority to Chinese Patent Application No. 202211321392.2, filed with the China National Intellectual Property Administration on October 26, 2022 and entitled "Memory management method and apparatus, medium, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of computer technology, and in particular to a memory management method and apparatus, a medium, and an electronic device.
Background
In the field of computer technology, memory management is an important means of ensuring that a computer runs efficiently and stably. Memory management may include multiple parts such as memory resource allocation and recycling. However, memory management methods in the related art suffer from low memory allocation speed.
Summary
This Summary is provided to introduce concepts in a brief form that are described in detail in the Detailed Description below. This Summary is not intended to identify key or essential features of the claimed technical solutions, nor is it intended to limit the scope of the claimed technical solutions.
In a first aspect, the present disclosure provides a memory management method, the method including:
acquiring a target object to which memory is currently to be allocated in a target thread;
determining, according to references of the target object to other objects, a memory allocation strategy corresponding to the target object, the memory allocation strategy including a first strategy and a second strategy, the first strategy indicating that memory of a corresponding size is allocated for the target object in an allocation buffer of a heap memory, and the second strategy indicating that memory of a corresponding size is allocated for the target object in the heap memory;
according to the memory allocation strategy corresponding to the target object, allocating memory for the target object in a target allocation buffer corresponding to the target thread, or allocating memory for the target object in the heap memory, the target allocation buffer being used to allocate memory for multiple objects of the target thread corresponding to the first strategy.
In a second aspect, the present disclosure provides a memory management apparatus, the apparatus including:
a first acquisition module configured to acquire a target object to which memory is currently to be allocated in a target thread;
a first determination module configured to determine, according to references of the target object to other objects, a memory allocation strategy corresponding to the target object, the memory allocation strategy including a first strategy and a second strategy, the first strategy indicating that memory of a corresponding size is allocated for the target object in an allocation buffer of a heap memory, and the second strategy indicating that memory of a corresponding size is allocated for the target object in the heap memory;
an allocation module configured to, according to the memory allocation strategy corresponding to the target object, allocate memory for the target object in a target allocation buffer corresponding to the target thread, or allocate memory for the target object in the heap memory, the target allocation buffer being used to allocate memory for multiple objects of the target thread corresponding to the first strategy.
In a third aspect, the present disclosure provides a computer-readable medium having a computer program stored thereon, the program implementing the steps of the method in the first aspect when executed by a processing device.
In a fourth aspect, the present disclosure provides an electronic device, including:
a storage device having a computer program stored thereon;
a processing device configured to execute the computer program in the storage device to implement the steps of the method in the first aspect.
Through the above technical solutions, a target object to which memory is currently to be allocated in a target thread is acquired; a memory allocation strategy corresponding to the target object is determined according to references of the target object to other objects; and, according to that strategy, memory is allocated for the target object in a target allocation buffer corresponding to the target thread, or memory is allocated for the target object in the heap memory. Thus, within the framework of a memory management algorithm, in addition to directly allocating memory of a corresponding size for the target object in the heap memory, a larger block of memory can be applied for in the heap memory and used to allocate memory for multiple objects of the target thread corresponding to the first strategy. In this way, once the target allocation buffer has been applied for, memory can be allocated for multiple objects of the target thread corresponding to the first strategy, which reduces the number of requests for memory made to the operating system or to the underlying memory management system, lowers the CPU usage of memory-management-related code, and thereby increases the speed of memory allocation and improves memory management efficiency.
In addition, since the first strategy allocates memory in an allocation buffer within the heap memory, the first strategy can be regarded as a fast path of the second strategy, which further increases the speed of object memory allocation.
Other features and advantages of the present disclosure will be described in detail in the Detailed Description that follows.
Brief Description of the Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale. In the drawings:
FIG. 1 is a flowchart of a memory management method according to an exemplary embodiment of the present disclosure.
FIG. 2 is a schematic structural diagram of an object allocated in an allocation buffer according to an exemplary embodiment of the present disclosure.
FIG. 3 is a block diagram of a memory management apparatus according to an exemplary embodiment of the present disclosure.
FIG. 4 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be construed as being limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the steps described in the method embodiments of the present disclosure may be executed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit the steps shown. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not used to limit the order of, or the interdependence between, the functions performed by these apparatuses, modules, or units.
It should be noted that the modifiers "one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of the messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.
It can be understood that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user shall be informed of the type, scope of use, usage scenarios, and the like of the personal information involved in the present disclosure in an appropriate manner in accordance with relevant laws and regulations, and the user's authorization shall be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly prompt the user that the requested operation will require acquiring and using the user's personal information. Thus, the user can autonomously choose, according to the prompt information, whether to provide personal information to software or hardware, such as an electronic device, an application, a server, or a storage medium, that performs the operations of the technical solutions of the present disclosure.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user in the form of, for example, a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may also carry a selection control for the user to choose "agree" or "disagree" to providing personal information to the electronic device.
It can be understood that the above process of notifying the user and obtaining the user's authorization is only illustrative and does not limit the implementations of the present disclosure; other manners that satisfy relevant laws and regulations may also be applied to the implementations of the present disclosure.
Meanwhile, it can be understood that the data involved in this technical solution (including but not limited to the data itself and the acquisition or use of the data) shall comply with the requirements of the corresponding laws, regulations, and relevant provisions.
In programming languages, an object is an abstraction of the objective world and its representation in a computer. For example, a person or a physical thing can be represented by an object in a computer. An object occupies memory and contains many attributes, the values of which are stored in the memory occupied by the object. An attribute of an object may be the address of another object, i.e., a pointer to another object, through which the other object can be referenced.
An object being alive means that it can be reached through references from other live objects. If an object cannot be reached through references from other live objects, it cannot be accessed, and the memory space it occupies can be recovered and released; otherwise a memory leak occurs, memory is exhausted, and the program crashes and cannot continue running.
In the related art, there are many memory allocation and recycling techniques. However, the inventors found in long-term research that, in these allocation techniques, each time memory is allocated for an object, memory must be requested from the operating system or from the underlying memory management system before being allocated; since memory must be requested many times, the allocation process is lengthened and the allocation speed is reduced. Likewise, in these recycling techniques, the recycling and cleanup logic is executed once for each unmarked object in order to recover the memory corresponding to that object; since the logic must be executed many times, the recycling process is lengthened and the cleanup speed is reduced. It can be seen that memory management methods in the related art suffer from slow memory allocation and slow memory recycling, which leads to low memory management efficiency.
In view of this, embodiments of the present disclosure provide a memory management method and apparatus, a medium, and an electronic device, so as to at least increase the speed of memory allocation and thus improve memory management efficiency to a certain extent.
The embodiments of the present disclosure are further explained below with reference to the accompanying drawings.
FIG. 1 is a flowchart of a memory management method according to an exemplary embodiment of the present disclosure. The memory management method may be applied to an electronic device, which may be, for example, a mobile phone, a tablet computer, or a laptop. Referring to FIG. 1, the memory management method includes the following steps:
S110: acquire a target object to which memory is currently to be allocated in a target thread.
S120: determine, according to references of the target object to other objects, a memory allocation strategy corresponding to the target object, the memory allocation strategy including a first strategy and a second strategy, the first strategy indicating that memory of a corresponding size is allocated for the target object in an allocation buffer of a heap memory, and the second strategy indicating that memory of a corresponding size is allocated for the target object in the heap memory.
S130: according to the memory allocation strategy corresponding to the target object, allocate memory for the target object in a target allocation buffer corresponding to the target thread, or allocate memory for the target object in the heap memory, the target allocation buffer being used to allocate memory for multiple objects of the target thread corresponding to the first strategy.
The target thread may be a thread currently executing in the electronic device. It can be understood that, during execution, the target thread may allocate multiple objects, and memory may be allocated for these objects in a preset order; the object in the target thread for which memory is currently about to be allocated is the target object.
The heap memory is a memory region distinct from the stack region, the global data region, and the code region. The heap allows a thread to dynamically apply for a memory space of a certain size during execution. The target allocation buffer can therefore be understood as a relatively large block of memory space that the target thread applies for in the heap memory.
In the embodiments of the present disclosure, after each target object is acquired, the memory allocation strategy corresponding to the target object may be determined according to the references of the target object to other objects, i.e., it is determined whether the strategy corresponding to the target object is the first strategy or the second strategy. If the strategy is determined to be the first strategy, free memory of a corresponding size is allocated for the target object in the target allocation buffer corresponding to the target thread; if the strategy is determined to be the second strategy, free memory of a corresponding size is allocated for the target object in the heap memory.
When allocating free memory of a corresponding size for the target object in the heap memory, the allocation method of any existing memory management method may be used.
Thus, with the above approach, by acquiring the target object to which memory is currently to be allocated in the target thread, determining the memory allocation strategy corresponding to the target object according to its references to other objects, and, according to that strategy, allocating memory for the target object in the target allocation buffer corresponding to the target thread or in the heap memory, a larger block of memory can be applied for in the heap memory within the framework of a memory management algorithm, in addition to allocating memory of a corresponding size for the target object directly in the heap memory, and used to allocate memory for multiple objects of the target thread corresponding to the first strategy. Once the target allocation buffer has been applied for, memory can be allocated for multiple objects of the target thread corresponding to the first strategy, which reduces the number of memory requests made to the operating system or the underlying memory management system, lowers the CPU usage of memory-management-related code, and thereby increases the speed of memory allocation and improves memory management efficiency.
In addition, since the first strategy allocates memory in an allocation buffer within the heap memory, the first strategy can be regarded as a fast path of the second strategy, further increasing the speed of object memory allocation; adopting an allocation manner different from the second strategy of the related art is an optimization of the allocation method of existing memory management methods.
As can be seen from the foregoing, during memory recycling, if an object is referenced by a live object, that object is also live. Moreover, in the embodiments of the present disclosure, an allocation buffer can be regarded as a large object from the perspective of the heap memory; in the recycling phase, as long as at least one object in an allocation buffer is alive, the allocation buffer is alive. In this case, in order to reduce the memory usage after recycling, in some implementations, step S120 may include the following steps:
in response to the target object referencing other objects, determining that the memory allocation strategy corresponding to the target object is the second strategy;
in response to the target object not referencing other objects, determining that the memory allocation strategy corresponding to the target object is the first strategy.
In some implementations, whether the target object references other objects may be judged by whether the target object contains a pointer to another object. When the target object contains a pointer to another object, it is determined that the target object references another object; when it does not, it is determined that the target object does not reference another object.
The applicant found in long-term research that, assuming an object a containing a pointer is allocated in an allocation buffer, if some object b in that allocation buffer is alive, the entire allocation buffer will be marked as alive. Therefore, regardless of whether object a is actually alive, object a is marked as alive, which in turn makes the object pointed to by a alive. Consequently, marking object a as alive may indirectly mark a group of objects that could otherwise have been recovered as alive, increasing memory usage.
Therefore, in the embodiments of the present disclosure, when determining whether the target object should be allocated memory with the first strategy or the second strategy, it may be judged whether the target object references other objects. If it does, the strategy corresponding to the target object is determined to be the second strategy, and free memory of a corresponding size is allocated for the target object in the heap memory; if it does not, the strategy corresponding to the target object is determined to be the first strategy.
That is, in the above manner, only objects that do not reference other objects are allocated in the allocation buffer, while objects that reference other objects are allocated memory normally in the heap memory directly, thereby reducing memory usage after subsequent recycling.
In addition, in some implementations, when the memory usage after the recycling phase is not a concern, all target objects may be allocated memory with the first strategy; alternatively, whether to allocate memory for a target object with the first strategy or the second strategy may be chosen at random.
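The strategy selection described above (buffer allocation for pointer-free objects, direct heap allocation for objects that reference others) can be sketched as a small simulation. This is an illustrative model only, not the patent's implementation; the `Obj` class and its `references` attribute are assumptions made for the example.

```python
def choose_strategy(obj) -> str:
    """Return "first" (allocation-buffer path) for objects with no outgoing
    pointers, and "second" (direct heap path) for objects that reference
    other objects, mirroring the rule described above."""
    return "second" if obj.references else "first"

class Obj:
    """Minimal stand-in for a runtime object; `references` lists outgoing pointers."""
    def __init__(self, references=()):
        self.references = list(references)

leaf = Obj()                      # no outgoing pointers
parent = Obj(references=[leaf])   # holds a pointer to another object
assert choose_strategy(leaf) == "first"
assert choose_strategy(parent) == "second"
```

The point of the rule is visible in the simulation: only leaf-like objects end up in the shared buffer, so a buffer kept alive by one survivor never pins a chain of otherwise-dead objects through outgoing pointers.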
Furthermore, considering that the method may reach the point of applying the first strategy for object memory allocation for the target thread for the first time, at which point a target allocation buffer may not yet have been applied for in the heap memory, in some implementations the method of the embodiments of the present disclosure may further include the following steps:
determining, based on an address of an allocation buffer associated with the target thread, whether a target allocation buffer has been allocated for the target thread;
when it is determined that no target allocation buffer has been allocated for the target thread, applying for a free memory space of a preset size in the heap memory as the target allocation buffer.
In the embodiments of the present disclosure, the target thread may be associated with the address of an allocation buffer. For example, the information saved by the target thread may be set to include a field corresponding to the address of the target allocation buffer, so that the address of the allocation buffer associated with the target thread can be obtained from that field, and whether a target allocation buffer has been allocated for the target thread can then be determined according to that address.
For example, if the associated allocation buffer address is null, it can be determined that no target allocation buffer has been allocated for the target thread, and a free memory space of a preset size can then be applied for in the heap memory as the target allocation buffer. If the associated address is not null, it can be determined that a target allocation buffer has already been allocated for the target thread, and memory is allocated for the target object directly in the target allocation buffer corresponding to the target thread.
Optionally, the associated allocation buffer address may be any address within the allocation buffer, such as its start address, the address currently in use, or its end address.
In some implementations, the free memory space of a preset size applied for in the heap memory may be, for example, a region of 1 KB, 10 KB, or 100 KB, set according to actual requirements.
Furthermore, considering that a single allocation buffer has a limited size and cannot be used to allocate memory for target objects indefinitely, in some implementations, allocating memory for the target object in the target allocation buffer corresponding to the target thread in step S130 may include the following steps:
determining whether the free memory in the target allocation buffer supports allocating memory for the target object;
when it is determined that the free memory in the target allocation buffer supports allocating memory for the target object, allocating memory for the target object in the target allocation buffer;
when it is determined that the free memory in the target allocation buffer does not support allocating memory for the target object, applying for a free memory space of a preset size in the heap memory as a new target allocation buffer, and allocating memory for the target object in the new target allocation buffer.
In the embodiments of the present disclosure, it may first be judged whether the free memory in the target allocation buffer supports allocating memory for the target object. If it does, memory is allocated for the target object in the target allocation buffer; if it does not, a free memory space of a preset size may be applied for in the heap memory as a new target allocation buffer, and memory is allocated for the target object in the new target allocation buffer.
In some implementations, memory may be allocated for each target object in the target allocation buffer by pointer bumping. In this case, whether the free memory in the target allocation buffer supports allocating memory for the target object can be determined by judging whether the sum of the currently used position in the target allocation buffer and the actual size of the target object exceeds the end position of the target allocation buffer.
Allocating memory for objects by pointer bumping increases the allocation speed and also exploits the locality principle of programs: objects likely to be accessed close together in time are placed adjacently in space, which increases the CPU cache hit rate and thus improves program performance.
An example is given below to illustrate allocating memory for the target object in the target allocation buffer corresponding to the target thread in the embodiments of the present disclosure.
For example, assume the target thread is t. The address of the allocation buffer associated with thread t can be obtained. Further assume that the obtained address is null; it is then determined that no target allocation buffer has been allocated for thread t, and a 1 KB block of memory space can be applied for from the heap memory as the target allocation buffer corresponding to thread t. The target allocation buffer can be marked with three pointers, base, end, and top, where base denotes the start address of the target allocation buffer, end denotes its end address, and top denotes its currently used address. Meanwhile, the value of any one of these pointers can be assigned to the allocation buffer address associated with thread t, to indicate that a target allocation buffer has been allocated for the target thread.
In addition, when allocating memory for the target object in the target allocation buffer corresponding to the target thread, the actual size S` of the object may first be computed from the object's size and the memory alignment rules; the value of S`+top is computed and compared with end to judge whether the free memory in the target allocation buffer supports allocating memory for the target object. If S`+top is less than or equal to end, it is determined that the free memory supports the allocation, and memory is then allocated for the target object in the target allocation buffer. If S`+top is greater than end, it is determined that the free memory does not support the allocation; a further 1 KB free memory space is then applied for in the heap memory as a new target allocation buffer, and memory is allocated for each target object in the new target allocation buffer in the same way.
If memory is allocated by pointer bumping, then the first time memory is allocated in the target allocation buffer, top equals base, i.e., allocation starts from the start address of the allocation buffer. After the first object allocation is completed, top is moved to the position S`+base; the second allocation in the target allocation buffer then starts from the moved top position, and so on, until S`+top exceeds end, at which point a further 1 KB free memory space is applied for in the heap memory as a new target allocation buffer.
In some implementations, besides pointer bumping, memory may also be allocated for each target object in the target allocation buffer by randomly obtaining free memory.
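The base/top/end pointer-bumping scheme with overflow into a fresh buffer, as walked through above, can be sketched as follows. This is a minimal simulation, not the patent's implementation; the 8-byte alignment rule and the helper names are assumptions, while the 1 KB buffer size follows the example in the text.

```python
BUFFER_SIZE = 1024  # 1 KB preset size, as in the example above
ALIGN = 8           # assumed memory alignment rule

def aligned(size: int) -> int:
    """Actual size S` after rounding the object size up to the alignment."""
    return (size + ALIGN - 1) // ALIGN * ALIGN

class BumpBuffer:
    """Toy target allocation buffer marked by base, top, and end."""
    def __init__(self):
        self.base = 0
        self.top = self.base          # first allocation starts at base
        self.end = self.base + BUFFER_SIZE

    def try_alloc(self, size: int):
        s = aligned(size)
        if self.top + s > self.end:   # S`+top > end: free memory insufficient
            return None
        addr = self.top
        self.top += s                 # bump the pointer for the next object
        return addr

def alloc(thread_buffers, size: int) -> int:
    """Allocate in the thread's current buffer, applying for a new 1 KB
    buffer when the current one cannot hold the object."""
    addr = thread_buffers[-1].try_alloc(size)
    if addr is None:
        thread_buffers.append(BumpBuffer())
        addr = thread_buffers[-1].try_alloc(size)
    return addr

bufs = [BumpBuffer()]
a = alloc(bufs, 100)   # rounded up to 104 bytes
b = alloc(bufs, 100)   # placed right after the first object
assert (a, b) == (0, 104)
alloc(bufs, 900)       # 904 aligned bytes do not fit in the remaining space
assert len(bufs) == 2  # a new target allocation buffer was applied for
```

Consecutive allocations land at adjacent addresses, which is the locality property the text credits with improving CPU cache hit rates.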
As can be seen from the foregoing, in some implementations the memory allocation strategy may include a first strategy and a second strategy. In this case, the memory allocation method of the embodiments of the present disclosure may further include the following steps:
when a preset memory recycling condition is met, marking live first objects and live second objects in the heap memory, a first object being an object allocated memory by applying the first strategy, and a second object being an object allocated memory by applying the second strategy;
recovering unmarked second objects in the heap memory and candidate allocation buffers, a candidate allocation buffer being an allocation buffer in which no marked first object exists.
In some implementations, meeting the preset recycling condition may be, for example, that memory usage reaches a set threshold, or that a certain memory recycling period has elapsed.
In the embodiments of the present disclosure, whether a first object or a second object is alive can be judged, as mentioned above, by whether it is referenced by another live object.
After the live first objects and live second objects in the heap memory are marked, the unmarked second objects in the heap memory, as well as the candidate allocation buffers in which no marked first object exists, can then be recovered.
In the embodiments of the present disclosure, since a candidate allocation buffer can be recovered as a whole, the recycling logic only needs to be executed once to complete the recovery of multiple objects, which reduces the number of times the recycling and cleanup logic is executed, increases the speed of memory recycling and cleanup, and thus improves memory management efficiency.
There may be multiple situations in which no marked first object exists in a candidate allocation buffer. Optionally, none of the first objects in the candidate allocation buffer is alive at recycling time, in which case no first object in the candidate allocation buffer is marked. Optionally, at recycling time some first objects in the candidate allocation buffer are alive and some are not; the live ones are moved to other memory space in the heap memory, so that no marked first object remains in the candidate allocation buffer.
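The buffer-granularity recycling described above (drop unmarked second objects individually, but drop a whole allocation buffer only when none of its first objects is marked) can be sketched as one sweep. This is an illustrative simulation only; representing objects as dicts with "marked" and "strategy" keys is an assumption made for the example.

```python
def recycle(heap_objects, buffers):
    """One recycling pass: unmarked second objects are recovered one by one,
    while an allocation buffer is recovered as a single whole exactly when
    it contains no marked first object (a candidate allocation buffer)."""
    survivors = [o for o in heap_objects
                 if o["marked"] or o["strategy"] != "second"]
    kept_buffers = [b for b in buffers if any(o["marked"] for o in b)]
    return survivors, kept_buffers

heap = [{"marked": True,  "strategy": "second"},
        {"marked": False, "strategy": "second"}]     # to be recovered
buf_dead = [{"marked": False, "strategy": "first"}]  # candidate buffer
buf_live = [{"marked": True,  "strategy": "first"},
            {"marked": False, "strategy": "first"}]  # one survivor pins it
objs, kept = recycle(heap, [buf_dead, buf_live])
assert len(objs) == 1          # the unmarked second object was recovered
assert kept == [buf_live]      # the candidate buffer was recovered whole
```

Note that `buf_dead`, holding any number of dead objects, costs a single recycling step, which is the source of the speedup the text describes.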
As above, considering that at recycling time some first objects in a candidate allocation buffer may be alive and some not, the live ones may be moved to other memory space in the heap memory so that no marked first object remains in the candidate allocation buffer. Therefore, in some implementations, the memory allocation method of the embodiments of the present disclosure may further include the following steps:
acquiring the total memory occupancy of the marked first objects in the heap memory;
applying for a target memory space of the size of the total memory occupancy in the heap memory;
moving the marked first objects in the heap memory to the target memory space to obtain the candidate allocation buffer.
In some implementations, the total memory occupancy of the marked first objects may be counted by accumulation while the live first objects in the heap memory are being marked.
In the embodiments of the present disclosure, after the total memory occupancy of the marked first objects in the heap memory is acquired, a target memory space of that size may be applied for in the heap memory, and each marked first object in the heap memory may then be moved to the target memory space, so that no marked first object remains in the resulting candidate allocation buffer.
In the embodiments of the present disclosure, moving the marked first objects in the heap memory avoids fragmentation of the memory in the allocation buffers.
In some implementations, an object forwarding table and a pointer queue may be used to assist in moving each marked first object. The pointer queue stores the addresses of pointers that satisfy both of the following two conditions; for example, for an address &x.f, where x.f denotes a pointer, and assuming pointer x.f points to object o: the first condition is that the object x pointing to object o is a live object, and the second condition is that the object o pointed to by x.f is in a target allocation buffer. The forwarding table records object movements; for example, if the forwarding table records x.f->y, this indicates that object o has been moved, to the new address y.
The process of moving a marked first object in the heap memory to the target memory space is described below by way of example.
Pointer addresses are taken out of the pointer queue one by one (assume the pointer queue contains four pointer addresses &x1.f1, &x2.f2, &x3.f3, and &x4.f4, corresponding respectively to the addresses of objects o1, o1, o2, and o2 in the allocation buffer). Suppose the address taken out this time is &x1.f1; dereferencing &x1.f1 yields the pointer x1.f1, which points to object o1. The forwarding table is queried as to whether object o1 has been moved; if o1 has not been moved, o1 is moved to the target memory space according to its actual size. After the move, the forwarding table records that the address pointed to by pointer x1.f1 has moved from address &x1.f1 to address y.
Next, suppose the address taken out the second time is &x2.f2; dereferencing &x2.f2 yields the pointer x2.f2, which also points to object o1. According to the forwarding table, object o1 has already been moved from address &x1.f1 to address y, which means the object o1 pointed to by pointer x2.f2 has already been moved and no further move is needed. By analogy, address &x3.f3 is then taken out of the pointer queue and it is judged whether the dereferenced object o2 should be moved, and so on until every address in the pointer queue has been processed, thereby moving the marked first objects in the heap memory to the target memory space.
If pointer bumping is used to allocate memory for target objects in the target allocation buffer, the process of moving o1 to the target memory space according to the actual size of object o1 is as follows:
The start address p of object o1 is obtained from the object start address marked at allocation time. Since objects in the allocation buffer are allocated contiguously, given the start address p` of the object immediately following o1, |p`-p| yields the size q of object o1.
x.f need not point to the beginning of object o1. From the start address p of object o1 and x.f, |p-x.f| is the offset within object o1, denoted offset.
Object o1 is moved to the position pointed to by top in the target memory space. If it is the first object to be moved, top is the start position of the target memory space.
x.f is modified to y, where y=top+offset, i.e., x.f points to the new memory address after the move, completing the move of one marked first object. In addition, the value of top is modified to top+q for the next move.
In some implementations, there may be multiple marked first objects in the heap memory; in that case, multiple mover threads may be used to move them in parallel, speeding up the recycling and cleanup process.
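The queue-and-forwarding-table evacuation walked through above (move each object at most once, then retarget every queued pointer as new start plus interior offset) can be sketched as follows. This is an illustrative simulation only; representing a queued &x.f entry as an (object_id, interior_offset) pair and passing sizes in a dict are assumptions made for the example.

```python
def evacuate(pointer_queue, obj_sizes, target):
    """Process the pointer queue once: the first entry that reaches an
    object moves it to the target space's bump pointer and records the new
    start in the forwarding table; later entries for the same object only
    look the move up.  Each pointer is retargeted to y = new_start + offset.
    Returns the retargeted address for every queued pointer, in order."""
    forwarding = {}                       # object_id -> new start address
    new_pointers = []
    for obj_id, offset in pointer_queue:
        if obj_id not in forwarding:      # first encounter: perform the move
            forwarding[obj_id] = target["top"]
            target["top"] += obj_sizes[obj_id]   # top advances by size q
        new_pointers.append(forwarding[obj_id] + offset)
    return new_pointers

# Mirrors the example above: two pointers reach o1 and two reach o2.
sizes = {"o1": 16, "o2": 24}
queue = [("o1", 0), ("o1", 8), ("o2", 0), ("o2", 4)]
moved = evacuate(queue, sizes, {"top": 0})
assert moved == [0, 8, 16, 20]   # o1 lands at 0, o2 at 16; offsets preserved
```

The forwarding table is what makes the second pointer to o1 cheap: the object is copied once, and every remaining reference is repaired by a table lookup.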
In some implementations, the memory allocated for the target object in the target allocation buffer is used to store, in sequence, the metadata of the target object and the content of the target object, wherein, when the target object is referenced, the pointer referencing the target object points to the start address of the content of the target object.
In the embodiments of the present disclosure, to facilitate managing the memory in the allocation buffer, metadata may be added to objects in the allocation buffer, for example to record information such as the object's start address, the object's size, and the forwarding table. In this case, so that a second object can normally reference a first object and the first object is more transparent from the perspective of the heap memory, the metadata may further be placed before the start position of the content of the corresponding target object, and the pointer to the target object may be set to point to the start address of the content of the target object. Thus, each time the target object is referenced, the content corresponding to the object is obtained accurately, and the metadata is not obtained instead.
For example, as shown in FIG. 2, the target object includes metadata and the content corresponding to the object, each occupying its own memory space, and the pointer to the target object points to the start address of the content corresponding to the object.
With the method of the above embodiments, since an allocation buffer is a relatively large object from the perspective of the heap memory, program execution does not need to know the implementation details of the allocation buffer, and first objects are transparent from the heap memory perspective. Therefore, when allocation and recycling are performed using the allocation and recycling methods of any existing memory management method, the existence of the allocation buffer is not perceived; the allocation buffer can be marked and recovered normally as a single whole object. Consequently, the marking and recycling processes of any existing memory management method need not be changed and no additional special handling is required, which makes it convenient to embed the allocation and moving of first objects into any existing memory management method. Moreover, objects allocated by the first strategy and by the second strategy may have different object layouts; for example, objects allocated by the second strategy may contain no metadata while objects allocated by the first strategy may include metadata, yet objects allocated by the two strategies remain consistent at the memory management level. It is thus possible for objects allocated by the first strategy to be managed by both the first strategy and the second strategy, and objects allocated by different strategies can reference one another.
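The layout discussed above (metadata stored immediately before the object content, with every reference pointing at the content's start address, as in FIG. 2) can be sketched as a small simulation. This is an illustrative model only, not the patent's implementation; the 8-byte metadata width holding just the object size, and the helper names, are assumptions made for the example.

```python
import struct

HEADER_SIZE = 8  # assumed metadata width: a single 64-bit object-size field

class Buffer:
    """Toy allocation buffer where each object is [metadata | content]
    and references see only the content's start address."""
    def __init__(self, size):
        self.mem = bytearray(size)
        self.top = 0

    def alloc(self, payload: bytes) -> int:
        """Store metadata then content; return the content's start offset."""
        start = self.top
        self.mem[start:start + HEADER_SIZE] = struct.pack("<Q", len(payload))
        content = start + HEADER_SIZE
        self.mem[content:content + len(payload)] = payload
        self.top = content + len(payload)
        return content  # the reference points past the metadata

    def size_of(self, ref: int) -> int:
        """Read the metadata stored just before the referenced content."""
        return struct.unpack("<Q", self.mem[ref - HEADER_SIZE:ref])[0]

buf = Buffer(64)
r = buf.alloc(b"hello")
assert buf.mem[r:r + 5] == b"hello"  # dereferencing yields content, not metadata
assert buf.size_of(r) == 5           # the allocator can still find the metadata
```

Because a reference holds only the content address, code that knows nothing about the header dereferences objects normally, while the allocator can step back `HEADER_SIZE` bytes to reach its bookkeeping; this mirrors the transparency argument made above.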
FIG. 3 is a block diagram of a memory management apparatus according to an exemplary embodiment of the present disclosure. Referring to FIG. 3, the memory management apparatus 300 includes:
a first acquisition module 310 configured to acquire a target object to which memory is currently to be allocated in a target thread;
a first determination module 320 configured to determine, according to references of the target object to other objects, a memory allocation strategy corresponding to the target object, the memory allocation strategy including a first strategy and a second strategy, the first strategy indicating that memory of a corresponding size is allocated for the target object in an allocation buffer of a heap memory, and the second strategy indicating that memory of a corresponding size is allocated for the target object in the heap memory;
an allocation module 330 configured to, according to the memory allocation strategy corresponding to the target object, allocate memory for the target object in a target allocation buffer corresponding to the target thread, or allocate memory for the target object in the heap memory, the target allocation buffer being used to allocate memory for multiple objects of the target thread corresponding to the first strategy.
Optionally, the first determination module 320 is further configured to: in response to the target object referencing other objects, determine that the memory allocation strategy corresponding to the target object is the second strategy; and in response to the target object not referencing other objects, determine that the memory allocation strategy corresponding to the target object is the first strategy.
Optionally, the apparatus 300 further includes:
a second determination module configured to determine, based on an address of an allocation buffer associated with the target thread, whether a target allocation buffer has been allocated for the target thread;
a target allocation buffer application module configured to, when it is determined that no target allocation buffer has been allocated for the target thread, apply for a free memory space of a preset size in the heap memory as the target allocation buffer.
Optionally, the allocation module 330 includes:
a determination submodule configured to determine whether the free memory in the target allocation buffer supports allocating memory for the target object;
a first allocation submodule configured to, when it is determined that the free memory in the target allocation buffer supports allocating memory for the target object, allocate memory for the target object in the target allocation buffer;
a second allocation submodule configured to, when it is determined that the free memory in the target allocation buffer does not support allocating memory for the target object, apply for a free memory space of a preset size in the heap memory as a new target allocation buffer and allocate memory for the target object in the new target allocation buffer.
Optionally, the apparatus 300 further includes:
a marking module configured to, when a preset memory recycling condition is met, mark live first objects and live second objects in the heap memory, a first object being an object allocated memory by applying the first strategy, and a second object being an object allocated memory by applying the second strategy;
a recycling module configured to recover unmarked second objects in the heap memory and candidate allocation buffers, a candidate allocation buffer being an allocation buffer in which no marked first object exists.
Optionally, the apparatus 300 further includes:
a second acquisition module configured to acquire the total memory occupancy of the marked first objects in the heap memory;
a target memory space application module configured to apply for a target memory space of the size of the total memory occupancy in the heap memory;
a moving module configured to move the marked first objects in the heap memory to the target memory space to obtain the candidate allocation buffer.
Optionally, the memory allocated for the target object in the target allocation buffer is used to store, in sequence, the metadata of the target object and the content of the target object, wherein, when the target object is referenced, the pointer referencing the target object points to the start address of the content of the target object.
Reference is now made to FIG. 4, which shows a schematic structural diagram of an electronic device 400 suitable for implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 4 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 4, the electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processing unit, etc.) 401, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data required for the operation of the electronic device 400. The processing device 401, the ROM 402, and the RAM 403 are connected to one another via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output devices 407 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 408 including, for example, a magnetic tape and a hard disk; and a communication device 409. The communication device 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 4 shows the electronic device 400 with various devices, it should be understood that implementing or providing all of the devices shown is not required; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or installed from the storage device 408, or installed from the ROM 402. When the computer program is executed by the processing device 401, the above functions defined in the method of the embodiments of the present disclosure are executed.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted using any suitable medium, including but not limited to: a wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
In some implementations, the electronic device may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication (e.g., a communication network) in any form or medium. Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internet (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future developed network.
The computer-readable medium may be included in the electronic device, or may exist independently without being assembled into the electronic device.
The computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: acquire a target object to which memory is currently to be allocated in a target thread; determine a memory allocation strategy corresponding to the target object, the memory allocation strategy including a first strategy of allocating memory for the target object in an allocation buffer of a heap memory; and, when it is determined that the memory allocation strategy is the first strategy, allocate memory for the target object in a target allocation buffer corresponding to the target thread, the target allocation buffer being used to allocate memory for multiple objects of the target thread corresponding to the first strategy.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as "C" or similar programming languages. The program code may be executed entirely on the user's computer, partially on the user's computer, as a standalone software package, partially on the user's computer and partially on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architecture, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing a specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The modules involved in the embodiments described in the present disclosure may be implemented by software or by hardware, and the name of a module does not, in some cases, limit the module itself.
The functions described herein above may be executed at least partially by one or more hardware logic components. For example, non-restrictively, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and the like.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, Example 1 provides a memory management method, the method including:
obtaining a target object currently awaiting memory allocation in a target thread;
determining, according to the target object's references to other objects, a memory allocation strategy corresponding to the target object, the memory allocation strategy including a first strategy and a second strategy, the first strategy indicating that memory of a corresponding size is allocated for the target object in an allocation buffer of heap memory, and the second strategy indicating that memory of a corresponding size is allocated for the target object in the heap memory; and
allocating, according to the memory allocation strategy corresponding to the target object, memory for the target object in a target allocation buffer corresponding to the target thread, or allocating memory for the target object in the heap memory, the target allocation buffer being used to allocate memory for a plurality of objects of the target thread that correspond to the first strategy.
According to one or more embodiments of the present disclosure, Example 2 provides the method of Example 1, wherein determining the memory allocation strategy corresponding to the target object includes:
in response to the target object referencing another object, determining that the memory allocation strategy corresponding to the target object is the second strategy; and
in response to the target object referencing no other object, determining that the memory allocation strategy corresponding to the target object is the first strategy.
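Examples 1 and 2 together amount to a small dispatch: the reference check selects the strategy, and the strategy selects the allocator. The following is a minimal Python simulation of that dispatch; all names (`Obj`, `choose_policy`, `allocate`) are illustrative assumptions, not terminology from the disclosure.

```python
from dataclasses import dataclass, field
from typing import Callable, List

POLICY_BUFFER = 1  # first strategy: allocate inside the thread's allocation buffer
POLICY_HEAP = 2    # second strategy: allocate directly from heap memory

@dataclass
class Obj:
    size: int
    refs: List["Obj"] = field(default_factory=list)  # objects this object references

def choose_policy(obj: Obj) -> int:
    # An object that references another object gets the second strategy;
    # a reference-free object gets the first strategy.
    return POLICY_HEAP if obj.refs else POLICY_BUFFER

def allocate(obj: Obj, buffer_alloc: Callable, heap_alloc: Callable):
    # Route the allocation according to the chosen strategy.
    if choose_policy(obj) == POLICY_BUFFER:
        return buffer_alloc(obj)
    return heap_alloc(obj)
```

One plausible motivation for this split, consistent with Example 5, is that a buffer packed with reference-free objects can later be reclaimed without tracing any pointers stored inside it.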
According to one or more embodiments of the present disclosure, Example 3 provides the method of Example 1, the method further including:
determining, based on the address of the allocation buffer associated with the target thread, whether a target allocation buffer has been allocated for the target thread; and
upon determining that the target allocation buffer has not been allocated for the target thread, requesting a free memory space of a preset size in the heap memory as the target allocation buffer.
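The lazy creation in Example 3 can be sketched as follows, assuming a simple bump heap. The names (`Heap.reserve`, `Thread.tlab_addr`, `ensure_buffer`, `TLAB_SIZE`) are invented for illustration and do not come from the disclosure.

```python
TLAB_SIZE = 4096  # the "preset size" of one allocation buffer

class Heap:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.top = 0  # next free offset in the heap

    def reserve(self, size: int) -> int:
        # Carve a contiguous free region of `size` bytes out of the heap.
        if self.top + size > self.capacity:
            raise MemoryError("heap exhausted")
        start = self.top
        self.top += size
        return start

class Thread:
    def __init__(self):
        self.tlab_addr = None  # address of the buffer associated with this thread

def ensure_buffer(thread: Thread, heap: Heap) -> int:
    # A missing address means no buffer has been allocated for this thread yet.
    if thread.tlab_addr is None:
        thread.tlab_addr = heap.reserve(TLAB_SIZE)
    return thread.tlab_addr
```

Because only the address is inspected, the check is a single comparison on the fast path and the heap is touched only once per thread until the buffer is exhausted.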
According to one or more embodiments of the present disclosure, Example 4 provides the method of Example 1, wherein allocating memory for the target object in the target allocation buffer corresponding to the target thread includes:
determining whether the free memory in the target allocation buffer supports allocating memory for the target object;
upon determining that the free memory in the target allocation buffer supports allocating memory for the target object, allocating memory for the target object in the target allocation buffer; and
upon determining that the free memory in the target allocation buffer does not support allocating memory for the target object, requesting a free memory space of a preset size in the heap memory as a new target allocation buffer, and allocating memory for the target object in the new target allocation buffer.
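A bump-pointer sketch of this overflow path, under the same assumed bump-heap model; all names (`Tlab`, `Heap`, `buffer_allocate`) are illustrative, and the tiny buffer size is chosen only so the refill path is visible.

```python
TLAB_SIZE = 64  # preset buffer size (deliberately tiny for the example)

class Heap:
    def __init__(self):
        self.top = 0
    def reserve(self, size: int) -> int:
        start = self.top
        self.top += size
        return start

class Tlab:
    def __init__(self, start: int):
        self.pos = start
        self.end = start + TLAB_SIZE
    def fits(self, size: int) -> bool:
        return self.pos + size <= self.end
    def bump(self, size: int) -> int:
        addr = self.pos
        self.pos += size
        return addr

def buffer_allocate(tlab: Tlab, heap: Heap, size: int):
    # Allocate in the current buffer if its free memory suffices; otherwise
    # request a fresh preset-size buffer from the heap and allocate there.
    if tlab.fits(size):
        return tlab.bump(size), tlab
    fresh = Tlab(heap.reserve(TLAB_SIZE))
    return fresh.bump(size), fresh
```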
According to one or more embodiments of the present disclosure, Example 5 provides the method of Example 1, the method further including:
when a preset memory reclamation condition is met, marking live first objects and live second objects in the heap memory, a first object being an object whose memory is allocated by applying the first strategy, and a second object being an object whose memory is allocated by applying the second strategy; and
reclaiming unmarked second objects in the heap memory as well as candidate allocation buffers, a candidate allocation buffer being an allocation buffer in which no marked first object exists.
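A toy mark-and-sweep sketch of Example 5. The object model (objects carrying `refs` lists, buffers modeled as lists of first-strategy objects) is an assumption made purely for illustration.

```python
class Obj:
    def __init__(self, refs=()):
        self.refs = list(refs)

def mark(roots):
    # Mark every object reachable from the roots, regardless of which
    # strategy allocated it.
    marked, stack = set(), list(roots)
    while stack:
        obj = stack.pop()
        if obj not in marked:
            marked.add(obj)
            stack.extend(obj.refs)
    return marked

def sweep(heap_objects, buffers, marked):
    # Unmarked second-strategy objects are reclaimed individually; a buffer
    # is reclaimed wholesale once it holds no marked first-strategy object.
    live_heap = [o for o in heap_objects if o in marked]
    live_buffers = [b for b in buffers if any(o in marked for o in b)]
    return live_heap, live_buffers
```

The point of the candidate-buffer rule is that a whole buffer is returned in one step instead of freeing its dead objects one by one.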
According to one or more embodiments of the present disclosure, Example 6 provides the method of Example 5, the method further including:
obtaining the total memory footprint of the marked first objects in the heap memory;
requesting, in the heap memory, a target memory space of the size of the total memory footprint; and
moving the marked first objects in the heap memory into the target memory space to obtain a candidate allocation buffer.
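The compaction step in Example 6 can be sketched as: reserve exactly the total footprint of the marked first objects, then pack the survivors into that space. The names (`Heap`, `Slot`, `compact`) are invented for illustration.

```python
class Heap:
    def __init__(self):
        self.top = 0
    def reserve(self, size: int) -> int:
        start = self.top
        self.top += size
        return start

class Slot:
    """A marked first-strategy object: only its size and address matter here."""
    def __init__(self, size: int):
        self.size = size
        self.addr = None

def compact(heap: Heap, marked_first):
    total = sum(obj.size for obj in marked_first)  # total memory footprint
    base = heap.reserve(total)   # target space of exactly that size
    offset = base
    for obj in marked_first:     # move each survivor; the packed region
        obj.addr = offset        # becomes one candidate allocation buffer
        offset += obj.size
    return base, total
```

After the move, the buffers the survivors came from no longer contain any marked first object, so by the rule of Example 5 they become reclaimable as a whole.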
According to one or more embodiments of the present disclosure, Example 7 provides the method of Example 1, wherein the memory allocated for the target object in the target allocation buffer is used to store, in sequence, the metadata of the target object and the content of the target object, and when the target object is referenced, a pointer referencing the target object points to the start address of the content of the target object.
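The layout of Example 7 (metadata first, content immediately after, references holding the content's start address) can be sketched as address arithmetic. The fixed 8-byte header is an assumed detail, not something the disclosure specifies.

```python
HEADER_SIZE = 8  # assumed size of the per-object metadata

def place(free_pos: int, content_size: int):
    """Lay out one object at free_pos; returns (metadata_addr, content_addr,
    next_free). A pointer referencing the object stores content_addr."""
    meta = free_pos
    content = meta + HEADER_SIZE
    return meta, content, content + content_size

def metadata_of(ref: int) -> int:
    # The metadata sits immediately before the content the reference points to.
    return ref - HEADER_SIZE
```

Pointing references at the content rather than at the header lets ordinary field accesses start at offset zero, while the runtime can still reach the metadata with one fixed subtraction.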
According to one or more embodiments of the present disclosure, Example 8 provides a memory management apparatus, the apparatus including:
a first obtaining module configured to obtain a target object currently awaiting memory allocation in a target thread;
a first determining module configured to determine, according to the target object's references to other objects, a memory allocation strategy corresponding to the target object, the memory allocation strategy including a first strategy and a second strategy, the first strategy indicating that memory of a corresponding size is allocated for the target object in an allocation buffer of heap memory, and the second strategy indicating that memory of a corresponding size is allocated for the target object in the heap memory; and
an allocation module configured to allocate, according to the memory allocation strategy corresponding to the target object, memory for the target object in a target allocation buffer corresponding to the target thread, or to allocate memory for the target object in the heap memory, the target allocation buffer being used to allocate memory for a plurality of objects of the target thread that correspond to the first strategy.
According to one or more embodiments of the present disclosure, Example 9 provides a computer-readable medium on which a computer program is stored, the program, when executed by a processing apparatus, implementing the steps of the method of any one of Examples 1-7.
According to one or more embodiments of the present disclosure, Example 10 provides an electronic device, including:
a storage apparatus on which a computer program is stored; and
a processing apparatus configured to execute the computer program in the storage apparatus to implement the steps of the method of any one of Examples 1-7.
The above description is merely of preferred embodiments of the present disclosure and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combination of the technical features described above, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Furthermore, although the operations are depicted in a particular order, this should not be understood as requiring that these operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical actions, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. Rather, the specific features and actions described above are merely example forms of implementing the claims. As for the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method and will not be elaborated here.

Claims (10)

  1. A memory management method, characterized in that the method comprises:
    obtaining a target object currently awaiting memory allocation in a target thread;
    determining, according to the target object's references to other objects, a memory allocation strategy corresponding to the target object, the memory allocation strategy comprising a first strategy and a second strategy, the first strategy indicating that memory of a corresponding size is allocated for the target object in an allocation buffer of heap memory, and the second strategy indicating that memory of a corresponding size is allocated for the target object in the heap memory; and
    allocating, according to the memory allocation strategy corresponding to the target object, memory for the target object in a target allocation buffer corresponding to the target thread, or allocating memory for the target object in the heap memory, the target allocation buffer being used to allocate memory for a plurality of objects of the target thread that correspond to the first strategy.
  2. The method according to claim 1, characterized in that determining, according to the target object's references to other objects, the memory allocation strategy corresponding to the target object comprises:
    in response to the target object referencing another object, determining that the memory allocation strategy corresponding to the target object is the second strategy; and
    in response to the target object referencing no other object, determining that the memory allocation strategy corresponding to the target object is the first strategy.
  3. The method according to claim 1, characterized in that the method further comprises:
    determining, based on the address of the allocation buffer associated with the target thread, whether a target allocation buffer has been allocated for the target thread; and
    upon determining that the target allocation buffer has not been allocated for the target thread, requesting a free memory space of a preset size in the heap memory as the target allocation buffer.
  4. The method according to claim 1, characterized in that allocating memory for the target object in the target allocation buffer corresponding to the target thread comprises:
    determining whether the free memory in the target allocation buffer supports allocating memory for the target object;
    upon determining that the free memory in the target allocation buffer supports allocating memory for the target object, allocating memory for the target object in the target allocation buffer; and
    upon determining that the free memory in the target allocation buffer does not support allocating memory for the target object, requesting a free memory space of a preset size in the heap memory as a new target allocation buffer, and allocating memory for the target object in the new target allocation buffer.
  5. The method according to claim 1, characterized in that the method further comprises:
    when a preset memory reclamation condition is met, marking live first objects and live second objects in the heap memory, a first object being an object whose memory is allocated by applying the first strategy, and a second object being an object whose memory is allocated by applying the second strategy; and
    reclaiming unmarked second objects in the heap memory as well as candidate allocation buffers, a candidate allocation buffer being an allocation buffer in which no marked first object exists.
  6. The method according to claim 5, characterized in that the method further comprises:
    obtaining the total memory footprint of the marked first objects in the heap memory;
    requesting, in the heap memory, a target memory space of the size of the total memory footprint; and
    moving the marked first objects in the heap memory into the target memory space to obtain a candidate allocation buffer.
  7. The method according to claim 1, characterized in that the memory allocated for the target object in the target allocation buffer is used to store, in sequence, the metadata of the target object and the content of the target object, wherein, when the target object is referenced, a pointer referencing the target object points to the start address of the content of the target object.
  8. A memory management apparatus, characterized in that the apparatus comprises:
    a first obtaining module configured to obtain a target object currently awaiting memory allocation in a target thread;
    a first determining module configured to determine, according to the target object's references to other objects, a memory allocation strategy corresponding to the target object, the memory allocation strategy comprising a first strategy and a second strategy, the first strategy indicating that memory of a corresponding size is allocated for the target object in an allocation buffer of heap memory, and the second strategy indicating that memory of a corresponding size is allocated for the target object in the heap memory; and
    an allocation module configured to allocate, according to the memory allocation strategy corresponding to the target object, memory for the target object in a target allocation buffer corresponding to the target thread, or to allocate memory for the target object in the heap memory, the target allocation buffer being used to allocate memory for a plurality of objects of the target thread that correspond to the first strategy.
  9. A computer-readable medium on which a computer program is stored, characterized in that the program, when executed by a processing apparatus, implements the steps of the method according to any one of claims 1-7.
  10. An electronic device, characterized by comprising:
    a storage apparatus on which a computer program is stored; and
    a processing apparatus configured to execute the computer program in the storage apparatus to implement the steps of the method according to any one of claims 1-7.
PCT/CN2023/115923 2022-10-26 2023-08-30 Memory management method and apparatus, medium, and electronic device WO2024087875A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2022113213922 2022-10-26
CN202211321392.2A CN115599707A (zh) 2022-10-26 2022-10-26 Memory management method and apparatus, medium, and electronic device

Publications (1)

Publication Number Publication Date
WO2024087875A1

Family

ID=84851545

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/115923 WO2024087875A1 (zh) 2022-10-26 2023-08-30 Memory management method and apparatus, medium, and electronic device

Country Status (2)

Country Link
CN (1) CN115599707A (zh)
WO (1) WO2024087875A1 (zh)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102915276A (zh) * 2012-09-25 2013-02-06 Wuhan Research Institute of Posts and Telecommunications Memory control method for embedded systems
US20130198455A1 (en) * 2012-01-31 2013-08-01 International Business Machines Corporation Cache memory garbage collector
CN114035778A (zh) * 2021-11-04 2022-02-11 Beijing ByteDance Network Technology Co., Ltd. Object generation method and apparatus, storage medium, and electronic device
CN114153614A (zh) * 2021-12-08 2022-03-08 Apollo Intelligent Technology (Beijing) Co., Ltd. Memory management method and apparatus, electronic device, and autonomous driving vehicle
CN115599707A (zh) * 2022-10-26 2023-01-13 Beijing Volcano Engine Technology Co., Ltd. (CN) Memory management method and apparatus, medium, and electronic device


Also Published As

Publication number Publication date
CN115599707A (zh) 2023-01-13
