WO2024027544A1 - Memory management method and electronic device - Google Patents

Memory management method and electronic device

Info

Publication number
WO2024027544A1
WO2024027544A1 (PCT/CN2023/109436)
Authority
WO
WIPO (PCT)
Prior art keywords
memory
application
background
running state
state
Prior art date
Application number
PCT/CN2023/109436
Other languages
English (en)
French (fr)
Inventor
季柯丞
方锦轩
王琳
李玲
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2024027544A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005: Allocation of resources to service a request
    • G06F9/5011: Allocation of resources, the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016: Allocation of resources, the resource being the memory
    • G06F9/5022: Mechanisms to release resources

Definitions

  • This application relates to the field of terminal technology, and in particular to memory management methods and electronic devices.
  • embodiments of the present application provide a memory management method and electronic device.
  • the technical solution provided by the embodiments of this application can perform memory management at the granularity of the application's business or functional modules, thereby improving the memory management performance of electronic devices.
  • the first aspect provides a memory management method, which method is applied to electronic devices or components (such as chip systems) that can realize the functions of electronic devices.
  • the method includes:
  • Detecting the running state of the first application; upon detecting that the running state of the first application switches from the first running state to the second running state, processing part of the memory used by the first application; the part of the memory is the memory associated with the target service of the first application; the target service is a non-critical service of the first application in the second running state.
  • the mobile phone can process non-critical business memory used by navigation applications (such as memory used by rendering threads and user interface threads).
  • the memory is processed, including memory recycling.
  • memory processing is performed based on the functional module or business granularity of the application, the precision of memory processing is higher.
  • processing some functional modules of the application such as functional modules that will not be used temporarily
  • processing the memory data used by non-critical services will not affect the normal operation of the entire application, improves the application's keep-alive, and reduces the probability of the application being killed or exiting abnormally.
  • the electronic device can recycle only the memory used by some functional modules or non-critical services of the background application. In this way, memory can be released to the maximum extent while keeping the background application alive as much as possible, enhancing the fluency and stability of the electronic device and improving its memory management efficiency.
  • the application does not actually release the memory data and the memory pressure of the system is still large.
  • the technical solution of the embodiments of the present application does not rely on the application actively releasing memory data; the system releases the memory data itself.
  • when detecting that the application is in a corresponding running state of its life cycle, the electronic device can automatically process the memory data of some functional modules or non-critical services of the application. On the one hand, this helps alleviate the memory pressure on the system; on the other hand, recycling the memory data of some functional modules or non-critical services reduces the total memory occupied by the application, so the probability of the application being killed or exiting abnormally is greatly reduced.
  • detecting that the running state of the first application switches from the first running state to the second running state and processing part of the memory used by the first application includes: when it is detected that the running state of the first application switches from the first running state to the second running state, and the duration of the first application in the second running state reaches a first threshold, processing a part of the memory used by the first application.
  • the running state of the navigation application (an example of the first application) switches from the foreground running state (an example of the first running state) to the background playback state (an example of the second running state).
  • the duration of the navigation application in the background playing state (duration t1-t1') reaches the first threshold, part of the memory used by the navigation application (memory used by the rendering thread and the user interface thread) is processed.
  • the electronic device can delay processing.
  • the electronic device can perform memory recycling after a period of time after the application switches the running state to ensure that the application's running state is stable.
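The delayed, state-triggered processing described above can be sketched as a small model. The class name, threshold value, and service names below are illustrative assumptions, not part of the claims:

```python
FIRST_THRESHOLD_S = 5.0  # hypothetical "first threshold" dwell time

class AppMemoryManager:
    """Sketch: reclaim non-critical memory only after the application has
    stayed in its new running state for at least the threshold duration."""

    def __init__(self, threshold_s=FIRST_THRESHOLD_S):
        self.threshold_s = threshold_s
        self.state = "foreground"
        self.entered_at = 0.0
        self.reclaimed = []

    def on_state_switch(self, new_state, now):
        # A state switch restarts the dwell timer.
        self.state = new_state
        self.entered_at = now

    def tick(self, now, noncritical_services):
        # Reclaim only once the app has been stable in a background state
        # long enough, so a quick return to foreground costs nothing.
        if self.state != "foreground" and now - self.entered_at >= self.threshold_s:
            self.reclaimed = list(noncritical_services)
        return self.reclaimed

mgr = AppMemoryManager(threshold_s=5.0)
mgr.on_state_switch("background_playing", now=0.0)
assert mgr.tick(now=3.0, noncritical_services=["render", "ui"]) == []
assert mgr.tick(now=6.0, noncritical_services=["render", "ui"]) == ["render", "ui"]
```

The key design point, per the text, is that the timer starts at the state switch, so transient switches never trigger reclamation.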
  • the target service includes a first service and/or a second service;
  • the memory associated with the first service includes a first part of memory, and the memory associated with the second service includes a second part of memory;
  • the first part of memory is: pages that are not used by the first application within the first time period;
  • the second part of memory is: compressed pages that are not used by the first application within the second time period.
  • the mobile phone can recycle the memory used by non-critical services such as rendering threads and user interface threads.
  • the first part of memory includes the memory used by display (surface) and media (media).
  • the first part of memory is not used by the navigation application during the t2-t3 time period.
  • the phone can recycle the memory used by non-critical services such as surface and media.
  • the first part of the memory includes the memory used by the service (service)
  • the second part of the memory includes the memory used by the rendering thread and the user interface thread.
  • the first part of the memory is not used by the navigation application during the t3-t4 time period
  • the second part of the memory is not used by the navigation application during the t1-t3 time period (the second time period).
  • the second part of memory includes the memory used by service, media, and surface.
  • the memory of the service has not been used by the navigation application in the t3-t4 time period (the second time period)
  • the memory of the media and surface has not been used by the navigation application in the t2-t4 time period (the second time period). It is detected that the navigation application switches from the background cache running state to the shallow freezing state, and the mobile phone can recycle the memory used by non-critical services such as service, media, and surface.
  • the first part of the memory includes the memory used by the object, class, and method.
  • the memory of object, class, and method is not used by the navigation application in the t5-t6 time period (the first time period).
  • the mobile phone can recycle the memory used by non-critical services such as objects, classes, and methods.
  • the second time period is longer than the first time period.
  • a first compression method is used to process the memory of the first service
  • a second compression method is used to process the memory of the second service.
  • the second running state includes a first background running state
  • processing part of the memory used by the first application including:
  • the first part of memory can be memory that is not used by the application within a short period of time (such as the first period of time).
  • this part of the memory data can be compressed and stored in the compressed space, so as to save part of the memory space.
  • the second running state includes a first background running state
  • processing part of the memory used by the first application including:
  • the second part of memory can be memory that is not used by the application for a long time (such as the second time period).
  • this part of the compressed memory data can be swapped out, for example through the disk-drop mechanism, to disk space (such as a Flash device), reducing memory usage.
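The two-tier handling above (compress pages unused for a short period; swap out compressed pages unused for a longer period) can be sketched as follows. The period values, page structure, and function name are assumptions for illustration, and zlib stands in for whatever compression the system actually uses:

```python
import zlib

FIRST_PERIOD = 10   # assumed: pages idle this long get compressed in RAM
SECOND_PERIOD = 60  # assumed: compressed pages idle this long get swapped out

def process_pages(pages, now, compressed_space, disk):
    """Tier 1: compress short-idle pages into the compressed space.
    Tier 2: move long-idle compressed pages out to disk ("disk drop")."""
    for pid, page in list(pages.items()):
        idle = now - page["last_used"]
        if idle >= SECOND_PERIOD and pid in compressed_space:
            disk[pid] = compressed_space.pop(pid)   # swap-out to Flash/disk
        elif idle >= FIRST_PERIOD and pid not in compressed_space:
            compressed_space[pid] = zlib.compress(page["data"])
            page["data"] = None  # raw copy dropped; compressed copy kept

pages = {"surface": {"data": b"\x00" * 4096, "last_used": 0}}
compressed, disk = {}, {}
process_pages(pages, now=10, compressed_space=compressed, disk=disk)
assert "surface" in compressed and disk == {}
process_pages(pages, now=60, compressed_space=compressed, disk=disk)
assert "surface" in disk and "surface" not in compressed
```

This mirrors the text's ordering: a page must first sit compressed through the longer second period before it is swapped out.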
  • the first running state is a foreground running state.
  • the first running state is a second background running state.
  • the method further includes:
  • Compress a third part of the memory of the first application, where the third part of the memory is memory that is not used by the first application within a third time period;
  • Swap out a fourth part of the memory, where the fourth part of the memory is memory that is not used by the first application within a fourth time period;
  • the duration of the fourth time period is longer than the duration of the third time period.
  • processing part of the memory used by the first application includes:
  • the linked list includes the partial memory
  • the method further includes:
  • the first background running state includes the following states: background playing state, background service state, background cache state, shallow freezing state, and deep freezing state;
  • the first application performs the first task in the background.
  • the background playing state means that the application is switched to the background and the graphical interface is no longer presented, but the functions or tasks of the application are still running.
  • a music app runs a music playback task in the background (an example of the first task)
  • a navigation app runs a navigation task in the background.
  • the first application provides background services in the background, and the first application does not perform the first task in the background.
  • Background service applications mainly implement background data collection, message push or resident interrupt waiting services, such as Bluetooth connections.
  • the application can push messages in the background, and it can also collect some data in order to push messages to users.
  • the first application does not perform the first task in the background and does not provide background services, and the first application is in the background running state for a first time period.
  • the first application does not perform the first task in the background and does not provide background services, and the first application is in the background running state for a second period of time; the second period of time is greater than the first duration.
  • the first application does not perform the first task in the background and does not provide background services, and the first application is in the background running state for a third period of time; the third period of time is greater than the second duration.
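The five background states enumerated above can be modeled as a simple classification over whether the application still performs a task, still provides a background service, and how long it has been idle in the background. The threshold values below are illustrative assumptions:

```python
# Assumed dwell thresholds (seconds) separating the three idle states;
# the patent only requires third > second > first.
T_SHALLOW, T_DEEP = 60, 300

def classify_background_state(performs_task, provides_service, idle_s):
    """Map background behaviour to the five states named in the text."""
    if performs_task:
        return "background_playing"     # e.g. music playback, navigation
    if provides_service:
        return "background_service"     # e.g. push, data collection, Bluetooth
    if idle_s >= T_DEEP:
        return "deep_freeze"
    if idle_s >= T_SHALLOW:
        return "shallow_freeze"
    return "background_cache"

assert classify_background_state(True, False, 0) == "background_playing"
assert classify_background_state(False, True, 0) == "background_service"
assert classify_background_state(False, False, 20) == "background_cache"
```

An idle app thus sinks through background cache, shallow freeze, and deep freeze as its background dwell time grows, matching the first/second/third durations in the text.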
  • the first service includes services related to interface display.
  • interface display-related services include non-critical services executed by the rendering thread and the user interface thread.
  • the electronic device can recycle the memory used by the application's rendering thread and user interface thread to reduce the memory usage of the application.
  • the first service includes a service corresponding to the first task (a task that the first application does not execute in the background).
  • the first application usually does not execute services corresponding to media and surface in the background.
  • the electronic device can compress the memory used by media and surface to obtain compressed memory data.
  • the first service includes background services
  • the second service includes interface display-related services.
  • the application no longer executes background services (such as service) and has not switched to the foreground display interface for a long time.
  • the first service includes background services;
  • the second service includes services executed by the rendering thread and the user interface thread.
  • the electronic device can compress the memory related to the service that has not been used by the application in a short period of time to obtain compressed memory data.
  • the electronic device can swap out compressed memory related to rendering threads and user interface threads that have not been used by applications for a long time.
  • the second service includes: the service corresponding to the first task and the background service.
  • the second service includes: media, surface (the service corresponding to the first task), and service (background service).
  • the first service includes services corresponding to objects, classes, and methods.
  • the electronic device can compress the memory used by objects, classes, and methods.
  • a memory management device is provided.
  • the device is applied to electronic equipment or components that support the functions of electronic equipment (such as chip systems).
  • the device includes:
  • a processing unit, configured to: detect the running state of the first application; upon detecting that the running state of the first application switches from the first running state to the second running state, process a part of the memory used by the first application; the part of the memory is the memory associated with the target service of the first application; the target service is a non-critical service of the first application in the second running state.
  • detecting that the running state of the first application switches from the first running state to the second running state and processing part of the memory used by the first application includes: when it is detected that the running state of the first application switches from the first running state to the second running state, and the duration of the first application in the second running state reaches a first threshold, processing a part of the memory used by the first application.
  • the target service includes a first service and/or a second service;
  • the memory associated with the first service includes a first part of memory, and the memory associated with the second service includes a second part of memory;
  • the first part of memory is: pages that are not used by the first application within the first time period;
  • the second part of memory is: compressed pages that are not used by the first application within the second time period.
  • the second time period is longer than the first time period.
  • the second running state includes a first background running state
  • processing part of the memory used by the first application including:
  • the second running state includes a first background running state
  • processing part of the memory used by the first application including:
  • the first running state is a foreground running state.
  • the first running state is a second background running state.
  • the processing unit is also used to:
  • Compress a third part of the memory of the first application, where the third part of the memory is memory that is not used by the first application within a third time period;
  • Swap out a fourth part of the memory, where the fourth part of the memory is memory that is not used by the first application within a fourth time period;
  • the duration of the fourth time period is longer than the duration of the third time period.
  • processing part of the memory used by the first application includes:
  • the linked list includes the partial memory
  • the device further includes:
  • A display unit, configured to display a first interface;
  • An input unit is configured to receive an operation input by a user on the first interface, where the operation is used to enable the memory management function.
  • the first background running state includes the following states: background playing state, background service state, background cache state, shallow freezing state, and deep freezing state;
  • the first application performs the first task in the background
  • the first application provides background services in the background, and the first application does not perform the first task in the background;
  • the first application does not perform the first task in the background and does not provide background services, and the first application is in the background running state for a first length of time;
  • the first application does not perform the first task in the background and does not provide background services, and the first application is in the background running state for a second period of time; the second period of time is greater than the first duration;
  • the first application does not perform the first task in the background and does not provide background services, and the first application is in the background running state for a third period of time; the third period of time is greater than the second duration.
  • the first service includes services related to interface display
  • the first service includes the service corresponding to the first task
  • the first service includes background services, and the second service includes interface display-related services;
  • the second service includes: the business corresponding to the first task and the background service;
  • the first service includes services corresponding to objects, classes, and methods.
  • embodiments of the present application provide an electronic device that has the function of implementing the method described in any of the above aspects and any of the possible implementations.
  • This function can be implemented by hardware, or can be implemented by hardware and corresponding software.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • embodiments of the present application provide a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program (which may also be referred to as instructions or codes).
  • When the computer program is executed by an electronic device, it causes the electronic device to execute the method of the first aspect or any one of the implementations of the first aspect.
  • embodiments of the present application provide a computer program product, which when the computer program product is run on an electronic device, causes the electronic device to execute the method of the first aspect or any one of the implementation modes of the first aspect.
  • embodiments of the present application provide a circuit system.
  • the circuit system includes a processing circuit, and the processing circuit is configured to execute the method of the first aspect or any one of the implementation modes of the first aspect.
  • embodiments of the present application provide a chip system, including at least one processor and at least one interface circuit.
  • the at least one interface circuit is used to perform transceiver functions and send instructions to at least one processor.
  • When the at least one processor executes the instructions, the at least one processor performs the method of the first aspect or any one of the implementations of the first aspect.
  • Figure 1A is a schematic diagram of memory division provided by an embodiment of the present application.
  • Figure 1B is a schematic diagram of memory management based on a linked list provided by an embodiment of the present application.
  • Figure 1C is a schematic diagram of the linked list management mechanism when the memory page is used according to the embodiment of the present application.
  • Figure 1D is a schematic diagram of the recycling process in the inactive linked list when the memory is insufficient according to the embodiment of the present application;
  • Figure 1E is a schematic diagram of the memory compression and swap-out mechanism provided by the embodiment of the present application.
  • Figure 1F is a schematic diagram of the decay process in the active linked list when memory is insufficient according to an embodiment of the present application
  • Figure 1G is a schematic diagram of the memory management mechanism in related technologies
  • Figure 2 is a schematic diagram of framework-based memory management provided by an embodiment of the present application.
  • Figure 3 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of the software structure of an electronic device provided by an embodiment of the present application.
  • Figure 5 is a schematic structural diagram of another electronic device provided by an embodiment of the present application.
  • Figure 6 is a schematic scenario diagram of the memory management method provided by the embodiment of the present application.
  • Figure 7 is a schematic diagram of the virtual memory space and linked list provided by the embodiment of the present application.
  • Figure 8 is a schematic diagram of a memory management method provided by an embodiment of the present application.
  • Figure 9 is a schematic diagram of the memory management operation in the background cache state provided by the embodiment of the present application.
  • Figure 10 is a schematic diagram of memory management operations in a shallow freezing state provided by an embodiment of the present application.
  • Figure 11 is a schematic scenario diagram of the memory management method provided by the embodiment of the present application.
  • Figure 12 is a schematic diagram of the interface provided by the embodiment of the present application.
  • Figure 13 is a schematic structural diagram of a memory management device provided by an embodiment of the present application.
  • first and second are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the quantity of indicated technical features. Therefore, features defined as “first” and “second” may explicitly or implicitly include one or more of these features. In the description of the embodiments of this application, unless otherwise specified, "plurality” means two or more.
  • Memory units include but are not limited to memory pages.
  • the memory can be divided into memory pages according to a certain size (such as 4K). Through the memory paging mechanism, the efficiency of accessing memory can be improved.
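As a concrete illustration of the paging granularity mentioned above, the number of 4 KiB pages backing an allocation is simply the requested size rounded up to a page multiple:

```python
PAGE_SIZE = 4096  # 4 KiB pages, as in the text

def pages_needed(nbytes):
    """Number of fixed-size pages needed to back an allocation of nbytes."""
    return (nbytes + PAGE_SIZE - 1) // PAGE_SIZE

assert pages_needed(1) == 1       # even one byte occupies a whole page
assert pages_needed(4096) == 1
assert pages_needed(4097) == 2    # spills into a second page
```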
  • memory units can also be implemented in other ways, not limited to memory pages. It should be understood that in the embodiment of the present application, memory pages are used as examples of memory units, but this does not constitute a limitation on the memory units.
  • User-mode memory includes anonymous pages and file-backed pages.
  • the file page refers to a memory page (page for short) that has a source backup page in an external storage space (such as a disk).
  • the file page has a mapping relationship with the source backup page in the external storage space.
  • file pages may be used to cache file data.
  • the file page includes core library code, application code or icon resources, etc.
  • the program can read files from the disk through basic operations such as read or mmap.
  • the system can allocate pages to store the content read from the disk; these pages, which store the contents of disk files, can be regarded as file pages.
  • Anonymous pages refer to memory pages that do not have corresponding files in the external storage space.
  • pages used by the heap, stack, etc. of a process can be anonymous pages.
  • Anonymous pages can be used to store temporary calculation results of a running process.
  • external storage space refers to storage space other than memory.
  • memory can also include memory independently managed by the kernel or modules in the kernel. This part of memory can be used to store basic data structures and driver data that maintain the normal operation of the system.
  • the operating system can recycle different types of memory in different ways and proportions.
  • the memory of the electronic device can be divided into three parts:
  • The first part of memory: the heap memory of the Java virtual machine (on-heap memory).
  • the virtual machine uses a garbage collector (GC) to manage the heap memory based on its own memory management mechanism.
  • the virtual machine calls the mmap interface to allocate heap memory.
  • the operating system will record the portion of heap memory allocated by the virtual machine.
  • the heap memory of the virtual machine can be used to store one or more of objects, classes, and methods.
  • the second part of memory is direct (native) memory.
  • the direct memory may be off-heap memory.
  • unlike heap memory, which is managed by the virtual machine, direct memory is managed by the operating system, which can reduce the impact of garbage collection on applications to a certain extent.
  • the data in the direct memory mainly includes: data associated with the display function, data associated with the rendering function, and data associated with system services.
  • direct memory stores data related to the running of threads such as the user interface (UI) thread and the rendering (Render) thread.
  • Another example is to store the runtime metadata of threads in direct memory.
  • Another example is to store push-service-related data in direct memory.
  • the operating system allocates direct memory through, for example, the C++ allocator.
  • C++ allocators include but are not limited to jemalloc, scudo, etc.
  • the C++ allocator can identify the identity of the thread currently requesting memory (such as the thread name), and determine the function of the thread based on the thread's identity. For example, if the C++ allocator identifies that the thread name of the thread currently applying for memory is RenderThread, it can be determined that the thread is a rendering thread, and the function corresponding to the thread name is to render the application interface.
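The thread-name-based classification described above can be sketched as follows. The role table and class name are assumptions for illustration, not the actual jemalloc/scudo mechanism:

```python
# Assumed mapping from thread names to functional roles; "RenderThread"
# follows the example in the text, the others are hypothetical.
THREAD_ROLES = {
    "RenderThread": "rendering",
    "UiThread": "user_interface",
    "PushService": "push_service",
}

class TaggedAllocator:
    """Sketch: attribute each allocation to a functional role based on
    the identity (name) of the requesting thread."""

    def __init__(self):
        self.by_role = {}

    def alloc(self, thread_name, size):
        # Threads with unrecognized names fall into a generic bucket.
        role = THREAD_ROLES.get(thread_name, "other")
        self.by_role[role] = self.by_role.get(role, 0) + size
        return role

a = TaggedAllocator()
assert a.alloc("RenderThread", 4096) == "rendering"
assert a.alloc("worker-3", 1024) == "other"
assert a.by_role["rendering"] == 4096
```

Tagging allocations this way is what later lets the system reclaim memory at the granularity of a functional module (e.g. everything the rendering thread allocated) rather than the whole process.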
  • The third part of memory: streaming media memory.
  • hardware encoding and decoding is commonly used in chip architectures to accelerate the rendering of video and audio.
  • the industry usually uses the ION or dma_buf allocator to allocate this type of memory; by identifying the ION or dma_buf attribute, it can be determined that this part of the memory is used for media playback.
  • streaming media memory can be used to store data related to display class (surface), multimedia class (media), and service class (service).
  • the operating system manages memory uniformly
  • memory reclamation can be performed based on the memory watermark.
  • when receiving a memory allocation request, the system detects the amount of remaining memory; if the remaining memory is lower than the set low watermark threshold, it wakes up the recycling thread (kswapd) to perform asynchronous memory reclamation, maintaining the remaining amount of system memory and meeting memory allocation requirements.
  • the recycling thread can maintain the free memory at 100-200 MB.
  • the recycling thread can scan the used memory and recycle part of the used memory.
  • the recycling thread can calculate the amount of memory to be reclaimed based on the swappiness parameter and the proportion of reclaimable memory, and reclaim memory accordingly.
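A simplified model of how a reclaim target might be split between anonymous and file pages by the swappiness parameter. The real kernel heuristic is considerably more involved; this illustration assumes the 0-200 swappiness range of recent Linux kernels:

```python
def split_reclaim_target(nr_to_reclaim, swappiness):
    """Split a page-reclaim target between anonymous pages (swapped out)
    and file pages (dropped/written back), proportionally to swappiness.
    Higher swappiness -> more anonymous pages reclaimed."""
    anon = nr_to_reclaim * swappiness // 200
    file = nr_to_reclaim - anon
    return anon, file

assert split_reclaim_target(1000, 60) == (300, 700)
assert split_reclaim_target(1000, 200) == (1000, 0)  # swap-heavy extreme
```

Note this split is computed from global tunables only, which is exactly the limitation the next point raises: it ignores what each individual application actually needs.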
  • when the recycling thread calculates the amount of memory to be reclaimed, it does not consider the actual memory requirements of each application, which may lead to excessive or insufficient memory reclamation. For example, when the electronic device has very little free memory, it may excessively reclaim a large amount of memory, causing normally running applications to exit abnormally and affecting the performance of the electronic device.
  • the overall memory of the electronic device can include kernel memory, file pages, anonymous pages, free memory, and other memories.
  • Different applications can use different memory to support the operation of the application.
  • application 1 uses file pages and anonymous pages
  • application 2 uses file pages and anonymous pages.
  • the file pages used by Application 1 and Application 2 can be the same file page. In this case, the file page can be called a shared page.
  • the file pages used by Application 1 and Application 2 may be different file pages.
  • the anonymous pages used by Application 1 and Application 2 can be the same or different anonymous pages.
  • the kernel can manage file pages and anonymous pages through the least recently used (LRU) linked list.
  • the linked list can be divided into two levels: active and inactive linked lists.
  • the active linked list includes: an active anonymous page linked list used to manage anonymous pages, and an active file page linked list used to manage file pages.
  • the inactive linked list includes: an inactive anonymous page linked list used to manage anonymous pages, and an inactive file page linked list used to manage file pages.
  • Multiple memory pages can be stored in a linked list.
  • multiple active file pages can be stored in the active file page linked list, and multiple inactive file pages can be stored in the inactive file page linked list.
  • the active anonymous page linked list can store multiple active anonymous pages, and the inactive anonymous page linked list can store multiple inactive anonymous pages.
  • Active memory pages can be memory pages that are frequently used by the process, and inactive memory pages can be memory pages that are not frequently used by the process.
  • each memory page corresponds to a usage flag (such as the flag bit PG_referenced), which can be used to indicate whether the memory page has been used (referenced).
  • using a memory page may also be called accessing a memory page, calling a memory page, etc.
  • the electronic device can move memory page A from the inactive linked list to the active linked list, and can set the flag bit PG_referenced to 0, indicating that memory page A has not been used since it was moved to the active linked list.
  • the following describes the mechanism for managing memory pages based on linked lists when there is insufficient free memory (for example, the free memory is below the waterline).
  • memory pages can be recycled from the inactive linked list first.
  • the electronic device scans the memory page in the inactive linked list and determines whether to recycle the memory page based on the identification bit of the memory page.
  • the identification bit PG_referenced of memory page A is 1, which means that memory page A has been used. In this case, memory page A is considered likely to be used again in the short term. To avoid being unable to find memory page A quickly when it is used again, the electronic device skips memory page A without recycling it and sets its identification bit PG_referenced to 0.
  • the identification bit PG_referenced of memory page A is 0, which means that memory page A has not been used within a certain period of time. Then, in order to increase the free memory of the entire machine, the electronic device can recycle memory page A into the memory recycling area. As the memory pages at the tail of the inactive linked list are reclaimed, the memory pages at the front of the inactive linked list move toward the tail.
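The scan of the inactive list described above can be sketched as a small simulation. This is a hedged illustration, not the kernel's actual implementation; the Page class, list names, and scan limit are invented for the example.

```python
from collections import deque

class Page:
    """Toy memory page carrying a PG_referenced-style usage flag."""
    def __init__(self, name, referenced=False):
        self.name = name
        self.referenced = referenced

def scan_inactive(inactive, nr_to_reclaim):
    """Scan from the tail of the inactive list, reclaiming unreferenced pages.

    A page whose usage flag is set is skipped: the flag is cleared and the
    page is rotated back to the head of the list instead of being reclaimed.
    """
    reclaimed = []
    for _ in range(len(inactive)):
        if len(reclaimed) >= nr_to_reclaim:
            break
        page = inactive.pop()            # page at the tail of the inactive list
        if page.referenced:
            page.referenced = False      # used recently: clear flag, second chance
            inactive.appendleft(page)
        else:
            reclaimed.append(page)       # not used recently: reclaim it
    return reclaimed

inactive = deque([Page("A"), Page("B", referenced=True), Page("C")])
print([p.name for p in scan_inactive(inactive, 2)])  # prints ['C', 'A']
```

Note how page B survives the pass with its flag cleared, matching the "skip and clear PG_referenced" behavior described above.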
  • the memory recovery area includes but is not limited to compressed space (such as zram) and disk space (such as hard disk, etc.).
  • the electronic device can recycle the memory pages into the compressed space by compressing the memory pages and storing the compressed memory pages in the compressed space.
  • the electronic device can recycle the memory pages to the disk space by swapping the memory pages to the disk space.
  • the electronic device can reclaim some pages stored in the memory through the page compression thread.
  • the way the page compression thread reclaims memory is generally compression.
  • the anonymous page is compressed to obtain a compressed anonymous page, and the compressed anonymous page can be stored in the compressed space of the memory.
  • the memory footprint of a compressed anonymous page is smaller than the memory footprint of the corresponding anonymous page.
  • allocated anonymous pages in memory (such as anonymous pages that are not commonly used) can also be released. After the anonymous pages are released, they no longer occupy memory. This enables memory recycling and increases the free memory available for applications.
  • the page can be decompressed from the compressed space, and the decompressed page can be allocated to the process for use.
  • the electronic device may reclaim compressed pages in memory through a page swap thread.
  • the page swap thread reclaims memory by swapping out compressed pages in memory (such as compressed space) to disk.
  • the page swap thread sends the compressed anonymous pages stored in double data rate (DDR) synchronous dynamic random access memory (SDRAM) to the disk for storage through the input/output (I/O) interface. This reduces the memory usage in DDR SDRAM and makes it easier for subsequent applications to apply for memory in DDR SDRAM.
  • the compressed page in the disk space (such as a compressed anonymous page) can be swapped back into memory (such as the compressed space) and decompressed, and the decompressed page can be allocated to the process.
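The compress-then-swap flow above can be imitated with ordinary compression and file I/O. In this hedged sketch a Python dict stands in for the zram-like compressed space and a temporary directory stands in for disk swap; the function names are invented for the example.

```python
import zlib, tempfile, os

compressed_space = {}   # stands in for a zram-like in-memory compressed area

def compress_page(page_id, data):
    """Recycle a memory page into the compressed space (zram-like)."""
    compressed_space[page_id] = zlib.compress(data)
    return len(compressed_space[page_id])

def swap_out(page_id, swap_dir):
    """Write a compressed page out to disk and drop it from memory."""
    path = os.path.join(swap_dir, f"page_{page_id}.swp")
    with open(path, "wb") as f:
        f.write(compressed_space.pop(page_id))
    return path

def swap_in(path):
    """Read a compressed page back from disk and decompress it."""
    with open(path, "rb") as f:
        return zlib.decompress(f.read())

page = b"A" * 4096                       # a highly compressible 4 KiB page
size = compress_page(1, page)
assert size < len(page)                  # compression shrinks the footprint
with tempfile.TemporaryDirectory() as d:
    path = swap_out(1, d)
    assert swap_in(path) == page         # swapped-in page matches the original
```

The two stages mirror the page compression thread (memory → compressed space) and the page swap thread (compressed space → disk), with swap-in reversing both.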
  • the above-mentioned swap-out process can also be understood as storing data from random access memory (RAM) to read-only memory (ROM).
  • part of the ROM storage space can be used as RAM, thereby realizing expansion of the RAM storage space.
  • for devices with insufficient RAM space (such as mobile phones with 2G or 4G memory), this can significantly reduce system lag.
  • the memory pages in the active linked list also age, as shown in Figure 1F; take memory page A at the tail of the active linked list as an example.
  • the aging process is as follows:
  • the identification bit PG_referenced of memory page A is 0, which means that memory page A has not been used within a certain period of time. Then, the electronic device can move memory page A to the inactive linked list. As the memory pages at the tail of the active linked list move out of the active linked list, the memory pages at the front of the active linked list move toward the tail.
  • the memory recycling process can be simplified to the following steps: the memory pages in the active linked list move toward the tail of the active linked list; memory pages at the tail of the active linked list that meet the conditions are migrated to the head of the inactive linked list, realizing the migration of memory pages between linked lists; the memory pages in the inactive linked list migrate toward the tail of the inactive linked list; and the electronic device reclaims the memory pages that meet the conditions from the tail of the inactive linked list.
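The demotion step between the two lists can be sketched as follows. This is illustrative only: the function and page names are invented, and real kernels track the usage flag per page rather than in a separate map.

```python
from collections import deque

def age_lists(active, inactive, referenced):
    """Demote the page at the tail of the active list.

    referenced maps page -> PG_referenced. A referenced page is rotated
    back to the active head with the flag cleared; an unreferenced page
    is migrated to the head of the inactive list.
    """
    page = active.pop()
    if referenced.get(page):
        referenced[page] = False
        active.appendleft(page)      # second chance on the active list
    else:
        inactive.appendleft(page)    # demote toward eventual reclaim

active = deque(["P1", "P2", "P3"])
inactive = deque(["P0"])
age_lists(active, inactive, {"P3": False})
print(list(active), list(inactive))  # prints ['P1', 'P2'] ['P3', 'P0']
```

Pages thus flow in one direction: active head → active tail → inactive head → inactive tail → reclaimed, unless a reference rotates them back.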
  • an application can run in the software system of an electronic device in the form of one or more processes.
  • some processes of the application can run in the foreground, and processes running in the foreground can often have a visual interface. Since the foreground process is usually related to the interface visible to the user, the smooth running of the foreground process usually has a greater impact on the smoothness of the electronic device. Similarly, when an application runs in the foreground, it usually has a visual interface, and foreground applications have a greater impact on the smoothness of electronic devices.
  • some processes of the application can run in the background.
  • background processes usually do not have a visual interface
  • the running of some background processes also has a great impact on the smoothness of the electronic device.
  • although the background download task is not in the foreground, it directly affects the response delay of the electronic device and thus the smoothness of the electronic device.
  • although the background processing of a photo is invisible, its processing speed affects the user's photo-taking experience and the smoothness of the electronic device.
  • although some applications running in the background do not have a visual interface, their performance has a great impact on the smoothness of the electronic device.
  • the memory of an application can be managed uniformly by the operating system.
  • the operating system can prioritize the memory pages of the target application for recycling based on the application's frequency of use of the memory page.
  • the target application is an application that uses the memory page less frequently.
  • some background applications may be highly active, and accordingly, the frequency of use of memory pages by background applications will increase.
  • as a result, the memory pages used by the foreground application may be recycled, affecting the performance of the foreground application and thus the smoothness of the electronic device.
  • this memory management solution may also cause the memory of some important background applications to be recycled, which may also affect the smoothness of electronic devices.
  • the system can determine the application that needs to be killed based on the memory usage of the application.
  • the system will give priority to killing applications with high memory usage. For example, if application A occupies 1G of memory, application B occupies 300M of memory, and application C occupies 200M of memory, the system can kill application A that occupies the most memory first.
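The kill-selection heuristic described above can be expressed in a few lines. This is a sketch of the described behavior, not any particular system's implementation; the function name and usage figures are illustrative.

```python
def pick_app_to_kill(mem_usage_mb):
    """Return the app occupying the most memory as the first kill candidate."""
    return max(mem_usage_mb, key=mem_usage_mb.get)

usage = {"application A": 1024, "application B": 300, "application C": 200}
print(pick_app_to_kill(usage))  # prints application A
```

This mirrors the example: application A, at roughly 1G, is killed first regardless of how important it currently is, which motivates the application-side release mechanisms discussed next.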
  • in addition to the globally unified memory management mechanism provided by the operating system kernel, the operating system also provides framework-based or interface-based memory management methods.
  • memory that the application does not temporarily need can be released. In this way, the total memory occupied by a target application that has released part of its memory is relatively reduced.
  • when the memory of the electronic device is insufficient, a target application that has released part of its memory based on the Purgeable Memory mechanism occupies less memory, so the probability of the target application being killed by the system is usually reduced, which improves its chances of survival. That is, by actively releasing part of its memory, the target application can survive longer in the system, which improves the user's experience of using the target application.
  • the memory released by the electronic device includes memory occupied by any one or more of the following data: files, pictures, dynamically generated view controls, etc.
  • the operating system can detect the running status of the application.
  • the operating system sends a notification message to the application in the specific running state.
  • the application responds to the notification message and calls the onTrimMemory interface in order to execute the memory release method in the onTrimMemory interface and release part of the memory.
  • the onTrimMemory interface is provided by the system, and application developers (developers for short) can implement the memory release method based on the onTrimMemory interface.
  • application developers can override the onTrimMemory interface and define the application's memory release method in the onTrimMemory interface. Later, under different circumstances, the application can call the onTrimMemory interface to release its own memory to prevent the application from being killed directly by the system and improve the user experience of using the application.
  • the onTrimMemory interface can be implemented by overriding the onTrimMemory(int level) callback and defining the application's memory release method in its body.
  • the running status of the application can be as shown in Table 1 below.
  • the application runs in the foreground, and the system service in the operating system detects the running state of the application. Subsequently, the application switches to the background, and the system service detects that the application has entered the TRIM_MEMORY_BACKGROUND running state (the application has switched to the background).
  • the system service sends a notification message to the application, triggering the application to call the onTrimMemory interface and execute the memory release method in the onTrimMemory interface, releasing part of its memory to reduce the memory pressure on the system.
  • the memory release method in the onTrimMemory interface needs to be implemented by the application developer; that is, the system leaves memory recycling to the application itself. Therefore, the effect of memory release depends to a large extent on the skill of the application developer. Moreover, because many application developers do not know which memory can be released, or for other reasons, most applications switch to the background without releasing memory, release memory ineffectively, or release memory incorrectly and cause the system to crash.
  • the running state of each application changes all the time. It is very possible that, during the execution of the application's onTrimMemory method, the running state of the application suddenly deteriorates, causing the application to be killed due to high memory usage before its memory can be released, thereby affecting the user's experience of using the application.
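The notify-and-release flow described above can be sketched generically. This is a toy model: the class, method, and cache fields are invented for illustration; only the level constant TRIM_MEMORY_BACKGROUND and its numeric value come from the Android API the text describes.

```python
TRIM_MEMORY_BACKGROUND = 40   # value of this trim level in the Android API

class App:
    """Toy application whose on_trim_memory stands in for onTrimMemory."""
    def __init__(self, name, cache_mb):
        self.name = name
        self.cache_mb = cache_mb          # releasable memory: caches, views, ...

    def on_trim_memory(self, level):
        # Developer-defined release policy: drop caches once backgrounded.
        if level >= TRIM_MEMORY_BACKGROUND:
            freed, self.cache_mb = self.cache_mb, 0
            return freed
        return 0

def notify_state_change(app, level):
    """System-service side: on detecting the state change, trigger the callback."""
    return app.on_trim_memory(level)

app = App("gallery", cache_mb=120)
print(notify_state_change(app, TRIM_MEMORY_BACKGROUND))  # prints 120
```

Everything inside on_trim_memory is developer-supplied, which is exactly why the effectiveness of this mechanism varies so much between applications.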
  • embodiments of the present application provide a memory management method.
  • the electronic device can detect the life cycle of the application and, according to the stage of the life cycle the application is in, adjust the way the memory data of the application's corresponding functional modules is processed.
  • because the performance requirements of applications may differ at different life cycle stages, the memory data of each functional module of the application may be processed in different ways when the application is in different life cycle stages.
  • for example, in one life cycle stage, the memory data used by the application's functional modules for interface display (such as memory data used by the rendering thread and UI thread) is compressed.
  • in another life cycle stage, the memory data used by the application's functional module for interface display is swapped out.
  • the memory processing needs of the application in different life cycle stages can be met.
  • memory can be reclaimed from the application's functional modules as efficiently as possible, increasing the free memory of the electronic device and thereby improving its overall operating performance.
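The lifecycle-driven choice between compressing and swapping out a module's memory might be organized as a policy table. This is purely illustrative: the stage names, module names, and actions below are this sketch's assumptions, not the patent's exact definitions.

```python
# Hypothetical policy: which processing method to apply to a functional
# module's memory data at each life-cycle stage.
POLICY = {
    ("background", "display"): "compress",   # render/UI memory: cheap to restore
    ("cached",     "display"): "swap_out",   # unlikely to be needed soon
}

def process_module_memory(stage, module):
    """Pick how to process a functional module's memory for a given stage."""
    return POLICY.get((stage, module), "keep")

print(process_module_memory("background", "display"))  # prints compress
print(process_module_memory("foreground", "display"))  # prints keep
```

Keying the policy on (stage, module) rather than on the whole application is what lets the method meet different performance needs per life-cycle stage.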
  • the memory management method in the embodiment of the present application can be applied in electronic devices.
  • it can be used in electronic devices running the Android Open Source Project (AOSP) system or similar systems.
  • the electronic device can allocate memory using mmap, standard allocator, kernel allocator, etc.
  • the electronic device can be a mobile phone, a tablet computer, a personal computer (PC), a netbook, and other devices that require memory optimization. This application does not place any special restrictions on the specific form of the electronic device.
  • FIG. 3 shows a schematic diagram of the hardware structure of the electronic device 100a.
  • the structure of other electronic devices may refer to the structure of the electronic device 100a.
  • the electronic device 100a may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2 , mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headphone interface 170D, sensor module 180, button 190, motor 191, indicator 192, camera 193, display screen 194, and Subscriber identification module (SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, and ambient light. Sensor 180L, bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100a.
  • the electronic device 100a may include more or fewer components than shown in the figures, or some components may be combined, some components may be separated, or some components may be arranged differently.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the controller can generate operation control signals based on the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have been recently used or recycled by processor 110 . If the processor 110 needs to use the instructions or data again, it can be called directly from the memory. Repeated access is avoided and the waiting time of the processor 110 is reduced, thus improving the efficiency of the system.
  • processor 110 may include one or more interfaces.
  • Interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and can specifically be a Mini USB interface, a Micro USB interface, USB Type C interface, etc.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100a, and can also be used to transmit data between the electronic device 100a and peripheral devices. It can also be used to connect headphones to play audio through them.
  • This interface can also be used to connect other electronic devices, such as AR devices, etc.
  • the interface connection relationships between the modules illustrated in the embodiment of the present invention are only schematic illustrations and do not constitute a structural limitation on the electronic device 100a.
  • the electronic device 100a may also adopt an interface connection method different from that in the above embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive wireless charging input through the wireless charging coil of the electronic device 100a. While charging the battery 142, the charging management module 140 can also provide power to the terminal through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, the wireless communication module 160, and the like.
  • the power management module 141 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100a can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100a may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization. For example: Antenna 1 can be reused as a diversity antenna for a wireless LAN. In other embodiments, antennas may be used in conjunction with tuning switches.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 100a.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, perform filtering, amplification and other processing on the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves through the antenna 1 for radiation.
  • at least part of the functional modules of the mobile communication module 150 may be disposed in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs sound signals through audio devices (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194.
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110 and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100a, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, frequency modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the electronic device 100a can establish wireless connections with other terminals or servers through the wireless communication module 160 (such as a WLAN module) and the antenna 2 to implement communication between the electronic device 100a and other terminals or servers.
  • the antenna 1 of the electronic device 100a is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100a can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the electronic device 100a implements display functions through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is an image processing microprocessor and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 194 is used to display images, videos, etc.
  • Display 194 includes a display panel.
  • the display panel can use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 100a may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device 100a can implement the shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter is opened, the light is transmitted to the camera sensor through the lens, the optical signal is converted into an electrical signal, and the camera sensor passes the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye. ISP can also perform algorithm optimization on image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
  • Camera 193 is used to capture still images or video.
  • the object passes through the lens to produce an optical image that is projected onto the photosensitive element.
  • the photosensitive element can be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other format image signals.
  • the electronic device 100a may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100a selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy.
  • Video codecs are used to compress or decompress digital video.
  • Electronic device 100a may support one or more video codecs. In this way, the electronic device 100a can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • NPU is a neural network (NN) computing processor.
  • the NPU can realize intelligent cognitive applications of the electronic device 100a, such as image recognition, face recognition, speech recognition, text understanding, etc.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100a.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function. Such as saving music, videos, etc. files in external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the internal memory 121 may include a program storage area and a data storage area.
  • the stored program area can store an operating system, at least one application program required for a function (such as a sound playback function, an image playback function, etc.).
  • the storage data area can store data created during use of the electronic device 100a (such as audio data, phone book, etc.).
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one disk storage device, flash memory device, universal flash storage (UFS), etc.
  • the processor 110 executes various functional applications and data processing of the electronic device 100a by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the electronic device 100a can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signals. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 .
  • Speaker 170A, also called a "horn", is used to convert audio electrical signals into sound signals. The electronic device 100a can listen to music or hands-free calls through the speaker 170A.
  • Receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • when the electronic device 100a answers a call or listens to a voice message, the voice can be heard by bringing the receiver 170B close to the ear.
  • Microphone 170C, also called a "mic", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into it.
  • the electronic device 100a may be provided with at least one microphone 170C. In other embodiments, the electronic device 100a may be provided with two microphones 170C, which in addition to collecting sound signals, may also implement a noise reduction function. In other embodiments, the electronic device 100a can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions, etc.
  • the headphone interface 170D is used to connect wired headphones.
  • the headphone interface 170D can be a USB interface 130, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the buttons 190 include a power button, a volume button, etc.
  • the buttons 190 may be mechanical buttons or touch buttons.
  • the electronic device 100a may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100a.
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for vibration prompts for incoming calls and can also be used for touch vibration feedback.
  • touch operations for different applications can correspond to different vibration feedback effects.
  • the motor 191 can also produce different vibration feedback effects for touch operations in different areas of the display screen 194.
  • different application scenarios (such as time reminders, receiving messages, alarm clocks, and games) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also be customized.
  • the indicator 192 may be an indicator light, which may be used to indicate charging status, power changes, or may be used to indicate messages, missed calls, notifications, etc.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be connected to or separated from the electronic device 100a by inserting it into the SIM card interface 195 or pulling it out from the SIM card interface 195 .
  • the electronic device 100a can support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card, etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the plurality of cards may be the same or different.
  • the SIM card interface 195 is also compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the electronic device 100a interacts with the network through the SIM card to implement functions such as phone calls and data communication.
  • the electronic device 100a uses an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100a and cannot be separated from the electronic device 100a.
  • the structure of the electronic device can also refer to the structure shown in Figure 5.
  • the electronic device can have more or fewer components than the structure shown in Figure 5, some components can be combined or separated, or the components can be arranged differently.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the software system of the electronic device 100a may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present invention takes an Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100a.
  • FIG. 4 is a software structure block diagram of the electronic device 100a according to the embodiment of the present invention.
  • the layered architecture divides the software into several layers, and each layer has clear roles and division of labor.
  • the layers communicate through software interfaces.
  • the Android system is divided into four layers, from top to bottom: application layer, application framework layer, Android runtime and system libraries, and kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message and other applications.
  • One or more applications may be running in the electronic device, and each application has at least one corresponding process.
  • a process has at least one thread executing tasks; that is, multiple threads run in the electronic device. Threads include Java threads and C/C++ threads.
  • electronic devices can allocate processing units (such as CPU cores) to threads according to certain strategies. After a thread is assigned a processing unit, it can perform corresponding tasks through the processing unit.
  • the application framework layer provides an application programming interface (API) and programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a first service, a virtual machine, a direct memory allocator, and a mmap interface.
  • the first service is used to detect the running status of the application.
  • when the first service detects a change in the running status of the application, a notification message is sent to the memory management module of the kernel layer.
  • notification messages are used to indicate the running status of the application.
  • the memory management module manages the memory of one or more functional modules of the application according to the running status of the application.
  • the memory of each functional module can be managed based on the granularity of functional modules or business types in the application.
  • for example, a first method is used to manage the memory of a first component, and a second method is used to manage the memory of a second component.
  • the functional modules of the application include but are not limited to one or more of the following: components, threads, methods, classes, objects, surfaces (a surface can correspond to a memory area that stores information about the image to be displayed on the screen), and media.
  • Optional components include but are not limited to any one or more of the following: activity, fragment, service, content provider, and broadcast receiver.
  • application threads include but are not limited to one or more of the following: rendering (Render) thread, user interface (UI) thread.
  • different functional modules can correspond to different business types.
  • the above rendering thread and UI thread can correspond to the interface display business type.
  • the running status of the application includes but is not limited to any one or more of the following: foreground running status, background playback status, background service status, background cache status, shallow freeze status, and deep freeze status.
  • foreground running status means that the application is running in the foreground.
  • in this state, the application usually presents a graphical user interface (UI), and users can interact with the application through the UI.
  • Background playing state means that the application switches to the background and no longer displays the graphical interface, but the application's functions or tasks are still running. For example, a music application runs music playback tasks in the background, and a navigation application runs navigation tasks in the background.
  • Background service state means that the application runs services in the background.
  • Background service applications mainly implement background data collection, message push, or resident interrupt-waiting services, such as Bluetooth connections.
  • for example, an application can push messages in the background, or collect some data in order to push messages to users.
  • Background cache state means that after the application switches to the background, it usually no longer runs tasks, and the user does not operate the application within a short period of time (such as a first duration). Such applications reside in the background of the system mainly to ensure that, when switched from the background back to the foreground, the browsing records from the previous foreground session are retained, so that users can continue operating the application and browsing the corresponding content.
  • Shallow frozen state means that after the application retreats to the background, it no longer runs tasks, and the user has not operated the application for a certain period of time (such as the second period of time).
  • in this running state, the background application only responds to system behavior when specific system events occur, such as performing resolution conversion in response to the system. The second duration is longer than the first duration.
  • Deep freeze state means that after the application retreats to the background, it no longer runs tasks, and the user does not use the application again for a long period of time (such as the third period of time).
  • the background application no longer responds to system behavior, and the application completely enters a non-running state.
  • the third duration is longer than the second duration.
  • the application does not run tasks in these three running states.
  • the difference between the three states is the length of time the application resides in the background.
  • the running status of the application is in the background cache state.
  • the running state of the application is in a shallow frozen state.
  • the running state of the application is in a deep freeze state.
  • an application in the background cache state is not operated by the user (does not run tasks) after reaching the second duration, it can be switched to the shallow freeze state. If an application in the shallow freeze state is still not operated by the user (does not run tasks) after reaching the third period of time, it can be switched to the deep freeze state.
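  • The background-residency transitions described above (background cache, then shallow freeze after a second duration, then deep freeze after a third) can be sketched as a simple state mapping. This is an illustrative model only: the state names follow the text, while the threshold values and function names are hypothetical.

```python
# Hypothetical duration thresholds; the text only requires
# first duration < second duration < third duration.
FIRST_DURATION = 60
SECOND_DURATION = 300
THIRD_DURATION = 3600

def background_state(seconds_in_background, runs_tasks=False, is_service=False):
    """Map an idle app's background residency to the states described above."""
    if runs_tasks:                      # still running playback/navigation tasks
        return "background_playback"
    if is_service:                      # data collection, message push, etc.
        return "background_service"
    if seconds_in_background >= THIRD_DURATION:
        return "deep_freeze"
    if seconds_in_background >= SECOND_DURATION:
        return "shallow_freeze"
    return "background_cache"           # idle, but resident for continuity
```

Under this sketch, an application that stays idle in the background simply falls through the thresholds in order, matching the cache → shallow freeze → deep freeze progression described above.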
  • the electronic device can detect the running status of the application, and store the memory used by the functional module of the application according to the running status of the application. For example, when the first service detects that the music application switches from the background service state to the background cache state, the first service can send a notification message to the memory management module of the kernel layer to indicate the running status of the music application. After receiving the notification message, the memory management module manages the memory of one or more functional modules of the music application according to the running status of the music application.
  • the memory management module can compress the memory pages of the functional modules used for display to obtain compressed memory pages, so that when the application switches back to the foreground, the memory pages can be quickly decompressed and used to implement the interface display function.
  • switching the running state of a music application to the background cache state means that the probability of the user switching the application back to the foreground is low.
  • the memory management module can swap out the compressed memory pages of the functional modules used for display, for example, to a storage device (such as Flash).
  • the memory management module can also compress the memory of some of the application's other functional modules (functional modules other than those used for display).
  • different methods are used to process the memory of different functional modules, so as to ensure the performance of the application in the corresponding operating state as much as possible, and at the same time improve the free memory of the electronic device.
  • the electronic device can compress the memory data used by the application's functional modules for interface display (such as the rendering thread), while leaving unprocessed the memory data used by functional modules still running in the background (such as services). In this way, the normal operation of the application in the background playback state can be ensured as much as possible, while the free memory of the electronic device is increased.
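  • The per-state, per-module handling described above can be pictured as a policy table. This is a sketch under the assumption that each functional module maps to one action per state; the table contents follow the examples in the text (compress display memory in the background playback state, swap it out in the background cache state), and all names are illustrative.

```python
# Hypothetical policy table: "keep" leaves pages untouched, "compress" moves
# them into compressed space, "swap" writes them out to disk space (e.g. Flash).
POLICY = {
    "background_playback": {"ui": "compress", "render": "compress",
                            "service": "keep", "media": "keep"},
    "background_cache":    {"ui": "swap", "render": "swap",
                            "surface": "compress", "media": "compress",
                            "service": "compress"},
}

def action_for(state, module):
    # Default to "keep" for unknown states (e.g. foreground) or modules.
    return POLICY.get(state, {}).get(module, "keep")
```

A lookup such as `action_for("background_playback", "render")` then returns `"compress"`, while service memory in the same state is left alone.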
  • Virtual machine used to manage heap memory.
  • the virtual machine calls the mmap interface to allocate heap memory.
  • the operating system will record the portion of heap memory allocated by the virtual machine.
  • Direct memory allocator used to allocate direct memory.
  • the virtual machine calls the mmap interface to allocate direct memory.
  • the hardware abstraction layer includes the streaming memory allocator.
  • Streaming media memory allocator used to allocate streaming media memory.
  • streaming media memory allocator includes dma_buf memory allocator or ION memory allocator.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer may include at least a display driver, camera driver, audio driver, and sensor driver.
  • the kernel layer may include a memory management module for managing the memory of the application. For example, after receiving a notification message (used to indicate the running status of the application) from the first service, the memory of one or more functional modules of the application is managed according to the running status of the application.
  • the software architecture shown in Figure 4 is only an example.
  • the software architecture of the electronic device can also be in other forms.
  • the first service is set in other layers.
  • the embodiments of this application do not limit the software architecture of the electronic device.
  • FIG. 5 shows another possible structure of an electronic device.
  • the electronic device may include a processor 401 (optional, including a processor 408), a memory 403, a transceiver 404, and so on.
  • a channel may be included between the above-mentioned components for transmitting information between the above-mentioned components.
  • Transceiver 404 used for communicating with other devices or communication networks via protocols such as Ethernet, WLAN, etc.
  • the physical memory in the electronic device can be divided into multiple memory units.
  • memory units include but are not limited to memory pages.
  • the following embodiments mainly take the memory page as a memory unit as an example for description, but the technical solutions of the embodiments of the present application are not limited thereto.
  • the life cycle of the application can be detected, and the memory can be managed according to the life cycle of the application.
  • this memory data can be compressed and stored in the compressed space to save some memory space.
  • this part of the compressed memory data can be swapped out, for example, via the write-to-disk mechanism, to disk space (such as a Flash device).
  • the application life cycle includes one or more of the following running stages: foreground running, background playback, background service, background cache, shallow freeze, and deep freeze.
  • the stage when the application is running in the foreground can also be called the running state of the application is the foreground running state.
  • the application when the application is in the background playing stage, it can also be called the running state of the application is the background playing state.
  • the application is in the background service stage, which can also be called the running state of the application as the background service state.
  • the application is in the background cache stage, which can also be called the running state of the application as the background cache state.
  • the application is in the shallow freezing stage, which can also be said that the running state of the application is a shallow freezing state.
  • the application is in the deep freeze stage, which can also be called the running state of the application is the deep freeze state.
  • the mobile phone may not process the memory of the navigation application.
  • the mobile phone detects the user's operation of switching the navigation application to the background. Assuming that the navigation application is still running navigation functions in the background, such as determining the navigation route and playing the navigation route voice, it is determined that the navigation application enters the background playback state. Considering that when the navigation application is running in the background, the graphical interface is no longer displayed (the display function is no longer performed), therefore, the memory related to the interface display can be processed.
  • Processing the memory related to the interface display can be implemented as follows: compress the application's interface-display-related memory data and store the compressed data in the compressed space, without swapping the compressed interface-display-related memory data out to disk space (such as a Flash device). In this way, the time-consuming process of re-reading data from the disk, which would delay the application's switch back to the foreground, can be avoided.
  • the mobile phone can process memory data related to the interface display.
  • the memory data related to the interface display includes but is not limited to: memory data related to the user interface thread and memory data related to the rendering thread.
  • the mobile phone detects that the navigation application running in the background no longer runs core functions such as navigation; for example, the current navigation task has ended and the user has not added a new navigation task. The mobile phone can then determine that the navigation application has entered the background service state. Considering that the navigation application no longer runs navigation tasks/functions in this state, the mobile phone can, on top of the previous memory processing (i.e., the processing of interface-display-related memory data), further process the memory data related to core functions such as navigation. Navigation functions include but are not limited to determining navigation routes and playing navigation audio.
  • processing memory data related to core functions such as navigation can be implemented by compressing this memory data and storing the compressed result in the compressed space.
  • the mobile phone can also process the application's memory data related to core functions (such as navigation).
  • Memory data related to navigation functions include but are not limited to: memory data related to surface threads and memory data related to media threads.
  • the navigation application stays in the background longer; for example, at time t3, the duration for which the navigation application has run in the background reaches the first duration, and the navigation application enters the background cache state. Considering that when the application is in the background cache state, the probability of the user switching the navigation application back to the foreground is low, in order to save more memory space, as a possible implementation, the mobile phone can swap the memory data used by the navigation application for interface display out to disk space (such as on a Flash device).
  • the mobile phone swaps out memory data related to interface display (such as memory data related to rendering threads and user interface threads) to disk space.
  • the mobile phone can compress the memory data used by the service.
  • the mobile phone can also compress related memory data used by media and surface.
  • the navigation application continues to run in the background.
  • at time t4, when the duration of the navigation application running in the background reaches the second duration, the navigation application enters a shallow freeze state. Considering that in this state the possibility of the navigation application being active in the background is greatly reduced, as a possible implementation, the mobile phone can swap all memory data other than the Java virtual machine's heap memory data out to disk space (such as a Flash device).
  • the mobile phone swaps out memory data related to the user interface thread, rendering thread, surface, media, and service to disk space.
  • as much memory as possible can be processed from inactive background applications to increase the amount of free memory of the entire machine, thereby improving the operating performance of the entire machine.
  • the navigation application continues to run in the background.
  • at time t5, when the duration of the navigation application running in the background reaches the third duration, the navigation application enters a deep freeze state.
  • the navigation application is in a deep freeze state, which means that the navigation application has not been used by the user again for a long period of time. Based on this, the mobile phone can predict that the navigation application will not be used again by the user for a long period of time in the future.
  • the mobile phone can compress the heap memory data of the Java virtual machine of the navigation application in the deep freeze state and store it in the compressed space.
  • the memory footprint of the heap memory data can thus be reduced, and because the heap memory data is compressed into the compressed space (not swapped out to the Flash device), when a subsequent process wants to use the heap memory data, it only needs to decompress it rather than swap it back in from disk space. This avoids the delay of swapping the heap memory data back in, thereby reducing the lag when the user switches the navigation application back to the foreground.
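  • The trade-off described above, keeping heap data compressed in memory so that reuse costs only a decompression rather than a read from Flash, can be illustrated with a minimal in-memory compression round trip. Python's zlib stands in here for whatever compression the kernel's compressed space actually uses, which the text does not specify; all names are illustrative.

```python
import zlib

# In-memory "compressed space": page id -> compressed bytes.
compressed_space = {}

def compress_page(page_id, data):
    """Store a page's data in the compressed space instead of swapping it out."""
    compressed_space[page_id] = zlib.compress(data)

def restore_page(page_id):
    """Reuse costs only a decompression, not a read from Flash."""
    return zlib.decompress(compressed_space[page_id])

heap_page = b"heap object data " * 256   # redundant data compresses well
compress_page(7, heap_page)
assert restore_page(7) == heap_page                   # lossless round trip
assert len(compressed_space[7]) < len(heap_page)      # footprint reduced
```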
  • the mobile phone can swap the memory data related to the user interface thread, rendering thread, surface, media, and service out to disk space, and then compress the Java virtual machine's heap memory data (such as memory data related to methods, classes, and objects), storing the compression results in the compressed space.
  • the electronic device can detect the life cycle of the application, and process the memory data used by some functional modules corresponding to the application according to the life cycle stage of the application.
  • the precision of memory processing is higher.
  • processing the memory data used by some functional modules of the application (such as functional modules that will not be used for a while) does not affect the normal operation of the whole application. It can improve the application's keep-alive rate and reduce the probability of the application being killed or exiting abnormally.
  • the electronic device can process only the memory used by some functional modules of the background application. In this way, while keeping the background application alive as much as possible, a large portion of the application's memory can be released, which enhances the smoothness and stability of the electronic device (avoiding memory thrashing) and improves its memory management efficiency.
  • in some solutions, the application does not actually release its memory data, and the memory pressure on the system remains high.
  • the technical solution of the embodiments of this application does not rely on the application actively releasing memory data.
  • when detecting that the application is in a corresponding stage of its life cycle, the electronic device can automatically process the memory data of some of the application's functional modules. On the one hand, this helps relieve the memory pressure on the system; on the other hand, processing the memory data of some functional modules reduces the total memory occupied by the application, so the probability of the application being killed or exiting abnormally is greatly reduced.
  • the memory data of each functional module of the application can be processed in different ways. In this way, it can meet the memory processing needs of the application in different life cycle stages. In other words, it can meet the performance of the application in the corresponding life cycle stage.
  • process as much memory as possible from the functional modules of the application in order to increase the free memory of the electronic device and thereby improve the overall operating performance of the electronic device.
  • Electronic devices can implement memory management based on virtual memory technology.
  • the thread initiates a memory allocation request.
  • the electronic device allocates virtual memory areas (VMA) and physical memory to the thread through the memory allocator.
  • the physical memory may include one or more linked lists, and each linked list may include one or more memory pages.
  • the electronic device can mark the assigned VMA.
  • Figure 7 shows the VMA allocated by the memory allocator of the electronic device to the user interface (UI) thread and the rendering thread.
  • the VMA corresponding to the UI thread and the VMA corresponding to the rendering thread may carry different identifiers.
  • the identification information of the VMA can be passed to the memory management module, and the memory management module can distinguish threads associated with different VMAs based on the identification information of the VMA.
  • the electronic device can also mark memory pages in allocated physical memory. For example, set different marks for memory pages used by different functional modules. For example, the mark of the memory page used by the UI thread is set to mark 1, and the mark of the memory page used by the rendering thread is set to mark 2.
  • business types associated with different memory pages can be distinguished.
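  • The page-marking scheme above (mark 1 for UI-thread pages, mark 2 for rendering-thread pages) amounts to tagging each physical page with the functional module that uses it. A minimal sketch, with all names and page ids hypothetical:

```python
# Hypothetical marks, matching the example above.
MODULE_MARKS = {"ui_thread": 1, "render_thread": 2}

page_marks = {}   # physical page id -> module mark

def mark_page(page_id, module):
    """Tag a page so the memory manager can later tell which module uses it."""
    page_marks[page_id] = MODULE_MARKS[module]

mark_page(0x100, "ui_thread")
mark_page(0x200, "render_thread")
```

With such marks in place, the memory management module can later distinguish the business types associated with different pages just by reading the tag.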
  • the thread can apply to use the memory page mapped to the VMA.
  • the virtual address A' of the memory page to be used can be carried in the use request, and the virtual address A' includes an offset.
  • the electronic device can obtain the base address of the physical address A corresponding to the virtual address A' by querying the page table, calculate the physical address A from the base address and the offset in the virtual address A', and then address the memory page at physical address A so that the thread can use it.
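  • The translation step just described, splitting the virtual address, looking up the physical base in the page table, and adding the offset back on, can be sketched as follows, with an illustrative page size and a hypothetical single-level page table:

```python
PAGE_SIZE = 4096   # illustrative page size

# Hypothetical page table: virtual page number -> physical page base address.
page_table = {0x42: 0x9000}

def translate(virtual_addr):
    """Resolve a virtual address to its physical address via the page table."""
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    base = page_table[vpn]     # a missing entry would correspond to a page fault
    return base + offset

# A virtual address in page 0x42 at offset 0x10 maps to 0x9000 + 0x10.
assert translate(0x42 * PAGE_SIZE + 0x10) == 0x9010
```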
  • the electronic device can manage the memory data used by one or more functional modules of the application based on the running status of the application.
  • FIG 8 shows an exemplary flow of the memory management method according to the embodiment of the present application. As shown in Figure 8, the method includes the following steps:
  • the first service detects a change in the running status of the application.
  • the first service is located at the framework layer.
  • the first service sends message A to the memory management module.
  • message A is used to indicate changes in running status.
  • the memory management module is located at the kernel layer.
  • the memory management module sends message B to the processing thread.
  • message B is used to instruct the processing thread to perform memory processing.
  • the processing thread processes the memory page used by the application according to the functional module (or business type) associated with the memory page.
  • the processing thread processes as many non-critical memory pages (or non-important memory pages) used by the application as possible and processes as few critical memory pages as possible.
  • the division of critical memory pages and non-critical memory pages is related to the functional module (or business type) associated with the memory page in a specific operating state.
  • all memory pages used by the application can be regarded as critical memory pages.
  • the business related to the interface display is regarded as non-critical business, or in other words, the application function modules related to the interface display are regarded as non-critical functional modules.
  • memory pages related to interface display can be regarded as non-critical memory pages.
  • the electronic device can process as many memory pages related to interface display as possible to reduce the memory pressure of the electronic device and improve the performance of the electronic device.
  • non-critical memory pages can include two types: memory pages that go unused for a relatively short period of time, and memory pages that go unused for a relatively long period of time.
  • for non-critical memory pages that go unused for a short period of time (such as the memory pages used by surface, media, and service in the background cache state), the electronic device can compress them and store them in the compressed space.
  • for non-critical memory pages that go unused for a long period of time (for example, the interface-display-related memory pages in the background cache state), the electronic device can swap them out and store them in disk space.
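  • The two-way classification above (briefly idle non-critical pages are compressed, long-idle ones are swapped out) reduces to a simple dispatcher. The threshold value below is hypothetical:

```python
def handle_noncritical_page(idle_seconds, long_idle_threshold=600):
    """Return the action for a non-critical page: pages idle only briefly are
    compressed (fast to restore); pages idle for a long time are swapped out.
    The threshold value is hypothetical."""
    if idle_seconds >= long_idle_threshold:
        return "swap_out"    # e.g. written out to Flash
    return "compress"        # kept in compressed memory
```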
  • the memory management method can determine the key business (or key functional modules) and non-critical business (or non-critical functional modules) corresponding to each running state of the application.
  • based on this division, the electronic device performs no processing on the memory actually needed by the application's key business (i.e., critical memory).
  • the electronic device processes the memory that is not actually used by the key business (or key functional module) of the application (that is, non-critical memory). In this way, the appropriate amount of memory can be processed to match the actual memory usage requirements of the application.
  • the following takes a navigation application as an example to illustrate the memory processing process of the processing thread.
  • the electronic device can wake up the processing thread (such as kswapd) .
  • the processing thread can process the memory used by one or more functional modules of the application according to the current running status of the navigation application.
  • the processing thread can scan linked list 1, to which the VMA of the UI thread of the navigation application is mapped; scan linked list 2, to which the VMA of the rendering thread is mapped; scan linked list 3, to which the VMA of the surface is mapped; scan linked list 4, to which the VMA of the media is mapped; and scan linked list 5, to which the VMA of the service is mapped.
  • the processing thread can swap out this part of the memory pages (such as memory page C) to the disk space.
  • the processing thread can compress this part of the memory pages (such as memory page D) into the compressed space.
  • the processing thread can compress the processable memory pages (media-related memory pages) in linked list 4 into the compressed space, and compress the processable memory pages (service-related memory pages) in linked list 5 into the compressed space.
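As a rough illustration of this compress-or-swap decision, the following sketch models a linked list as a Python list of page records: pages unused for a short period are compressed into a zram-like compressed space, while compressed pages unused for a long period are swapped out to disk space. The thresholds, field names, and use of `zlib` as a stand-in compressor are all assumptions:

```python
import zlib

SHORT_UNUSED = 5    # assumed "short period" threshold, arbitrary time units
LONG_UNUSED = 60    # assumed "long period" threshold

def process_linked_list(pages, now, compressed_space, disk_space):
    """Process one module's linked list: compress short-term-unused pages,
    swap out long-term-unused compressed pages, keep the rest."""
    remaining = []
    for page in pages:
        idle = now - page["last_used"]
        if page["compressed"] and idle >= LONG_UNUSED:
            disk_space.append(page)            # swap out to disk space
        elif not page["compressed"] and idle >= SHORT_UNUSED:
            page["data"] = zlib.compress(page["data"])
            page["compressed"] = True
            compressed_space.append(page)      # store in compressed space
        else:
            remaining.append(page)             # recently used: keep as-is
    return remaining
```

In this model the processing thread would call `process_linked_list` once per linked list (linked lists 1 through 5 in the example above) after a state switch.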
  • Figure 9 takes the VMA of one functional module corresponding to a linked list as an example.
  • the VMA of one functional module may also correspond to multiple linked lists, the VMAs of multiple functional modules may correspond to one linked list, or the VMAs of multiple functional modules may cross-correspond to multiple linked lists.
  • the VMAs of the UI thread and rendering thread of the application are mapped to linked list 1, and the VMAs of the other functional modules of the application are mapped to linked list 2.
  • the embodiment of this application does not limit the mapping relationship between the VMA of the functional module and the physical memory.
  • the electronic device can wake up the processing thread, and the processing thread can process the memory used by one or more functional modules of the application according to the current running state of the navigation application.
  • the processing thread can scan the linked list 1 mapped to the VMA based on the VMA of the UI thread of the navigation application, and swap out the memory pages in linked list 1 that meet the processing conditions (that is, the memory pages used by the UI thread) to disk space.
  • the processing thread can swap out memory pages used by other functional modules of the navigation application (such as memory pages used by the rendering thread, surface, media, and service).
  • the electronic device can process part of the memory of the application according to the running state of the application.
  • the electronic device can delay the processing.
  • the electronic device can perform memory processing a period of time after the application switches its running state, to ensure that the application's running state is stable. For example, as shown in Figure 11, the application enters the background playing state at time t1. To avoid processing memory only for the user to switch the application back to the foreground shortly afterwards, the electronic device can delay for a period of time.
  • if the electronic device predicts that the application will not be switched back to the foreground in the near future, the electronic device can process the memory related to interface display (such as the memory used by the rendering thread and the UI thread) at time t1'.
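A minimal sketch of this delayed-processing check, assuming a fixed stabilization delay and an externally supplied foreground prediction (both the constant and the prediction input are hypothetical):

```python
DELAY = 30  # assumed stabilization delay (t1' - t1), in seconds

def should_process(state_entered_at, now, predicted_foreground_soon):
    """Process non-critical memory only after the application has stayed
    in the new running state for DELAY seconds and is not predicted to
    return to the foreground shortly."""
    return (now - state_entered_at) >= DELAY and not predicted_foreground_soon
```

The delay acts as a debounce: a quick foreground/background flip never triggers any reclaim work.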
  • the electronic device can provide a setting portal for memory management, and the user can set related functions of memory management through the setting portal.
  • the electronic device displays a setting interface 110 (an example of the first interface).
  • the interface 110 includes a setting option 1101, which is used to turn on or off the memory management function.
  • the electronic device can manage the memory used by one or more functional modules of the application based on the running status of the application and at the granularity of functional modules.
  • the electronic device can always enable the memory management function.
  • the electronic device can automatically turn on the memory management function when certain conditions are met.
  • Table 2 below shows the performance indicators of the electronic device when the memory management function is turned on and when the memory management function is not turned on.
  • the above only gives several examples of the application running status.
  • the running states or life cycle stages of the application can also be divided in other ways, and the above technical solutions can still be applied: according to the running state of the application, part of the memory of the application (the memory of non-critical business in the corresponding running state) is processed.
  • multiple embodiments of the present application can be combined and the combined solution can be implemented.
  • some operations in the processes of each method embodiment are optionally combined, and/or the order of some operations is optionally changed.
  • the execution order between the steps of each process is only exemplary and does not constitute a limitation on the execution order between the steps. Other execution orders are possible between the steps. It is not intended that the order of execution described is the only order in which these operations may be performed.
  • One of ordinary skill in the art will recognize various ways to reorder the operations described herein.
  • the process details involved in a certain embodiment herein are also applicable to other embodiments in a similar manner, or different embodiments can be used in combination.
  • each method embodiment can be implemented individually or in combination.
  • the electronic device in the embodiment of the present application includes a corresponding hardware structure and/or software module to perform each function.
  • the embodiments of this application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or computer software driving the hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art can use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of the technical solutions of the embodiments of the present application.
  • Embodiments of the present application can divide the electronic device into functional units according to the above method examples.
  • each functional unit can be divided corresponding to each function, or two or more functions can be integrated into one processing unit.
  • the above integrated units can be implemented in the form of hardware or software functional units. It should be noted that the division of units in the embodiment of the present application is schematic and is only a logical function division. In actual implementation, there may be other division methods.
  • FIG. 13 shows a schematic block diagram of a memory management device provided in an embodiment of the present application.
  • the device may be the above-mentioned first electronic device or a component with corresponding functions.
  • the device 1700 may exist in the form of software, or may be a chip that can be used in a device.
  • the apparatus 1700 includes a processing unit 1702.
  • the processing unit 1702 may be used to support S101, S104, etc. shown in FIG. 8, and/or other processes for the solutions described herein.
  • the device 1700 may also include a communication unit 1703.
  • the communication unit 1703 can also be divided into a sending unit (not shown in Figure 13) and a receiving unit (not shown in Figure 13).
  • the sending unit is used to support the device 1700 in sending information to other electronic devices.
  • the receiving unit is used to support the device 1700 to receive information from other electronic devices.
  • the device 1700 may also include a storage unit 1701 for storing program codes and data of the device 1700.
  • the data may include but is not limited to original data or intermediate data.
  • the processing unit 1702 can be a controller or the processor 401 and/or 408 shown in Figure 5; for example, it can be a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure.
  • the processor can also be a combination that implements computing functions, such as a combination of one or more microprocessors, a combination of DSP and microprocessors, and so on.
  • the communication unit 1703 may include the transceiver 404 shown in FIG. 5 , and may also include a transceiver circuit, a radio frequency device, etc.
  • the storage unit 1701 may be the memory 403 shown in FIG. 5 .
  • An embodiment of the present application also provides an electronic device, including one or more processors and one or more memories.
  • the one or more memories are coupled to one or more processors.
  • the one or more memories are used to store computer program codes.
  • the computer program codes include computer instructions.
  • when the computer instructions are run on the electronic device, the electronic device is caused to execute the above-mentioned relevant method steps to implement the methods in the above-mentioned embodiments.
  • An embodiment of the present application also provides a chip system, including: a processor coupled to a memory, where the memory is used to store programs or instructions; when the programs or instructions are executed by the processor, the chip system implements the method in any of the above method embodiments.
  • there may be one or more processors in the chip system.
  • the processor can be implemented in hardware or software.
  • the processor may be a logic circuit, an integrated circuit, or the like.
  • the processor may be a general-purpose processor implemented by reading software code stored in memory.
  • the memory may be integrated with the processor or may be provided separately from the processor, which is not limited by this application.
  • the memory may be a non-transitory memory, such as a read-only memory (ROM), which can be integrated on the same chip as the processor or provided separately on a different chip.
  • this application places no specific limitation on the type of the memory or on the arrangement of the memory relative to the processor.
  • the chip system can be a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processing circuit (DSP), a microcontroller unit (MCU), a programmable logic device (PLD), or another integrated chip.
  • each step in the above method embodiment can be completed by an integrated logic circuit of hardware in the processor or instructions in the form of software.
  • the method steps disclosed in conjunction with the embodiments of this application can be directly implemented by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • Embodiments of the present application also provide a computer-readable storage medium.
  • Computer instructions are stored in the computer-readable storage medium.
  • when the computer instructions are run on the electronic device, the electronic device is caused to execute the above related method steps to implement the methods in the above embodiments.
  • An embodiment of the present application also provides a computer program product.
  • when the computer program product is run on a computer, it causes the computer to perform the above related steps to implement the methods in the above embodiments.
  • embodiments of the present application also provide a device.
  • the device may be a component or module.
  • the device may include a connected processor and a memory.
  • the memory is used to store computer execution instructions.
  • when the device is running, the processor can execute the computer execution instructions stored in the memory, so that the device executes the methods in the above method embodiments.
  • the electronic devices, computer-readable storage media, computer program products, or chips provided by the embodiments of the present application are all used to execute the corresponding methods provided above; therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above, which are not described again here.
  • the electronic device includes corresponding hardware and/or software modules that perform each function.
  • the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or computer software driving the hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art can use different methods to implement the described functions in conjunction with the embodiments for each specific application, but such implementations should not be considered to be beyond the scope of this application.
  • This embodiment can divide the electronic device into functional modules according to the above method examples.
  • each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is schematic and is only a logical function division. In actual implementation, there may be other division methods.
  • the disclosed method can be implemented in other ways.
  • the terminal device embodiments described above are only illustrative.
  • the division of modules or units is only a logical function division.
  • there may be other division methods; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, indirect coupling or communication connection of modules or units, which may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or they may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or software functional units.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of this application.
  • the aforementioned storage media include: flash memory, mobile hard disk, read-only memory, random access memory, magnetic disk or optical disk and other media that can store program instructions.


Abstract

A memory management method and an electronic device, relating to the field of terminal technology, capable of performing memory management at the granularity of an application's business or functional modules and improving the memory management performance of the electronic device. The method is applied to an electronic device and includes: detecting the running state of a first application; upon detecting that the running state of the first application switches from a first running state to a second running state, processing part of the memory used by the first application; the part of the memory is memory associated with a target business of the first application; the target business is a non-critical business of the first application in the second running state.

Description

Memory management method and electronic device
This application claims priority to Chinese patent application No. 202210912594.8, entitled "Memory management method and electronic device" and filed with the China National Intellectual Property Administration on July 30, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of terminal technology, and in particular to a memory management method and an electronic device.
Background
At present, with the popularization of electronic devices such as mobile phones, users have increasingly high requirements for application fluency. In an electronic device, memory is one of the most important system resources. If the system's free memory is insufficient, the fluency of applications is greatly reduced, applications stutter, and the user experience suffers. Therefore, there is an urgent need for an effective memory management method for improving the performance of electronic devices.
Summary
To solve the above technical problem, embodiments of this application provide a memory management method and an electronic device. The technical solutions provided by the embodiments of this application can perform memory management at the granularity of an application's business or functional modules, improving the memory management performance of the electronic device.
To achieve the above technical purpose, the embodiments of this application provide the following technical solutions:
第一方面,提供一种内存管理方法,该方法应用于电子设备或能够实现电子设备功能的组件(比如芯片系统),所述方法包括:
检测第一应用的运行状态;检测到第一应用的运行状态由第一运行状态切换至第二运行状态,对所述第一应用使用的部分内存进行处理;所述部分内存为所述第一应用的目标业务关联的内存;所述目标业务为所述第一应用在所述第二运行状态下的非关键业务。
示例性的,以第一应用是导航应用为例,如图6,检测到导航应用的状态由前台运行状态(第一运行状态的一个示例)切换至后台播放状态(第二运行状态的一个示例),手机可以对导航应用使用的非关键业务的内存(比如渲染线程、用户界面线程使用的内存)进行处理。
其中,对内存进行处理,包括对内存进行回收。首先,由于是基于应用的功能模块或业务的粒度进行内存处理,因此,内存处理的精细度更高。在一些场景中,对应用的部分功能模块(比如暂且不会使用的功能模块)或非关键业务所使用的内存数据进行处理,不至于影响整个应用的正常运行,能够提升应用的保活程度,降低应用被查杀或异常退出的概率。比如,对于某些后台应用,电子设备可以仅对后台应用的部分功能模块或非关键业务所使用的内存进行回收,如此,在尽可能保证后台应用存活的情况下,较大程度地释放应用的内存,增强电子设备的流畅度和稳定性,提高电子设备的内存管理效率。
其次,相比于相关技术中,依赖onTrimMemory机制,期望应用主动释放内存数据,导致应用实际上并未释放内存数据,系统的内存压力仍然较大,本申请实施例的技术方案,不依赖应用主动释放内存数据。具体的,本申请实施例中,在检测到应用处于生命周期的相应运行状态时,电子设备可以自动对应用的部分功能模块或非关键业务的内存数据进行处理。如此,一方面,有助于缓解系统的内存压力。另一方面,电子设备从应用的部分功能模块或非关键业务回收相应内存数据,使得该应用占用的总内存有所降低,因此,该应用被查杀或异常退出的概率将大幅降低。
在一种可能的设计中,检测到第一应用的运行状态由第一运行状态切换至第二运行状态,对所述第一应用使用的部分内存进行处理,包括:检测到第一应用的运行状态由第一运行状态切换至第二运行状态,且所述第一应用处于所述第二运行状态的时长达到第一阈值,对所述第一应用使用的部分内存进行处理。
示例性的,如图11,检测到在t1时刻,导航应用(第一应用的示例)的运行状态由前台运行状态(第一运行状态的示例)切换至后台播放状态(第二运行状态的示例),且导航应用处于后台播放状态的时长(时长t1-t1’)达到第一阈值,对导航应用使用的部分内存(渲染线程、用户界面线程使用的内存)进行处理。
如此,在应用切换运行状态后,电子设备可以延迟处理,换句话说,电子设备可以在应用切换运行状态后的一段时间后进行内存回收,以确保应用的运行状态稳定。
在一种可能的设计中,所述目标业务包括第一业务和/或第二业务;所述第一业务关联的内存包括第一部分内存,所述第二业务关联的内存包括第二部分内存;
所述第一部分内存为:在第一时间段内未被所述第一应用使用的页面;
所述第二部分内存为:在第二时间段内未被所述第一应用使用的压缩页面。
示例性的,如图6,以应用由前台运行状态切换至后台播放状态为例,在后台播放状态下,第一部分内存包括渲染线程、用户界面线程使用的内存。第一部分内存在t1-t2时间段内未被第一应用使用。检测到应用由前台运行状态切换至后台播放状态,手机可以对渲染线程、用户界面线程等非关键业务使用的内存进行回收。
以导航应用由后台播放状态切换至后台服务状态为例,在后台服务状态下,第一部分内存包括桌面(surface)、媒体(media)使用的内存。第一部分内存在t2-t3时间段内未被导航应用使用。检测到应用由后台播放运行状态切换至后台服务状态,手机可以对surface、media等非关键业务使用的内存进行回收。
以导航应用由后台服务状态切换至后台缓存状态为例,在后台缓存状态下,第一部分内存包括服务(service)使用的内存,第二部分内存包括渲染线程、用户界面线程使用的内存。第一部分内存在t3-t4时间段内未被导航应用使用,第二部分内存在t1-t3时间段(第二时间段)内均未被导航应用使用。检测到应用由后台服务运行状态切换至后台缓存状态,手机可以对service、渲染线程、用户界面线程等非关键业务使用的内存进行回收。
以导航应用由后台缓存状态切换至浅冻结状态为例,在浅冻结状态下,第二部分内存包括service、media、surface使用的内存。其中,service的内存在t3-t4时间段(第二时间段)内均未被导航应用使用,media、surface的内存在t2-t4时间段(第二时间段)内未被导航应用使用。检测到导航应用由后台缓存运行状态切换至浅冻结状态,手机可以对service、media、surface等非关键业务使用的内存进行回收。
以导航应用由浅冻结状态切换至深度冻结状态为例,在深度冻结状态下,第一部分内存包括目标(object)、类(class)、方法(method)使用的内存。object、class、method的内存在t5-t6时间段(第一时间段)内未被导航应用使用。检测到导航应用由浅冻结运行状态切换至深度冻结状态,手机可以对object、class、method等非关键业务使用的内存进行回收。
在一种可能的设计中,所述第二时间段的时长大于所述第一时间段的时长。
在一种可能的设计中,采用第一压缩方式,对第一业务的内存进行处理,采用第二压缩方式,对第二业务的内存进行处理。
如此,对于短期内不被应用程序使用的内存数据和长期内不被应用程序使用的内存数据,可以使用不同的压缩方式进行内存处理,提升内存处理性能。
在一种可能的设计中,所述第二运行状态包括第一后台运行状态;
检测到第一应用的运行状态由第一运行状态切换至第二运行状态,对所述第一应用使用的部分内存进行处理,包括:
检测到所述第一应用的运行状态由所述第一运行状态切换至第一后台运行状态,对所述第一业务关联的第一部分内存进行压缩。第一部分内存可以是短期(比如第一时间段)内不被应用程序使用的内存。
如此,对于短期内用不到的内存数据,可以对这部分内存数据进行压缩,并存储至压缩空间,以便节省一部分内存空间。
在一种可能的设计中,所述第二运行状态包括第一后台运行状态;
检测到第一应用的运行状态由第一运行状态切换至第二运行状态,对所述第一应用使用的部分内存进行处理,包括:
检测到所述第一应用的运行状态由所述第一运行状态切换至所述第一后台运行状态,将所述第二业务关联的第二部分内存换出至磁盘空间。第二部分内存可以是长期(比如第二时间段)不被应用程序使用的内存。
如此,对于长期内用不到的压缩内存数据(比如压缩后的内存页),可以对这部分压缩内存数据进行换出处理,比如通过落盘机制,换出至磁盘空间(比如Flash器件上),降低内存的占用。
在一种可能的设计中,所述第一运行状态为前台运行状态。
在一种可能的设计中,所述第一运行状态为第二后台运行状态。
在一种可能的设计中,所述方法还包括:
检测到所述第一应用的运行状态由所述第一后台运行状态切换至第三后台运行状态;
对所述第一应用的第三部分内存进行压缩,所述第三部分内存为:所述第一时间段内未被所述第一应用使用的内存;
和/或,
将所述第一应用的第四部分内存换出至磁盘空间,所述第四部分内存为:所述第二时间段内未被所述第一应用使用的内存;所述第四时间段的时长大于所述第三时间段的时长。
在一种可能的设计中,对所述第一应用使用的部分内存进行处理,包括:
获取所述部分内存对应的虚拟内存空间VMA以及所述VMA对应的链表;所述链表包括所述部分内存;
对所述链表中的所述部分内存进行处理。
在一种可能的设计中,所述方法还包括:
显示第一界面;
接收用户在所述第一界面上输入的操作,所述操作用于开启内存管理功能。
在一种可能的设计中,所述第一后台运行状态包括如下状态:后台播放状态、后台服务状态、后台缓存状态、浅冻结状态、深度冻结状态;
其中,所述后台播放状态下,所述第一应用在后台执行第一任务。示例性的,后台播放状态,指应用切换到后台运行,且不再呈现图形界面,但是应用的功能或任务仍在运行。例如,音乐应用在后台运行音乐播放任务(第一任务的一个示例),导航应用在后台运行导航任务
所述后台服务状态下,所述第一应用在后台提供后台服务,所述第一应用在后台不执行所述第一任务。示例性的,指后台服务类应用,后台服务类应用主要实现后台数据采集,消息推送或者常驻中断等待服务,例如蓝牙连接,再例如应用可以在后台推送消息,再例如应用可以采集一些数据,以便向用户推送一些消息。
所述后台缓存状态下,所述第一应用在后台不执行所述第一任务,且不提供后台服务,且所述第一应用处于后台运行状态的时长达到第一时长。
所述浅冻结状态下,所述第一应用在后台不执行所述第一任务,且不提供后台服务,且所述第一应用处于后台运行状态的时长达到第二时长;所述第二时长大于所述第一时长。
所述深度冻结状态下,所述第一应用在后台不执行所述第一任务,且不提供后台服务,且所述第一应用处于后台运行状态的时长达到第三时长;所述第三时长大于所述第二时长。
在一种可能的设计中,在所述后台播放状态下,所述第一业务包括界面显示相关的业务。示例性的,如图6,界面显示相关的业务包括渲染线程、用户界面线程执行的非关键业务。后台播放状态下,电子设备可以对应用的渲染线程、用户界面线程使用的内存进行回收,以降低应用都内存占用。
在所述后台服务状态下,所述第一业务包括所述第一任务(第一应用不在后台执行的任务)对应的业务。示例性的,如图6,后台服务状态下,第一应用通常不在后台执行media、surface对应的业务。检测到应用程序进入后台服务状态,电子设备可以对media、surface使用的内存进行压缩,得到压缩后的内存数据。
所述后台缓存状态下,所述第一业务包括后台服务,所述第二业务包括界面显示相关的业务。示例性的,如图6,后台缓存状态下,应用不再执行后台服务(比如service),且长期未切换到前台显示界面,第一业务包括后台服务,第二业务包括渲染线程、用户界面线程执行的业务。检测到应用程序进入后台缓存状态,电子设备可以对短期内未被应用使用的service的相关内存进行压缩,得到压缩后的内存数据。此外,电子设备可以对长期未被应用使用的渲染线程、用户界面线程相关的压缩内存进行换出。
所述浅冻结状态下,所述第二业务包括:所述第一任务对应的业务、后台服务。示例性的,如图6,浅冻结状态下,第二业务包括:media、surface(第一任务对应的业务)、service(后台服务)。检测到应用程序进入浅冻结状态,电子设备可以对长期未被应用使用的media、surface、service的压缩内存进行换出。
所述深度冻结状态下,所述第一业务包括对象、类、方法对应的业务。示例性的,如图6,检测到应用进入深度冻结状态,电子设备可以对对象、类、方法使用的内存进行压缩。
第二方面,提供一种内存管理装置,所述装置应用于电子设备或支持电子设备功能的组件(比如芯片系统),所述装置包括:
处理单元,用于检测第一应用的运行状态;检测到第一应用的运行状态由第一运行状态切换至第二运行状态,对所述第一应用使用的部分内存进行处理;所述部分内存为所述第一应用的目标业务关联的内存;所述目标业务为所述第一应用在所述第二运行状态下的非关键业务。
在一种可能的设计中,检测到第一应用的运行状态由第一运行状态切换至第二运行状态,对所述第一应用使用的部分内存进行处理,包括:检测到第一应用的运行状态由第一运行状态切换至第二运行状态,且所述第一应用处于所述第二运行状态的时长达到第一阈值,对所述第一应用使用的部分内存进行处理。
在一种可能的设计中,所述目标业务包括第一业务和/或第二业务;所述第一业务关联的内存包括第一部分内存,所述第二业务关联的内存包括第二部分内存;
所述第一部分内存为:在第一时间段内未被所述第一应用使用的页面;
所述第二部分内存为:在第二时间段内未被所述第一应用使用的压缩页面。
在一种可能的设计中,所述第二时间段的时长大于所述第一时间段的时长。
在一种可能的设计中,所述第二运行状态包括第一后台运行状态;
检测到第一应用的运行状态由第一运行状态切换至第二运行状态,对所述第一应用使用的部分内存进行处理,包括:
检测到所述第一应用的运行状态由所述第一运行状态切换至第一后台运行状态,对所述第一业务关联的第一部分内存进行压缩。
在一种可能的设计中,所述第二运行状态包括第一后台运行状态;
检测到第一应用的运行状态由第一运行状态切换至第二运行状态,对所述第一应用使用的部分内存进行处理,包括:
检测到所述第一应用的运行状态由所述第一运行状态切换至所述第一后台运行状态,将所述第二业务关联的第二部分内存换出至磁盘空间。
在一种可能的设计中,所述第一运行状态为前台运行状态。
在一种可能的设计中,所述第一运行状态为第二后台运行状态。
在一种可能的设计中,所述处理单元,还用于:
检测到所述第一应用的运行状态由所述第一后台运行状态切换至第三后台运行状态;
对所述第一应用的第三部分内存进行压缩,所述第三部分内存为:所述第一时间段内未被所述第一应用使用的内存;
和/或,
将所述第一应用的第四部分内存换出至磁盘空间,所述第四部分内存为:所述第二时间段内未被所述第一应用使用的内存;所述第四时间段的时长大于所述第三时间段的时长。
在一种可能的设计中,对所述第一应用使用的部分内存进行处理,包括:
获取所述部分内存对应的虚拟内存空间VMA以及所述VMA对应的链表;所述链表包括所述部分内存;
对所述链表中的所述部分内存进行处理。
在一种可能的设计中,所述装置还包括:
显示单元,用于显示第一界面;
输入单元,用于接收用户在所述第一界面上输入的操作,所述操作用于开启内存管理功能。
在一种可能的设计中,所述第一后台运行状态包括如下状态:后台播放状态、后台服务状态、后台缓存状态、浅冻结状态、深度冻结状态;
其中,所述后台播放状态下,所述第一应用在后台执行第一任务;
所述后台服务状态下,所述第一应用在后台提供后台服务,所述第一应用在后台不执行所述第一任务;
所述后台缓存状态下,所述第一应用在后台不执行所述第一任务,且不提供后台服务,且所述第一应用处于后台运行状态的时长达到第一时长;
所述浅冻结状态下,所述第一应用在后台不执行所述第一任务,且不提供后台服务,且所述第一应用处于后台运行状态的时长达到第二时长;所述第二时长大于所述第一时长;
所述深度冻结状态下,所述第一应用在后台不执行所述第一任务,且不提供后台服务,且所述第一应用处于后台运行状态的时长达到第三时长;所述第三时长大于所述第二时长。
在一种可能的设计中,在所述后台播放状态下,所述第一业务包括界面显示相关的业务;
在所述后台服务状态下,所述第一业务包括所述第一任务对应的业务;
所述后台缓存状态下,所述第一业务包括后台服务,所述第二业务包括界面显示相关的业务;
所述浅冻结状态下,所述第二业务包括:所述第一任务对应的业务、后台服务;
所述深度冻结状态下,所述第一业务包括对象、类、方法对应的业务。
第三方面,本申请实施例提供一种电子设备,该电子设备具有实现如上述任意方面及其中任一种可能的实现方式中所述的方法的功能。该功能可以通过硬件实现,也可以通过硬件执行相应地软件实现。该硬件或软件包括一个或多个与上述功能相对应的模块。
第四方面,本申请实施例提供一种计算机可读存储介质。计算机可读存储介质存储有计算机程序(也可称为指令或代码),当该计算机程序被电子设备执行时,使得电子设备执行第一方面或第一方面中任意 一种实施方式的方法。
第五方面,本申请实施例提供一种计算机程序产品,当计算机程序产品在电子设备上运行时,使得电子设备执行第一方面或第一方面中任意一种实施方式的方法。
第六方面,本申请实施例提供一种电路系统,电路系统包括处理电路,处理电路被配置为执行第一方面或第一方面中任意一种实施方式的方法。
第七方面,本申请实施例提供一种芯片系统,包括至少一个处理器和至少一个接口电路,至少一个接口电路用于执行收发功能,并将指令发送给至少一个处理器,当至少一个处理器执行指令时,至少一个处理器执行第一方面或第一方面中任意一种实施方式的方法。
附图说明
图1A为本申请实施例提供的内存划分的示意图;
图1B为本申请实施例提供的基于链表管理内存的示意图;
图1C为本申请实施例提供的内存页被使用时的链表管理机制的示意图;
图1D为本申请实施例提供的内存不足时不活跃链表中的回收过程的示意图;
图1E为本申请实施例提供的内存压缩、换出机制的示意图;
图1F为本申请实施例提供的内存不足时活跃链表中的衰退过程的示意图;
图1G为相关技术中的内存管理机制的示意图;
图2为本申请实施例提供的基于框架管理内存的示意图;
图3为本申请实施例提供的一种电子设备的结构示意图;
图4为本申请实施例提供的一种电子设备的软件结构示意图;
图5为本申请实施例提供的另一种电子设备的结构示意图;
图6为本申请实施例提供的内存管理方法的场景示意图;
图7为本申请实施例提供的虚拟内存空间与链表的示意图;
图8为本申请实施例提供的内存管理方法的示意图;
图9为本申请实施例提供的后台缓存状态下的内存管理操作的示意图;
图10为本申请实施例提供的浅冻结状态下的内存管理操作的示意图;
图11为本申请实施例提供的内存管理方法的场景示意图;
图12为本申请实施例提供的界面的示意图;
图13为本申请实施例提供的内存管理装置的结构示意图。
具体实施方式
在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;本文中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本申请实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
在本申请实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
首先,对本申请实施例涉及的一些术语进行介绍:
1、内存分页机制
目前,可以基于内存单元实现内存管理。内存单元包括但不限于内存页(page)。在一些方案中,可以将内存按照一定大小(比如4K)分割成内存页。通过内存分页机制,能够提升访问内存的效率。
需要说明的是,随着技术的演进,内存单元还可以为其他实现方式,不局限于内存页的方式。应理解,本申请实施例中以内存页作为内存单元举例,但这不构成对内存单元的限制。
2、匿名页和文件页
用户态内存包括匿名页(anonymous page)和文件页(file-backed page)。
其中,文件页,是指在外部存储空间(比如磁盘)中具有源备份页的内存页面(可以简称为内存页或页面),文件页与外部存储空间中的源备份页具有映射关系。在一些实施例中,文件页可以用于缓存文件 数据。示例性的,文件页包括核心库代码,应用代码或者图标资源等。
作为一种可能的实现方式,程序可以通过基础操作,比如读/映射(read/mmap)从磁盘读取文件,系统可以申请页面来存储从磁盘读取的内容,这些用来存储磁盘文件内容的页面,可以视为一种文件页。
匿名页,是指在外部存储空间中没有与其对应文件的内存页,如进程的堆、栈等使用的页面可以是匿名页。匿名页可以用于存储运行过程中进程的临时计算结果。
其中,外部存储空间,指的是除内存之外的存储空间。
除了文件页和匿名页,内存还可以包括内核或者内核中模块自主管理的内存,这部分内存可以用于存储维持系统正常运行的基础数据结构与驱动类数据。
在进行内存回收时,对于不同类型的内存,操作系统可以按照不同的方式、比例进行回收。
3、三种内存区域
在一些方案中,如图1A所示,电子设备的内存可以划分成三个部分:
第一部分内存:Java虚拟机的堆内存(on-heap memory)。虚拟机基于自身的内存管理机制,采用垃圾收集器(garbage collection,GC)管理堆内存。作为一种可能的实现方式,虚拟机调用mmap接口分配堆内存。操作系统会记录被虚拟机分配的这部分堆内存。
可选的,如图1A,虚拟机的堆内存可以用于存放对象(object)、类(class)、方法(method)中的一个或多个。
第二部分内存:直接(native)内存。示例性的,直接内存可以是堆外内存。与堆内存由虚拟机管理相比,直接内存由操作系统管理,如此,能够在一定程度上减少垃圾回收对应用程序造成的影响。
可选的,直接内存中的数据主要包括:与显示功能关联的数据、与渲染功能关联的数据以及与系统服务关联的数据。比如,如图1A,直接内存中存放用户界面(user interface,UI)线程、渲染(Render)线程等线程运行相关的数据。再比如,直接内存中存放线程的运行时元数据。再比如,直接内存中存放推送服务相关的数据。
作为一种可能的实现方式,操作系统通过诸如C++分配器分配直接内存。可选的,C++分配器包括但不限于jemalloc、scudo等。以采用C++分配器为例,C++分配器可以识别当前申请内存的线程的标识(比如线程名称),并根据线程的标识确定该线程的功能。比如C++分配器识别出当前申请内存的线程的线程名称是RenderThread,则可以确定该线程是渲染线程,该线程名称对应的功能是对应用的界面进行渲染。
第三部分内存:流媒体内存。目前芯片架构中基本采用硬件编解码来加速视频、音频的渲染。为了能够让数据流在CPU和硬件编解码器之间流转,业界通常采用(ION)或者dma_buf分配器来识别内存空间共性。通过ION或者dma_buf属性识别,即可明确这部分内存是用于做播放使用的内存。
可选的,流媒体内存可以用于存放显示类(surface)、多媒体类(media)、服务类(service)相关的数据。
4、操作系统统一管理内存
由于内存资源有限,当出现内存不足时,操作系统可以根据内存的使用频度对不常用的内存进行回收。在一些方案中,可以基于内存水线(watermark)进行内存回收。
作为一种可能的实现方式,在接收到内存分配请求时,系统检测剩余内存的数量,若剩余内存低于设定的低(low)水线阈值,则唤醒回收线程(kswapd)来实现异步内存回收,以维持系统内存剩余量,满足内存分配需求。
示例性的,以操作系统(如)为例,回收线程可以将空闲内存维持在100~200M。当空闲内存的数量低于100(即水线)时,回收线程可以对已经使用的内存进行扫描,并对部分已使用的内存进行回收。其中,回收线程可以根据参数Swappiness以及已回收内存的比例,计算待回收内存的数目,并根据待回收内存的数目对待回收的内存进行回收。该内存回收机制中,回收线程计算待回收内存的数目时,并未考虑各应用实际的内存实际需求,可能会导致内存过度回收或者内存回收量不够。比如,在电子设备的空闲内存很少时,电子设备过度回收大量内存,导致正常运行的应用异常退出,影响电子设备的性能。
5、链表以及基于链表的内存管理机制
如图1B,电子设备的整机内存可以包括内核内存、文件页、匿名页、空闲内存、其他内存。不同应用可以使用不同的内存,以支持应用的运行。比如,如图1B,应用1使用文件页、匿名页,应用2使用文件页、匿名页。需要说明的是,应用1、应用2使用的文件页可以是相同的文件页,此种情况下,该文件页可以称为共享页。或者,应用1、应用2使用的文件页可以是不同的文件页。类似的,应用1、应用2使用的匿名页可以是相同或不同的匿名页。
在一些方案中,内核可以通过最近最少使用(least recently used,LRU)链表对文件页、匿名页进行管理。根据链表的活跃状态,链表可分为两级:活跃和不活跃链表。其中,如图1B所示,活跃链表包括:用于管理匿名页的活跃匿名页链表,以及用于管理文件页的活跃文件页链表,不活跃链表包括:用于管理匿名页的不活跃匿名页链表,以及用于管理文件页的不活跃文件页链表。
链表中可以存储多个内存页面。比如,活跃文件页链表中可以存储多个活跃的文件页,不活文件页跃链表中可以存储多个不活跃的文件页。活跃匿名页链表中可以存储多个活跃的匿名页,不活跃匿名页链表中可以存储多个不活跃的匿名页。活跃的内存页可以是进程经常使用的内存页,不活跃的内存页可以是进程不经常使用的内存页。
可选的,每个内存页对应一个使用标识(比如标识位PG_referenced),该标识可以用于表示该内存页是否被使用(reference)过。
本申请实施例中,使用内存页,还可以称为访问内存页、调用内存页等。
5.1内存页被使用时基于链表的内存管理机制:
首先,介绍内存页被使用时,基于链表管理内存页的机制。如图1C,基于被使用的内存页所在的链表(活跃链表或不活跃链表)以及该内存页此前的使用标识,电子设备可以执行不同操作:
如图1C的(a),若活跃链表中的内存页A被使用,则电子设备将该内存页A的PG_referenced设置为1,表示该内存页A被使用过。
如图1C的(b),若不活跃链表中的内存页A被使用,并且该内存页A此前的标识位PG_referenced为0(比如此前未被使用过),则电子设备将该内存页A的标识位PG_referenced设置为1,表示该内存页A被使用过。
如图1C的(c),若不活跃链表中的内存页A被使用,并且该内存页A此前的标识位PG_referenced为1(比如此前已被使用过)。那么,考虑到该内存页A此前已被使用过,且本次又被使用,内存页A被使用的概率较高,电子设备可以将该内存页A从不活跃链表移动到活跃链表,并且可以将标识位PG_referenced设置为0,表示内存页A移动至活跃链表之后未被使用过。
5.2系统内存不足时基于链表的内存管理机制:
如下,介绍空闲内存不足(比如空闲内存低于水线)时,基于链表管理内存页的机制。如图1D,电子设备的内存不足时,可以优先从不活跃链表中进行内存页回收。作为一种可能的实现方式,电子设备扫描不活跃链表中的内存页,并根据该内存页的标识位确定是否回收该内存页。
以不活跃链表的尾部的内存页A为例,在图1D的(a)的示例中,内存页A的标识位PG_referenced为1,意味着该内存页A被使用过,此种情况下,可以认为该内存页A在短期内被再次使用的概率比较高,为避免短期内内存页A被再次使用时无法快速找到内存页A,则电子设备跳过此内存页A,不对该内存页进行回收,并将此内存页A的标识位PG_referenced设置为0。
在图1D的(b)所示示例中,内存页A的标识位PG_referenced为0,意味着在一定时间内内存页A没有被使用过。那么,为了提升整机的空闲内存,电子设备可以将内存页A回收到内存回收区域中。随着不活跃链表的尾部的内存页被回收,不活跃链表中靠前的内存页向尾部移动。
可选的,内存回收区域包括但不限于压缩空间(比如zram)、磁盘空间(比如硬盘等)。
作为一种可能的实现方式,电子设备将内存页回收到压缩空间,可以实现为:对内存页进行压缩,并将压缩后的内存页存储到压缩空间。
作为一种可能的实现方式,电子设备将内存页回收到磁盘空间,可以实现为:将内存页换出(swap)到磁盘空间。
在一些实施例中,电子设备可以通过页面压缩线程对内存中存储的一些页面进行回收。页面压缩线程回收内存的方式一般为压缩。示例性的,如图1E的(a),将匿名页压缩,得到压缩匿名页,并可以将压缩匿名页存储到内存的压缩空间中。压缩匿名页的内存占用小于对应的匿名页的内存占用。在部分场景中还可以释放内存中已分配的匿名页(如不常用的匿名页等),匿名页释放后,不再占用内存。从而实现内存回收,增加空闲内存供应用程序使用。
后续,检测到进程需要使用已压缩的页面的请求后,如图1E的(a),可从压缩空间中解压该页面,并将解压后的页面分配给进程使用。
在一些实施例中,电子设备可以通过页面交换线程对内存中的压缩页面进行回收。页面交换线程回收内存的方式为换出内存(比如压缩空间)中的压缩页面至磁盘。示例性的,如图1E的(b),将内存(比如压缩空间)中的压缩匿名页换出至磁盘中进行存储,从而降低内存占用,便于后续进行内存分配。比如, 页面交换线程将双倍速率(double data rate,DDR)同步动态随机存取存储器(synchronous dynamic random access memory,SDRAM)中存储的压缩匿名页,通过输入/输出(input/output,I/O)接口发送到磁盘中进行存储。从而降低DDR SDRAM中的内存占用,便于后续应用程序申请DDR SDRAM中的内存。
后续,检测到进程需要使用已换出的压缩页面的请求后,如图1E的(b),可将磁盘空间中的压缩页面(比如压缩匿名页)换回至内存(比如压缩空间)并解压,将解压后的页面分配给进程使用。
上述的换出过程,还可以理解为把随机存取存储器(random access memory,RAM)的数据存储到只读存储器(read-only memory,ROM)。如此,可以将一部分ROM的存储空间用作RAM,实现了对RAM存储空间的扩展。对于RAM空间本身不够大的设备(比如2G、4G内存的手机),能够有显著的降低系统卡顿的效果。
作为一种可能的实现方式,内存不足时,伴随着不活跃链表中内存页的回收,处于活跃链表中的内存页也有衰退的过程,如图1F,以活跃链表的尾部的内存页A为例,衰退过程如下:
1、在图1F的(a)所示情形中,内存页A的标识位PG_referenced为1,那么,电子设备将内存页A的标识位PG_referenced设置为0。
2、在图1F的(b)所示情形中,内存页A的标识位PG_referenced为0,意味着内存页A在一定时间内没有被使用,那么,电子设备可以将内存页A移动到不活跃链表中。随着活跃链表的尾部的内存页移出活跃链表,活跃链表中靠前的内存页向尾部移动。
综上,在内存回收过程可以简化为如下步骤:活跃链表中的内存页可以向活跃链表的尾部移动,在一些情况下,活跃链表尾部满足条件的内存页会迁移到不活跃链表的头部,实现内存页在链表间的迁移。不活跃链表中的内存页可以向该不活跃链表的尾部迁移,电子设备可以从该不活跃链表的尾部回收满足条件的内存页。
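The migration and reclaim rules summarized above (Figures 1C and 1D) can be modelled as a toy two-level LRU. Representing pages as dicts with an `id` and a `referenced` flag (standing in for PG_referenced) is purely illustrative and not the kernel implementation:

```python
def touch(page, active, inactive):
    """Model of Figure 1C: using a page sets its flag; using an inactive
    page whose flag is already set promotes it to the active list head."""
    if any(p is page for p in active):
        page["referenced"] = True
    elif page["referenced"]:
        inactive.remove(page)
        active.insert(0, page)        # promote to the active list head
        page["referenced"] = False    # not yet used since promotion
    else:
        page["referenced"] = True

def reclaim_one(inactive, reclaimed):
    """Model of Figure 1D: scan the inactive list from its tail, skipping
    (and clearing) referenced pages, and reclaim the first unreferenced
    page, e.g. into compressed space or disk space."""
    for page in reversed(inactive):
        if page["referenced"]:
            page["referenced"] = False    # skipped this pass
        else:
            inactive.remove(page)
            reclaimed.append(page)
            return page
    return None
```

Referenced pages thus get a "second chance" before eviction, which keeps recently used pages quickly reachable while still freeing memory under pressure.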
通常,应用程序可以以一个或多个进程的形式运行在电子设备的软件系统中。某些情况下,应用程序的某些进程可以运行在前台,运行在前台的进程通常可以具有可视界面。由于前台进程通常与用户可视的界面有关,因此,前台进程运行的流畅性通常对电子设备的流畅性影响较大。同样,应用程序运行在前台时,通常具有可视界面,前台应用对电子设备的流畅度的影响较大。
某些情况下,应用程序的某些进程可以运行在后台,后台进程虽然通常不具有可视界面,但是,某些后台进程的运行对电子设备的流畅性的影响同样较大。比如,后台下载任务虽然不在前台,但其直接影响电子设备的响应时延,影响电子设备的流畅度。再例如,点击相机拍照后,照片的后台处理进程虽然不可见,但是其处理速度影响用户的拍照体验以及电子设备的流畅度。同样,某些运行在后台的应用程序,虽然不具有可视界面,但其性能对电子设备的流畅度有较大影响。
目前,应用的内存可以由操作系统统一管理。操作系统可以根据应用对内存页的使用频度,优先对目标应用的内存页进行回收,目标应用即对内存页的使用频度较低的应用。这种内存管理方案中,部分后台应用的活跃度可能较高,相应的,后台应用对内存页的使用频度随之升高。那么,在进行内存回收时,由于后台应用的内存页使用频度高于前台应用的内存页使用频度,因此导致前台应用使用的内存页被回收,影响前台应用的性能,进而影响电子设备运行的流畅度。尤其,当电子设备整机的内存规格(比如3G、4G)较低时,前台应用的内存页被回收的概率大幅提升,用户体验急剧下降。示例性的,如图1G,电子设备上运行有应用A-应用D,由于后台应用B、C、D使用内存页的频度较高,前台应用A使用内存页的频度较低,很可能导致前台应用A的内存页被回收,影响前台应用A的运行性能,进而影响电子设备的流畅度。
此外,这种内存管理方案中,也可能导致某些重要的后台应用的内存被回收,同样可能影响电子设备的流畅度。
6、基于框架管理内存的机制
在一些方案中,当系统内存不足时,系统可以根据应用的内存占用情况确定需要杀掉(kill)的应用。可选的,当空闲内存不足时,系统会优先杀掉内存占用量高的应用。比如,应用A占用1G内存,应用B占用300M内存、应用C占用200M内存,则系统可以优先杀掉占用内存最多的应用A。
除了操作系统的内核提供全局统一的内存管理机制外,操作系统还提供了基于框架或称基于接口的内存管理方式。比如,在安卓开源项目(Android open source project,AOSP)的机制或Purgeable机制中,当应用切换到后台运行时,可以将该应用暂时不需要使用的内存进行释放。这样一来,对于已释放部分内存的目标应用来说,由于其已释放部分内存,则其占用的总内存相对减少,当电子设备的内存不足时,由于目标应用已经基于Purgeable Memory机制释放掉部分内存,占用的内存较少, 因此,目标应用被系统杀掉(kill)的概率通常会降低,能够提升目标应用的保活程度。即,目标应用通过主动释放部分内存,可以在系统中存活更久,如此,能够提高用户使用该目标应用的体验。
可选的,电子设备释放的内存包括如下任一项或多项数据占用的内存:文件、图片、动态生成的视图控件等。
以onTrimMemory机制为例,操作系统可以检测应用程序的运行状态,当电子设备的内存不足,且检测到存在处于特定运行状态的应用程序时,操作系统向该应用程序发送通知消息,应用程序响应于该通知消息,调用onTrimMemory接口,以便执行onTrimMemory接口中的内存释放方法,释放部分内存。
其中,onTrimMemory接口由系统提供,应用程序的开发者(简称应用开发者)可以基于onTrimMemory接口实现内存释放方法。比如,应用开发者可以复写onTrimMemory接口,在onTrimMemory接口中定义应用程序的内存释放方法。后续,在不同的情况下,应用可以调用onTrimMemory接口释放自身的内存,以避免应用被系统直接杀掉,提升用户使用该应用的体验。
示例性的,onTrimMemory接口可以实现为如下格式:
onTrimMemory()
{
内存释放方法
...
}
其中,应用程序的运行状态可以为如下表1所示。
表1
示例性的,如图2,初始时,应用程序运行在前台,操作系统中的系统服务检测该应用程序运行状态。后续,应用程序切换到后台运行,系统服务监听到应用由前台运行切换至TRIM_MEMORY_BACKGROUND(应用已切换至后台运行)这一运行状态,则系统服务向该应用程序发送通知消息,触发应用程序调用onTrimMemory接口,执行onTrimMemory接口中的内存释放方法,释放部分内存,以降低系统的内存压力。
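图2所示“系统服务监听运行状态切换、通知应用调用onTrimMemory释放内存”的流程,可以用如下示意性的Python模拟来说明(类名、方法名仿照onTrimMemory机制命名,具体数值与实现均为本文虚构):

```python
class App:
    def __init__(self):
        self.cache_mb = 50  # 可释放的缓存内存(示例数值)

    def on_trim_memory(self, level):
        """模拟应用开发者实现的内存释放方法:切到后台时释放缓存,返回释放量。"""
        if level == "TRIM_MEMORY_BACKGROUND":
            freed, self.cache_mb = self.cache_mb, 0
            return freed
        return 0

class SystemService:
    def notify_state_change(self, app, new_state):
        """监听到应用切换至后台运行时,发送通知触发其释放内存。"""
        if new_state == "background":
            return app.on_trim_memory("TRIM_MEMORY_BACKGROUND")
        return 0
```

该模型也体现了正文指出的问题:释放效果完全取决于应用侧 on_trim_memory 的实现,若开发者不实现释放逻辑,系统内存压力并不会缓解。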
上述方案虽然能够释放一定数目的内存,但是onTrimMemory接口中的内存释放方法需要由应用开发者自行实现,即,系统将内存回收交给应用程序自身执行。因此,内存释放的效果很大程度上取决于应用开发者的开发水平和能力。又由于很多应用开发者并不知道哪些内存是可以释放的,或者基于其他原因,使得绝大多数应用退到后台后并不释放内存,或者释放内存的效果不佳,或者内存释放出错导致系统崩溃。
比如,应用开发者为了保证自己所开发应用的存活以及用户体验,不在onTrimMemory接口中实现后台应用的内存释放方法。这样一来,当系统整体的内存不足时,系统将直接杀掉该内存占用较高的后台应用。
此外,各应用的运行状态时刻发生变化,很有可能在执行应用的onTrimMemory方法的过程中,应用的运行状态突然变差,导致还没来得及释放该应用的内存,该应用就因高占用内存被杀掉,从而影响用户使用该应用的体验。
综上可见,目前内存释放方法的效果较差,不能保证电子设备的运行性能。
为了解决上述技术问题,本申请实施例提供一种内存管理方法,该方法中,电子设备可以检测应用的生命周期,并根据应用所处的生命周期的阶段,对应用对应的功能模块所使用的内存数据进行处理。考虑到应用在不同生命周期阶段的性能需求可能不同,应用处于不同的生命周期阶段时,应用的各功能模块的内存数据的处理方式可以不同。
比如,应用处于后台播放状态的情况下,对应用的用于界面显示的功能模块所使用的内存数据(比如渲染线程、UI线程使用的内存数据)进行压缩处理。应用处于后台缓存状态的情况下,对应用的用于界面显示的功能模块所使用的内存数据进行换出处理。
如此,能够契合应用在不同生命周期阶段的内存处理需求,换言之,能够在满足应用在相应生命周期阶段的性能需求的基础上,尽可能高效地从应用的功能模块处理内存,以便提升电子设备的空闲内存,进而提升电子设备的整机运行性能。
本申请实施例的内存管理方法可以应用在电子设备中,比如应用在采用AOSP系统或类似系统的电子设备中。可选的,电子设备可以采用mmap、标准分配器、内核分配器等方式分配内存。示例性的,电子设备例如可以为手机、平板电脑、个人计算机(personal computer,PC)、上网本等需要进行内存优化的设备,本申请对电子设备的具体形式不做特殊限制。
以电子设备为手机为例,图3示出了电子设备100a的硬件结构示意图。其他电子设备的结构可参见电子设备100a的结构。
电子设备100a可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本发明实施例示意的结构并不构成对电子设备100a的具体限定。在本申请另一些实施例中,电子设备100a可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
其中,USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口, USB Type C接口等。USB接口130可以用于连接充电器为电子设备100a充电,也可以用于电子设备100a与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100a的结构限定。在本申请另一些实施例中,电子设备100a也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100a的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为终端供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备100a的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100a中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100a上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在电子设备100a上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在本申请的一些实施例中,电子设备100a可以通过无线通信模块160(比如WLAN模块)和天线2与其他终端或服务器建立无线连接,以实现电子设备100a和其他终端或服务器之间的通信。
在一些实施例中,电子设备100a的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100a可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system, GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
电子设备100a通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极管或主动矩阵有机发光二极管(active-matrix organic light-emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Mini-LED,Micro-LED,Micro-OLED,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100a可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备100a可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100a可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100a在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备100a可以支持一种或多种视频编解码器。这样,电子设备100a可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100a的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100a的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100a使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。处理器110通过运行存储在内部存储器121的指令,和/或存储在设置于处理器中的存储器的指令,执行电子设备100a的各种功能应用以及数据处理。
电子设备100a可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100a可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100a接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备100a可以设置至少一个麦克风170C。在另一些实施例中,电子设备100a可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备100a还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放终端平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100a可以接收按键输入,产生与电子设备100a的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏194不同区域的触摸操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备100a的接触和分离。电子设备100a可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口195可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口195也可以兼容不同类型的SIM卡。SIM卡接口195也可以兼容外部存储卡。电子设备100a通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备100a采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备100a中,不能和电子设备100a分离。
需要说明的是,电子设备的结构也可以参考图5所示结构,电子设备可以具有比图5所示的结构更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
可选的,电子设备100a的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本发明实施例以分层架构的系统为例,示例性说明电子设备100a的软件结构。
图4是本发明实施例的电子设备100a的软件结构框图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图4所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。
电子设备中可能运行有一个或者多个应用,每个应用至少有一个对应的进程。一个进程有至少一个线程在执行任务(task)。也就是,电子设备中运行有多个线程。线程包括java线程、C/C++线程。
为了保证线程的正常运行,电子设备可以按照一定的策略为线程分配处理单元(比如CPU核)。线程被分配了处理单元后,可以通过该处理单元执行相应的任务。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图4所示,应用程序框架层可以包括第一服务、虚拟机、直接内存分配器以及mmap接口。
其中,第一服务用于检测应用的运行状态,当检测到应用的运行状态发生切换,则向内核层的内存管理模块发送通知消息。可选的,通知消息用于指示应用的运行状态。内存管理模块接收通知消息后,根据应用的运行状态,对应用的一个或多个功能模块的内存进行管理。
即,本申请实施例中,可以基于应用中的功能模块或业务类型这一粒度,对各功能模块的内存进行管理。比如,对应用的第一组件,采用第一方式对该第一组件的内存进行管理;对该应用的第二组件,采用第二方式对第二组件的内存进行管理。
可选的,应用的功能模块包括但不限于如下一项或多项:组件、线程、方法(method)、类(class)、对象(object)、surface(可以对应一块内存区域,该内存区域中存储有屏幕中待显示的图像的信息)、media。
可选的,组件包括但不限于如下任一项或多项:活动(activity)、碎片(fragment)、服务(service)、内容提供者(contentProvider)、广播接收器。
可选的,应用的线程包括但不限于如下一项或多项:渲染(Render)线程、用户界面(user interface,UI)线程。
一些示例中,不同的功能模块可以对应不同的业务类型。比如,上述渲染线程、UI线程可以对应界面显示这一业务类型。
可选的,应用的运行状态包括但不限于如下任一项或多项:前台运行状态、后台播放状态、后台服务状态、后台缓存状态、浅冻结状态与深度冻结状态。
其中,前台运行状态:指应用正在前台运行。此种运行状态下,应用通常可以呈现图形界面(比如UI),用户可以通过UI与应用进行交互。
后台播放状态:指应用切换到后台运行,且不再呈现图形界面,但是应用的功能或任务仍在运行。例如,音乐应用在后台运行音乐播放任务,导航应用在后台运行导航任务。
后台服务状态:指应用作为后台服务类应用运行。后台服务类应用主要实现后台数据采集、消息推送或常驻中断等待等服务,例如维持蓝牙连接,再例如在后台推送消息,再例如采集一些数据,以便向用户推送相应消息。
后台缓存状态:指应用切换到后台后,通常不再运行任务,且用户在短时间(比如第一时长)内没有操作该应用。这部分应用驻留在系统后台,主要是为了保证从后台切回前台时,能够保留之前在前台运行的浏览记录,以便用户能够接续操作应用,浏览相应内容。
浅冻结状态:指应用退到后台后,不再运行任务,且用户已有一定时间(比如第二时长)没有操作该应用。该运行状态下,该后台应用仅在一些特定系统事件发生时,响应系统行为,例如响应系统执行分辨率转换等。其中,第二时长大于第一时长。
深度冻结状态:指应用退到后台后,不再运行任务,且在较长一段时间(比如第三时长)内,用户没有再次使用该应用。此运行状态下,该后台应用不再响应系统行为,应用完全进入非运行状态。其中,第三时长大于第二时长。
需要说明的是,后台缓存状态、浅冻结状态、深度冻结状态,这三种运行状态下,应用均不运行任务,三种状态的区别在于应用驻留在后台的时长。当应用驻留在后台,且不运行任务的时长较短(比如第一时长),应用的运行状态处于后台缓存状态。当应用驻留在后台,且不运行任务的时长较长(比如第二时长),应用的运行状态处于浅冻结状态。当应用驻留在后台,且不运行任务的时长很长(比如第三时长),应用的运行状态处于深度冻结状态。处于后台缓存状态的应用,若在达到第二时长后,仍没有被用户操作(不运行任务),则可以切换到浅冻结状态。处于浅冻结状态的应用,若在达到第三时长后,仍没有被用户操作(不运行任务),则可以切换到深度冻结状态。
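上述按驻留后台且不运行任务的时长,区分后台缓存、浅冻结、深度冻结三种状态的规则,可以用如下示意性的Python函数表达(阈值t1、t2、t3对应正文中的第一、第二、第三时长,具体数值为本文假设):

```python
def background_state(idle_seconds, t1=60, t2=600, t3=3600):
    """根据应用驻留后台且不运行任务的时长,返回其运行状态。
    阈值 t1/t2/t3 仅为示例数值,分别对应正文的第一/第二/第三时长(t1<t2<t3)。"""
    if idle_seconds >= t3:
        return "深度冻结"
    if idle_seconds >= t2:
        return "浅冻结"
    if idle_seconds >= t1:
        return "后台缓存"
    return "后台运行(未达第一时长)"
```

随着驻留时长跨过各阈值,应用依次由后台缓存切换到浅冻结、再切换到深度冻结,与正文描述一致。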
本申请实施例中,电子设备可以检测应用的运行状态,并根据应用的运行状态,对应用的功能模块所使用的内存进行管理。示例性的,当第一服务检测到音乐应用的运行状态发生切换(比如由前台运行状态切换至后台播放状态),第一服务可以向内核层的内存管理模块发送通知消息,以指示音乐应用的运行状态。内存管理模块接收通知消息后,根据音乐应用的运行状态,对音乐应用的一个或多个功能模块的内存进行管理。比如,检测到音乐应用的运行状态切换至后台播放状态时,考虑到音乐应用可能在短期内被切换回前台运行,内存管理模块可以将用于显示的功能模块的内存页进行压缩,得到压缩内存页,以便在应用切换回前台时,能够快速解压并使用该内存页,实现界面显示功能。
再比如,音乐应用的运行状态切换至后台缓存状态,意味着用户将应用切回前台的概率较低,为了节省内存空间,内存管理模块可以将用于显示的功能模块的压缩内存页进行换出操作,比如换出(或称落盘)至存储器件(比如Flash)上。
可选的,应用处于后台缓存状态的情况下,内存管理模块还可以对该应用的其他功能模块(用于显示的功能模块之外的功能模块)中部分功能模块的内存进行压缩操作。如此,根据同一应用的不同功能模块的特性、运行需求,采用不同方式对不同功能模块的内存进行处理,以便尽可能保证应用在相应运行状态下的性能,同时提升电子设备的空闲内存。
比如,应用处于后台播放状态下,由于应用不再展示图形界面,因此,电子设备可以将该应用的用于界面显示的功能模块(比如渲染线程)所使用的内存数据进行压缩处理,而对支持后台运行的一些功能模块(比如service)所使用的内存数据不进行处理。如此,可以尽可能保证应用在后台播放状态下的正常运行,同时提升电子设备的空闲内存。
虚拟机,用于管理堆内存。作为一种可能的实现方式,虚拟机调用mmap接口分配堆内存。操作系统会记录被虚拟机分配的这部分堆内存。
直接内存分配器,用于分配直接内存。作为一种可能的实现方式,虚拟机调用mmap接口分配直接内存。
硬件抽象层(hardware abstraction layer,HAL)包括流媒体内存分配器。流媒体内存分配器,用于分配流媒体内存。可选的,流媒体内存分配器包括dma_buf内存分配器或ION内存分配器。
内核层是硬件和软件之间的层。内核层可至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
在本申请的一些实施例中,内核层可包括内存管理模块,用于对应用的内存进行管理。比如,从第一服务接收到通知消息(用于指示应用的运行状态)后,根据应用的运行状态,对应用的一个或多个功能模块的内存进行管理。
需要说明的是,图4所示软件架构仅是一个示例。电子设备的软件架构还可以是其他形态。比如,第一服务设置在其他层。本申请实施例对电子设备的软件架构不做限制。
如图5示出了电子设备的另一种可能的结构,电子设备可以包括处理器401(可选的,包括处理器408)、存储器403、收发器404等。
其中,上述各组件之间可包括一通路,用于在上述组件之间传送信息。
收发器404,用于诸如以太网,WLAN等协议与其他设备或通信网络通信。
处理器、存储器的详细内容请参考图3所示电子设备中相关结构的描述,这里不再赘述。
下面对本申请实施例提供的技术方案进行详细说明。可选的,电子设备中的物理内存可以划分为多个内存单元。可选的,内存单元包括但不限于内存页。下述实施例主要以内存页为内存单元为例进行说明,但本申请实施例的技术方案不限于此。
本申请实施例中,可以对应用的生命周期进行检测,并根据应用的生命周期对内存进行管理。比如,对于短期内用不到的内存数据,可以对这部分内存数据进行压缩,并存储至压缩空间,以便节省一部分内存空间。对于长期内用不到的压缩内存数据(比如压缩后的内存页),可以对这部分压缩内存数据进行换出处理,比如通过落盘机制,换出至磁盘空间(比如Flash器件上)。
可选的,应用的生命周期包括如下一个或多个运行阶段:前台运行、后台播放、后台服务、后台缓存、浅冻结、深度冻结。应用处于前台运行的阶段,也可以称为应用的运行状态是前台运行状态。类似的,应用处于后台播放的阶段,也可以称为应用的运行状态是后台播放状态。应用处于后台服务的阶段,也可以称为应用的运行状态是后台服务状态。应用处于后台缓存的阶段,也可以称为应用的运行状态是后台缓存状态。应用处于浅冻结的阶段,也可以称为应用的运行状态是浅冻结状态。应用处于深度冻结的阶段,也可以称为应用的运行状态是深度冻结状态。
如下,结合附图对应用的生命周期以及本申请实施例的技术方案进行介绍。
示例性的,以手机中的导航应用为例,如图6,假设导航应用初始时运行在前台,则为了保证导航应用在前台的正常运行,手机可以不对导航应用的内存进行处理。
t1时刻,手机检测到用户将导航应用切换至后台运行的操作,假设导航应用仍在后台运行导航功能,比如确定导航路线、播放导航路线语音,则确定导航应用进入后台播放状态。考虑到导航应用在后台运行时,不再显示图形界面(不再执行显示功能),因此,与界面显示相关的内存可以被处理。
又考虑到导航应用可能在短期内被切换回前台运行,作为一种可能的实现方式,在应用处于后台播放状态(此时应用虽然在后台运行,但可能仍在执行某些核心功能)下,处理与界面显示相关的内存,可以实现为:将该应用的与界面显示相关的内存数据进行压缩,得到与界面显示相关的压缩后的内存数据并将其存储在压缩空间中,而不会将与界面显示相关的压缩后的内存数据换出到磁盘空间(比如Flash器件)。如此,能够避免因从磁盘重新读取数据耗时较长而导致应用被延迟切换回前台运行的问题。
示例性的,如图6,在t1-t2时段内,手机可以处理界面显示相关的内存数据,界面显示相关的内存数据包括但不限于:用户界面线程相关的内存数据、渲染线程相关的内存数据。
之后,在t2时刻,手机检测到在后台运行的导航应用,其不再运行导航等核心功能。比如,当前导航任务已结束,且用户没有添加新的导航任务。那么,手机可以确定导航应用进入了后台服务状态。考虑到导航应用处于后台服务状态时,不再运行导航任务/功能,因此,手机可以在之前内存处理(即对界面显示相关的内存数据进行处理)的基础上,处理导航等核心功能相关的内存数据。导航功能包括但不限于确定导航路线、播放导航音频。
可选的,处理导航等核心功能相关的内存数据,可以实现为:将与导航等核心功能相关的内存数据进行压缩,并将这部分压缩后的内存数据存储至压缩空间中。
示例性的,如图6,在t2-t3时段内,在已处理界面显示相关的内存数据(比如渲染线程相关的内存数据、用户界面线程相关的内存数据)的基础上,手机还可以处理应用的核心功能(比如导航)相关的内存数据,导航功能相关的内存数据包括但不限于:surface线程相关的内存数据、media线程相关的内存数据。
随着导航应用退到后台运行的时间变长,比如,在t3时刻,导航应用在后台运行的时长达到第一时长,则导航应用进入后台缓存状态。考虑到应用处于后台缓存状态时,用户将导航应用切换回前台的概率较低,因此,为了节省更多的内存空间,作为一种可能的实现方式,手机可以将导航应用的用于界面显示的内存数据换出至磁盘空间(比如Flash器件上)。
示例性的,如图6,在t3-t4时段内,手机将界面显示相关的内存数据(比如渲染线程、用户界面线程相关的内存数据)换出到磁盘空间中。
可选的,考虑到t3-t4时段内(导航应用处于后台缓存状态),用户短期内操作导航应用的概率较低,或者用户短期内可能不再需要导航应用提供后台服务,手机可以对service使用的内存数据进行压缩。类似的,如图6所示,在t3-t4时段内,手机还可以对media、surface使用的相关内存数据进行压缩。
导航应用继续在后台运行,在t4时刻,导航应用在后台运行的时长达到第二时长,则导航应用进入浅冻结状态。考虑到此种状态下,导航应用在后台活跃的可能性大幅降低,因此,作为一种可能的实现方式,手机可以将Java虚拟机的堆内存数据之外的内存数据,全量换出至磁盘空间(比如Flash器件)。
示例性的,如图6,t4-t5时段内,手机将用户界面线程、渲染线程、surface、media、service相关的内存数据换出至磁盘空间。如此,能够从不活跃的后台应用处理尽可能多的内存,以提升整机的空闲内存数量,进而提升整机的运行性能。
导航应用继续在后台运行,在t5时刻,导航应用在后台运行的时长达到第三时长,则导航应用进入深度冻结状态。导航应用处于深度冻结状态,说明在较长一段时间里,导航应用没有被用户再次使用,基于此,手机可以预测在未来较长时段内,导航应用仍不会被用户再次使用。为了降低不活跃的后台应用占用较多的内存,手机可以将处于深度冻结状态的导航应用的Java虚拟机的堆内存数据进行压缩,并存储至压缩空间。如此,既能降低堆内存数据的内存占用量,又由于堆内存数据是被压缩至压缩空间(未被换出到Flash器件),因此,后续进程想要使用堆内存数据时,只需对压缩的堆内存数据进行解压,无需从磁盘空间中重新换回堆内存数据,能够避免换回堆内存数据导致的延时,进而能够降低用户将导航应用切换回前台时的卡顿感。
示例性的,如图6,在t5-t6时段内,手机在将用户界面线程、渲染线程、surface、media、service相关的内存数据换出至磁盘空间的基础上,可以对Java虚拟机的堆内存数据(比如method、class、object相关的内存数据)进行压缩,并将压缩结果存储至压缩空间。
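图6所示各运行状态下对不同功能模块内存的处理策略,可以归纳为如下示意性的策略表(以Python字典表示;状态名、功能模块名与动作取自正文,组织方式为本文假设的简化表达):

```python
# 动作含义: "compress"=压缩至压缩空间, "swap_out"=换出至磁盘空间;未命中表示不处理
POLICY = {
    "后台播放": {"UI线程": "compress", "渲染线程": "compress"},
    "后台服务": {"UI线程": "compress", "渲染线程": "compress",
                 "surface": "compress", "media": "compress"},
    "后台缓存": {"UI线程": "swap_out", "渲染线程": "swap_out",
                 "surface": "compress", "media": "compress", "service": "compress"},
    "浅冻结": {m: "swap_out" for m in
               ("UI线程", "渲染线程", "surface", "media", "service")},
    "深度冻结": {**{m: "swap_out" for m in
                   ("UI线程", "渲染线程", "surface", "media", "service")},
                 # 深度冻结下,Java虚拟机堆内存数据(method/class/object)被压缩而非换出
                 "method": "compress", "class": "compress", "object": "compress"},
}

def action_for(state, module):
    """查询某运行状态下对某功能模块内存的处理动作;返回None表示不处理。"""
    return POLICY.get(state, {}).get(module)
```

该表体现了正文的取舍:越不活跃的状态处理越激进,而堆内存数据即使在深度冻结下也只压缩不换出,以降低切回前台时的卡顿。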
上述方案中,电子设备可以检测应用的生命周期,并根据应用所处的生命周期的阶段,对应用对应的部分功能模块所使用的内存数据进行处理。
首先,由于是基于应用的功能模块的粒度进行内存处理,因此,内存处理的精细度更高。在一些场景中,对应用的部分功能模块(比如暂且不会使用的功能模块)所使用的内存数据进行处理,不至于影响整个应用的正常运行,能够提升应用的保活程度,降低应用被查杀或异常退出的概率。比如,对于某些后台应用,电子设备可以仅对后台应用的部分功能模块所使用的内存进行处理,如此,在尽可能保证后台应用存活的情况下,较大程度地释放应用的内存,增强电子设备的流畅度和稳定性(避免了内存颠簸),提高电子设备的内存管理效率。
其次,相关技术依赖onTrimMemory机制,期望应用主动释放内存数据,而应用实际上往往并未释放内存数据,系统的内存压力仍然较大;与之相比,本申请实施例的技术方案不依赖应用主动释放内存数据。具体的,本申请实施例中,在检测到应用处于生命周期的相应阶段时,电子设备可以自动从应用的部分功能模块处理内存数据。如此,一方面,有助于缓解系统的内存压力。另一方面,电子设备从应用的部分功能模块处理相应内存数据,使得该应用占用的总内存有所降低,因此,该应用被查杀或异常退出的概率将大幅降低。
此外,应用处于不同的生命周期阶段时,应用的各功能模块的内存数据的处理方式可以不同。如此,能够契合应用在不同生命周期阶段的内存处理需求,换言之,能够在满足应用在相应生命周期阶段的性能需求的基础上,尽可能多地从应用的功能模块处理内存,以便提升电子设备的空闲内存,进而提升电子设备的整机运行性能。
如下,对本申请实施例涉及的技术细节进行介绍。
电子设备可以基于虚拟内存(virtual memory)技术实现内存管理。线程发起内存分配请求,电子设备接收内存分配请求之后,通过内存分配器为该线程分配虚拟内存区域(virtual memory areas,VMA)以及物理内存。线程的VMA的虚拟地址与物理内存的物理地址之间具有映射关系(或称关联关系)。可选的,物理内存可以包括一个或多个链表,每个链表包括一个或多个内存页。
可选的,电子设备可以对分配的VMA进行标记。示例性的,如图7,示出了电子设备的内存分配器为用户界面(UI)线程、渲染线程分配的VMA,其中,UI线程对应的VMA与渲染线程对应的VMA可以有不同的标识,以便区分VMA所关联的线程。在一些示例中,内存分配器对VMA进行标记之后,可以将VMA的标识信息传递给内存管理模块,内存管理模块可以根据VMA的标识信息,区分不同VMA关联的线程。
可选的,电子设备还可以对分配的物理内存中的内存页进行标记。比如,为不同功能模块使用的内存页设置不同的标记。比如,UI线程使用的内存页的标记设置标记1,渲染线程使用的内存页的标记设置标记2。如此,可以区分不同内存页关联的功能模块(比如线程)或业务类型。
为线程分配VMA之后,线程可以申请使用VMA映射到的内存页。作为一种可能的实现方式,可以在使用请求中携带待使用的内存页的虚拟地址A’,虚拟地址A’中包括偏移量(offset)。电子设备可以通过查询页表,获取该虚拟地址A’对应的物理地址A的基址,并根据基址以及虚拟地址A’中的偏移量计算物理地址A,进而寻址到内存页的物理地址A,使得线程可以使用物理地址A中的内存页。
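上述“查询页表获取物理基址、再加上虚拟地址中的偏移量得到物理地址”的寻址过程,可以用如下示意性的Python片段演示(页大小取4KB、页表内容均为本文假设的示例数值):

```python
PAGE_SIZE = 4096  # 假设页大小为4KB

# 示意页表:虚拟页号 -> 物理页基址(示例数值)
page_table = {0x10: 0x8000, 0x11: 0xC000}

def translate(vaddr):
    """虚拟地址 = 虚拟页号*页大小 + 偏移量;物理地址 = 查表所得基址 + 偏移量。"""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    base = page_table[vpn]  # 查页表得到物理基址,未映射时抛出KeyError
    return base + offset
```

例如虚拟页号0x10内偏移0x123处的虚拟地址,会被翻译到物理基址0x8000加上同一偏移量的位置。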
类似的,电子设备为应用的其他功能模块分配VMA以及物理内存的过程可以参考电子设备为应用的线程分配VMA以及物理内存的相关描述。
在一些场景中,若检测到应用的运行状态发生变化,则电子设备可以根据应用的运行状态,对应用的一个或多个功能模块所使用的内存数据进行管理。
图8示出了本申请实施例的内存管理方法的示例性流程。如图8,该方法包括如下步骤:
S101、第一服务检测到应用的运行状态发生变化。
可选的,第一服务位于框架层。
S102、第一服务向内存管理模块发送消息A。
其中,消息A用于指示运行状态发生变化。
需要说明的是,本申请实施例中,提及某个消息用于某个作用,指的是,该消息可以用于该作用,而并非该消息专用于该作用。
可选的,内存管理模块位于内核层。
S103、内存管理模块向处理线程发送消息B。
其中,消息B用于指示处理线程执行内存处理。
S104、响应于消息B,处理线程对应用的功能模块所使用的内存进行回收。
作为一种可能的实现方式,处理线程根据内存页关联的功能模块(或业务类型),对应用所使用的内存页进行处理。可选的,对于某种运行状态,处理线程尽可能多地对应用使用的非关键内存页(或称非重要内存页)进行处理,尽可能少地对关键内存页进行处理。关键内存页与非关键内存页的划分方式与特定运行状态下内存页关联的功能模块(或业务类型)有关。
比如,对于前台应用,可以将应用使用的全部内存页视为关键内存页。对于处于后台播放状态的应用,由于应用已不在前台展示UI,因此,与界面显示相关的业务视为非关键业务,或者说,与界面显示相关的应用功能模块视为非关键功能模块。相应的,与界面显示相关的内存页(比如UI线程、渲染线程、图层合成线程使用的内存页)可以视为非关键内存页。相应的,对于处于后台播放状态的应用,电子设备可以尽可能多地对界面显示相关的内存页进行处理,以降低电子设备的内存压力,提升电子设备的性能。
再比如,对于处于后台缓存状态的应用,与界面显示相关的内存页以及surface、media、service使用的内存页可以视为非关键内存页。电子设备可以对这部分非关键内存页进行处理。可选的,非关键内存页可以包括两种类型,一种是较短时间内不被使用的内存页,一种是较长时间内不被使用的内存页。对于较短时间内不被使用的非关键内存页(比如后台缓存状态下,surface、media、service使用的内存页),电子设备可以对该部分非关键内存页进行压缩,并存储至压缩空间。对于较长时间内不被使用的非关键内存页(比如后台缓存状态下,界面显示相关的内存页),电子设备可以对该部分非关键内存页进行换出,并存储至磁盘空间。
相关技术中通过设置水线触发内存处理,很可能导致内存处理的数量与电子设备的实际内存使用需求不匹配。与之相比,本申请实施例提供的内存管理方法,能够确定应用在不同运行状态下对应的关键业务(或关键功能模块)以及非关键业务(或非关键功能模块):对于应用的关键业务实际需要使用的内存(关键内存),电子设备不进行处理;对于应用的关键业务实际无需使用的内存(即非关键内存),电子设备进行处理。如此,能够匹配应用的实际内存使用需求,处理适量的内存。
如下以导航应用为例对处理线程的内存处理过程进行说明,在一个示例中,若检测到导航应用的运行状态由后台服务状态切换到后台缓存状态,则电子设备可以唤醒处理线程(比如kswapd)。示例性的,如图9,处理线程可以根据导航应用的当前运行状态,对应用的一个或多个功能模块所使用的内存进行处理。作为一种可能的实现方式,处理线程可以根据导航应用的UI线程的VMA,扫描该VMA映射到的链表1;根据导航应用的渲染线程的VMA,扫描该VMA映射到的链表2;根据导航应用的surface的VMA,扫描该VMA映射到的链表3;根据导航应用的media的VMA,扫描该VMA映射到的链表4;根据导航应用的service的VMA,扫描该VMA映射到的链表5。
其中,对于链表1中满足处理条件的内存页(即UI线程运行相关的内存页),由于导航应用已长期运行在后台,短时间内导航应用切换回前台运行的概率很小,因此,处理线程可以将压缩后的UI线程相关的内存页(比如内存页A、B)换出到磁盘空间中。类似的,对于链表2中满足处理条件的内存页(即渲染线程运行相关的内存页),处理线程可以将这部分内存页(比如内存页C)换出到磁盘空间中。
对于链表3中满足处理条件的内存页(surface相关的内存页),考虑到用户短期内操作导航应用的概率较低,那么,处理线程可以将这部分内存页(比如内存页D)压缩到压缩空间。类似的,处理线程可以将链表4中的可处理内存页(media相关的内存页)压缩到压缩空间,将链表5中的可处理内存页(service相关的内存页)压缩到压缩空间。
需要说明的是,图9以一个功能模块的VMA对应一个链表为例进行说明,在另一些实施例中,还可以是一个功能模块的VMA对应多个链表,或者,多个功能模块的VMA对应一个链表,或者,多个功能模块的VMA交叉对应多个链表,比如,应用的UI线程、渲染线程的VMA映射到链表1,应用的其余功能模块的VMA映射到链表2。本申请实施例对功能模块的VMA与物理内存的映射关系不做限制。
再示例性的,如图10,若检测到导航应用的运行状态切换到浅冻结状态,则电子设备可以唤醒处理线程,处理线程可以根据导航应用的当前运行状态,对应用的一个或多个功能模块所使用的内存进行处理。比如,处理线程可以根据导航应用的UI线程的VMA,扫描该VMA映射到的链表1,并将链表1中满足处理条件的内存页(即UI线程使用的内存页)换出至磁盘空间。类似的,处理线程可以对导航应用的其他功能模块所使用的内存页(比如渲染线程、surface、media、service使用的内存页)进行换出。
上述主要以应用切换运行状态后,电子设备就可以根据应用的运行状态,对应用的部分内存进行处理为例进行说明,在另一些实施例中,在应用切换运行状态后,电子设备可以延迟处理,换句话说,电子设备可以在应用切换运行状态后的一段时间后进行内存处理,以确保应用的运行状态稳定。示例性的,如图11,应用在t1时刻进入后台播放状态,为了避免用户短期内将应用切换回前台运行,电子设备可以延迟一段时间。比如,延迟时长为t1至t1'之间的时长,t1-t1'时段内,用户未将应用切换回前台运行,那么,电子设备预测该应用在未来的一小段时间内也不会被切换回前台,则电子设备可以在t1'时刻对界面显示相关的内存(比如渲染线程、UI线程使用的内存)进行处理。
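上述“状态切换后延迟一段时间、确认运行状态稳定后再处理内存”的思路,可以用如下示意性的Python函数表达(函数名与延迟阈值均为本文假设):

```python
def should_process(state_enter_time, now, still_in_state, delay=5.0):
    """应用进入目标状态已超过 delay 秒且仍处于该状态时,才触发内存处理。
    state_enter_time/now 为时间戳(秒),delay 对应正文中 t1 到 t1' 的延迟时长。"""
    return still_in_state and (now - state_enter_time) >= delay
```

若应用在延迟期内被切换回前台(still_in_state 为假),则本次内存处理被放弃,从而避免压缩/换出刚刚又被需要的内存。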
在一些实施例中,电子设备可以提供内存管理的设置入口,用户可以通过设置入口设置内存管理的相关功能。示例性的,如图12,电子设备显示设置界面110(第一界面的一个示例),界面110包括设置选项1101,该设置选项1101用于开启或关闭内存管理功能。当开启内存管理功能后,电子设备可以根据应用的运行状态,以功能模块为粒度,对应用的一个或多个功能模块所使用的内存进行管理。或者,在另一些实施例中,电子设备可以一直开启内存管理功能。或者,电子设备可以在满足一定条件时自动开启内存管理功能。
下述表2示出了开启内存管理功能和未开启内存管理功能的情况下,电子设备的性能指标。
表2

需要说明的是,上述仅给出应用运行状态的几种示例,在另一些实施例中,还可以另行划分应用的运行状态或生命周期的阶段,并且,均可以使用上述技术方案,根据应用的运行状态(或者应用所处的生命周期阶段),对应用的部分内存(相应运行状态下非关键业务的内存)进行处理。
在一些方案中,可以对本申请的多个实施例进行组合,并实施组合后的方案。可选的,各方法实施例的流程中的一些操作任选地被组合,并且/或者一些操作的顺序任选地被改变。并且,各流程的步骤之间的执行顺序仅是示例性的,并不构成对步骤之间执行顺序的限制,各步骤之间还可以是其他执行顺序。并非旨在表明所述执行次序是可以执行这些操作的唯一次序。本领域的普通技术人员会想到多种方式来对本文所述的操作进行重新排序。另外,应当指出的是,本文某个实施例涉及的过程细节同样以类似的方式适用于其他实施例,或者,不同实施例之间可以组合使用。
此外,方法实施例中的某些步骤可等效替换成其他可能的步骤。或者,方法实施例中的某些步骤可以是可选的,在某些使用场景中可以删除。或者,可以在方法实施例中增加其他可能的步骤。
并且,各方法实施例之间可以单独实施,或结合起来实施。
可以理解的是,本申请实施例中的电子设备为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。结合本申请中所公开的实施例描述的各示例的单元及算法步骤,本申请实施例能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。本领域技术人员可以对每个特定的应用来使用不同的方法来实现所描述的功能,但是这种实现不应认为超出本申请实施例的技术方案的范围。
本申请实施例可以根据上述方法示例对电子设备进行功能单元的划分,例如,可以对应各个功能划分各个功能单元,也可以将两个或两个以上的功能集成在一个处理单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。需要说明的是,本申请实施例中对单元的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
图13示出了本申请实施例中提供的内存管理装置的一种示意性框图,该装置可以为上述的第一电子设备或具有相应功能的组件。该装置1700可以以软件的形式存在,还可以为可用于设备的芯片。装置1700包括:处理单元1702。处理单元1702可以用于支持图8所示的S101、S104等,和/或用于本文所描述的方案的其它过程。
可选的,装置1700还可包括通信单元1703。可选的,通信单元1703还可以划分为发送单元(并未在图13中示出)和接收单元(并未在图13中示出)。其中,发送单元,用于支持装置1700向其他电子设备发送信息。接收单元,用于支持装置1700从其他电子设备接收信息。
可选的,装置1700还可以包括存储单元1701,用于存储装置1700的程序代码和数据,数据可以包括不限于原始数据或者中间数据等。
一种可能的方式中,处理单元1702可以是控制器或图5所示的处理器401和/或408,例如可以是中央处理器(Central Processing Unit,CPU),通用处理器,数字信号处理(Digital Signal Processing,DSP),应用专用集成电路(Application Specific Integrated Circuit,ASIC),现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,DSP和微处理器的组合等等。
一种可能的方式中,通信单元1703可以包括图5所示的收发器404、还可以包括收发电路、射频器件等。
一种可能的方式中,存储单元1701可以是图5所示的存储器403。
本申请实施例还提供一种电子设备,包括一个或多个处理器以及一个或多个存储器。该一个或多个存储器与一个或多个处理器耦合,一个或多个存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,当一个或多个处理器执行计算机指令时,使得电子设备执行上述相关方法步骤实现上述实施例中的方法。
本申请实施例还提供一种芯片系统,包括:处理器,所述处理器与存储器耦合,所述存储器用于存储程序或指令,当所述程序或指令被所述处理器执行时,使得该芯片系统实现上述任一方法实施例中的方法。
可选地,该芯片系统中的处理器可以为一个或多个。该处理器可以通过硬件实现也可以通过软件实现。当通过硬件实现时,该处理器可以是逻辑电路、集成电路等。当通过软件实现时,该处理器可以是一个通用处理器,通过读取存储器中存储的软件代码来实现。
可选地,该芯片系统中的存储器也可以为一个或多个。该存储器可以与处理器集成在一起,也可以和处理器分离设置,本申请并不限定。示例性的,存储器可以是非瞬时性处理器,例如只读存储器ROM,其可以与处理器集成在同一块芯片上,也可以分别设置在不同的芯片上,本申请对存储器的类型,以及存储器与处理器的设置方式不作具体限定。
示例性的,该芯片系统可以是现场可编程门阵列(field programmable gate array,FPGA),可以是专用集成芯片(application specific integrated circuit,ASIC),还可以是系统芯片(system on chip,SoC),还可以是中央处理器(central processing unit,CPU),还可以是网络处理器(network processor,NP),还可以是数字信号处理电路(digital signal processor,DSP),还可以是微控制器(micro controller unit,MCU),还可以是可编程逻辑器件(programmable logic device,PLD)或其他集成芯片。
应理解,上述方法实施例中的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。结合本申请实施例所公开的方法步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。
本申请实施例还提供一种计算机可读存储介质,该计算机可读存储介质中存储有计算机指令,当该计算机指令在电子设备上运行时,使得电子设备执行上述相关方法步骤实现上述实施例中的方法。
本申请实施例还提供一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述相关步骤,以实现上述实施例中的方法。
另外,本申请的实施例还提供一种装置,该装置具体可以是组件或模块,该装置可包括相连的处理器和存储器;其中,存储器用于存储计算机执行指令,当装置运行时,处理器可执行存储器存储的计算机执行指令,以使装置执行上述各方法实施例中的方法。
其中,本申请实施例提供的电子设备、计算机可读存储介质、计算机程序产品或芯片均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
可以理解的是,为了实现上述功能,电子设备包含了执行各个功能相应的硬件和/或软件模块。结合本文中所公开的实施例描述的各示例的算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。本领域技术人员可以结合实施例对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本实施例可以根据上述方法示例对电子设备进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块可以采用硬件的形式实现。需要说明的是,本实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的方法,可以通过其它的方式实现。例如,以上所描述的终端设备实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,模块或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:快闪存储器、移动硬盘、只读存储器、随机存取存储器、磁碟或者光盘等各种可以存储程序指令的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (16)

  1. 一种内存管理方法,其特征在于,所述方法应用于电子设备,所述方法包括:
    检测第一应用的运行状态;
    检测到第一应用的运行状态由第一运行状态切换至第二运行状态,对所述第一应用使用的部分内存进行处理;所述部分内存为所述第一应用的目标业务关联的内存;所述目标业务为所述第一应用在所述第二运行状态下的非关键业务。
  2. 根据权利要求1所述的方法,其特征在于,检测到第一应用的运行状态由第一运行状态切换至第二运行状态,对所述第一应用使用的部分内存进行处理,包括:检测到第一应用的运行状态由第一运行状态切换至第二运行状态,且所述第一应用处于所述第二运行状态的时长达到第一阈值,对所述第一应用使用的部分内存进行处理。
  3. 根据权利要求1或2所述的方法,其特征在于,所述目标业务包括第一业务和/或第二业务;所述第一业务关联的内存包括第一部分内存,所述第二业务关联的内存包括第二部分内存;
    所述第一部分内存为:在第一时间段内未被所述第一应用使用的页面;
    所述第二部分内存为:在第二时间段内未被所述第一应用使用的压缩页面。
  4. 根据权利要求3所述的方法,其特征在于,所述第二时间段的时长大于所述第一时间段的时长。
  5. 根据权利要求1-4中任一项所述的方法,其特征在于,所述第二运行状态包括第一后台运行状态;
    检测到第一应用的运行状态由第一运行状态切换至第二运行状态,对所述第一应用使用的部分内存进行处理,包括:
    检测到所述第一应用的运行状态由所述第一运行状态切换至第一后台运行状态,对所述第一业务关联的第一部分内存进行压缩。
  6. 根据权利要求3或4所述的方法,其特征在于,所述第二运行状态包括第一后台运行状态;
    检测到第一应用的运行状态由第一运行状态切换至第二运行状态,对所述第一应用使用的部分内存进行处理,包括:
    检测到所述第一应用的运行状态由所述第一运行状态切换至所述第一后台运行状态,将所述第二业务关联的第二部分内存换出至磁盘空间。
  7. 根据权利要求1-6中任一项所述的方法,其特征在于,所述第一运行状态为前台运行状态。
  8. 根据权利要求1-6中任一项所述的方法,其特征在于,所述第一运行状态为第二后台运行状态。
  9. 根据权利要求3或4所述的方法,其特征在于,所述方法还包括:
    检测到所述第一应用的运行状态由所述第一后台运行状态切换至第三后台运行状态;
    对所述第一应用的第三部分内存进行压缩,所述第三部分内存为:所述第一时间段内未被所述第一应用使用的内存;
    和/或,
    将所述第一应用的第四部分内存换出至磁盘空间,所述第四部分内存为:所述第二时间段内未被所述第一应用使用的内存;所述第二时间段的时长大于所述第一时间段的时长。
  10. 根据权利要求1-9中任一项所述的方法,其特征在于,对所述第一应用使用的部分内存进行处理,包括:
    获取所述部分内存对应的虚拟内存空间VMA以及所述VMA对应的链表;所述链表包括所述部分内存;
    对所述链表中的所述部分内存进行处理。
  11. 根据权利要求5或6所述的方法,其特征在于,所述方法还包括:
    显示第一界面;
    接收用户在所述第一界面上输入的操作,所述操作用于开启内存管理功能。
  12. 根据权利要求5或6所述的方法,其特征在于,所述第一后台运行状态包括如下状态:后台播放状态、后台服务状态、后台缓存状态、浅冻结状态、深度冻结状态;
    其中,所述后台播放状态下,所述第一应用在后台执行第一任务;
    所述后台服务状态下,所述第一应用在后台提供后台服务,所述第一应用在后台不执行所述第一任务;
    所述后台缓存状态下,所述第一应用在后台不执行所述第一任务,且不提供后台服务,且所述第一应用处于后台运行状态的时长达到第一时长;
    所述浅冻结状态下,所述第一应用在后台不执行所述第一任务,且不提供后台服务,且所述第一应用处于后台运行状态的时长达到第二时长;所述第二时长大于所述第一时长;
    所述深度冻结状态下,所述第一应用在后台不执行所述第一任务,且不提供后台服务,且所述第一应用处于后台运行状态的时长达到第三时长;所述第三时长大于所述第二时长。
  13. 根据权利要求12所述的方法,其特征在于,
    在所述后台播放状态下,所述第一业务包括界面显示相关的业务;
    在所述后台服务状态下,所述第一业务包括所述第一任务对应的业务;
    所述后台缓存状态下,所述第一业务包括后台服务,所述第二业务包括界面显示相关的业务;
    所述浅冻结状态下,所述第二业务包括:所述第一任务对应的业务、后台服务;
    所述深度冻结状态下,所述第一业务包括对象、类、方法对应的业务。
  14. 一种电子设备,其特征在于,包括:处理器和存储器,所述存储器与所述处理器耦合,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,所述处理器从所述存储器中读取所述计算机指令,以使得所述电子设备执行如权利要求1-13中任一项所述的方法。
  15. 一种计算机可读存储介质,其特征在于,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1-13中任一项所述的方法。
  16. 一种计算机程序产品,其特征在于,当所述计算机程序产品在电子设备上运行时,使得所述电子设备执行如权利要求1-13中任一项所述的方法。
PCT/CN2023/109436 2022-07-30 2023-07-26 内存管理方法及电子设备 WO2024027544A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210912594.8A CN117519959A (zh) 2022-07-30 2022-07-30 内存管理方法及电子设备
CN202210912594.8 2022-07-30

Publications (1)

Publication Number Publication Date
WO2024027544A1 true WO2024027544A1 (zh) 2024-02-08

Family

ID=89765051

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/109436 WO2024027544A1 (zh) 2022-07-30 2023-07-26 内存管理方法及电子设备

Country Status (2)

Country Link
CN (1) CN117519959A (zh)
WO (1) WO2024027544A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117971712A (zh) * 2024-03-29 2024-05-03 阿里云计算有限公司 内存回收方法、装置、电子设备、存储介质及程序产品

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130311643A1 (en) * 2012-05-18 2013-11-21 Cisco Technology, Inc. System and method for latency reduction in a network environment
CN105808447A (zh) * 2016-03-29 2016-07-27 海信集团有限公司 一种终端设备的内存回收方法和装置
WO2019028912A1 (zh) * 2017-08-11 2019-02-14 华为技术有限公司 一种应用切换方法及装置
CN112433831A (zh) * 2020-11-17 2021-03-02 珠海格力电器股份有限公司 应用冻结方法、存储介质及电子设备

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10552179B2 (en) * 2014-05-30 2020-02-04 Apple Inc. Resource management with dynamic resource policies
CN111198759B (zh) * 2018-11-16 2024-04-19 深圳市优必选科技有限公司 一种内存优化方法、系统、终端设备及可读存储介质
CN113590500A (zh) * 2020-04-30 2021-11-02 华为技术有限公司 一种内存管理方法及终端设备
CN111966492B (zh) * 2020-08-05 2024-02-02 Oppo广东移动通信有限公司 内存回收方法、装置、电子设备及计算机可读存储介质
CN116107742A (zh) * 2021-06-10 2023-05-12 荣耀终端有限公司 虚拟内存管理方法和电子设备
CN113918287A (zh) * 2021-11-11 2022-01-11 杭州逗酷软件科技有限公司 启动应用程序的方法、装置、终端设备及存储介质


Also Published As

Publication number Publication date
CN117519959A (zh) 2024-02-06

Similar Documents

Publication Publication Date Title
CN113722087B (zh) 虚拟内存管理方法和电子设备
CN113553130B (zh) 应用执行绘制操作的方法及电子设备
CN112527476B (zh) 资源调度方法及电子设备
CN114116191A (zh) 内存冷页的处理方法及电子设备
CN114461375B (zh) 内存资源管理方法及电子设备
US20230385112A1 (en) Memory Management Method, Electronic Device, and Computer-Readable Storage Medium
WO2024027544A1 (zh) 内存管理方法及电子设备
CN110413383B (zh) 事件处理方法、装置、终端及存储介质
CN113472477A (zh) 无线通信系统及方法
CN116700913B (zh) 嵌入式文件系统的调度方法、设备及存储介质
CN115729684B (zh) 输入输出请求处理方法和电子设备
CN117130541A (zh) 存储空间配置方法及相关设备
CN114461589B (zh) 读取压缩文件的方法、文件系统及电子设备
CN112783418B (zh) 一种存储应用程序数据的方法及移动终端
CN113760191A (zh) 数据读取方法、装置、存储介质和程序产品
WO2024032430A1 (zh) 管理内存的方法和电子设备
WO2023005783A1 (zh) 数据处理方法及电子设备
WO2023051056A1 (zh) 内存管理方法、电子设备、计算机存储介质和程序产品
CN116991302B (zh) 应用与手势导航栏兼容运行方法、图形界面及相关装置
WO2024041219A1 (zh) 内存管理方法、电子设备、芯片系统及可读存储介质
WO2023116415A1 (zh) 一种应用程序的抑制方法和电子设备
CN116860429A (zh) 内存管理方法及电子设备
WO2024007970A1 (zh) 线程调度方法及电子设备
CN116932178A (zh) 内存管理方法及电子设备
CN115840528A (zh) 存储盘的水线设置方法、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23849252

Country of ref document: EP

Kind code of ref document: A1