WO2019071610A1 - Method and apparatus for compressing and decompressing memory occupied by a processor - Google Patents

Method and apparatus for compressing and decompressing memory occupied by a processor

Info

Publication number
WO2019071610A1
WO2019071610A1 · PCT/CN2017/106173 · CN2017106173W
Authority
WO
WIPO (PCT)
Prior art keywords
physical memory
memory page
page
compressed
terminal device
Prior art date
Application number
PCT/CN2017/106173
Other languages
English (en)
French (fr)
Inventor
党茂昌
胡笑鸣
陈国栋
周喜渝
李毅
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP17928601.8A priority Critical patent/EP3674846B1/en
Priority to CN201780074040.2A priority patent/CN110023906A/zh
Priority to PCT/CN2017/106173 priority patent/WO2019071610A1/zh
Publication of WO2019071610A1 publication Critical patent/WO2019071610A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1041Resource optimization
    • G06F2212/1044Space efficiency improvement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/17Embedded application
    • G06F2212/171Portable consumer electronics, e.g. mobile phone
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/30Providing cache or TLB in specific location of a processing system
    • G06F2212/302In image processor or graphics adapter
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/40Specific encoding of data in memory or cache
    • G06F2212/401Compressed data

Definitions

  • the present application relates to the field of terminal technologies, and in particular, to a method and apparatus for compressing and decompressing a memory occupied by a processor.
  • a mobile phone can run an application in the foreground or run an application in the background.
  • when the mobile phone runs an application in the foreground, the processor of the mobile phone usually occupies a relatively large amount of memory; when the mobile phone runs the application in the background, although the user does not perform any operation on the application, the processor of the mobile phone still occupies a relatively large amount of memory.
  • currently, the memory occupied by the mobile phone's processor can be freed by closing the application.
  • however, the above method of closing applications causes the mobile phone to close other applications while running an application in the foreground, and those other applications need to be reopened when the user uses them again.
  • the present invention provides a method and a device for compressing and decompressing the memory occupied by a processor, which can save the memory occupied by the processor of a terminal device when the terminal device runs an application in the background.
  • a first aspect provides a method for compressing the memory occupied by a processor. The method can be applied to a scenario in which a first application run by a terminal device is switched from the foreground to the background. It is determined that, when the terminal device runs the first application in the foreground, the virtual memory address range occupied by the processor of the terminal device (a range of all or part of the virtual memory addresses occupied by the processor) includes at least one virtual memory page; according to the at least one virtual memory page, each physical memory page corresponding to each virtual memory page is determined in the page table of the process of the first application (i.e., the first page table); then each physical memory page is compressed using a predefined compression algorithm.
  • in this way, the terminal device may adopt a predefined algorithm to compress the physical memory pages occupied by the processor of the terminal device when the terminal device runs the first application in the foreground. Therefore, the memory occupied by the processor of the terminal device can be saved when the terminal device runs the application in the background.
  • the compression algorithm in this application may be an ARM frame buffer compression (AFBC) algorithm or an LZ4 compression algorithm.
  • optionally, after each physical memory page is compressed, a compressed index table may also be created. In this way, the compression of the physical memory pages can be recorded.
  • optionally, a flag bit may be separately set for each physical memory page in the first page table (used to indicate that a physical memory page with the flag bit set has been compressed). In this way, when the terminal device reads the flag bit, it can learn that the physical memory page with the flag bit set has been compressed, thereby improving the efficiency and accuracy of identifying compressed physical memory pages.
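One way to realize the per-page flag bit is sketched below. The field layout and bit position are hypothetical (real page-table-entry formats are architecture-specific); the point is that setting the flag while invalidating the physical address is what later makes an access to the page raise the page fault interrupt described further down.

```python
COMPRESSED_FLAG = 1 << 0  # hypothetical bit position for "page is compressed"
INVALID_ADDR = 0          # stand-in for an invalid physical address

class PageTableEntry:
    """Toy model of one entry in the first page table."""

    def __init__(self, phys_addr: int):
        self.phys_addr = phys_addr
        self.flags = 0

    def mark_compressed(self) -> None:
        # Record that the backing physical page has been compressed and freed;
        # the invalid physical address makes the next access fault.
        self.flags |= COMPRESSED_FLAG
        self.phys_addr = INVALID_ADDR

    def is_compressed(self) -> bool:
        return bool(self.flags & COMPRESSED_FLAG)

pte = PageTableEntry(phys_addr=0x8000)
pte.mark_compressed()
print(pte.is_compressed(), hex(pte.phys_addr))  # True 0x0
```

Reading the flag back is how the fault handler distinguishes a compressed-page fault (recoverable) from a genuinely invalid access.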
  • optionally, the method for compressing each physical memory page using a predefined compression algorithm may include: for each physical memory page, compressing the one physical memory page using a predefined compression algorithm corresponding to that physical memory page.
  • further, the method for compressing the one physical memory page using a predefined compression algorithm corresponding to it may include: acquiring the content type of the one physical memory page; determining, according to the content type, a predefined compression algorithm corresponding to that content type; and then compressing the one physical memory page using that algorithm. In this way, since each physical memory page can be compressed by a compression algorithm suited to it, an optimal compression ratio and compression speed can be achieved.
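Selecting the compression algorithm by content type amounts to a dispatch table. The content types below are the four listed later in this document (zero page, texture page, vertex queue, drawing command queue); the specific algorithm assignments are illustrative assumptions, not taken from the patent, and `zlib` again stands in for LZ4/AFBC.

```python
import zlib

def compress_zero_page(page: bytes) -> bytes:
    # A zero page needs no payload at all: store an empty blob,
    # and reconstruct the page as all zeros on decompression.
    return b""

def compress_generic(page: bytes) -> bytes:
    # Stand-in for LZ4/AFBC using the bundled zlib.
    return zlib.compress(page)

# Hypothetical mapping from content type to the best-suited compressor.
COMPRESSORS = {
    "zero_page": compress_zero_page,
    "texture_page": compress_generic,       # AFBC targets texture-like data
    "vertex_queue": compress_generic,
    "draw_command_queue": compress_generic,
}

def compress_by_type(page: bytes, content_type: str) -> bytes:
    return COMPRESSORS[content_type](page)

print(len(compress_by_type(bytes(4096), "zero_page")))  # 0
```

Storing the content type alongside each entry (as the compressed index table does) is what lets decompression pick the matching inverse algorithm later.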
  • optionally, the method may also be applied to a scenario in which the first application run by the terminal device is switched from the background to the foreground.
  • after the compressed index table is created, at least one physical memory page may be reallocated according to the compressed index table; each compressed physical memory page is decompressed using a predefined decompression algorithm; and then the contents of each decompressed physical memory page are filled into the at least one physical memory page.
  • that is, after the terminal device compresses the physical memory pages occupied by the processor, if the first application run by the terminal device is switched back from the background to the foreground, the terminal device can decompress the compressed physical memory pages to restore the memory occupied by the processor.
  • optionally, after the contents of the decompressed physical memory pages are filled in, the compressed index table may also be modified. In this way, the accuracy of the compressed index table can be guaranteed.
  • further, for each physical memory page with the flag bit set in the first page table, the address range of a reallocated physical memory page may be used to overwrite the address range of the corresponding physical memory page with the flag bit set, so as to restore the first page table. In this way, the accuracy of the first page table can be guaranteed.
  • optionally, before the at least one physical memory page is reallocated according to the compressed index table, a page fault interrupt may also be detected (the page fault interrupt is an interrupt generated when it is detected that the flag bit set for a compressed physical memory page in the first page table points to an invalid physical address); a page fault interrupt handling function is then called in response to the page fault interrupt, and the compressed index table is obtained according to the process number of the first application's process.
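The fault-driven decompression path described above can be summarized in a compact, self-contained sketch. The lookup by process number, the decompression, and the refill of a reallocated page follow the steps in the text; all names are illustrative, and `zlib` stands in for the predefined decompression algorithm.

```python
import zlib

PAGE_SIZE = 4096

# Compressed index tables, keyed by process number (one table per process);
# each maps a page number to that page's compressed blob.
index_tables = {1234: {7: zlib.compress(b"A" * PAGE_SIZE)}}
physical_memory = {}  # page number -> restored 4 KB page content

def handle_page_fault(pid: int, page_no: int) -> bytes:
    """Respond to a fault on a page whose flag marks it as compressed."""
    table = index_tables[pid]        # obtain the table by process number
    blob = table.pop(page_no)        # entry for the faulting page
    page = zlib.decompress(blob)     # predefined decompression algorithm
    physical_memory[page_no] = page  # fill a newly reallocated page
    return page

restored = handle_page_fault(1234, 7)
print(restored == b"A" * PAGE_SIZE)  # True
```

A real handler would also rewrite the page-table entry with the new physical address and clear the compressed flag, which is the "restore the first page table" step in the surrounding text.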
  • a second aspect provides a method for decompressing the memory occupied by a processor. The method can be applied to a scenario in which a first application run by a terminal device switches from the background to the foreground: the compressed index table created when the physical memory pages were compressed is obtained; at least one physical memory page is reallocated according to the compressed index table; each compressed physical memory page is decompressed using a predefined decompression algorithm; and then the contents of each decompressed physical memory page are filled into the at least one physical memory page.
  • in this way, after the terminal device compresses the physical memory pages occupied by the processor, the compressed physical memory pages can also be decompressed to restore the memory occupied by the processor.
  • optionally, after the contents of each decompressed physical memory page are filled into the at least one physical memory page, the compressed index table may also be modified. In this way, the accuracy of the compressed index table can be guaranteed.
  • optionally, each compressed physical memory page in the page table of the process of the first application (i.e., the first page table) is set with a flag bit (used to indicate that the physical memory page with the flag bit set has been compressed).
  • further, for each physical memory page with the flag bit set in the first page table, the address range of a reallocated physical memory page may be used to overwrite the address range of the corresponding physical memory page with the flag bit set, so as to restore the first page table. In this way, the accuracy of the first page table can be guaranteed.
  • optionally, before the compressed index table is obtained, a page fault interrupt may also be detected (an interrupt generated when it is detected that the flag bit set for a compressed physical memory page in the first page table points to an invalid physical address).
  • correspondingly, the method for obtaining the compressed index table may include: calling the page fault interrupt handling function, in response to the page fault interrupt, to obtain the compressed index table according to the process number of the first application run by the terminal device.
  • optionally, the method for decompressing each compressed physical memory page using a predefined decompression algorithm may include: for each compressed physical memory page, decompressing the one physical memory page using a predefined decompression algorithm corresponding to that compressed physical memory page.
  • further, the method for decompressing the one physical memory page using a predefined decompression algorithm corresponding to it may include: acquiring the content type of the one physical memory page; determining, according to the content type, a predefined decompression algorithm corresponding to that content type; and then decompressing the one physical memory page using that algorithm.
  • in this way, since each compressed physical memory page can be decompressed by a decompression algorithm corresponding to it, an optimal decompression ratio and decompression speed can be achieved.
  • a third aspect provides a terminal device, where a first application run by the terminal device is switched from the foreground to the background. The terminal device may include a determining module and a decompression module.
  • the determining module is configured to determine at least one virtual memory page included in the virtual memory address range occupied by the processor of the terminal device (a range of all or part of the virtual memory addresses occupied by the processor) when the terminal device runs the first application in the foreground, and to determine, according to the at least one virtual memory page, each physical memory page corresponding to each virtual memory page in the page table of the process of the first application (i.e., the first page table); the decompression module is configured to compress, using a predefined compression algorithm, each physical memory page determined by the determining module.
  • the foregoing terminal device may further include a creating module.
  • the creating module is configured to create a compressed index table after the decompression module compresses each physical memory page using the predefined compression algorithm.
  • the foregoing terminal device may further include a setting module.
  • the setting module is configured to set, after the decompression module compresses each physical memory page using the predefined compression algorithm, a flag bit for each physical memory page in the first page table (used to indicate that the physical memory page with the flag bit set has been compressed).
  • optionally, the decompression module is specifically configured to: for each physical memory page, compress the one physical memory page using a predefined compression algorithm corresponding to that physical memory page.
  • further, the decompression module is specifically configured to acquire the content type of one physical memory page; determine, according to the content type, a predefined compression algorithm corresponding to that content type; and then compress the one physical memory page using that algorithm.
  • in another optional implementation, the first application run by the terminal device is switched from the background to the foreground, and the terminal device may further include an allocation module and a filling module.
  • the allocation module is configured to: after the creating module creates the compressed index table, reallocate at least one physical memory page according to the compressed index table; the decompression module is further configured to decompress each compressed physical memory page using a predefined decompression algorithm;
  • the filling module is configured to fill the contents of each physical memory page decompressed by the decompression module into the at least one physical memory page reallocated by the allocation module.
  • the creating module is further configured to modify the compressed index table after the filling module fills the contents of each decompressed physical memory page into the at least one physical memory page.
  • further, the filling module is further configured to: after the contents of each decompressed physical memory page are filled into the at least one physical memory page, for each physical memory page with the flag bit set in the first page table, use the address range of a reallocated physical memory page to overwrite the address range of the corresponding physical memory page with the flag bit set, so as to restore the first page table.
  • the foregoing terminal device may further include a detection module and an acquisition module.
  • the detecting module is configured to detect a page fault interrupt before the allocation module reallocates the at least one physical memory page according to the compressed index table (the page fault interrupt is an interrupt generated when it is detected that the flag bit set for a compressed physical memory page in the first page table points to an invalid physical address).
  • the obtaining module is configured to call the page fault interrupt handling function, in response to the page fault interrupt detected by the detecting module, to obtain the compressed index table according to the process number of the first application's process.
  • the fourth aspect provides a terminal device, where the first application running by the terminal device is switched from the background to the foreground, and the terminal device may include an obtaining module, an allocating module, a decompressing module, and a filling module.
  • the obtaining module is configured to obtain the compressed index table created when the physical memory pages to be compressed were compressed;
  • the allocating module is configured to reallocate at least one physical memory page according to the compressed index table obtained by the obtaining module;
  • the decompression module is configured to decompress each compressed physical memory page using a predefined decompression algorithm;
  • the filling module is configured to fill the content of each physical memory page decompressed by the decompression module into at least one physical memory page reassigned by the allocation module.
  • the foregoing terminal device may further include a creating module.
  • the creating module is configured to modify the compressed index table obtained by the obtaining module after the filling module fills the contents of each decompressed physical memory page into the at least one physical memory page.
  • in another optional implementation, each compressed physical memory page in the page table of the process of the first application (i.e., the first page table) is set with a flag bit (used to indicate that the physical memory page with the flag bit set has been compressed); the filling module is further configured to: after the contents of each decompressed physical memory page are filled into the at least one physical memory page, for each physical memory page with the flag bit set in the first page table, use the address range of a reallocated physical memory page to overwrite the address range of the corresponding physical memory page with the flag bit set, so as to restore the first page table.
  • the foregoing terminal device may further include a detection module.
  • the detecting module is configured to detect a page fault interrupt before the obtaining module obtains the compressed index table (an interrupt generated when it is detected that the flag bit set for a compressed physical memory page in the first page table points to an invalid physical address).
  • the obtaining module is specifically configured to call the page fault interrupt handling function, in response to the page fault interrupt detected by the detecting module, to obtain the compressed index table according to the process number of the process of the first application run by the terminal device.
  • optionally, the decompression module is specifically configured to: for each compressed physical memory page, decompress the one physical memory page using a predefined decompression algorithm corresponding to that compressed physical memory page.
  • further, the decompression module is specifically configured to acquire the content type of one physical memory page; determine, according to the content type, a predefined decompression algorithm corresponding to that content type; and then decompress the one physical memory page using that algorithm.
  • the compressed index table includes a plurality of first entries.
  • Each first entry includes an index number (indicating the page number of the physical memory page corresponding to a virtual memory page), the address range of the virtual memory page, the address range of the compressed physical memory page (the page obtained after the physical memory page corresponding to the virtual memory page is compressed), and the content type of the compressed physical memory page.
  • the modified compressed index table includes a plurality of second entries.
  • Each second entry includes an index number, the address range of the virtual memory page, the address range of the restored physical memory page (a page of the at least one physical memory page filled with decompressed content, the decompressed content being obtained by decompressing a compressed physical memory page), and the content type of the restored physical memory page.
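The two entry layouts can be written out as plain data structures; the field names below are paraphrased from the description and are not the patent's own identifiers.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class FirstEntry:
    """Entry of the compressed index table (while the page is compressed)."""
    index_number: int                  # page number of the physical memory page
    virtual_range: Tuple[int, int]     # address range of the virtual memory page
    compressed_range: Tuple[int, int]  # range of the compressed physical page
    content_type: str                  # e.g. "zero_page", "texture_page"

@dataclass
class SecondEntry:
    """Entry of the modified index table (after decompression and refill)."""
    index_number: int
    virtual_range: Tuple[int, int]
    restored_range: Tuple[int, int]    # range of the refilled physical page
    content_type: str

e1 = FirstEntry(7, (0x7000, 0x8000), (0x100, 0x2C0), "texture_page")
e2 = SecondEntry(e1.index_number, e1.virtual_range, (0x9000, 0xA000),
                 e1.content_type)
print(e1.index_number == e2.index_number)  # True - same page, updated range
```

Modifying the table after refill then amounts to replacing each first entry with the corresponding second entry, swapping the compressed range for the restored one.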
  • the content type of the physical memory page is a zero page, a texture page, a vertex queue, or a drawing command queue.
  • a fifth aspect provides a terminal device, which can include a memory and one or more processors coupled to the memory.
  • the memory is configured to store one or more programs, the one or more programs comprising computer instructions which, when executed by the one or more processors, cause the terminal device to perform the method in any one of the first aspect, any optional implementation of the first aspect, the second aspect, or any optional implementation of the second aspect.
  • a sixth aspect provides a computer readable storage medium, which can include computer instructions that, when executed on a terminal device, cause the terminal device to perform the method in any one of the first aspect, any optional implementation of the first aspect, the second aspect, or any optional implementation of the second aspect.
  • a seventh aspect provides a computer program product comprising computer instructions which, when the computer program product is run on a terminal device, cause the terminal device to perform the method in any one of the foregoing first aspect, any optional implementation of the first aspect, the second aspect, or any optional implementation of the second aspect.
  • FIG. 1 is a schematic structural diagram of an Android operating system according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of hardware of a mobile phone according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram 1 of a method for compressing a memory occupied by a graphics processing unit (GPU) according to an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram 2 of a method for compressing a memory occupied by a GPU according to an embodiment of the present disclosure
  • FIG. 5 is a third schematic diagram of a method for compressing a memory occupied by a GPU according to an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram 1 of a method for decompressing a memory occupied by a GPU according to an embodiment of the present disclosure
  • FIG. 7 is a second schematic diagram of a method for decompressing a memory occupied by a GPU according to an embodiment of the present disclosure
  • FIG. 8 is a schematic structural diagram of a method for compressing and decompressing a memory occupied by a GPU according to an Android operating system according to an embodiment of the present disclosure
  • FIG. 9 is a schematic structural diagram 1 of a terminal device according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic structural diagram 2 of a terminal device according to an embodiment of the present disclosure.
  • FIG. 11 is a schematic structural diagram 3 of a terminal device according to an embodiment of the present disclosure.
  • FIG. 12 is a schematic structural diagram 4 of a terminal device according to an embodiment of the present disclosure.
  • FIG. 13 is a schematic structural diagram 5 of a terminal device according to an embodiment of the present disclosure.
  • FIG. 14 is a schematic structural diagram 6 of a terminal device according to an embodiment of the present disclosure.
  • FIG. 15 is a schematic structural diagram 7 of a terminal device according to an embodiment of the present disclosure.
  • FIG. 16 is a schematic structural diagram 8 of a terminal device according to an embodiment of the present disclosure.
  • FIG. 17 is a schematic diagram of hardware of a terminal device according to an embodiment of the present disclosure.
  • the terms "first" and "second" in the specification and claims of the present application are used to distinguish different objects, and are not intended to describe a particular order of the objects.
  • for example, the first entry and the second entry are used to distinguish different entries, rather than to describe a specific order of the entries.
  • the words “exemplary” or “such as” are used to mean an example, illustration, or illustration. Any embodiment or design described as “exemplary” or “for example” in the embodiments of the invention should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of the words “exemplary” or “such as” is intended to present the concepts in a particular manner.
  • in the embodiments of the present application, "a plurality" means two or more. For example, a plurality of first entries refers to two or more first entries, and a plurality of second entries refers to two or more second entries.
  • Virtual memory address refers to the logical address used by the processor to access memory, for example, the logical address used by the processor of the terminal device to access memory when the first application is running.
  • Physical memory address refers to the actual address in the memory, also known as the actual memory address or absolute memory address.
  • the physical memory address corresponds to the virtual memory address.
  • when the processor accesses the memory, the physical memory address corresponding to the virtual memory address can be found according to the virtual memory address, and then the physical memory address is accessed (for example, the physical memory address is read or written, etc.).
  • Virtual memory address range refers to the range of several consecutive virtual memory addresses.
  • Physical memory address range refers to the range of several consecutive physical memory addresses.
  • Virtual memory page refers to dividing the virtual memory address range into several equal-sized segments, each of which can be called a virtual memory page. Each virtual memory page can have a page number.
  • Physical memory page refers to dividing the physical memory address range into several equal-sized segments, each of which can be called a physical memory page. Each physical memory page can also have a page number. The size of each physical memory page is the same as the size of each virtual memory page. For example, for a 32-bit processor, each physical memory page and each virtual memory page is 4 kilobytes (KB) in size.
  • Page table refers to a table that stores the correspondence between virtual memory pages and physical memory pages. Each process has its own page table. For example, in the embodiment of the present application, the process of the first application running by the terminal device has its own page table, which is referred to as the first page table in the embodiment of the present application.
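With 4 KB pages, the mapping between an address and its page is simple arithmetic, and a page table can be modeled as a per-process mapping from virtual page number to physical page number. This is a toy single-level model for illustration only; real page tables are multi-level hardware structures.

```python
PAGE_SIZE = 4096  # 4 KB pages, as for the 32-bit processor described above

def page_number(addr: int) -> int:
    """Which page an address falls in."""
    return addr // PAGE_SIZE

def page_offset(addr: int) -> int:
    """Position of the address within its page."""
    return addr % PAGE_SIZE

# Toy "first page table" for the first application's process:
# virtual page number -> physical page number.
first_page_table = {page_number(0x12345): 42}

def translate(vaddr: int) -> int:
    """Translate a virtual memory address to a physical memory address."""
    phys_page = first_page_table[page_number(vaddr)]
    return phys_page * PAGE_SIZE + page_offset(vaddr)

print(hex(translate(0x12345)))  # 0x2a345
```

The compression method walks exactly this structure: each virtual page in the occupied range is looked up in the table, and the physical page it maps to is what gets compressed.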
  • the application can be closed.
  • this way of closing the application causes the mobile phone to close other applications while running an application in the foreground, and those other applications need to be reopened when the user uses them again.
  • the terminal device cannot save the memory occupied by the processor of the terminal device in the case where the application runs in the background.
  • the embodiment of the present application provides a method and a device for compressing and decompressing a memory occupied by a processor.
  • it may be determined that, when the terminal device runs the first application in the foreground, the virtual memory address range occupied by the processor of the terminal device (the virtual memory address range is a range of all or part of the virtual memory addresses occupied by the processor) includes at least one virtual memory page; according to the at least one virtual memory page in the virtual memory address range, each physical memory page corresponding to each virtual memory page is determined in the first page table of the process of the first application; then each physical memory page is compressed using a predefined compression algorithm.
  • in this way, a predefined algorithm may be adopted to compress the physical memory pages occupied by the processor of the terminal device when the terminal device runs the first application in the foreground, so that the memory occupied by the processor is saved when the terminal device runs the application in the background.
  • the memory occupied by the processor of the terminal device refers to the memory occupied by the processor when the terminal device runs the first application.
  • the memory occupied by the processor includes the memory occupied by the processor when the terminal device runs the first application in the foreground, and the memory occupied by the processor when the terminal device runs the first application in the background.
  • the memory occupied by the processor of the terminal device refers to the memory occupied by the processor when the terminal device runs the first application in the foreground;
  • alternatively, the memory occupied by the processor of the terminal device refers to the memory occupied by the processor when the terminal device runs the first application in the background.
  • the method for compressing and decompressing the memory occupied by the processor in the embodiment of the present application may be applied to a terminal device, and may also be applied to a functional module or functional entity in the terminal device capable of implementing the method, for example, the application management system (AMS) of the terminal device, etc., which may be determined according to the actual application scenario and is not specifically limited in this embodiment.
  • the processor in the embodiment of the present application may be a combination of one or more of a GPU, a central processing unit (CPU), a digital signal processor (DSP), an application processor (AP), a general purpose processor, an application-specific integrated circuit (ASIC), and a field programmable gate array (FPGA).
  • the terminal device in the embodiment of the present application may be a terminal device having an operating system.
  • the operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in this embodiment.
  • the software environment of the method for compressing and decompressing the memory occupied by the processor provided by the embodiment of the present application is described below by taking the Android operating system as an example.
  • FIG. 1 is a schematic structural diagram of a possible Android operating system provided by an embodiment of the present application.
  • the architecture of the Android operating system includes four layers: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
• The application layer is a collection of applications in the Android operating system. As shown in Figure 1, the Android operating system provides a number of system applications such as the home screen, settings, contacts, and browser. Developers can also use the application framework layer to develop other applications, such as third-party applications installed and run on the terminal device.
  • the application framework layer is actually the application framework, and developers can develop some applications based on the application framework layer while adhering to the development principles of the application framework.
• Some important components included in the application framework layer are: the activity manager, window manager, content provider, view system, notification manager, package manager, telephony manager, resource manager, location manager, and extensible messaging and presence protocol (XMPP) service.
  • the system runtime layer includes libraries (also known as system libraries) and the Android operating system runtime environment.
• The library mainly includes the interface manager, media framework, data storage, three-dimensional (3D) engine, bitmap and vector library, browser engine, two-dimensional (2D) graphics engine, intermediate protocols, and the Libc function library (a C-language function library).
• The Android runtime environment includes the Android runtime (ART) virtual machine and the core library. The ART virtual machine is used to run the Android operating system based on the core library. In the Android operating system, each application has an ART virtual machine that provides services for it.
• The kernel layer is the operating system layer of the Android operating system and is the lowest level of the Android operating system software stack. Based on the Linux kernel, it provides core system services; in addition, it provides drivers related to the terminal device hardware, such as the camera driver, Bluetooth driver, universal serial bus (USB) driver, keyboard driver, and wireless fidelity (Wi-Fi) driver shown in FIG. 1.
• Developers can develop, based on the system architecture of the Android operating system shown in FIG. 1, a software program implementing the method for compressing and decompressing the memory occupied by the processor provided by the embodiment of the present application, such that the method can run based on the Android operating system shown in FIG. 1. That is, the processor or the terminal device can implement the method for compressing and decompressing the memory occupied by the processor provided by the embodiment of the present application by running the software program in the Android operating system.
  • the terminal device in the embodiment of the present application may include a mobile terminal device and a non-mobile terminal device.
• The mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a smart watch, or a smart wristband;
  • the non-mobile terminal device may be a terminal device such as a personal computer (PC), a television (television, TV), a teller machine, or a kiosk; the embodiment of the present application is not specifically limited.
• Taking the case where the terminal device provided by the embodiment of the present application is a mobile phone as an example, the components of the mobile phone are specifically introduced below in conjunction with FIG. 2.
• The mobile phone provided by the embodiment of the present application may include: a processor 10, a radio frequency (RF) circuit 11, a power source 12, a memory 13, an input module 14, a display module 15, an audio circuit 16, and other components.
• The structure shown in FIG. 2 does not constitute a limitation on the mobile phone: the mobile phone may include more or fewer components than those shown in FIG. 2, may combine some of the components shown in FIG. 2, or may have an arrangement of components different from that shown in FIG. 2.
  • the processor 10 is the control center of the handset, which connects various parts of the entire handset using various interfaces and lines.
• By running or executing software programs and/or modules stored in the memory 13 and invoking data stored in the memory 13, the processor 10 performs various functions of the mobile phone and processes data, thereby monitoring the mobile phone as a whole.
  • the processor 10 may include one or more processing modules.
  • the processor 10 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like;
• The modem processor mainly handles wireless communication and the like. It can be understood that the above-mentioned modem processor may also be a processor that exists separately from the processor 10.
• The RF circuit 11 can be used to receive and transmit signals during the transmission and reception of information or during a call. For example, after downlink information from the base station is received, it is delivered to the processor 10 for processing; in addition, uplink data is transmitted to the base station.
  • RF circuits include, but are not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like.
  • the mobile phone can also communicate wirelessly with other devices in the network through the RF circuit 11.
• Wireless communication can use any communication standard or protocol, including but not limited to global system for mobile communication (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), long term evolution (LTE), e-mail, and short messaging service (SMS).
  • the power source 12 can be used to power various components of the handset, and the power source 12 can be a battery.
• The power supply can be logically coupled to the processor 10 through a power management system, so that functions such as charging, discharging, and power consumption management are managed through the power management system.
  • the memory 13 can be used to store software programs and/or modules, and the processor 10 executes various functional applications and data processing of the mobile phone by running software programs and/or modules stored in the memory 13.
• The memory 13 may mainly include a storage program area and a storage data area. The storage program area may store an operating system, an application required for at least one function (such as a sound playing function or an image playing function), and the like; the storage data area may store data created according to the use of the mobile phone (such as audio data, image data, a phone book, etc.).
• The memory 13 may include a high-speed random access memory, and may also include a non-volatile memory such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • the input module 14 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function controls of the handset.
  • input module 14 may include touch screen 141 and other input devices 142.
• The touch screen 141, also referred to as a touch panel, can collect touch operations by the user on or near it (such as operations performed by the user on or near the touch screen 141 using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program.
  • the touch screen 141 may include two parts of a touch detection device and a touch controller.
• The touch detection device detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 10. It can also receive commands from the processor 10 and execute them.
  • the touch screen 141 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • Other input devices 142 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, power switch buttons, etc.), trackballs, mice, and joysticks.
  • the display module 15 can be used to display information input by the user or information provided to the user as well as various menus of the mobile phone.
  • the display module 15 can include a display panel 151.
  • the display panel 151 can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
• The touch screen 141 may cover the display panel 151. When the touch screen 141 detects a touch operation on or near it, it transmits the operation to the processor 10 to determine the type of the touch event, and the processor 10 then provides a corresponding visual output on the display panel 151 according to the type of the touch event.
• Although in FIG. 2 the touch screen 141 and the display panel 151 are two separate components implementing the input and output functions of the mobile phone, in some embodiments the touch screen 141 can be integrated with the display panel 151 to implement the input and output functions of the mobile phone.
  • the audio circuit 16, the speaker 161 and the microphone 162 are used to provide an audio interface between the user and the handset.
• The audio circuit 16 can transmit an electrical signal converted from received audio data to the speaker 161, which converts it into a sound signal for output.
• The microphone 162 converts a collected sound signal into an electrical signal, which is received by the audio circuit 16 and converted into audio data. The audio data is then output to the RF circuit 11 through the processor 10 for transmission to, for example, another mobile phone, or output to the memory 13 by the processor 10 for further processing.
• The mobile phone shown in FIG. 2 may further include various sensors, such as a gyro sensor, a hygrometer sensor, an infrared sensor, and a magnetometer sensor, which are not described herein.
  • the mobile phone shown in FIG. 2 may further include a Wi-Fi module, a Bluetooth module, and the like, and details are not described herein.
• The following describes the method for compressing and decompressing the memory occupied by the processor provided by the embodiment of the present application as applied to a terminal device whose processor is a GPU.
  • the method for compressing and decompressing the memory occupied by the processor provided by the embodiment of the present application may be a method for compressing and decompressing the memory occupied by the GPU.
• When the terminal device performs the method for compressing and decompressing the memory occupied by the GPU provided by the embodiment of the present application, the CPU of the terminal device may perform the method, for example, by calling the foregoing AMS.
  • the following embodiments of the present application are exemplarily described by taking a terminal device as an example.
• The embodiment of the present application provides a method for compressing the memory occupied by a GPU. The method is applied to the scenario in which a first application running on the terminal device is switched from the foreground to the background, that is, the terminal device switches from running the first application in the foreground to running the first application in the background.
  • the method may include the following S101-S105.
• S101. The terminal device determines a virtual memory address range.
  • the virtual memory address range is a range of all or part of the virtual memory address occupied by the GPU when the terminal device runs the first application in the foreground.
  • the virtual memory address range includes at least one virtual memory page.
  • a GPU is also known as a display core, a visual processor, or a display chip.
• A GPU is a microprocessor dedicated to image computing operations on terminal devices such as those listed in the above-described embodiments of the present application. Specifically, the purpose of the GPU is to convert and drive the display information required by the computer system, and to provide line scan signals to the display to control the display correctly.
  • the GPU is an important part of the terminal device that connects the display and the motherboard, and is also an important component of "human-computer interaction". GPUs are similar to CPUs, but GPUs are designed to perform complex mathematical and geometric operations that are required for graphics rendering and rendering.
• For example, when the CPU wants to draw a two-dimensional graphic on the display, it only needs to send a drawing instruction to the GPU, for example, an instruction indicating "draw a graphic with length a and width b at coordinates (x, y)". The GPU can then quickly calculate all the pixels of the two-dimensional graphic and draw the corresponding graphic at the specified position on the display. After the drawing is completed, the GPU notifies the CPU that "the drawing has been completed" and waits for the CPU to issue the instruction for the next graphic operation.
• When the terminal device runs the first application in the foreground, the terminal device responds to the user's various operations on the first application. For example, after the user performs a click operation on the current interface of the first application, the terminal device switches the current interface to the target interface in response to the click operation; because interfaces need to be redrawn in this way, the GPU of the terminal device usually occupies a relatively large amount of memory. After the terminal device switches the first application to run in the background, that is, when the terminal device runs the first application in the background, the user cannot perform any operation on the first application and the terminal device does not need to respond. If the memory occupied by the GPU is not released, that memory is wasted: other components cannot use it, resulting in lower memory utilization.
  • the embodiment of the present application provides a method for compressing a memory occupied by a GPU.
• When the terminal device executes the method, the terminal device first determines the range of all virtual memory addresses occupied by the GPU when the terminal device runs the first application in the foreground, and then determines the final virtual memory address range based on the compression requirement. Specifically, if all of the memory occupied by the GPU is to be compressed, the final virtual memory address range is the range of all the virtual memory addresses; if only part of the memory occupied by the GPU is to be compressed, the final virtual memory address range is a portion of the range of all the virtual memory addresses, that is, the range of the above partial virtual memory addresses.
• For example, if the range of all virtual memory addresses occupied by the GPU is 000000 to 2FFFFF, the determined virtual memory address range may be all of 000000 to 2FFFFF, or a part of it (for example, 010000 to 03FFFF).
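As an illustration of how such an address range maps to virtual memory pages, the following minimal sketch assumes the 64 KB page size implied by the example address ranges; the helper name is hypothetical:

```python
PAGE_SIZE = 0x10000  # 64 KB, matching address ranges like 000000 to 00FFFF in the examples

def pages_in_range(start, end):
    """Return the virtual memory page numbers covered by the inclusive range [start, end]."""
    return list(range(start // PAGE_SIZE, end // PAGE_SIZE + 1))

# The example range 010000 to 03FFFF covers virtual memory pages 1, 2, and 3.
print(pages_in_range(0x010000, 0x03FFFF))  # [1, 2, 3]
```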
• S102. The terminal device determines, according to the at least one virtual memory page in the virtual memory address range, each physical memory page corresponding to each virtual memory page in the first page table.
  • the first page table is a page table of processes of the first application.
  • the first page table includes the correspondence between multiple virtual memory pages and multiple physical memory pages.
• The at least one virtual memory page in the above S102 is among the plurality of virtual memory pages, and each physical memory page in S102 corresponding to each of the at least one virtual memory page is among the plurality of physical memory pages. That is, the first page table includes a correspondence between at least one virtual memory page and at least one physical memory page (the at least one physical memory page being the physical memory pages corresponding to the at least one virtual memory page).
  • the first page table is exemplarily described below by taking Table 1 as an example.
• In Table 1, the numbers 0, 1, 2, 3, 4, 5, 6, and 7 in the virtual memory page column indicate the page numbers of virtual memory pages, each used to indicate one virtual memory page; for example, 0 indicates virtual memory page 0, and 1 indicates virtual memory page 1. The address range following each number in the virtual memory page column indicates the address range of that virtual memory page; for example, (000000 to 00FFFF) is the address range of virtual memory page 0, and (010000 to 01FFFF) is the address range of virtual memory page 1.
  • the numbers 2, 3, 6, 8, 9, 1, 4, and 5 in the column of the physical memory page indicate the page number of the physical memory page, and the page number is used to indicate the physical memory page, for example, 2 is used to indicate the physical memory page 2, 3 is used to indicate the physical memory page 3, etc.; the address range following each digit in the column of the physical memory page indicates the address range of one physical memory page, for example, (020000 to 02FFFF) indicates the address range of the physical memory page 2, (030000 ⁇ 03FFFF) indicates the address range of the physical memory page 3.
• In Table 1, the page numbers and address ranges of the virtual memory pages are continuous, while the page numbers and address ranges of the physical memory pages are discontinuous. This is because, in practical applications, the virtual memory is memory specifically allocated for GPU access, whereas the physical memory is the actual memory and may be occupied by multiple components in the terminal device, so the physical memory pages occupied by the GPU may be discontinuous.
• In this way, the physical memory page corresponding to a certain virtual memory page may be found according to the first page table shown in Table 1, and that physical memory page may then be accessed.
• Specifically, the terminal device may determine, according to the at least one virtual memory page in the virtual memory address range, each physical memory page corresponding to each virtual memory page in the first page table.
• For example, assume that the virtual memory address range is 010000 to 03FFFF, and that this range includes three virtual memory pages: virtual memory page 1 (address range 010000 to 01FFFF), virtual memory page 2 (address range 020000 to 02FFFF), and virtual memory page 3 (address range 030000 to 03FFFF).
• The terminal device then determines, according to virtual memory page 1, virtual memory page 2, and virtual memory page 3, each physical memory page corresponding to each of the three virtual memory pages in the first page table. For example, in the first page table shown in Table 1, the terminal device determines physical memory page 3 corresponding to virtual memory page 1, physical memory page 6 corresponding to virtual memory page 2, and physical memory page 8 corresponding to virtual memory page 3.
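The lookup in S102 can be sketched as follows; this is a minimal illustration whose page-table contents mirror the Table 1 example, and the function name is hypothetical:

```python
# First page table modeled on the Table 1 example: virtual page number -> physical page number.
first_page_table = {0: 2, 1: 3, 2: 6, 3: 8, 4: 9, 5: 1, 6: 4, 7: 5}

def physical_pages_for(virtual_pages, page_table):
    """Determine the physical memory page corresponding to each virtual memory page."""
    return [page_table[v] for v in virtual_pages]

# Virtual memory pages 1, 2, and 3 map to physical memory pages 3, 6, and 8.
print(physical_pages_for([1, 2, 3], first_page_table))  # [3, 6, 8]
```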
• S103. The terminal device compresses each physical memory page by using a predefined compression algorithm.
  • one or more compression algorithms may be defined in advance.
  • the terminal device may compress the physical memory page to be compressed by using the one or more compression algorithms.
• The terminal device may compress different physical memory pages to be compressed by using the same compression algorithm, or compress different physical memory pages to be compressed by using different compression algorithms. This may be determined according to the actual compression requirement, and is not specifically limited in the embodiment of the present application.
• When different physical memory pages to be compressed are compressed using different compression algorithms, each physical memory page may be compressed using the compression algorithm corresponding to it, so that the compression ratio and compression speed may be improved to some extent.
• For each physical memory page, the terminal device may perform S103a to compress that physical memory page; that is, the foregoing S103 may be implemented by repeatedly performing S103a described below.
• S103a. The terminal device compresses one physical memory page by using the predefined compression algorithm corresponding to that physical memory page.
• Taking the three physical memory pages determined in the above S102 (physical memory page 3, physical memory page 6, and physical memory page 8) as an example, assume that the compression algorithm corresponding to physical memory page 3 is compression algorithm 1, the compression algorithm corresponding to physical memory page 6 is compression algorithm 2, and the compression algorithm corresponding to physical memory page 8 is compression algorithm 3. Then the terminal device can compress physical memory page 3 using compression algorithm 1, compress physical memory page 6 using compression algorithm 2, and compress physical memory page 8 using compression algorithm 3.
• The above description takes the case where physical memory page 3, physical memory page 6, and physical memory page 8 correspond to different compression algorithms as an example. In practice, these physical memory pages may also wholly or partially share the same compression algorithm; for example, physical memory page 3 and physical memory page 6 may correspond to the same compression algorithm while physical memory page 8 corresponds to a different compression algorithm.
• The content types of multiple physical memory pages may be entirely the same, partially the same, or entirely different. Therefore, in the embodiment of the present application, to further improve the compression ratio and compression speed, the terminal device may determine the compression algorithm corresponding to the content type of each physical memory page, and then compress that physical memory page using this compression algorithm.
  • S103a can be specifically implemented by S103a1-S103a3 described below.
• S103a1. The terminal device acquires the content type of one physical memory page.
• S103a2. The terminal device determines, according to the content type of the one physical memory page, the predefined compression algorithm corresponding to that content type.
• S103a3. The terminal device compresses the one physical memory page by using the predefined compression algorithm corresponding to the content type of the one physical memory page.
• The content type of a physical memory page may be a zero page, a texture page, a vertex queue, or a drawing command queue.
  • the foregoing compression algorithm may be an AFBC algorithm or an LZ4 compression algorithm.
• For the zero page: since a zero page is a physical memory page that contains no content, it need not be compressed; during the compression process, only the page number of the zero page may be recorded in the compressed index table. Since a texture page usually contains content in picture format, and the AFBC algorithm is mainly suitable for compressing content in picture format, compressing texture pages with the AFBC algorithm can achieve the best compression ratio and compression speed. Since the vertex queue and the drawing command queue are usually in data or text format, and the LZ4 compression algorithm is mainly suitable for compressing data or text content, compressing the vertex queue and the drawing command queue with the LZ4 compression algorithm can achieve the best compression ratio and compression speed.
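The content-type-driven selection of S103a1-S103a3 can be sketched as follows. Neither AFBC nor LZ4 is available in the Python standard library, so zlib stands in for both compressors here; only the selection logic and the zero-page handling mirror the text, and all names are hypothetical:

```python
import zlib

COMPRESSORS = {
    "texture page":          lambda data: zlib.compress(data, 9),  # stand-in for the AFBC algorithm
    "vertex queue":          lambda data: zlib.compress(data, 1),  # stand-in for the LZ4 algorithm
    "drawing command queue": lambda data: zlib.compress(data, 1),  # stand-in for the LZ4 algorithm
}

def compress_page(page_number, content_type, data, index_table):
    if content_type == "zero page":
        # A zero page contains no content: record only its page number, store nothing.
        index_table.append({"page": page_number, "type": content_type, "data": None})
    else:
        compressed = COMPRESSORS[content_type](data)
        index_table.append({"page": page_number, "type": content_type, "data": compressed})

index_table = []
compress_page(3, "texture page", b"\x42" * 4096, index_table)
compress_page(6, "zero page", b"\x00" * 4096, index_table)
print([entry["page"] for entry in index_table])  # [3, 6]
```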
  • the method for the terminal device to obtain the content type of the physical memory page may be: the terminal device acquires the content type of the physical memory page by reading the bit pattern of the physical memory page.
• Each physical memory page has a bit pattern that can be used to indicate which type of content the physical memory page contains. For example, the bit pattern can be represented by 2 bits, and the content type of each physical memory page can be represented by a different value of those 2 bits.
  • An exemplary description will be made below by taking Table 2 as an example.
• Each bit pattern may also indicate a content type of the physical memory page different from that shown in Table 2; for example, a texture page can be represented by 00, a zero page by 01, a vertex queue by 10, and a drawing command queue by 11. The mapping can be determined according to actual conditions and is not enumerated here.
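That alternative 2-bit encoding can be sketched directly; the mapping below is one of the illustrative possibilities named above, not a fixed definition:

```python
# One possible 2-bit encoding of content types, as suggested in the text.
BIT_PATTERNS = {
    0b00: "texture page",
    0b01: "zero page",
    0b10: "vertex queue",
    0b11: "drawing command queue",
}

def content_type_of(bit_pattern):
    """Read a physical memory page's 2-bit pattern and return its content type."""
    return BIT_PATTERNS[bit_pattern & 0b11]

print(content_type_of(0b01))  # zero page
```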
• S104. The terminal device creates a compressed index table.
  • the compressed index table may include multiple first entries. Each first entry includes an index number, an address range of the virtual memory page, an address range of the compressed physical memory page, and a content type of the compressed physical memory page.
• The index number may be used to indicate the page number of the physical memory page corresponding to the virtual memory page, and the compressed physical memory page is the result of compressing the physical memory page corresponding to the virtual memory page.
• The compressed index table created by the terminal device corresponds to an application, for example, the first application. In the terminal device, running the first application means running a process of the first application. Therefore, in the embodiment of the present application, after the terminal device creates the compressed index table, the terminal device may also save the correspondence between the compressed index table and the process of the first application; for example, compressed index table 1 corresponds to the process number of the process of the first application, and compressed index table 2 corresponds to the process number of the process of the second application.
• The compressed index table provided in the embodiment of the present application is exemplarily described below by taking Table 3 as an example.
• The index number 3 in the above Table 3 is used to indicate the page number of the physical memory page corresponding to the virtual memory page whose address range is 010000 to 01FFFF after the terminal device performs the compression process, that is, index number 3 is used to indicate physical memory page 3.
  • the meanings of the index numbers 6 and 8 in the above-mentioned Table 3 are similar to those of the index number 3. For details, refer to the related description of the index number 3, and details are not described herein again.
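The per-process compressed index table of S104 can be sketched as follows; all concrete values, including the process number and the compressed address ranges, are hypothetical placeholders in the spirit of Table 3:

```python
def make_first_entry(index_number, virtual_range, compressed_range, content_type):
    """Build one first entry of the compressed index table."""
    return {"index": index_number, "virtual_range": virtual_range,
            "compressed_range": compressed_range, "content_type": content_type}

# Compressed index tables are saved per application, keyed here by process number.
compressed_index_tables = {
    1234: [  # hypothetical process number of the first application's process
        make_first_entry(3, (0x010000, 0x01FFFF), (0x030000, 0x0300FF), "texture page"),
        make_first_entry(6, (0x020000, 0x02FFFF), (0x060000, 0x0601FF), "vertex queue"),
        make_first_entry(8, (0x030000, 0x03FFFF), (0x080000, 0x0802FF), "zero page"),
    ],
}
print([entry["index"] for entry in compressed_index_tables[1234]])  # [3, 6, 8]
```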
• S105. The terminal device separately sets a flag bit for each physical memory page in the first page table.
  • the flag bit can be used to indicate that the physical memory page set with the flag bit has been compressed. In this way, when the terminal device reads the flag bit, it can be known that the physical memory page set with the flag bit has been compressed, thereby improving the efficiency and accuracy of identifying the physical memory page that has been compressed.
  • the flag bit may be a combination of one or more of an address, a number, a letter, a sequence, and a character, and may be specifically set according to an actual use requirement, which is not specifically limited in this embodiment.
• For example, the flag bit may be set to an invalid physical address (i.e., a physical address that does not exist). In that case, when the terminal device reads the flag bit, a page fault interrupt is generated because the physical address does not exist, and the decompression process of the physical memory page in which the flag bit is set is automatically triggered.
• For example, the terminal device can set flag bit 1 for each of the three physical memory pages to indicate that the corresponding physical memory page has been compressed. For an uncompressed physical memory page, the flag bit may be left unset, or set to 0 to indicate that the corresponding physical memory page is not compressed.
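S105's flag bit, and the fault-driven trigger of decompression described above, can be sketched as follows; here `None` plays the role of an invalid physical address, and all names are hypothetical:

```python
COMPRESSED_FLAG = None  # stand-in for an invalid physical address used as the flag

# Page table after S102: virtual page -> physical page; then flag the compressed pages.
page_table = {1: 3, 2: 6, 3: 8}
for virtual_page in (1, 2, 3):
    page_table[virtual_page] = COMPRESSED_FLAG  # set the flag bit for each compressed page

def access(page_table, virtual_page):
    """Reading a flagged entry 'faults' and would trigger decompression of that page."""
    physical = page_table[virtual_page]
    if physical is COMPRESSED_FLAG:
        return "page fault: trigger decompression"
    return physical

print(access(page_table, 2))  # page fault: trigger decompression
```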
• The embodiment of the present application does not limit the execution order of S104 and S105. That is, S104 may be executed first and then S105, S105 may be executed first and then S104, or S104 and S105 may be executed simultaneously.
• FIG. 3 above is illustrated by taking the case where S105 is performed after S104 as an example.
• According to the method for compressing the memory occupied by the GPU provided by the embodiment of the present application, when the first application running on the terminal device is switched from the foreground to the background, the terminal device may use a predefined algorithm to compress the physical memory pages that the GPU occupied while the terminal device ran the first application in the foreground, so that the memory occupied by the GPU of the terminal device can be saved while the application runs in the background.
• After the terminal device compresses the physical memory pages occupied by the GPU, if the first application running on the terminal device is switched back from the background to the foreground (that is, the terminal device switches from running the first application in the background to running the first application in the foreground), the terminal device can decompress the compressed physical memory pages to restore the memory occupied by the GPU.
• The terminal device may perform decompression before the first application switches from the background to the foreground (corresponding to decompression process 1 described below); or, after the first application switches from the background to the foreground, the terminal device performs decompression when displaying the interface of the first application (corresponding to decompression process 2 described below).
  • the two decompression processes are exemplarily described below.
  • the method for compressing the memory occupied by the GPU provided by the embodiment of the present application may further include the following S106-S110.
• S106. The terminal device re-allocates at least one physical memory page according to the compressed index table.
  • the compressed index table is created when the terminal device compresses the physical memory page.
• The terminal device may re-allocate at least one physical memory page according to the compressed index table created during the compression process, where the at least one physical memory page is used for storing the decompressed content of the compressed physical memory pages.
  • the physical memory page may be used by other components. Therefore, during the decompression process, the terminal device may reallocate the physical memory page for storing the compressed physical memory page. content.
  • For example, according to the compressed index table as shown in Table 3, the terminal device can determine that three physical memory pages have been compressed, namely physical memory page 3, physical memory page 6, and physical memory page 8. The terminal device can then re-allocate three physical memory pages for storing the decompressed contents of these three compressed physical memory pages. For example, the terminal device can re-allocate physical memory page 4, physical memory page 5, and physical memory page 9 for storing the decompressed contents of physical memory page 3, physical memory page 6, and physical memory page 8, respectively.
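The re-allocation step described above can be modeled with a small sketch. This is a Python model of the logic only, not the driver implementation; the entry fields, content-type names, and the allocator are illustrative assumptions, and the page numbers follow the Table 3 example:

```python
class IndexEntry:
    """Model of one first entry in the compressed index table."""
    def __init__(self, index_no, virt_range, content_type):
        self.index_no = index_no          # page number of the compressed physical memory page
        self.virt_range = virt_range      # address range of the corresponding virtual memory page
        self.content_type = content_type  # assumed labels, e.g. "texture", "vertex", "zero"

# Table 3 example: physical memory pages 3, 6 and 8 were compressed.
index_table = [
    IndexEntry(3, (0x010000, 0x01FFFF), "texture"),
    IndexEntry(6, (0x020000, 0x02FFFF), "vertex"),
    IndexEntry(8, (0x030000, 0x03FFFF), "vertex"),
]

def reallocate_pages(table, allocate):
    """S106: re-allocate one fresh physical page per compressed page.

    The original pages (3, 6, 8) may meanwhile have been used by other
    components, so the decompressed contents go into newly allocated pages."""
    return {entry.index_no: allocate() for entry in table}

# Hypothetical allocator handing out the free pages 4, 5 and 9 in turn.
free_pages = iter([4, 5, 9])
new_pages = reallocate_pages(index_table, lambda: next(free_pages))
# new_pages maps each compressed page to its replacement: {3: 4, 6: 5, 8: 9}
```

The mapping produced here (compressed page to re-allocated page) is what the later steps consume when filling pages and restoring the page table.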
  • S107. The terminal device decompresses each compressed physical memory page by using a predefined decompression algorithm.
  • One or more decompression algorithms may be defined in advance, and the terminal device may use the one or more decompression algorithms to decompress the compressed physical memory pages. The terminal device may decompress the different compressed physical memory pages by using the same decompression algorithm, or by using different decompression algorithms; this can be determined according to the actual requirement and is not specifically limited in this embodiment. When the terminal device decompresses the different compressed physical memory pages by using different decompression algorithms, each physical memory page can be decompressed by the decompression algorithm corresponding to it, which can improve the decompression ratio and the decompression speed.
  • Optionally, the terminal device may perform the following S107a for each compressed physical memory page; that is, the foregoing S107 may be implemented by repeatedly performing the following S107a.
  • S107a. The terminal device decompresses one compressed physical memory page by using the predefined decompression algorithm corresponding to that physical memory page.
  • Taking the three compressed physical memory pages determined in the above S106 (physical memory page 3, physical memory page 6, and physical memory page 8) as an example, assume that the decompression algorithm corresponding to the compressed physical memory page 3 is decompression algorithm 1, the decompression algorithm corresponding to the compressed physical memory page 6 is decompression algorithm 2, and the decompression algorithm corresponding to the compressed physical memory page 8 is decompression algorithm 3. The terminal device can then decompress the compressed physical memory page 3 by decompression algorithm 1, the compressed physical memory page 6 by decompression algorithm 2, and the compressed physical memory page 8 by decompression algorithm 3.
  • Of course, some of the compressed physical memory pages may also correspond to the same decompression algorithm. For example, the compressed physical memory page 3 and the compressed physical memory page 6 may correspond to the same decompression algorithm, while the compressed physical memory page 8 corresponds to a different decompression algorithm.
  • the content types of the compressed physical memory pages may all be the same, or may be partially the same or all different.
  • The terminal device may determine, according to the content type of each compressed physical memory page, the decompression algorithm corresponding to that content type, and then decompress the corresponding compressed physical memory page by using that algorithm. In this way, it is guaranteed that corresponding compression and decompression algorithms of the same type are used to compress and decompress the same physical memory page.
  • S107a can be specifically implemented by S107a1-S107a3 described below.
  • S107a1. The terminal device acquires the content type of the one physical memory page.
  • S107a2. The terminal device determines, according to the content type of the one physical memory page, the predefined decompression algorithm corresponding to that content type.
  • S107a3. The terminal device decompresses the one physical memory page by using the predefined decompression algorithm corresponding to its content type.
  • For example, the decompression algorithm may be an ARM frame buffer compression (AFBC) algorithm or an LZ4 decompression algorithm.
  • For a zero page, the number of zero pages may be directly determined according to the page numbers of the zero pages recorded in the compressed index table, and the same number of physical memory pages are then re-allocated and cleared. Since a texture page usually holds content in an image format, and the AFBC algorithm is mainly suitable for decompressing image-format content, decompressing texture pages with the AFBC algorithm can achieve the optimal decompression ratio and decompression speed. Similarly, since the vertex queue and the drawing command queue are usually in a data or text format, and the LZ4 algorithm is mainly used to decompress data or text content, decompressing the vertex queue and the drawing command queue with the LZ4 algorithm can achieve the optimal decompression ratio and decompression speed.
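The per-page algorithm selection in S107a1-S107a3 can be sketched as a dispatch on content type. The Python below is a model only: `afbc_decompress` and `lz4_decompress` are stand-ins for the real AFBC and LZ4 decoders, and the content-type labels and the 4 KB page size are assumptions:

```python
PAGE_SIZE = 4096  # assumed size of one physical memory page

def afbc_decompress(data):
    # Stand-in for the ARM Frame Buffer Compression decoder (image-format pages).
    return data.ljust(PAGE_SIZE, b"\x00")

def lz4_decompress(data):
    # Stand-in for the LZ4 decoder (data/text-format pages such as
    # vertex queues and drawing command queues).
    return data.ljust(PAGE_SIZE, b"\x00")

def decompress_page(content_type, compressed_data):
    """S107a1-S107a3: pick the decompressor matching the page's content type."""
    if content_type == "zero":
        # Zero pages store no compressed data: the compressed index table only
        # records their page numbers, so a cleared page is sufficient.
        return b"\x00" * PAGE_SIZE
    if content_type == "texture":
        return afbc_decompress(compressed_data)  # image content -> AFBC
    return lz4_decompress(compressed_data)       # data/text content -> LZ4

page = decompress_page("zero", b"")
```

Dispatching this way keeps the compression-time and decompression-time algorithm choices paired, which is the matching guarantee the text above describes.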
  • S108. The terminal device fills the decompressed content of each physical memory page into the at least one re-allocated physical memory page.
  • After the terminal device decompresses the compressed physical memory pages, the decompressed content of each physical memory page is correspondingly filled into the re-allocated physical memory pages to complete the recovery of the compressed physical memory pages. For example, the terminal device may fill the decompressed contents of physical memory page 3, physical memory page 6, and physical memory page 8 into the re-allocated physical memory page 4, physical memory page 5, and physical memory page 9, respectively.
  • S109. The terminal device modifies the compressed index table.
  • The modified compressed index table includes multiple second entries. Each second entry includes an index number, the address range of a virtual memory page, the address range of the restored physical memory page, and the content type of the restored physical memory page. The restored physical memory page is the physical memory page, among the at least one re-allocated physical memory page, that is filled with the decompressed content, where the decompressed content is the content obtained by decompressing a compressed physical memory page.
  • The modified compressed index table provided by the embodiment of the present application is exemplified in Table 4 below.
  • The index number 4 in the above Table 4 indicates the page number of the physical memory page that corresponds, after the decompression process is executed, to the virtual memory page whose address range is 010000 to 01FFFF; that is, the index number 4 indicates physical memory page 4, meaning that physical memory page 4 is the restored physical memory page.
  • the meanings of the index numbers 5 and 9 in the above Table 4 are similar to those of the index number 4. For details, refer to the related description of the index number 4, and details are not described herein again.
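The index-table modification described above (replacing each first entry with a second entry that points at the restored physical memory page) can be modeled with a small Python sketch. The field names and the physical address range of page 4 are illustrative assumptions; the virtual range follows the Table 4 example, where re-allocated page 4 restores compressed page 3:

```python
def modify_index_table(entries, new_pages, page_ranges):
    """S109: rewrite each first entry as a second entry whose index number
    and physical address range point at the restored (re-allocated) page."""
    modified = []
    for entry in entries:
        restored = new_pages[entry["index_no"]]  # e.g. page 3 -> page 4
        modified.append({
            "index_no": restored,
            "virt_range": entry["virt_range"],    # virtual range is unchanged
            "phys_range": page_ranges[restored],  # address range of the restored page
            "content_type": entry["content_type"],
        })
    return modified

# One-row example: compressed page 3 was restored into re-allocated page 4.
entries = [{"index_no": 3, "virt_range": (0x010000, 0x01FFFF), "content_type": "texture"}]
new_pages = {3: 4}
page_ranges = {4: (0x040000, 0x04FFFF)}  # hypothetical address range of page 4
table4 = modify_index_table(entries, new_pages, page_ranges)
```

After this step the table answers "which physical page now holds this virtual range", which is exactly what the page-table restore below needs.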
  • When the decompression process is performed, the terminal device re-allocates physical memory pages for the contents of the compressed physical memory pages, which may cause the correspondence between virtual memory pages and physical memory pages to change. Therefore, after the decompression process is performed, the terminal device can also modify the first page table to restore it, thereby ensuring the accuracy of the first page table.
  • S110. For each physical memory page for which the flag bit is set in the first page table, the terminal device performs the method shown in S110a below to restore the first page table.
  • S110a. The terminal device uses the address range of one re-allocated physical memory page to overwrite the address range of one physical memory page for which the flag bit is set in the first page table.
  • For example, the terminal device uses the address range of the re-allocated physical memory page 4 to overwrite the address range of physical memory page 3, for which the flag bit 1 is set in the first page table as shown in Table 1; uses the address range of the re-allocated physical memory page 5 to overwrite the address range of physical memory page 6, for which the flag bit 1 is set in the first page table; and uses the address range of the re-allocated physical memory page 9 to overwrite the address range of physical memory page 8, for which the flag bit 1 is set in the first page table. At this point, physical memory page 3, physical memory page 6, and physical memory page 8 may be idle or may already have been used by other components.
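The page-table restore step described above can be sketched as follows. The Python below is a simplified model, not an actual MMU page-table format; the entry fields and address ranges are illustrative, following the Table 1 example where flagged page 3 is replaced by re-allocated page 4:

```python
INVALID_FLAG = 1  # flag bit set at compression time to mark the entry invalid

def restore_page_table(page_table, new_pages, page_ranges):
    """S110a: for every flagged entry, overwrite the old physical address
    range with the range of the re-allocated page and clear the flag."""
    for entry in page_table:
        if entry["flag"] == INVALID_FLAG:
            restored = new_pages[entry["phys_page"]]  # e.g. page 3 -> page 4
            entry["phys_page"] = restored
            entry["phys_range"] = page_ranges[restored]
            entry["flag"] = 0                          # the entry is valid again

# One-entry example: virtual range 010000-01FFFF mapped to flagged page 3.
page_table = [{"virt_range": (0x010000, 0x01FFFF), "phys_page": 3,
               "phys_range": (0x030000, 0x03FFFF), "flag": INVALID_FLAG}]
restore_page_table(page_table, {3: 4}, {4: (0x040000, 0x04FFFF)})
```

Clearing the flag is what makes later GPU accesses translate normally again instead of faulting.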
  • the method for compressing the memory occupied by the GPU may further include the following S111-S117.
  • S111. The terminal device detects a page fault interrupt.
  • the page fault interrupt is an interrupt generated when the terminal device detects that the flag bit set for the compressed physical memory page is an invalid physical address in the first page table.
  • Since the flag bit that the terminal device sets in the first page table for a compressed physical memory page is an invalid physical address (that is, the physical address does not exist), when the terminal device reads that flag bit in the first page table, a page fault interrupt is generated because the physical address does not exist, which automatically triggers the decompression of the physical memory page for which the flag bit is set. That is, after the terminal device detects the page fault interrupt, the terminal device may automatically execute S112 described below to start decompressing the compressed physical memory pages.
  • S112. In response to the page fault interrupt, the terminal device invokes the page fault interrupt processing function to obtain, according to the process ID of the process of the first application, the compressed index table corresponding to that process ID.
  • Because the terminal device saves the correspondence between the compressed index table and the process ID of the process of the first application after the compression is performed, the terminal device can obtain, according to the process ID, the compressed index table corresponding to that process ID, that is, the compressed index table of the memory occupied by the GPU when the terminal device ran the first application in the foreground.
  • Optionally, the page fault interrupt processing function may include a page fault interrupt function (for example, kbase_gmc_handle_gpu_page_fault), a page fault processing function (for example, page_fault_worker), and a memory management unit (MMU) interrupt function.
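The fault-driven trigger described above can be modeled as follows: a GPU access that hits a flagged (invalid) page-table entry raises a page fault, and the fault handler looks up the compressed index table by process ID and decompresses before retrying the access. The Python below is an illustrative model only; the structures and names are assumptions, and the real flow goes through the driver handlers named above:

```python
class PageFault(Exception):
    """Raised when an access hits a page-table entry flagged as invalid."""

def translate(page_table, virt_page):
    entry = page_table[virt_page]
    if entry["flag"]:              # invalid physical address -> page fault
        raise PageFault(virt_page)
    return entry["phys_page"]

def access(page_table, index_tables_by_pid, pid, virt_page, decompress):
    """S111-S112: a faulting access invokes the handler, which fetches the
    compressed index table for the faulting process and decompresses."""
    try:
        return translate(page_table, virt_page)
    except PageFault:
        index_table = index_tables_by_pid[pid]  # lookup by process ID
        decompress(page_table, index_table)     # restores the entry, clears the flag
        return translate(page_table, virt_page)  # the retried access now succeeds

# Hypothetical single-entry example: virtual page 0 was compressed (flag set).
page_table = {0: {"phys_page": 3, "flag": 1}}
index_tables_by_pid = {42: [{"compressed_page": 3, "restored_page": 4}]}

def fake_decompress(pt, it):
    # Stand-in for S113-S117: re-allocate, decompress, fill, restore.
    for e in it:
        pt[0] = {"phys_page": e["restored_page"], "flag": 0}

phys = access(page_table, index_tables_by_pid, 42, 0, fake_decompress)
# phys == 4: the retried translation resolves to the re-allocated page
```

The point of the model is that no explicit "decompress now" call is needed: the invalid flag turns the first access itself into the trigger.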
  • S113. The terminal device re-allocates at least one physical memory page according to the compressed index table.
  • S114. The terminal device decompresses each compressed physical memory page by using a predefined decompression algorithm.
  • S115. The terminal device fills the decompressed content of each physical memory page into the at least one re-allocated physical memory page.
  • S116. The terminal device modifies the compressed index table.
  • S117. For each physical memory page for which the flag bit is set in the first page table, the terminal device performs the method shown in S117a below to restore the first page table.
  • S117a. The terminal device uses the address range of one re-allocated physical memory page to overwrite the address range of one physical memory page for which the flag bit is set in the first page table.
  • The method for decompressing the memory occupied by the GPU provided by the embodiment of the present application can restore the memory occupied by the GPU by decompressing the compressed physical memory pages when the first application run by the terminal device switches from the background to the foreground.
  • The embodiment of the present application further provides a method for decompressing the memory occupied by the GPU, which is applied to the scenario in which the first application run by the terminal device switches from the background to the foreground (that is, the terminal device switches from running the first application in the background to running it in the foreground) and the memory occupied by the GPU has been compressed while the first application ran in the background.
  • Optionally, the terminal device may perform the decompression before the first application switches from the background to the foreground (corresponding to the decompression process shown in FIG. 6 described below), or after the first application switches from the background to the foreground, when the terminal device displays the interface of the first application (corresponding to the decompression process shown in FIG. 7 described below).
  • the two decompression processes are exemplarily described below.
  • the method for decompressing the memory occupied by the GPU may include the following S201-S206.
  • S201. The terminal device acquires the compressed index table.
  • The compressed index table is created when the terminal device compresses each physical memory page to be compressed. When compressing, the terminal device may create the compressed index table to record the information about the physical memory pages that have been compressed, so that when the compressed physical memory pages need to be decompressed, the terminal device can decompress each compressed physical memory page according to the compressed index table.
  • S202. The terminal device re-allocates at least one physical memory page according to the compressed index table.
  • S203. The terminal device decompresses each compressed physical memory page by using a predefined decompression algorithm.
  • S204. The terminal device fills the decompressed content of each physical memory page into the at least one re-allocated physical memory page.
  • S205. The terminal device modifies the compressed index table.
  • S206. For each physical memory page for which the flag bit is set in the first page table, the terminal device performs the method shown in S206a below to restore the first page table.
  • In the first page table, a flag bit is set for each compressed physical memory page; this flag bit is used to indicate that the physical memory page for which it is set has been compressed. The first page table is the page table of the process of the first application run by the terminal device.
  • S206a. The terminal device uses the address range of one re-allocated physical memory page to overwrite the address range of one physical memory page for which the flag bit is set in the first page table.
  • the method for decompressing the memory occupied by the GPU may include the following S301-S307.
  • S301. The terminal device detects a page fault interrupt.
  • the page fault interrupt is an interrupt generated when the terminal device detects that the flag bit set for the compressed physical memory page is an invalid physical address in the first page table.
  • S302. In response to the page fault interrupt, the terminal device invokes the page fault interrupt processing function to obtain the compressed index table according to the process ID of the process of the first application run by the terminal device.
  • S303. The terminal device re-allocates at least one physical memory page according to the compressed index table.
  • S304. The terminal device decompresses each compressed physical memory page by using a predefined decompression algorithm.
  • S305. The terminal device fills the decompressed content of each physical memory page into the at least one re-allocated physical memory page.
  • S306. The terminal device modifies the compressed index table.
  • S307. For each physical memory page for which the flag bit is set in the first page table, the terminal device performs the method shown in S307a below to restore the first page table.
  • S307a. The terminal device uses the address range of one re-allocated physical memory page to overwrite the address range of one physical memory page for which the flag bit is set in the first page table.
  • The method for decompressing the memory occupied by the GPU provided by the embodiment of the present application can restore the memory occupied by the GPU by decompressing the compressed physical memory pages when the first application run by the terminal device switches from the background to the foreground.
  • the specific implementation process of the method for compressing and decompressing the memory occupied by the GPU provided by the embodiment of the present application is exemplarily described from the perspective of the Android operating system. As shown in FIG. 8 , the method for compressing and decompressing the memory occupied by the GPU provided by the embodiment of the present application is based on the architecture of the Android operating system.
  • the method for compressing the memory occupied by the GPU provided by the embodiment of the present application may be implemented by using a function module, and the function module may be recorded as a gmc_compress module.
  • the method for decompressing the memory occupied by the GPU provided by the embodiment of the present application (that is, the decompression process 1 or the decompression process shown in FIG. 6) may be implemented by another function module, and the function module may be recorded as a gmc_decompress_write module.
  • The method for decompressing the memory occupied by the GPU provided by the embodiment of the present application (that is, the decompression process 2 or the decompression process shown in FIG. 7) may be implemented by using yet another function module, and this function module may be recorded as a gmc_decompress module.
  • The gmc_compress module, the gmc_decompress_write module, and the gmc_decompress module can all be implemented by a developer by programming based on the Android operating system shown in FIG. 1. The names of these modules are not limited in the embodiments of the present application; that is, all functional modules that can implement the methods for compressing and decompressing the memory occupied by the GPU provided by the embodiments of the present application are within the protection scope of the present application.
  • As shown in FIG. 8, the architecture diagram includes four modules: an application framework layer, a GPU driver layer, a memory, and a system on chip (SoC).
  • The application framework layer includes a scheduling module (iAware), an AMS, the process of the first application run by the terminal device (denoted as process 1 in FIG. 8), a fast visible attribute, and a window management system (WMS).
  • The GPU driver layer includes the Zpool GPU memory manager, the page table of process 1 before compression (including the correspondence between virtual memory pages and the physical memory pages occupied by the GPU before compression), the physical memory pages occupied by the GPU before compression, the physical memory pages occupied by the GPU after compression, the compression module (that is, the gmc_compress module described above), the decompression module 1 (that is, the gmc_decompress_write module described above), the decompression module 2 (that is, the gmc_decompress module described above), the page fault interrupt handler, and the page table of process 1 after compression (including the correspondence between virtual memory pages and the physical memory pages occupied by the GPU after compression).
  • The memory includes the memory occupied by the CPU, the memory occupied by the GPU, and the physical page table of the GPU.
  • The system on chip includes the CPU and the GPU.
  • Each module in the application framework layer shown in FIG. 8 may be implemented in the application framework layer of the Android operating system shown in FIG. 1; each module in the GPU driver layer may be implemented in the kernel layer of the Android operating system shown in FIG. 1. The memory and the SoC are hardware components on which the Android operating system shown in FIG. 1 runs.
  • When the AMS determines, through the fast visible attribute, that the first application run by the terminal device has switched from the foreground to the background, the AMS can invoke the compression module (that is, the gmc_compress module described above) to perform the method for compressing the memory occupied by the GPU provided by the embodiment of the present application. Specifically, the AMS may invoke the compression module to determine the virtual memory pages occupied by the GPU while the terminal device ran the first application in the foreground, determine the physical memory pages corresponding to those virtual memory pages in the page table of process 1 before compression, compress those physical memory pages, and set a flag bit for each compressed physical memory page in the page table of process 1 before compression.
  • When the AMS determines that the first application run by the terminal device is to switch from the background to the foreground, before the first application switches, the AMS may invoke the decompression module 1 (that is, the gmc_decompress_write module described above) to perform the method for decompressing the memory occupied by the GPU provided by the embodiment of the present application (that is, the decompression process 1 described above or the decompression process shown in FIG. 6). Specifically, the AMS may invoke the decompression module 1 to decompress the compressed physical memory pages and restore the page table of process 1 after compression.
  • Alternatively, when the AMS determines that the first application run by the terminal device has switched from the background to the foreground, after the first application switches and before the interface of the first application is displayed, the AMS performs the method for decompressing the memory occupied by the GPU provided by the embodiment of the present application (that is, the decompression process 2 described above or the decompression process shown in FIG. 7). Specifically, the AMS may call the page fault interrupt processing function in response to the page fault interrupt, and then invoke the decompression module 2 (that is, the gmc_decompress module described above) to decompress the compressed physical memory pages and restore the page table of process 1 after compression.
  • For the specific implementation of the decompression process 1 and the decompression process 2 shown in FIG. 8, reference may be made to the description of the methods for compressing and decompressing the memory occupied by the GPU provided by the embodiments of the present application; details are not described herein again.
  • the terminal device and the like provided by the embodiments of the present application include corresponding hardware structures and/or software modules for performing the respective functions.
  • the modules and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in a combination of hardware or hardware and computer software. Whether a function is implemented in hardware or computer software to drive hardware depends on the specific application and design constraints of the solution. A person skilled in the art can implement the described functions using different methods for each particular application, but such implementation should not be considered to be beyond the scope of the present application.
  • The embodiment of the present application can divide the terminal device or the like into function modules according to the above method; for example, each function module may be divided corresponding to each function, or two or more functions may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of the module in the embodiment of the present application is schematic, and is only a logical function division, and the actual implementation may have another division manner.
  • FIG. 9 is a schematic diagram showing a possible structure of the terminal device provided by the embodiment of the present application.
  • the terminal device may include: a determining module 20 and a decompression module 21.
  • the determining module 20 may be configured to support the terminal device to perform S101 and S102 performed by the terminal device in the foregoing method embodiment;
  • The decompression module 21 may be configured to support the terminal device in performing S103 (including S103a or S103a1-S103a3), S107 (including S107a or S107a1-S107a3), and S114 performed by the terminal device in the foregoing method embodiment.
  • the terminal device provided by the embodiment of the present application may further include a creating module 22 and a setting module 23.
  • the creating module 22 may be configured to support the terminal device to perform S104, S109, and S116 performed by the terminal device in the foregoing method embodiment.
  • the setting module 23 may be configured to support the terminal device to perform S105 performed by the terminal device in the foregoing method embodiment.
  • the terminal device provided by the embodiment of the present application may further include an allocation module 24 and a filling module 25.
  • The allocating module 24 can be used to support the terminal device in performing S106 and S113 performed by the terminal device in the foregoing method embodiment; the filling module 25 can be used to support the terminal device in performing S108, S110 (including S110a), S115, and S117 (including S117a) performed by the terminal device in the foregoing method embodiment.
  • the terminal device provided by the embodiment of the present application may further include a detecting module 26 and an obtaining module 27.
  • the detecting module 26 can be used to support the terminal device to perform S111 performed by the terminal device in the foregoing method embodiment; the obtaining module 27 can be used to support the terminal device to perform S112 performed by the terminal device in the foregoing method embodiment. All the related content of the steps involved in the foregoing method embodiments may be referred to the functional descriptions of the corresponding functional modules, and details are not described herein again.
  • FIG. 13 is a schematic diagram showing another possible structure of the terminal device provided by the embodiment of the present application.
  • the terminal device may include: an obtaining module 30, an allocating module 31, a decompressing module 32, and a filling module 33.
  • The obtaining module 30 may be configured to support the terminal device in performing S201 and S302 performed by the terminal device in the foregoing method embodiment; the allocating module 31 may be configured to support the terminal device in performing S202 and S303; the decompressing module 32 can be used to support the terminal device in performing S203 and S304; and the filling module 33 can be used to support the terminal device in performing S204, S206 (including S206a), S305, and S307 (including S307a).
  • the terminal device provided by the embodiment of the present application may further include a creating module 34.
  • the creating module 34 can be used to support the terminal device to perform S205 and S306 performed by the terminal device in the above method embodiment.
  • the terminal device provided by the embodiment of the present application may further include a detecting module 35.
  • the detecting module 35 can be used to support the terminal device to perform S301 performed by the terminal device in the foregoing method embodiment. All the related content of the steps involved in the foregoing method embodiments may be referred to the functional descriptions of the corresponding functional modules, and details are not described herein again.
  • FIG. 16 is a schematic diagram showing a possible structure of a terminal device provided by an embodiment of the present application.
  • the terminal device may include: a processing module 40, a communication module 41, and a storage module 42.
  • the processing module 40 can be used to control and manage the actions of the terminal device.
  • the processing module 40 can be used to support the terminal device to perform all the steps performed by the terminal device in the foregoing method embodiment, and/or the techniques described herein. Other processes.
  • the communication module 41 can be used to support communication between the terminal device and other devices.
  • the communication module 41 can be used to support interaction between the terminal device and other terminal devices.
  • the storage module 42 can be used to store program code and data of the terminal device.
  • The processing module 40 may be a processor or a controller, for example, a GPU, a CPU, a DSP, an AP, a general purpose processor, an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, which may implement or carry out the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure. The processor may also be a combination implementing computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
  • the communication module 41 can be a transceiver, a transceiver circuit, a communication interface, or the like.
  • the storage module 42 can be a memory.
  • the processing module 40 can be the processor 10 shown in FIG. 2 above.
  • the communication module 41 may be a communication interface such as the RF circuit 11 and/or the input module 14 as shown in FIG. 2 described above.
  • the storage module 42 may be the memory 13 as shown in FIG. 2 described above.
  • When the processing module 40 is a processor, the communication module 41 is a transceiver, and the storage module 42 is a memory, the terminal device may be the terminal device shown in FIG. 17, which is a hardware schematic diagram of a terminal device provided by the embodiment of the present application.
  • the terminal device includes a processor 50, a memory 51, and a transceiver 52.
  • the processor 50, the memory 51 and the transceiver 52 can be connected to one another via a bus 53.
  • the bus 53 may be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus or the like.
  • the bus 53 can be divided into an address bus, a data bus, a control bus, and the like.
  • All or part of the above embodiments may be implemented by software, hardware, firmware, or any combination thereof. When implemented by a software program, the embodiments may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • when the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions can be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another; for example, the computer instructions can be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, over coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (for example, infrared, radio, or microwave).
  • the computer readable storage medium can be any available medium accessible by a computer, or a data storage device such as a server or data center that integrates one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a magnetic disk, or a magnetic tape), an optical medium (for example, a digital video disc (DVD)), or a semiconductor medium (for example, a solid state drive (SSD)).
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the modules or units is only a logical function division.
  • in actual implementation there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network elements; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiments.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the software functional unit is stored in a computer readable storage medium and includes a number of instructions that cause a computer device (which may be a personal computer, a server, or a network device) or a processor to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • the foregoing storage medium includes any medium that can store program code, such as a flash memory, a removable hard disk, a read only memory, a random access memory, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

一种压缩和解压处理器(10)所占内存的方法及装置,涉及终端技术领域,能够使得终端设备在后台运行应用的情况下,节省终端设备的处理器(10)所占内存。该压缩处理器(10)所占内存的方法应用于终端设备运行的第一应用从前台切换到后台的场景,包括:确定虚拟内存地址范围(S101),该虚拟内存地址范围为终端设备在前台运行第一应用时终端设备的处理器(10)所占的全部或者部分虚拟内存地址的范围,该虚拟内存地址范围包括至少一个虚拟内存页;并根据该至少一个虚拟内存页,在第一页表中确定与每个虚拟内存页对应的每个物理内存页(S102),该第一页表为第一应用的进程的页表;以及采用预定义的压缩算法,对每个物理内存页进行压缩(S103)。

Description

一种压缩和解压处理器所占内存的方法及装置 技术领域
本申请涉及终端技术领域,尤其涉及一种压缩和解压处理器所占内存的方法及装置。
背景技术
随着终端技术的不断发展,能够运行应用的终端设备越来越多。以手机为例,手机可以在前台运行应用,也可以在后台运行应用。
通常,手机在前台运行应用时,由于手机会响应用户对应用的各种操作,因此手机的处理器所占内存通常比较大;而手机在后台运行应用时,虽然用户不会对该应用进行任何操作,但是手机的处理器所占内存仍然比较大。当手机不在前台运行应用时,可以采用将应用关闭的方式节省手机的处理器所占内存。
然而,上述将应用关闭的方式会导致手机在前台运行一个应用时,会关闭其他的应用,而在用户再次使用该其他应用时需要重新打开该其他应用。
发明内容
本申请提供一种压缩和解压处理器所占内存的方法及装置,能够使得终端设备在后台运行应用的情况下,节省终端设备的处理器所占内存。
为达到上述目的,本申请采用如下技术方案:
第一方面,提供一种压缩处理器所占内存的方法,该方法可以应用于终端设备运行的第一应用从前台切换到后台的场景,通过确定终端设备在前台运行第一应用时终端设备的处理器所占的虚拟内存地址范围(为该处理器所占的全部或者部分虚拟内存地址的范围)包括的至少一个虚拟内存页;并根据该至少一个虚拟内存页,在第一应用的进程的页表(即第一页表)中确定与每个虚拟内存页对应的每个物理内存页;然后再采用预定义的压缩算法,对该每个物理内存页进行压缩。如此,由于在终端设备运行的第一应用从前台切换到后台时,终端设备可以采用预定义的算法,对终端设备在前台运行第一应用时终端设备的处理器所占物理内存页进行压缩,因此能够使得终端设备在后台运行应用的情况下,节省终端设备的处理器所占内存。
例如,本申请中的压缩算法可以为ARM帧缓冲压缩(ARM frame buffer compress,AFBC)算法或者LZ4压缩算法等。
在第一方面的第一种可选的实现方式中,上述采用预定义的压缩算法,对每个物理内存页进行压缩之后,还可以创建压缩索引表。如此,可以记录对物理内存页的压缩情况。
在第一方面的第二种可选的实现方式中,上述采用预定义的压缩算法,对每个物理内存页进行压缩之后,还可以在第一页表中为每个物理内存页分别设置标志位(用于指示设置有该标志位的物理内存页已被压缩)。如此,当终端设备读取到该标志位时,就可以获知设置有该标志位的物理内存页已被压缩,从而能够提高识别已被压缩 的物理内存页的效率和准确性。
在第一方面的第三种可选的实现方式中,上述采用预定义的压缩算法,对每个物理内存页进行压缩的方法可以包括:对于每个物理内存页,均采用预定义的与一个物理内存页对应的压缩算法,对该一个物理内存页进行压缩,以对每个物理内存页进行压缩。
在第一方面的第四种可选的实现方式中,上述采用预定义的与一个物理内存页对应的压缩算法,对该一个物理内存页进行压缩的方法可以包括:获取该一个物理内存页的内容类型;并根据该一个物理内存页的内容类型,确定预定义的与该一个物理内存页的内容类型对应的压缩算法;然后再采用预定义的与该一个物理内存页的内容类型对应的压缩算法,对该一个物理内存页进行压缩。如此,由于可以采用与每个物理内存页对应的压缩算法对每个物理内存页进行压缩,因此能够达到最佳的压缩比和压缩速度。
在第一方面的第五种可选的实现方式中,该方法可以应用于终端设备运行的第一应用从后台切换到前台的场景,上述创建压缩索引表之后,还可以根据该压缩索引表,重新分配至少一个物理内存页;并采用预定义的解压算法,对压缩后的每个物理内存页进行解压;然后再将解压后的每个物理内存页的内容填充到至少一个物理内存页中。如此,在终端设备对处理器所占物理内存页进行压缩之后,如果终端设备运行的第一应用从后台切回到前台(即终端设备在后台运行第一应用切换到终端设备在前台运行第一应用),那么终端设备还可以对压缩后的物理内存页进行解压,以恢复处理器所占内存。
在第一方面的第六种可选的实现方式中,上述将解压后的每个物理内存页的内容填充到至少一个物理内存页中之后,还可以修改压缩索引表。如此,可以保证压缩索引表的准确性。
在第一方面的第七种可选的实现方式中,上述将解压后的每个物理内存页的内容填充到至少一个物理内存页中之后,还可以对于第一页表中设置有标志位的每个物理内存页,均采用重新分配的一个物理内存页的地址范围,覆盖该第一页表中设置有该标志位的一个物理内存页的地址范围,以恢复该第一页表。如此,可以保证第一页表的准确性。
在第一方面的第八种可选的实现方式中,上述根据压缩索引表,重新分配至少一个物理内存页之前,还可以检测到缺页中断(该缺页中断为当在第一页表中检测到为压缩后的物理内存页设置的标志位为无效的物理地址时产生的中断);并调用缺页中断处理函数响应于该缺页中断,根据第一应用的进程的进程号,获取压缩索引表。如此,当检测到缺页中断时,可以自动触发对设置有标志位的物理内存页(即已被压缩的物理内存页)的解压流程。
第二方面,提供一种解压处理器所占内存的方法,该方法可以应用于终端设备运行的第一应用从后台切换到前台的场景,通过获取压缩待压缩的每个物理内存页时创建的压缩索引表;并根据该压缩索引表,重新分配至少一个物理内存页;以及采用预定义的解压算法,对压缩后的每个物理内存页进行解压;然后再将解压后的每个物理内存页的内容填充到至少一个物理内存页中。如此,在对处理器所占物理内存页进行 压缩之后,如果终端设备运行的第一应用从后台切回到前台(即终端设备在后台运行第一应用切换到终端设备在前台运行第一应用),那么还可以对压缩后的物理内存页进行解压,以恢复处理器所占内存。
在第二方面的第一种可选的实现方式中,上述将解压后的每个物理内存页的内容填充到至少一个物理内存页中之后,还可以修改压缩索引表。如此,可以保证压缩索引表的准确性。
在第二方面的第二种可选的实现方式中,第一应用的进程的页表(即第一页表)中压缩后的每个物理内存页均设置有标志位(用于指示设置有该标志位的物理内存页已被压缩)。上述将解压后的每个物理内存页的内容填充到至少一个物理内存页中之后,还可以对于该第一页表中设置有该标志位的每个物理内存页,均采用重新分配的一个物理内存页的地址范围,覆盖该第一页表中设置有该标志位的一个物理内存页的地址范围,以恢复该第一页表。如此,可以保证第一页表的准确性。
在第二方面的第三种可选的实现方式中,上述获取压缩索引表之前,还可以检测到缺页中断(为当在第一页表中检测到为压缩后的物理内存页设置的标志位为无效的物理地址时产生的中断)。上述获取压缩索引表的方法可以包括:调用缺页中断处理函数响应于该缺页中断,根据终端设备运行的第一应用的进程的进程号,获取该压缩索引表。如此,当检测到缺页中断时,可以自动触发对设置有标志位的物理内存页(即已被压缩的物理内存页)的解压流程。
在第一方面和第二方面中,上述采用预定义的解压算法,对压缩后的每个物理内存页进行解压的方法可以包括:对于压缩后的每个物理内存页,均采用预定义的与压缩后的一个物理内存页对应的解压算法,对该一个物理内存页进行解压,以对压缩后的每个物理内存页进行解压。
在第一方面和第二方面中,上述采用预定义的与压缩后的一个物理内存页对应的解压算法,对该一个物理内存页进行解压的方法可以包括:获取该一个物理内存页的内容类型;并根据该一个物理内存页的内容类型,确定预定义的与该一个物理内存页的内容类型对应的解压算法;然后再采用预定义的与该一个物理内存页的内容类型对应的解压算法,对该一个物理内存页进行解压。如此,由于可以采用与压缩后的每个物理内存页对应的解压算法对每个物理内存页进行解压,因此能够达到最佳的解压比和解压速度。
第三方面,提供一种终端设备,该终端设备运行的第一应用从前台切换到后台,该终端设备可以包括确定模块和解压缩模块。确定模块用于确定终端设备在前台运行第一应用时终端设备的处理器所占的虚拟内存地址范围(为处理器所占的全部或者部分虚拟内存地址的范围)包括的至少一个虚拟内存页;并根据该至少一个虚拟内存页,在第一应用的进程的页表(即第一页表)中确定与每个虚拟内存页对应的每个物理内存页;解压缩模块用于采用预定义的压缩算法,对确定模块确定的每个物理内存页进行压缩。
在第三方面的第一种可选的实现方式中,上述终端设备还可以包括创建模块。创建模块用于在解压缩模块采用预定义的压缩算法,对每个物理内存页进行压缩之后,创建压缩索引表。
在第三方面的第二种可选的实现方式中,上述终端设备还可以包括设置模块。设置模块用于在解压缩模块采用预定义的压缩算法,对每个物理内存页进行压缩之后,在第一页表中为每个物理内存页分别设置标志位(用于指示设置有该标志位的物理内存页已被压缩)。
在第三方面的第三种可选的实现方式中,上述解压缩模块具体用于对于每个物理内存页,均采用预定义的与一个物理内存页对应的压缩算法,对该一个物理内存页进行压缩,以对每个物理内存页进行压缩。
在第三方面的第四种可选的实现方式中,上述解压缩模块具体用于获取一个物理内存页的内容类型;并根据该一个物理内存页的内容类型,确定预定义的与该一个物理内存页的内容类型对应的压缩算法;然后再采用预定义的与该一个物理内存页的内容类型对应的压缩算法,对该一个物理内存页进行压缩。
在第三方面的第五种可选的实现方式中,该终端设备运行的第一应用从后台切换到前台,该终端设备还可以包括分配模块和填充模块。分配模块用于在创建模块创建压缩索引表之后,根据该压缩索引表,重新分配至少一个物理内存页;解压缩模块还用于采用预定义的解压算法,对压缩后的每个物理内存页进行解压;填充模块用于将解压缩模块解压后的每个物理内存页的内容填充到分配模块重新分配的至少一个物理内存页中。
在第三方面的第六种可选的实现方式中,上述创建模块还用于在填充模块将解压后的每个物理内存页的内容填充到至少一个物理内存页中之后,修改压缩索引表。
在第三方面的第七种可选的实现方式中,上述填充模块还用于在将解压后的每个物理内存页的内容填充到至少一个物理内存页中之后,对于第一页表中设置有标志位的每个物理内存页,均采用重新分配的一个物理内存页的地址范围,覆盖该第一页表中设置有该标志位的一个物理内存页的地址范围,以恢复该第一页表。
在第三方面的第八种可选的实现方式中,上述终端设备还可以包括检测模块和获取模块。检测模块用于在分配模块根据压缩索引表,重新分配至少一个物理内存页之前,检测到缺页中断(该缺页中断为当在第一页表中检测到为压缩后的物理内存页设置的标志位为无效的物理地址时产生的中断);获取模块用于调用缺页中断处理函数响应于检测模块检测到的缺页中断,根据第一应用的进程的进程号,获取该压缩索引表。
对于第三方面及其任意一种可选的实现方式的技术效果的描述具体可以参见上述对第一方面及其任意一种可选的实现方式的技术效果的相关描述,此处不再赘述。
第四方面,提供一种终端设备,该终端设备运行的第一应用从后台切换到前台,该终端设备可以包括获取模块、分配模块、解压缩模块和填充模块。获取模块用于获取压缩待压缩的每个物理内存页时创建的压缩索引表;分配模块用于根据获取模块获取的该压缩索引表,重新分配至少一个物理内存页;解压缩模块用于采用预定义的解压算法,对压缩后的每个物理内存页进行解压;填充模块用于将解压缩模块解压后的每个物理内存页的内容填充到分配模块重新分配的至少一个物理内存页中。
在第四方面的第一种可选的实现方式中,上述终端设备还可以包括创建模块。创建模块用于在填充模块将解压后的每个物理内存页的内容填充到至少一个物理内存页 中之后,修改获取模块获取的压缩索引表。
在第四方面的第二种可选的实现方式中,第一应用的进程的页表(即第一页表)中压缩后的每个物理内存页均设置有标志位(用于指示设置有该标志位的物理内存页已被压缩);填充模块还用于在将解压后的每个物理内存页的内容填充到至少一个物理内存页中之后,对于该第一页表中设置有该标志位的每个物理内存页,均采用重新分配的一个物理内存页的地址范围,覆盖该第一页表中设置有该标志位的一个物理内存页的地址范围,以恢复该第一页表。
在第四方面的第三种可选的实现方式中,上述终端设备还可以包括检测模块。检测模块用于在获取模块获取压缩索引表之前,检测到缺页中断(为当在第一页表中检测到为压缩后的物理内存页设置的标志位为无效的物理地址时产生的中断);获取模块具体用于调用缺页中断处理函数响应于检测模块检测到的该缺页中断,根据终端设备运行的第一应用的进程的进程号,获取该压缩索引表。
在第三方面和第四方面中,上述解压缩模块具体用于对于压缩后的每个物理内存页,均采用预定义的与压缩后的一个物理内存页对应的解压算法,对该一个物理内存页进行解压,以对压缩后的每个物理内存页进行解压。
在第三方面和第四方面中,上述解压缩模块具体用于获取一个物理内存页的内容类型;并根据该一个物理内存页的内容类型,确定预定义的与该一个物理内存页的内容类型对应的解压算法;然后再采用预定义的与该一个物理内存页的内容类型对应的解压算法,对该一个物理内存页进行解压。
对于第四方面及其任意一种可选的实现方式的技术效果的描述具体可以参见上述对第二方面及其任意一种可选的实现方式的技术效果的相关描述,此处不再赘述。
在第一方面至第四方面中,上述压缩索引表包括多个第一表项。每个第一表项包括索引号(用于指示与虚拟内存页对应的物理内存页的页号)、虚拟内存页的地址范围、压缩后的物理内存页(为对与虚拟内存页对应的物理内存页压缩后的物理内存页)的地址范围以及压缩后的物理内存页的内容类型。
在第一方面至第四方面中,上述修改后的压缩索引表包括多个第二表项。每个第二表项包括索引号、虚拟内存页的地址范围、恢复后的物理内存页(为至少一个物理内存页中填充有解压内容的物理内存页,该解压内容为对压缩后的物理内存页解压后的内容)的地址范围以及恢复后的物理内存页的内容类型。
在第一方面至第四方面中,上述物理内存页的内容类型为零页、纹理页、顶点队列或者绘图命令队列。
第五方面,提供一种终端设备,该终端设备可以包括存储器以及与该存储器耦合的一个或多个处理器。该存储器用于存储一个或多个程序,该一个或多个程序包括计算机指令,当该一个或多个处理器执行该计算机指令时,使得该终端设备执行上述第一方面、第一方面的任意一种可选的实现方式、第二方面或者第二方面的任意一种可选的实现方式中的方法。
第六方面,提供一种计算机可读存储介质,该计算机可读存储介质可以包括计算机指令,当该计算机指令在终端设备上运行时,使得该终端设备执行上述第一方面、第一方面的任意一种可选的实现方式、第二方面或者第二方面的任意一种可选的实现 方式中的方法。
第七方面,提供一种包括计算机指令的计算机程序产品,当该计算机程序产品在终端设备上运行时,使得该终端设备执行上述第一方面、第一方面的任意一种可选的实现方式、第二方面或者第二方面的任意一种可选的实现方式中的方法。
对于第五方面、第六方面以及第七方面的技术效果的描述具体可以参见上述对第一方面、第一方面的任意一种可选的实现方式、第二方面或者第二方面的任意一种可选的实现方式的技术效果的相关描述,此处不再赘述。
附图说明
图1为本申请实施例提供的安卓操作系统的架构示意图;
图2为本申请实施例提供的手机的硬件示意图;
图3为本申请实施例提供的压缩图形处理器(graphics processing unit,GPU)所占内存的方法示意图一;
图4为本申请实施例提供的压缩GPU所占内存的方法示意图二;
图5为本申请实施例提供的压缩GPU所占内存的方法示意图三;
图6为本申请实施例提供的解压GPU所占内存的方法示意图一;
图7为本申请实施例提供的解压GPU所占内存的方法示意图二;
图8为本申请实施例提供的压缩和解压GPU所占内存的方法基于安卓操作系统实现的架构示意图;
图9为本申请实施例提供的终端设备的结构示意图一;
图10为本申请实施例提供的终端设备的结构示意图二;
图11为本申请实施例提供的终端设备的结构示意图三;
图12为本申请实施例提供的终端设备的结构示意图四;
图13为本申请实施例提供的终端设备的结构示意图五;
图14为本申请实施例提供的终端设备的结构示意图六;
图15为本申请实施例提供的终端设备的结构示意图七;
图16为本申请实施例提供的终端设备的结构示意图八;
图17为本申请实施例提供的终端设备的硬件示意图。
具体实施方式
本文中术语“和/或”,是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。本文中符号“/”表示关联对象是或者的关系,例如A/B表示A或者B。
本申请的说明书和权利要求书中的术语“第一”和“第二”等是用于区别不同的对象,而不是用于描述对象的特定顺序。例如,第一表项和第二表项等是用于区别不同的表项,而不是用于描述表项的特定顺序。
在本发明实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本发明实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
在本发明实施例的描述中,除非另有说明,“多个”的含义是指两个或者两个以 上,例如,多个第一表项是指两个或者两个以上的第一表项,多个第二表项是指两个或者两个以上的第二表项等。
下面首先对本申请中涉及到的一些术语和名词做一下解释说明。
虚拟内存地址:是指处理器访问内存时使用的逻辑地址。例如,终端设备在运行第一应用的情况下,终端设备的处理器访问内存时使用的逻辑地址。
物理内存地址:是指内存中存放数据的实际地址,也称为实际内存地址或者绝对内存地址。物理内存地址与虚拟内存地址一一对应。处理器在访问内存时,可以根据虚拟内存地址在内存中找到与该虚拟内存地址对应的物理内存地址,然后再对该物理内存地址进行访问(例如对该物理内存地址进行读/写的操作等)。
虚拟内存地址范围:是指若干个连续的虚拟内存地址组成的范围。
物理内存地址范围:是指若干个连续的物理内存地址组成的范围。
虚拟内存页:是指将虚拟内存地址范围划分为若干个大小相等的片段,每个片段可以称为一个虚拟内存页。每个虚拟内存页可以有一个页号。
物理内存页：是指将物理内存地址范围划分为若干个大小相等的片段，每个片段可以称为一个物理内存页。每个物理内存页也可以有一个页号。每个物理内存页的大小与每个虚拟内存页的大小均相同。例如，对于32位的处理器来说，每个物理内存页和每个虚拟内存页的大小均是4千字节（KB）。
页表:是指存放虚拟内存页和物理内存页的对应关系的表。每个进程都有一个自己的页表。例如,本申请实施例中,终端设备运行的第一应用的进程有一个自己的页表,该页表在本申请实施例中称为第一页表。
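上述虚拟内存页、物理内存页与页表的对应关系可以用如下示意代码说明（仅为便于理解的草图，页大小按上文所述32位处理器的4KB计算，页表内容为假设数据，并非任何实际实现）：

```python
PAGE_SIZE = 4 * 1024  # 4KB，对应上文32位处理器的页大小

# 假设的页表：虚拟内存页页号 -> 物理内存页页号
page_table = {0: 2, 1: 3, 2: 6, 3: 8}

def virt_to_phys(vaddr):
    """根据页表把虚拟内存地址转换为物理内存地址。"""
    vpage, offset = divmod(vaddr, PAGE_SIZE)  # 拆分为页号和页内偏移
    ppage = page_table[vpage]                 # 查页表得到对应的物理内存页
    return ppage * PAGE_SIZE + offset         # 物理页基址加页内偏移

# 虚拟页1内的地址被映射到物理页3内的相同偏移处
assert virt_to_phys(1 * PAGE_SIZE + 0x10) == 3 * PAGE_SIZE + 0x10
```

处理器访问内存时即按此方式先查页表，再访问对应的物理内存地址。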
目前,为了节省手机的处理器所占内存,当手机不在前台运行某个应用时,可以将该应用关闭。然而,这种将应用关闭的方式会导致手机在前台运行一个应用时,会关闭其他的应用,而在用户再次使用该其他应用时需要重新打开该其他应用。如此,无法使得终端设备在后台运行应用的情况下,节省终端设备的处理器所占内存。
为了解决该问题,本申请实施例提供一种压缩和解压处理器所占内存的方法及装置,在终端设备运行的第一应用从前台切换到后台的场景中,可以确定终端设备在前台运行第一应用时终端设备的处理器所占的虚拟内存地址范围(该虚拟内存地址范围为该处理器所占的全部或者部分虚拟内存地址的范围),并根据该虚拟内存地址范围中的至少一个虚拟内存页,在该第一应用的进程的第一页表中确定与每个虚拟内存页对应的每个物理内存页,然后再采用预定义的压缩算法,对每个物理内存页进行压缩。如此,由于在终端设备运行的第一应用从前台切换到后台时,可以采用预定义的算法,对终端设备在前台运行第一应用时终端设备的处理器所占物理内存页进行压缩,因此能够使得终端设备在后台运行应用的情况下,节省终端设备的处理器所占内存。
可以理解,本申请实施例中,终端设备的处理器所占内存是指终端设备运行第一应用时该处理器所占内存。终端设备运行第一应用时该处理器所占内存包括终端设备在前台运行第一应用时该处理器所占内存和终端设备在后台运行第一应用时该处理器所占内存。具体的,在本申请实施例提供的压缩处理器所占内存的方法中,终端设备的处理器所占内存是指终端设备在前台运行第一应用时该处理器所占内存;在本申请实施例提供的解压处理器所占内存的方法中,终端设备的处理器所占内存是指终端设 备在后台运行第一应用时该处理器所占内存。
本申请实施例提供的压缩和解压处理器所占内存的方法可以应用于终端设备,也可以应用于终端设备中能够实现该方法的功能模块或者功能实体,例如可以应用于终端设备中的应用程序管理系统(application management system,AMS)等,具体可以根据实际应用场景确定,本申请实施例不作具体限定。
可选的,本申请实施例中的处理器可以为GPU、中央处理器(central processing unit,CPU)、数字信号处理器(digital signal processing,DSP)、应用处理器(application processor,AP)、通用处理器、专用集成电路(application-specific integrated circuit,ASIC)以及现场可编程门阵列(field programmable gate array,FPGA)等中的一种或者多种的组合。
本申请实施例中的终端设备可以为具有操作系统的终端设备。该操作系统可以为安卓(Android)操作系统,可以为ios操作系统,还可以为其他可能的操作系统,本申请实施例不作具体限定。
下面以安卓操作系统为例,介绍一下本申请实施例提供的压缩和解压处理器所占内存的方法应用的软件环境。
如图1所示,为本申请实施例提供的一种可能的安卓操作系统的架构示意图。在图1中,安卓操作系统的架构包括4层,分别为:应用层、应用框架层、系统运行库层和内核层(具体可以为Linux内核层)。
其中,应用层是安卓操作系统中应用的集合。如图1所示,安卓操作系统提供了主屏幕、设置、联系人以及浏览器等众多的系统应用,同时应用的开发者还可以使用应用框架层开发其他的应用,例如可以在终端设备上安装和运行的第三方应用。
应用框架层实际上是应用的框架,开发人员可以在遵守应用的框架的开发原则的情况下,基于应用框架层开发一些应用。如图1所示,应用框架层包括的一些重要组件有:活动管理器、窗口管理器、内存提供器、视图系统、通知管理器、包管理器、电话管理器、资源管理器、本地管理,以及可扩展通讯和表示协议(extensible messaging and presence protocol,XMPP)服务等。
系统运行库层包括库(也称为系统库)和安卓操作系统运行环境。如图1所示,库主要包括接口管理器、媒体框架、数据存储、三维(three-dimensional,3D)引擎、位图及矢量、浏览器引擎、二维(two dimensional,2D)图形引擎、中间协议以及Libc函数库(C语言的一种函数库)。安卓操作系统运行环境包括安卓运行环境(Android runtime,ART)虚拟机和核心库,ART虚拟机用于基于核心库运行安卓操作系统中的应用,在安卓操作系统中,每个应用都有一个ART虚拟机为其提供服务。
内核层是安卓操作系统的操作系统层,属于安卓操作系统软件层次的最底层。其基于Linux内核提供核心系统服务。除了提供这些核心系统服务外,还提供与终端设备硬件相关的驱动程序,例如如图1所示的摄像头驱动、蓝牙驱动、通用串行总线(universal serial bus,USB)驱动、键盘驱动以及无线保真(wireless-fidelity,Wi-Fi)驱动等。
以安卓操作系统为例,本申请实施例中,开发人员可以基于上述如图1所示的安卓操作系统的系统架构,开发实现本申请实施例提供的压缩和解压处理器所占内存的 方法的软件程序,从而使得该压缩和解压处理器所占内存的方法可以基于如图1所示的安卓操作系统运行。即处理器或者终端设备可以通过在安卓操作系统中运行该软件程序实现本申请实施例提供的压缩和解压处理器所占内存的方法。
本申请实施例中的终端设备可以包括移动终端设备和非移动终端设备。移动终端设备可以为手机、平板电脑、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)、智能手表或者智能手环等终端设备;非移动终端设备可以为个人计算机(personal computer,PC)、电视机(television,TV)、柜员机或者自助机等终端设备;本申请实施例不作具体限定。
下面再以本申请实施例提供的终端设备是手机为例,结合图2对手机的各个构成部件做具体介绍。
示例性的,如图2所示,本申请实施例提供的手机可以包括:处理器10、射频(radio frequency,RF)电路11、电源12、存储器13、输入模块14、显示模块15以及音频电路16等部件。本领域技术人员可以理解,图2中示出的手机的结构并不构成对手机的限定,其可以包括比如图2所示的部件更多或更少的部件,或者可以组合如图2所示的部件中的某些部件,或者可以与如图2所示的部件布置不同。
处理器10是手机的控制中心,利用各种接口和线路连接整个手机的各个部分。通过运行或执行存储在存储器13内的软件程序和/或模块,以及调用存储在存储器13内的数据,执行手机的各种功能和处理数据,从而对手机进行整体监控。可选的,处理器10可包括一个或多个处理模块,例如,处理器10可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用等;调制解调处理器主要处理无线通信等。可以理解的是,上述调制解调处理器也可以为与处理器10单独存在的处理器。
RF电路11可用于在收发信息或通话过程中,接收和发送信号。例如,将基站的下行信息接收后,给处理器10处理;另外,将上行的数据发送给基站。通常,RF电路包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器(low noise amplifier,LNA)以及双工器等。此外,手机还可以通过RF电路11与网络中的其他设备实现无线通信。无线通信可以使用任一通信标准或协议,包括但不限于全球移动通讯系统(global system of mobile communication,GSM)、通用分组无线服务(general packet radio service,GPRS)、码分多址(code division multiple access,CDMA)、宽带码分多址(wideband code division multiple access,WCDMA)、长期演进(long term evolution,LTE)、电子邮件以及短消息服务(short messaging service,SMS)等。
电源12可用于给手机的各个部件供电,电源12可以为电池。可选的,电源可以通过电源管理系统与处理器10逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
存储器13可用于存储软件程序和/或模块,处理器10通过运行存储在存储器13的软件程序和/或模块,从而执行手机的各种功能应用以及数据处理。存储器13可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据手机 的使用所创建的数据(比如音频数据、图像数据、电话本等)等。此外,存储器13可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件或其他易失性固态存储器件。
输入模块14可用于接收输入的数字或字符信息,以及产生与手机的用户设置以及功能控制有关的键信号输入。具体地,输入模块14可包括触摸屏141以及其他输入设备142。触摸屏141,也称为触摸面板,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触摸屏141上或在触摸屏141附近的操作),并根据预先设定的程式驱动相应的连接装置。可选的,触摸屏141可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器10,并能接收处理器10发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触摸屏141。其他输入设备142可以包括但不限于物理键盘、功能键(比如音量控制按键、电源开关按键等)、轨迹球、鼠标以及操作杆等中的一种或多种。
显示模块15可用于显示由用户输入的信息或提供给用户的信息以及手机的各种菜单。显示模块15可包括显示面板151。可选的,可以采用液晶显示器(liquid crystal display,LCD)、有机发光二极管(organic light-emitting diode,OLED)等形式来配置显示面板151。进一步的,触摸屏141可覆盖显示面板151,当触摸屏141检测到在其上或附近的触摸操作后,传送给处理器10以确定触摸事件的类型,随后处理器10根据触摸事件的类型在显示面板151上提供相应的视觉输出。虽然在图2中,触摸屏141与显示面板151是作为两个独立的部件来实现手机的输入和输出功能,但是在某些实施例中,可以将触摸屏141与显示面板151集成而实现手机的输入和输出功能。
音频电路16、扬声器161和麦克风162,用于提供用户与手机之间的音频接口。一方面,音频电路16可将接收到的音频数据转换后的电信号,传输到扬声器161,由扬声器161转换为声音信号输出。另一方面,麦克风162将收集的声音信号转换为电信号,由音频电路16接收后转换为音频数据,再将音频数据通过处理器10输出至RF电路11以发送给比如另一手机,或者将音频数据通过处理器10输出至存储器13以便进一步处理。
可选的,如图2所示的手机还可以包括各种传感器。例如陀螺仪传感器、湿度计传感器、红外线传感器、磁力计传感器等,在此不再赘述。
可选的,如图2所示的手机还可以包括Wi-Fi模块、蓝牙模块等,在此不再赘述。
下面以本申请实施例提供的压缩和解压处理器所占内存的方法应用于终端设备,且该处理器是GPU为例,介绍本申请实施例提供的压缩和解压处理器所占内存的方法。具体的,当本申请实施例提供的处理器是GPU时,本申请实施例提供的压缩和解压处理器所占内存的方法可以为压缩和解压GPU所占内存的方法。
可以理解,终端设备执行本申请实施例提供的压缩和解压GPU所占内存的方法,可以是终端设备的CPU执行该方法,具体可以是终端设备的CPU通过调用上述AMS执行该方法。本申请的下述实施例均以终端设备执行该方法为例进行示例性的说明。
如图3所示,本申请实施例提供一种压缩GPU所占内存的方法,该方法应用于终 端设备运行的第一应用从前台切换到后台(即终端设备在前台运行第一应用切换到终端设备在后台运行第一应用)的场景,该方法可以包括下述的S101-S105。
S101、终端设备确定虚拟内存地址范围。
其中,该虚拟内存地址范围为终端设备在前台运行第一应用时GPU所占的全部或者部分虚拟内存地址的范围。该虚拟内存地址范围包括至少一个虚拟内存页。
对虚拟内存地址范围和虚拟内存页的描述具体可以参见上述术语和名词解释中对虚拟内存地址范围和虚拟内存页的解释说明,此处不再赘述。
GPU又称显示核心、视觉处理器或者显示芯片。GPU是一种专门在终端设备(例如本申请的上述实施例中列举的几种终端设备)上进行图像运算工作的微处理器。具体的,GPU的用途是将计算机系统所需要的显示信息进行转换驱动,并向显示器提供行扫描信号,控制显示器的正确显示。GPU是终端设备中连接显示器和主板的重要部件,也是“人机交互”的重要部件之一。GPU与CPU类似,但是GPU是专门为执行复杂的数学和几何运算而设计的,这些运算是图形绘制和渲染所必需的。示例性的,如果CPU要在显示器上绘制一个二维图形,CPU只需要发送绘制指令给GPU,例如该绘制指令指示“在坐标为(x,y)处绘制一个长和宽为a×b大小的长方形”,GPU就可以根据该绘制指令,迅速计算出该二维图形的所有像素,并在显示器上指定位置绘制出相应的图形,绘制完成后GPU通知CPU“已绘制完成”,然后GPU等待CPU发出下一条图形操作的指令。
本申请实施例中,当终端设备在前台运行第一应用时,由于终端设备会响应用户对第一应用的各种操作,例如,用户在第一应用的当前界面上进行点击操作后,终端设备响应于该点击操作会切换当前界面到目标界面,因此终端设备的GPU由于要重新绘制界面,所以该GPU所占的内存通常比较大。而当终端设备将第一应用切换到后台运行,即终端设备在后台运行第一应用时,由于用户无法对第一应用进行任何操作,因此终端设备也无需进行响应,此时GPU所占内存由于没有释放而导致内存浪费,即其他部件无法使用GPU所占内存,从而导致内存的利用率比较低。
基于这种情况,本申请实施例提供一种压缩GPU所占内存的方法,在终端设备执行该方法时,终端设备首先确定终端设备在前台运行第一应用时GPU所占的全部虚拟内存地址的范围,然后终端设备再根据压缩需求,确定最终的虚拟内存地址范围。具体的,如果要对GPU所占内存全部压缩,那么该最终的虚拟内存地址范围为该全部虚拟内存地址的范围;如果要对GPU所占内存部分压缩,那么该最终的虚拟内存地址范围为该全部虚拟内存地址的范围中的一部分,即上述部分虚拟内存地址的范围。
示例性的,假设终端设备在前台运行第一应用时GPU所占的全部虚拟内存地址的范围为000000~2FFFFF,那么上述虚拟内存地址范围可以为000000~2FFFFF或者000000~2FFFFF中的一部分(例如可以为010000~03FFFF)。
S102、终端设备根据虚拟内存地址范围中的至少一个虚拟内存页,在第一页表中确定与每个虚拟内存页对应的每个物理内存页。
其中,该第一页表为第一应用的进程的页表。第一页表中包括多个虚拟内存页和多个物理内存页的对应关系。上述S102中的至少一个虚拟内存页为该多个虚拟内存页中的虚拟内存页;上述S102中与至少一个虚拟内存页中的每个虚拟内存页对应的每个 物理内存页为该多个物理内存页中的物理内存页。即第一页表中包括至少一个虚拟内存页和至少一个物理内存页的对应关系(该至少一个物理内存页为与至少一个虚拟内存页一一对应的物理内存页)。
下面以表1为例,对第一页表进行示例性的说明。
表1
虚拟内存页 物理内存页 标志位
0(000000~00FFFF) 2(020000~02FFFF)  
1(010000~01FFFF) 3(030000~03FFFF) 1
2(020000~02FFFF) 6(060000~06FFFF) 1
3(030000~03FFFF) 8(080000~08FFFF) 1
4(040000~04FFFF) 9(090000~09FFFF)  
5(050000~05FFFF) 1(010000~01FFFF)  
6(060000~06FFFF) 4(040000~04FFFF)  
7(070000~07FFFF) 5(050000~05FFFF)  
…… …… ……
在表1中,虚拟内存页一栏的数字0、1、2、3、4、5、6和7表示虚拟内存页的页号,该页号用于指示虚拟内存页,例如0用于指示虚拟内存页0,1用于指示虚拟内存页1等;虚拟内存页一栏中每个数字后面的地址范围表示一个虚拟内存页的地址范围,例如(000000~00FFFF)表示虚拟内存页0的地址范围,(010000~01FFFF)表示虚拟内存页1的地址范围等。物理内存页一栏的数字2、3、6、8、9、1、4和5表示物理内存页的页号,该页号用于指示物理内存页,例如2用于指示物理内存页2,3用于指示物理内存页3等;物理内存页一栏中每个数字后面的地址范围表示一个物理内存页的地址范围,例如(020000~02FFFF)表示物理内存页2的地址范围,(030000~03FFFF)表示物理内存页3的地址范围等。可以看出,虚拟内存页的页号和虚拟内存页的地址范围是连续的,物理内存页的页号和物理内存页的地址范围是不连续的,这是因为在实际应用中,虚拟内存是为GPU访问内存专门分配的,而物理内存则是实际的内存,且物理内存可能被终端设备中的多个部件共同占用,因此GPU所占物理内存页可能会出现不连续的情况。本申请实施例中,当GPU访问内存时,可以根据表1所示的第一页表,找到与某个虚拟内存页对应的物理内存页,然后再访问该物理内存页。
本申请实施例提供的压缩GPU所占内存的方法中,终端设备确定上述虚拟内存地址范围之后,终端设备可以根据该虚拟内存地址范围中的至少一个虚拟内存页,在第一页表中确定与每个虚拟内存页对应的每个物理内存页。
示例性的,假设上述虚拟内存地址范围为010000~03FFFF,该虚拟内存地址范围包括3个虚拟内存页,该3个虚拟内存页分别为虚拟内存页1(虚拟内存页1的地址范围为010000~01FFFF)、虚拟内存页2(虚拟内存页2的地址范围为020000~02FFFF)和虚拟内存页3(虚拟内存页3的地址范围为030000~03FFFF)。终端设备根据虚拟内存页1、虚拟内存页2和虚拟内存页3,在第一页表中确定出与这3个虚拟内存页中的每个虚拟内存页对应的每个物理内存页,例如,终端设备在如表1所示的第一页表 中确定出与虚拟内存页1对应的物理内存页3、与虚拟内存页2对应的物理内存页6,以及与虚拟内存页3对应的物理内存页8。
S103、终端设备采用预定义的压缩算法,对每个物理内存页进行压缩。
本申请实施例中,可以预先定义一种或者多种压缩算法,在终端设备对待压缩的物理内存页进行压缩时,终端设备可以采用该一种或者多种压缩算法对待压缩的物理内存页进行压缩。
可选的,本申请实施例中,终端设备可以采用同一种压缩算法对待压缩的不同物理内存页进行压缩;也可以采用不同种压缩算法对待压缩的不同物理内存页进行压缩,具体的,可以根据实际压缩需求确定,本申请实施例不作具体限定。
当终端设备采用不同种压缩算法对待压缩的不同物理内存页进行压缩时,可以采用与每个物理内存页对应的压缩算法对该物理内存页进行压缩,如此,可以在一定程度上提高压缩比和压缩速度。具体的,对于上述S102中确定的每个物理内存页,终端设备均可以执行下述的S103a,以对该每个物理内存页进行压缩,即上述S103具体可以通过重复执行下述的S103a实现。
S103a、终端设备采用预定义的与一个物理内存页对应的压缩算法,对该一个物理内存页进行压缩。
示例性的,仍以上述S102中确定的物理内存页3、物理内存页6和物理内存页8这3个物理内存页为例,假设与物理内存页3对应的压缩算法为压缩算法1,与物理内存页6对应的压缩算法为压缩算法2,与物理内存页8对应的压缩算法为压缩算法3,那么终端设备可以采用压缩算法1对物理内存页3进行压缩,采用压缩算法2对物理内存页6进行压缩,采用压缩算法3对物理内存页8进行压缩。
可以理解,上述是以物理内存页3、物理内存页6和物理内存页8分别对应不同的压缩算法为例进行说明的,实际实现中,物理内存页3、物理内存页6和物理内存页8也可以全部或者部分对应相同的压缩算法,例如物理内存页3和物理内存页6对应同一种压缩算法,物理内存页8对应另一种不同的压缩算法。
可选的,本申请实施例中,由于多个物理内存页的内容类型可能全部相同,也可能部分相同或者全部不同,因此本申请实施例中,为了更进一步地提高压缩比和压缩速度,终端设备可以根据每个物理内存页的内容类型,确定与该内容类型对应的压缩算法,然后再采用该压缩算法对对应的物理内存页进行压缩。
示例性的,对于每个物理内存页,上述S103a具体可以通过下述的S103a1-S103a3实现。
S103a1、终端设备获取一个物理内存页的内容类型。
S103a2、终端设备根据该一个物理内存页的内容类型,确定预定义的与该一个物理内存页的内容类型对应的压缩算法。
S103a3、终端设备采用预定义的与该一个物理内存页的内容类型对应的压缩算法,对该一个物理内存页进行压缩。
可选的,本申请实施例中,上述物理内存页的内容类型可以为零页、纹理页、顶点队列(buffer)或者绘图命令队列等。
可选的,本申请实施例中,上述压缩算法可以为AFBC算法或者LZ4压缩算法等。
本申请实施例中,由于零页是不包含内容的物理内存页,因此零页可以不压缩,在压缩过程中,只需在压缩索引表中记录零页的页号即可。由于纹理页通常是图片格式的内容,而AFBC算法主要适用于对图片格式的内容进行压缩,因此对纹理页采用AFBC算法压缩能够达到最佳的压缩比和压缩速度。由于顶点队列和绘图命令队列通常是数据或者文本等格式的内容,而LZ4压缩算法主要适用于对数据或者文本等格式的内容进行压缩,因此对顶点队列和绘图命令队列采用LZ4压缩算法压缩能够达到最佳的压缩比和压缩速度。
可选的,本申请实施例中,终端设备获取物理内存页的内容类型的方法可以为:终端设备通过读取物理内存页的比特位模式,获取物理内存页的内容类型。
每个物理内存页均有一个比特位模式,该比特位模式可以用于指示该物理内存页的内容为哪种类型。
示例性的,假设该比特位模式可以用2个比特位表示,那么可以用不同数值的2个比特位表示每个物理内存页的内容类型。下面以表2为例进行示例性的说明。
表2
比特位模式 物理内存页的内容类型
00 零页
01 纹理页
10 顶点队列
11 绘图命令队列
可以理解,上述表2是示例性的列举,在实际实现中,每个比特位模式的数值还可以指示与表2不同的物理内存页的内容类型。例如可以用00表示纹理页,用01表示零页,用10表示顶点队列,用11表示绘图命令队列等。具体的,可以根据实际情况确定,此处不再一一列举。
S104、终端设备创建压缩索引表。
其中,该压缩索引表可以包括多个第一表项。每个第一表项包括索引号、虚拟内存页的地址范围、压缩后的物理内存页的地址范围以及压缩后的物理内存页的内容类型。该索引号可以用于指示与该虚拟内存页对应的物理内存页的页号,压缩后的物理内存页为对与该虚拟内存页对应的物理内存页压缩后的物理内存页。
本申请实施例中,由于终端设备压缩的是终端设备在前台运行第一应用时GPU所占内存,因此终端设备创建的压缩索引表是与应用对应的,例如与第一应用对应,而在终端设备的操作系统中,终端设备运行第一应用可以为终端设备运行第一应用的进程,因此本申请实施例中,终端设备创建压缩索引表之后,还可以保存该压缩索引表与第一应用的进程的进程号之间的对应关系。例如,压缩索引表1对应第一应用的进程的进程号,压缩索引表2对应第二应用的进程的进程号等。
下面以表3为例,对本申请实施例提供的压缩索引表进行示例性的说明。
表3
（表3原文为图片，包含索引号、虚拟内存页的地址范围、压缩后的物理内存页的地址范围以及压缩后的物理内存页的内容类型等列，其中索引号包括3、6、8）
可以理解,上述表3中的索引号3用于指示终端设备执行压缩流程后,与地址范围是010000~01FFFF的虚拟内存页对应的物理内存页的页号,即索引号3用于指示物理内存页3,表示物理内存页3是被压缩过的物理内存页。上述表3中的索引号6和8的含义与索引号3的含义类似,具体可以参见对索引号3的相关描述,此处不再赘述。
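压缩索引表的结构及其与进程号的对应关系可以用如下示意代码说明（索引号3、6、8取自表3的描述，压缩后的地址范围与内容类型均为假设数据，仅为草图）：

```python
# 压缩索引表：每个第一表项包含索引号、虚拟内存页的地址范围、
# 压缩后的物理内存页的地址范围以及内容类型（后两项为假设数据）
compress_index_table = [
    {"索引号": 3, "虚拟页范围": (0x010000, 0x01FFFF),
     "压缩后范围": (0x0A0000, 0x0A3FFF), "内容类型": "纹理页"},
    {"索引号": 6, "虚拟页范围": (0x020000, 0x02FFFF),
     "压缩后范围": (0x0A4000, 0x0A5FFF), "内容类型": "顶点队列"},
    {"索引号": 8, "虚拟页范围": (0x030000, 0x03FFFF),
     "压缩后范围": (0x0A6000, 0x0A6FFF), "内容类型": "绘图命令队列"},
]

# S104之后：按第一应用进程的进程号保存压缩索引表，供解压时按进程号查找
tables_by_pid = {1234: compress_index_table}  # 1234为假设的进程号

assert [e["索引号"] for e in tables_by_pid[1234]] == [3, 6, 8]
```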
S105、终端设备在第一页表中为每个物理内存页分别设置标志位。
其中,该标志位可以用于指示设置有该标志位的物理内存页已被压缩。这样,当终端设备读取到该标志位时,就可以获知设置有该标志位的物理内存页已被压缩,从而能够提高识别已被压缩的物理内存页的效率和准确性。
可选的,本申请实施例中,该标志位可以为地址、数字、字母、序列以及字符中的一个或多个的组合,具体的可以根据实际使用需求设置,本申请实施例不作具体限定。
可选的,本申请实施例中,可以设置该标志位为无效的物理地址(即该物理地址不存在)。从而,当终端设备读取到该标志位时,由于该物理地址不存在,因此可以产生缺页中断,进而自动触发对设置有该标志位的物理内存页的解压流程。
示例性的,结合表3,如表1所示,终端设备对物理内存页3、物理内存页6和物理内存页8压缩之后,终端设备可以为这3个物理内存页分别设置标志位1,用于表示对应物理内存页已被压缩。另外,对于其余未被压缩的物理内存页,可以不设置标志位,或者设置标志位为0,用于表示对应物理内存页没有被压缩。
需要说明的是,本申请实施例可以不限定S104和S105的执行顺序。即本申请实施例可以先执行S104,后执行S105;也可以先执行S105,后执行S104;还可以同时执行S104和S105。其中,上述图3是以S105在S104之后执行为例进行示意的。
本申请实施例提供的压缩GPU所占内存的方法,由于在终端设备运行的第一应用从前台切换到后台时,终端设备可以采用预定义的算法,对终端设备在前台运行第一应用时终端设备的GPU所占物理内存页进行压缩,因此能够使得终端设备在后台运行应用的情况下,节省终端设备的GPU所占内存。
可选的,本申请实施例提供的压缩GPU所占内存的方法中,在终端设备对GPU所占物理内存页进行压缩之后,如果终端设备运行的第一应用从后台切回到前台(即终端设备在后台运行第一应用切换到终端设备在前台运行第一应用),那么终端设备还可以对压缩后的物理内存页进行解压,以恢复GPU所占内存。
当终端设备解压压缩后的物理内存页时,终端设备可以在第一应用从后台切换到前台之前解压(对应下述的解压流程一);也可以在第一应用从后台切换到前台之后,终端设备显示第一应用的界面之前解压(对应下述的解压流程二)。下面分别对两种解压流程进行示例性的说明。
解压流程一
结合图3,如图4所示,在上述S104之后,本申请实施例提供的压缩GPU所占内存的方法还可以包括下述的S106-S110。
S106、终端设备根据压缩索引表,重新分配至少一个物理内存页。
本申请实施例中,压缩索引表是终端设备在压缩物理内存页时创建的。终端设备在解压压缩后的物理内存页时,终端设备可以根据压缩过程中创建的压缩索引表,重新分配至少一个物理内存页,该至少一个物理内存页用于存放对压缩后的物理内存页解压后的内容。
由于终端设备对物理内存页压缩后,该物理内存页可能会被别的部件使用,所以在解压过程中,终端设备可以重新分配物理内存页,用于存放对压缩后的物理内存页解压后的内容。
示例性的,仍以上述表3为例,终端设备根据如表3所示的压缩索引表可以确定压缩后的物理内存页有3个,分别为物理内存页3、物理内存页6和物理内存页8,那么终端设备可以重新分配3个物理内存页,用于存放对压缩后的这3个物理内存页解压后的内容。例如,终端设备可以重新分配物理内存页4、物理内存页5和物理内存页9这3个物理内存页,用于存放对压缩后的物理内存页3、物理内存页6和物理内存页8解压后的内容。
S107、终端设备采用预定义的解压算法,对压缩后的每个物理内存页进行解压。
本申请实施例中,可以预先定义一种或者多种解压算法,在终端设备对压缩后的物理内存页进行解压时,终端设备可以采用该一种或者多种解压算法对压缩后的物理内存页进行解压。
可选的,本申请实施例中,终端设备可以采用同一种解压算法对压缩后的不同物理内存页进行解压;也可以采用不同种解压算法对压缩后的不同物理内存页进行解压,具体的,可以根据实际压缩需求确定,本申请实施例不作具体限定。
当终端设备采用不同种解压算法对压缩后的不同物理内存页进行解压时,可以采用与压缩后的每个物理内存页对应的解压算法解压该物理内存页,如此,可以提高解压比和解压速度。具体的,对于压缩后的每个物理内存页,终端设备均可以执行下述的S107a,以对该压缩后的每个物理内存页进行解压,即上述S107具体可以通过重复执行下述的S107a实现。
S107a、终端设备采用预定义的与压缩后的一个物理内存页对应的解压算法,对一个物理内存页进行解压。
示例性的,仍以上述S106中确定的压缩后的物理内存页3、物理内存页6和物理内存页8这3个物理内存页为例,假设与压缩后的物理内存页3对应的解压算法为解压算法1,与压缩后的物理内存页6对应的解压算法为解压算法2,与压缩后的物理内存页8对应的解压算法为解压算法3,那么终端设备可以采用解压算法1对压缩后的物理内存页3进行解压,采用解压算法2对压缩后的物理内存页6进行解压,采用解压算法3对压缩后的物理内存页8进行解压。
可以理解,上述是以压缩后的物理内存页3、物理内存页6和物理内存页8分别对应不同的解压算法为例进行说明的,实际实现中,压缩后的物理内存页3、物理内 存页6和物理内存页8也可以全部或者部分对应相同的解压算法,例如压缩后的物理内存页3和物理内存页6对应同一种解压算法,压缩后的物理内存页8对应另一种不同的解压算法。
需要说明的是,在本申请实施例的压缩和解压流程中,对于同一个物理内存页而言,如果采用某个压缩算法对该物理内存页进行压缩,那么就要采用与该压缩算法对应的解压算法对压缩后的该物理内存页进行解压。
可选的,本申请实施例中,由于压缩后的多个物理内存页的内容类型可能全部相同,也可能部分相同或者全部不同,因此本申请实施例中,为了更进一步地提高解压比和解压速度,终端设备可以根据压缩后的每个物理内存页的内容类型,确定与该内容类型对应的解压算法,然后再采用该解压算法对对应的压缩后的物理内存页进行解压。如此,可以保证采用同一类型的压缩算法和解压算法对同一个物理内存页进行压缩和解压。
示例性的,对于压缩后的每个物理内存页,上述S107a具体可以通过下述的S107a1-S107a3实现。
S107a1、终端设备获取一个物理内存页的内容类型。
S107a2、终端设备根据该一个物理内存页的内容类型,确定预定义的与该一个物理内存页的内容类型对应的解压算法。
S107a3、终端设备采用预定义的与该一个物理内存页的内容类型对应的解压算法,对该一个物理内存页进行解压。
可选的,本申请实施例中,上述解压算法可以为ARM帧缓冲压缩(ARM frame buffer compress,AFBC)算法或者LZ4解压算法等。
对于物理内存页的内容类型的描述、获取物理内存页的内容类型的方法的描述、解压算法的其他描述以及解压算法和物理内存页的内容类型的对应关系的描述具体可以参见上述S103a1-S103a3中的相关描述,此处不再赘述。
需要说明的是,本申请实施例中,在解压零页时,可以直接根据压缩索引表中记录的零页的页号,确定零页的数量,然后再将重新分配的相同数量的物理内存页清零即可。由于纹理页通常是图片格式的内容,而AFBC算法主要适用于对图片格式的内容进行解压,因此对纹理页采用AFBC算法解压能够达到最佳的解压比和解压速度。由于顶点队列和绘图命令队列通常是数据或者文本等格式的内容,而LZ4解压算法主要适用于对数据或者文本等格式的内容进行解压,因此对顶点队列和绘图命令队列采用LZ4解压算法解压能够达到最佳的解压比和解压速度。
S108、终端设备将解压后的每个物理内存页的内容填充到至少一个物理内存页中。
终端设备对压缩后的物理内存页解压之后,再将解压后的每个物理内存页的内容分别对应填充到重新分配的至少一个物理内存页中,以完成对压缩后的物理内存页的恢复。例如,终端设备可以将对压缩后的物理内存页3、物理内存页6和物理内存页8进行解压后的内容分别填充到重新分配的物理内存页4、物理内存页5和物理内存页9中。
S109、终端设备修改压缩索引表。
其中,修改后的压缩索引表包括多个第二表项。每个第二表项包括索引号、虚拟 内存页的地址范围、恢复后的物理内存页的地址范围以及恢复后的物理内存页的内容类型。恢复后的物理内存页为至少一个物理内存页中填充有解压内容的物理内存页,解压内容为对压缩后的物理内存页解压后的内容。
下面以表4为例,对本申请实施例提供的修改后的压缩索引表进行示例性的说明。
表4
（表4原文为图片，包含索引号、虚拟内存页的地址范围、恢复后的物理内存页的地址范围以及恢复后的物理内存页的内容类型等列，其中索引号包括4、5、9）
可以理解,上述表4中的索引号4用于指示执行解压流程后,与地址范围是010000~01FFFF的虚拟内存页对应的物理内存页的页号,即索引号4用于指示物理内存页4,表示物理内存页4是被恢复后的物理内存页。上述表4中的索引号5和9的含义与索引号4的含义类似,具体可以参见对索引号4的相关描述,此处不再赘述。
可选的,本申请实施例中,由于执行解压流程时,为对压缩后的物理内存页进行解压后的内容重新分配了物理内存页,因此可能会导致虚拟内存页和物理内存页的对应关系发生变化,从而,在执行解压流程后,终端设备还可以对第一页表进行修改,以恢复第一页表,进而保证第一页表的准确性。
S110、对于第一页表中设置有标志位的每个物理内存页,终端设备均执行下述S110a所示的方法,以恢复第一页表。
S110a、终端设备采用重新分配的一个物理内存页的地址范围,覆盖第一页表中设置有标志位的一个物理内存页的地址范围。
下面以表5为例,对恢复后的第一页表进行示例性的说明。
表5
虚拟内存页 物理内存页 标志位
0(000000~00FFFF) 2(020000~02FFFF)  
1(010000~01FFFF) 4(040000~04FFFF) 1
2(020000~02FFFF) 5(050000~05FFFF) 1
3(030000~03FFFF) 9(090000~09FFFF) 1
4(040000~04FFFF) 3(030000~03FFFF)  
5(050000~05FFFF) 1(010000~01FFFF)  
6(060000~06FFFF) 6(060000~06FFFF)  
7(070000~07FFFF) 8(080000~08FFFF)  
…… …… ……
如表5所示,终端设备采用重新分配的物理内存页4的地址范围,覆盖如表1所示的第一页表中设置有标志位1的物理内存页3的地址范围,终端设备采用重新分配的物理内存页5的地址范围,覆盖如表1所示的第一页表中设置有标志位1的物理内存页6的地址范围,终端设备采用重新分配的物理内存页9的地址范围,覆盖如表1所示的第一页表中设置有标志位1的物理内存页8的地址范围,而物理内存页3、物 理内存页6和物理内存页8可能空闲或者已被其他部件使用了。
解压流程二
结合图3,如图5所示,在上述S104之后,本申请实施例提供的压缩GPU所占内存的方法还可以包括下述的S111-S117。
S111、终端设备检测到缺页中断。
其中,该缺页中断为当终端设备在第一页表中检测到为压缩后的物理内存页设置的标志位为无效的物理地址时产生的中断。
本申请实施例中,由于终端设备在第一页表中为压缩后的物理内存页设置的标志位为无效的物理地址(即该物理地址不存在)因此当终端设备在第一页表中读取到该标志位时,由于该物理地址不存在,因此可以产生缺页中断,进而自动触发对设置有该标志位的物理内存页的解压流程。即终端设备检测到缺页中断之后,终端设备可以自动执行下述的S112,以开始对压缩后的物理内存页进行解压。
S112、终端设备调用缺页中断处理函数响应于该缺页中断,根据第一应用的进程的进程号,获取与该进程号对应的压缩索引表。
本申请实施例中,由于终端设备在压缩过程中,创建压缩索引表之后保存了压缩索引表和第一应用的进程的进程号之间的对应关系,因此本申请实施例中,终端设备可以根据第一应用的进程的进程号,获取与该进程号对应的压缩索引表,即为终端设备压缩终端设备在前台运行第一应用时GPU所占内存的压缩索引表。
可选的,本申请实施例中,缺页中断处理函数可以包括缺页中断函数、缺页处理函数以及内存管理单元(memory management unit,MMU)的中断函数3个函数。具体的,这3个函数分别可以为:
缺页中断函数:kbase_gmc_handle_gpu_page_fault
缺页处理函数:page_fault_worker
MMU的中断函数:Kbase_mmu_irq_handle
S113、终端设备根据压缩索引表,重新分配至少一个物理内存页。
S114、终端设备采用预定义的解压算法,对压缩后的每个物理内存页进行解压。
S115、终端设备将解压后的每个物理内存页的内容填充到至少一个物理内存页中。
S116、终端设备修改压缩索引表。
S117、对于第一页表中设置有标志位的每个物理内存页,终端设备均执行下述S117a所示的方法,以恢复第一页表。
对于S113-S117的描述具体可以参见上述如图4所示的实施例中对S106-S110的相关描述,此处不再赘述。
S117a、终端设备采用重新分配的一个物理内存页的地址范围,覆盖第一页表中设置有标志位的一个物理内存页的地址范围。
对于S117a的描述具体可以参见上述如图4所示的实施例中对S110a的相关描述,此处不再赘述。
本申请实施例提供的解压GPU所占内存的方法，可以在终端设备运行的第一应用从后台切换到前台时，通过解压压缩后的物理内存页，恢复GPU所占内存。
本申请实施例还提供一种解压GPU所占内存的方法，该方法应用于终端设备运行
在终端设备解压GPU所占内存的方法中，当终端设备解压压缩后的物理内存页时，终端设备可以在第一应用从后台切换到前台之前解压（对应下述如图6所示的解压流程）；也可以在第一应用从后台切换到前台之后，终端设备显示第一应用的界面之前解压（对应下述如图7所示的解压流程）。下面分别对两种解压流程进行示例性的说明。
如图6所示,本申请实施例提供的解压GPU所占内存的方法可以包括下述的S201-S206。
S201、终端设备获取压缩索引表。
其中,该压缩索引表为终端设备压缩待压缩的每个物理内存页时创建的。
本申请实施例中,终端设备在压缩终端设备在前台运行第一应用时GPU所占内存(即上述实施例中的待压缩的每个物理内存页)之后,终端设备可以创建压缩索引表,以记录已被压缩的物理内存页的相关信息,如此,可以使得在需要解压压缩后的每个物理内存页时,终端设备可以根据该压缩索引表解压压缩后的每个物理内存页。
S202、终端设备根据该压缩索引表,重新分配至少一个物理内存页。
S203、终端设备采用预定义的解压算法,对压缩后的每个物理内存页进行解压。
S204、终端设备将解压后的每个物理内存页的内容填充到至少一个物理内存页中。
S205、终端设备修改该压缩索引表。
S206、对于第一页表中设置有标志位的每个物理内存页,终端设备均执行下述S206a所示的方法,以恢复第一页表。
本申请实施例中,第一页表中压缩后的每个物理内存页均设置有标志位。该标志位用于指示设置有该标志位的物理内存页已被压缩。第一页表为终端设备运行的第一应用的进程的页表。
对于S202-S206的描述具体可以参见上述如图4所示的实施例中对S106-S110的相关描述,此处不再赘述。
S206a、终端设备采用重新分配的一个物理内存页的地址范围,覆盖第一页表中设置有标志位的一个物理内存页的地址范围。
对于S206a的描述具体可以参见上述如图4所示的实施例中对S110a的相关描述,此处不再赘述。
如图7所示,本申请实施例提供的解压GPU所占内存的方法可以包括下述的S301-S307。
S301、终端设备检测到缺页中断。
其中,该缺页中断为当终端设备在第一页表中检测到为压缩后的物理内存页设置的标志位为无效的物理地址时产生的中断。
S302、终端设备调用缺页中断处理函数响应于该缺页中断,根据终端设备运行的第一应用的进程的进程号,获取压缩索引表。
S303、终端设备根据压缩索引表,重新分配至少一个物理内存页。
S304、终端设备采用预定义的解压算法,对压缩后的每个物理内存页进行解压。
S305、终端设备将解压后的每个物理内存页的内容填充到至少一个物理内存页中。
S306、终端设备修改压缩索引表。
S307、对于第一页表中设置有标志位的每个物理内存页,终端设备均执行下述S307a所示的方法,以恢复第一页表。
对于S301-S307的描述具体可以参见上述如图5所示的实施例中对S111-S117的相关描述,此处不再赘述。
S307a、终端设备采用重新分配的一个物理内存页的地址范围,覆盖第一页表中设置有标志位的一个物理内存页的地址范围。
对于S307a的描述具体可以参见上述如图5所示的实施例中对S117a的相关描述,此处不再赘述。
本申请实施例提供的解压GPU所占内存的方法，可以在终端设备运行的第一应用从后台切换到前台时，通过解压压缩后的物理内存页，恢复GPU所占内存。
下面再从安卓操作系统的角度示例性的描述一下本申请实施例提供的压缩和解压GPU所占内存的方法的具体实现流程。结合图1,如图8所示,为本申请实施例提供的压缩和解压GPU所占内存的方法基于安卓操作系统实现的架构示意图。
首先,本申请实施例中,可以通过一个功能模块实现本申请实施例提供的压缩GPU所占内存的方法,该功能模块可以记为gmc_compress模块。可以通过另一个功能模块实现本申请实施例提供的解压GPU所占内存的方法(即上述的解压流程一或者如图6所示的解压流程),该功能模块可以记为gmc_decompress_write模块。可以通过又一个功能模块实现本申请实施例提供的解压GPU所占内存的方法(即上述的解压流程二或者如图7所示的解压流程),该功能模块可以记为gmc_decompress模块。可以理解,gmc_compress模块、gmc_decompress_write模块以及gmc_decompress模块均可以由开发人员基于如图1所示的安卓操作系统编程实现。对于这3个功能模块的具体实现,本申请实施例不作限定,即所有能实现本申请实施例提供的压缩和解压GPU所占内存的方法的功能模块都在本申请的保护范围之内。
如图8所示,该架构示意图包括4个模块,分别为:应用框架层、GPU驱动层、内存和系统级芯片(system on chip,SoC)。
其中,应用框架层包括调度模块(iAware)、AMS、终端设备运行的第一应用的进程(图8中记为进程1)、快速可见属性(fast visiable)以及窗口管理系统(window management system,WMS)。GPU驱动层包括ZpoolGPU内存管理器、压缩前进程1的页表(包括压缩前GPU所占虚拟内存页和物理内存页的对应关系)、压缩前GPU所占物理内存页、压缩后GPU所占物理内存页、压缩模块(即上述的gmc_compress模块)、解压模块1(即上述的gmc_decompress_write模块)、解压模块2(即上述的gmc_decompress模块)、缺页中断处理函数以及压缩后进程1的页表(包括压缩后GPU所占虚拟内存页和物理内存页的对应关系)。内存包括CPU所占内存、GPU所占内存和GPU的物理页表。系统级芯片包括CPU和GPU。
本申请实施例中,如图8所示的应用框架层中的各个模块可以在如图1所示的安卓操作系统的应用框架层实现;GPU驱动层中的各个模块可以在如图1所示的安卓操 作系统的内核层实现。内存和SoC为如图1所示的安卓操作系统应用的硬件部件。
如图8中的压缩流程所示,当AMS通过fast visiable确定终端设备运行的第一应用从前台切换到后台时,AMS可以调用压缩模块(即上述的gmc_compress模块)执行本申请实施例提供的压缩GPU所占内存的方法。具体的,AMS可以调用压缩模块根据终端设备在前台运行第一应用时GPU所占虚拟内存页,在压缩前进程1的页表中确定出与该虚拟内存页对应的物理内存页,然后再对该物理内存页进行压缩,并且在压缩前进程1的页表中为压缩后的每个物理内存页设置标志位。
如图8中的解压流程1所示,当AMS确定终端设备运行的第一应用从后台切换到前台时,在第一应用切换之前,AMS可以调用解压模块1(即上述的gmc_decompress_write模块)执行本申请实施例提供的解压GPU所占内存的方法(即上述的解压流程一或者如图6所示的解压流程)。具体的,AMS可以调用解压模块1解压压缩后的物理内存页,并且恢复压缩后进程1的页表。
如图8中的解压流程2所示,当AMS确定终端设备运行的第一应用从后台切换到前台时,在第一应用切换之后,第一应用的界面显示之前,AMS执行本申请实施例提供的解压GPU所占内存的方法(即上述的解压流程二或者如图7所示的解压流程)。具体的,AMS检测到GPU产生的缺页中断之后,AMS可以调用缺页中断处理函数响应于该缺页中断,然后AMS再调用解压模块2(即上述的gmc_decompress模块)解压压缩后的物理内存页,并且恢复压缩后进程1的页表。
对于如图8所示的压缩流程、解压流程1和解压流程2的详细描述具体可以参见上述方法实施例中对本申请实施例提供的压缩和解压GPU所占内存的方法的相关描述,此处不再赘述。
需要说明的是,上述方法实施例主要是以压缩和解压GPU所占内存为例进行说明的,本领域技术人员可以理解,由于本申请实施例中列举的几种处理器的特性均类似,因此上述方法实施例中描述的压缩和解压GPU所占内存的方法同样适用于本申请实施例中的其他类型的处理器,具体的,可以参见上述方法实施例中的相关描述,此处不再赘述。
上述实施例主要从终端设备的角度对本申请实施例提供的方案进行了介绍。可以理解的是,本申请实施例提供的终端设备等为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员可以很容易意识到,结合本文中所公开的实施例描述的各示例的模块及算法步骤,本申请实施例能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请实施例可以根据上述方法示例性的对终端设备等进行功能模块的划分。例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在采用对应各个功能划分各个功能模块的情况下,图9示出了本申请实施例提供 的终端设备的一种可能的结构示意图。如图9所示,该终端设备可以包括:确定模块20和解压缩模块21。确定模块20可以用于支持该终端设备执行上述方法实施例中终端设备执行的S101和S102;解压缩模块21可以用于支持该终端设备执行上述方法实施例中终端设备执行的S103(包括S103a或者S103a1-S103a3)、S107(包括S107a或者S107a1-S107a3)和S114。可选的,结合图9,如图10所示,本申请实施例提供的终端设备还可以包括创建模块22和设置模块23。创建模块22可以用于支持该终端设备执行上述方法实施例中终端设备执行的S104、S109和S116;设置模块23可以用于支持该终端设备执行上述方法实施例中终端设备执行的S105。可选的,结合图10,如图11所示,本申请实施例提供的终端设备还可以包括分配模块24和填充模块25。分配模块24可以用于支持该终端设备执行上述方法实施例中终端设备执行的S106和S113;填充模块25可以用于支持该终端设备执行上述方法实施例中终端设备执行的S108、S110(包括S110a)、S115和S117(包括S117a)。可选的,结合图11,如图12所示,本申请实施例提供的终端设备还可以包括检测模块26和获取模块27。检测模块26可以用于支持该终端设备执行上述方法实施例中终端设备执行的S111;获取模块27可以用于支持该终端设备执行上述方法实施例中终端设备执行的S112。其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
When each functional module is divided for each corresponding function, FIG. 13 shows another possible schematic structural diagram of the terminal device provided in the embodiments of this application. As shown in FIG. 13, the terminal device may include an acquisition module 30, an allocation module 31, a decompression module 32, and a filling module 33. The acquisition module 30 may be configured to support the terminal device in performing S201 and S302 performed by the terminal device in the foregoing method embodiments; the allocation module 31 may be configured to support the terminal device in performing S202 and S303; the decompression module 32 may be configured to support the terminal device in performing S203 and S304; the filling module 33 may be configured to support the terminal device in performing S204, S206 (including S206a), S305, and S307 (including S307a). Optionally, with reference to FIG. 13, as shown in FIG. 14, the terminal device provided in the embodiments of this application may further include a creation module 34. The creation module 34 may be configured to support the terminal device in performing S205 and S306. Optionally, with reference to FIG. 14, as shown in FIG. 15, the terminal device may further include a detection module 35. The detection module 35 may be configured to support the terminal device in performing S301. All related content of the steps involved in the foregoing method embodiments may be cited in the functional descriptions of the corresponding functional modules, and details are not repeated here.
When integrated functional modules are used, FIG. 16 shows a possible schematic structural diagram of the terminal device provided in the embodiments of this application. As shown in FIG. 16, the terminal device may include a processing module 40, a communication module 41, and a storage module 42. The processing module 40 may be configured to control and manage the actions of the terminal device; for example, the processing module 40 may be configured to support the terminal device in performing all the steps performed by the terminal device in the foregoing method embodiments, and/or other processes of the techniques described herein. The communication module 41 may be configured to support communication between the terminal device and other devices; for example, the communication module 41 may be configured to support interaction between the terminal device and other terminal devices. The storage module 42 may be configured to store the program code and data of the terminal device.
The processing module 40 may be a processor or a controller, for example, a GPU, a CPU, a DSP, an AP, a general-purpose processor, an ASIC, an FPGA or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the disclosure of the present invention. The processor may also be a combination implementing computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 41 may be a transceiver, a transceiver circuit, a communication interface, or the like. The storage module 42 may be a memory.
For example, the processing module 40 may be the processor 10 shown in FIG. 2. The communication module 41 may be a communication interface such as the RF circuit 11 and/or the input module 14 shown in FIG. 2. The storage module 42 may be the memory 13 shown in FIG. 2.
When the processing module 40 is a processor, the communication module 41 is a transceiver, and the storage module 42 is a memory, FIG. 17 is a schematic hardware diagram of a terminal device provided in the embodiments of this application. As shown in FIG. 17, the terminal device includes a processor 50, a memory 51, and a transceiver 52. The processor 50, the memory 51, and the transceiver 52 may be interconnected by a bus 53. The bus 53 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on.
All or some of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof. When implemented by a software program, the embodiments may be implemented fully or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are fully or partially generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a magnetic disk, or a magnetic tape), an optical medium (for example, a digital video disc (DVD)), a semiconductor medium (for example, a solid state drive (SSD)), or the like.
From the description of the foregoing implementations, a person skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the foregoing functional modules is used as an example for illustration. In actual applications, the foregoing functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or some of the functions described above. For the specific working processes of the systems, apparatuses, and units described above, refer to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative. For example, the division of the modules or units is merely a division of logical functions, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected as needed to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (35)

  1. A method for compressing memory occupied by a processor, wherein the method is applied to a scenario in which a first application running on a terminal device is switched from the foreground to the background, and the method comprises:
    determining a virtual memory address range, wherein the virtual memory address range is a range of all or some of the virtual memory addresses occupied by a processor of the terminal device when the terminal device runs the first application in the foreground, and the virtual memory address range comprises at least one virtual memory page;
    determining, according to the at least one virtual memory page, each physical memory page corresponding to each virtual memory page in a first page table, wherein the first page table is a page table of a process of the first application; and
    compressing each physical memory page by using a predefined compression algorithm.
  2. The method according to claim 1, wherein after the compressing each physical memory page by using a predefined compression algorithm, the method further comprises:
    creating a compressed index table, wherein the compressed index table comprises a plurality of first entries, each first entry comprises an index number, an address range of a virtual memory page, an address range of a compressed physical memory page, and a content type of the compressed physical memory page, the index number is used to indicate a page number of the physical memory page corresponding to the virtual memory page, and the compressed physical memory page is a physical memory page obtained by compressing the physical memory page corresponding to the virtual memory page.
  3. The method according to claim 2, wherein after the compressing each physical memory page by using a predefined compression algorithm, the method further comprises:
    setting a flag bit for each physical memory page in the first page table, wherein the flag bit is used to indicate that a physical memory page for which the flag bit is set has been compressed.
  4. The method according to any one of claims 1 to 3, wherein the compressing each physical memory page by using a predefined compression algorithm comprises:
    performing the following for each physical memory page, so as to compress each physical memory page:
    compressing one physical memory page by using a predefined compression algorithm corresponding to the one physical memory page.
  5. The method according to claim 4, wherein the compressing one physical memory page by using a predefined compression algorithm corresponding to the one physical memory page comprises:
    obtaining a content type of the one physical memory page;
    determining, according to the content type of the one physical memory page, a predefined compression algorithm corresponding to the content type of the one physical memory page; and
    compressing the one physical memory page by using the predefined compression algorithm corresponding to the content type of the one physical memory page.
  6. The method according to claim 3, wherein the method is applied to a scenario in which the first application running on the terminal device is switched from the background to the foreground, and after the creating a compressed index table, the method further comprises:
    reallocating at least one physical memory page according to the compressed index table;
    decompressing each compressed physical memory page by using a predefined decompression algorithm; and
    filling content of each decompressed physical memory page into the at least one physical memory page.
  7. The method according to claim 6, wherein after the filling content of each decompressed physical memory page into the at least one physical memory page, the method further comprises:
    modifying the compressed index table, wherein the modified compressed index table comprises a plurality of second entries, each second entry comprises the index number, the address range of the virtual memory page, an address range of a restored physical memory page, and a content type of the restored physical memory page, the restored physical memory page is a physical memory page, among the at least one physical memory page, that is filled with decompressed content, and the decompressed content is content obtained by decompressing the compressed physical memory page.
  8. The method according to claim 6 or 7, wherein after the filling content of each decompressed physical memory page into the at least one physical memory page, the method further comprises:
    performing the following for each physical memory page for which the flag bit is set in the first page table, so as to restore the first page table:
    overwriting an address range of one physical memory page for which the flag bit is set in the first page table with an address range of one reallocated physical memory page.
  9. The method according to any one of claims 6 to 8, wherein before the reallocating at least one physical memory page according to the compressed index table, the method further comprises:
    detecting a page fault, wherein the page fault is an interrupt generated when the flag bit set for a compressed physical memory page in the first page table is detected as an invalid physical address; and
    invoking a page fault handler in response to the page fault, and obtaining the compressed index table according to a process number of the process of the first application.
  10. The method according to any one of claims 6 to 9, wherein the decompressing each compressed physical memory page by using a predefined decompression algorithm comprises:
    performing the following for each compressed physical memory page, so as to decompress each compressed physical memory page:
    decompressing one physical memory page by using a predefined decompression algorithm corresponding to the one compressed physical memory page.
  11. The method according to claim 10, wherein the decompressing one physical memory page by using a predefined decompression algorithm corresponding to the one compressed physical memory page comprises:
    obtaining a content type of the one physical memory page;
    determining, according to the content type of the one physical memory page, a predefined decompression algorithm corresponding to the content type of the one physical memory page; and
    decompressing the one physical memory page by using the predefined decompression algorithm corresponding to the content type of the one physical memory page.
  12. The method according to claim 5 or 11, wherein
    the content type is a zero page, a texture page, a vertex queue, or a drawing command queue.
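The content-type-specific algorithm selection recited in claims 4, 5, 11, and 12 can be sketched as a lookup table of (compress, decompress) pairs keyed by content type. The concrete mapping below (zlib at different levels, an empty blob for zero pages, a fixed 4096-byte page) is purely an assumed example: the claims only require that the algorithm be predefined per content type, not which algorithm it is.

```python
import zlib

PAGE_SIZE = 4096

# Hypothetical mapping from page content type to a (compress, decompress) pair.
# A zero page needs no payload at all; the other types are illustrated with
# zlib at different compression levels.
ALGORITHMS = {
    "zero_page":    (lambda raw: b"",                   lambda blob: b"\x00" * PAGE_SIZE),
    "texture":      (lambda raw: zlib.compress(raw, 9), zlib.decompress),
    "vertex_queue": (lambda raw: zlib.compress(raw, 1), zlib.decompress),
    "draw_cmds":    (lambda raw: zlib.compress(raw, 6), zlib.decompress),
}

def compress_page(raw, content_type):
    """Compress one page with the predefined algorithm for its content type."""
    compress, _ = ALGORITHMS[content_type]
    return compress(raw)

def decompress_page(blob, content_type):
    """Decompress one page with the matching predefined algorithm."""
    _, decompress = ALGORITHMS[content_type]
    return decompress(blob)
```

The design point the table captures is that compression and decompression must be selected by the same key, so a page compressed as a texture is always decompressed as a texture.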
  13. A method for decompressing memory occupied by a processor, wherein the method is applied to a scenario in which a first application running on a terminal device is switched from the background to the foreground, and the method comprises:
    obtaining a compressed index table, wherein the compressed index table is created when each physical memory page to be compressed is compressed;
    reallocating at least one physical memory page according to the compressed index table;
    decompressing each compressed physical memory page by using a predefined decompression algorithm; and
    filling content of each decompressed physical memory page into the at least one physical memory page.
  14. The method according to claim 13, wherein
    the compressed index table comprises a plurality of first entries, each first entry comprises an index number, an address range of a virtual memory page, an address range of a compressed physical memory page, and a content type of the compressed physical memory page, the index number is used to indicate a page number of the physical memory page corresponding to the virtual memory page, and the compressed physical memory page is a physical memory page obtained by compressing the physical memory page corresponding to the virtual memory page.
  15. The method according to claim 14, wherein after the filling content of each decompressed physical memory page into the at least one physical memory page, the method further comprises:
    modifying the compressed index table, wherein the modified compressed index table comprises a plurality of second entries, each second entry comprises the index number, the address range of the virtual memory page, an address range of a restored physical memory page, and a content type of the restored physical memory page, the restored physical memory page is a physical memory page, among the at least one physical memory page, that is filled with decompressed content, and the decompressed content is content obtained by decompressing the compressed physical memory page.
  16. The method according to any one of claims 13 to 15, wherein a flag bit is set in a first page table for each compressed physical memory page, the flag bit is used to indicate that a physical memory page for which the flag bit is set has been compressed, and the first page table is a page table of a process of the first application;
    after the filling content of each decompressed physical memory page into the at least one physical memory page, the method further comprises:
    performing the following for each physical memory page for which the flag bit is set in the first page table, so as to restore the first page table:
    overwriting an address range of one physical memory page for which the flag bit is set in the first page table with an address range of one reallocated physical memory page.
  17. The method according to claim 16, wherein before the obtaining a compressed index table, the method further comprises:
    detecting a page fault, wherein the page fault is an interrupt generated when the flag bit set for a compressed physical memory page in the first page table is detected as an invalid physical address; and
    the obtaining a compressed index table comprises:
    invoking a page fault handler in response to the page fault, and obtaining the compressed index table according to a process number of a process of a first application running on the terminal device.
  18. The method according to any one of claims 13 to 17, wherein the decompressing each compressed physical memory page by using a predefined decompression algorithm comprises:
    performing the following for each compressed physical memory page, so as to decompress each compressed physical memory page:
    decompressing one physical memory page by using a predefined decompression algorithm corresponding to the one compressed physical memory page.
  19. The method according to claim 18, wherein the decompressing one physical memory page by using a predefined decompression algorithm corresponding to the one compressed physical memory page comprises:
    obtaining a content type of the one physical memory page;
    determining, according to the content type of the one physical memory page, a predefined decompression algorithm corresponding to the content type of the one physical memory page; and
    decompressing the one physical memory page by using the predefined decompression algorithm corresponding to the content type of the one physical memory page.
  20. The method according to claim 19, wherein
    the content type is a zero page, a texture page, a vertex queue, or a drawing command queue.
  21. A terminal device, wherein a first application running on the terminal device is switched from the foreground to the background, and the terminal device comprises a determining module and a decompression module;
    the determining module is configured to: determine a virtual memory address range, wherein the virtual memory address range is a range of all or some of the virtual memory addresses occupied by a processor of the terminal device when the terminal device runs the first application in the foreground, and the virtual memory address range comprises at least one virtual memory page; and determine, according to the at least one virtual memory page, each physical memory page corresponding to each virtual memory page in a first page table, wherein the first page table is a page table of a process of the first application; and
    the decompression module is configured to compress, by using a predefined compression algorithm, each physical memory page determined by the determining module.
  22. The terminal device according to claim 21, wherein the terminal device further comprises a creation module;
    the creation module is configured to: after the decompression module compresses each physical memory page by using a predefined compression algorithm, create a compressed index table, wherein the compressed index table comprises a plurality of first entries, each first entry comprises an index number, an address range of a virtual memory page, an address range of a compressed physical memory page, and a content type of the compressed physical memory page, the index number is used to indicate a page number of the physical memory page corresponding to the virtual memory page, and the compressed physical memory page is a physical memory page obtained by compressing the physical memory page corresponding to the virtual memory page.
  23. The terminal device according to claim 22, wherein the terminal device further comprises a setting module;
    the setting module is configured to: after the decompression module compresses each physical memory page by using a predefined compression algorithm, set a flag bit for each physical memory page in the first page table, wherein the flag bit is used to indicate that a physical memory page for which the flag bit is set has been compressed.
  24. The terminal device according to claim 23, wherein the first application running on the terminal device is switched from the background to the foreground, and the terminal device further comprises an allocation module and a filling module;
    the allocation module is configured to: after the creation module creates the compressed index table, reallocate at least one physical memory page according to the compressed index table;
    the decompression module is further configured to decompress each compressed physical memory page by using a predefined decompression algorithm; and
    the filling module is configured to fill content of each physical memory page decompressed by the decompression module into the at least one physical memory page reallocated by the allocation module.
  25. The terminal device according to claim 24, wherein
    the creation module is further configured to: after the filling module fills the content of each decompressed physical memory page into the at least one physical memory page, modify the compressed index table, wherein the modified compressed index table comprises a plurality of second entries, each second entry comprises the index number, the address range of the virtual memory page, an address range of a restored physical memory page, and a content type of the restored physical memory page, the restored physical memory page is a physical memory page, among the at least one physical memory page, that is filled with decompressed content, and the decompressed content is content obtained by decompressing the compressed physical memory page.
  26. The terminal device according to claim 24 or 25, wherein
    the filling module is further configured to: after filling the content of each decompressed physical memory page into the at least one physical memory page, perform the following for each physical memory page for which the flag bit is set in the first page table, so as to restore the first page table:
    overwriting an address range of one physical memory page for which the flag bit is set in the first page table with an address range of one reallocated physical memory page.
  27. The terminal device according to any one of claims 24 to 26, wherein the terminal device further comprises a detection module and an acquisition module;
    the detection module is configured to: before the allocation module reallocates the at least one physical memory page according to the compressed index table, detect a page fault, wherein the page fault is an interrupt generated when the flag bit set for a compressed physical memory page in the first page table is detected as an invalid physical address; and
    the acquisition module is configured to invoke a page fault handler in response to the page fault detected by the detection module, and obtain the compressed index table according to a process number of the process of the first application.
  28. A terminal device, wherein a first application running on the terminal device is switched from the background to the foreground, and the terminal device comprises an acquisition module, an allocation module, a decompression module, and a filling module;
    the acquisition module is configured to obtain a compressed index table, wherein the compressed index table is created when each physical memory page to be compressed is compressed;
    the allocation module is configured to reallocate at least one physical memory page according to the compressed index table obtained by the acquisition module;
    the decompression module is configured to decompress each compressed physical memory page by using a predefined decompression algorithm; and
    the filling module is configured to fill content of each physical memory page decompressed by the decompression module into the at least one physical memory page reallocated by the allocation module.
  29. The terminal device according to claim 28, wherein
    the compressed index table comprises a plurality of first entries, each first entry comprises an index number, an address range of a virtual memory page, an address range of a compressed physical memory page, and a content type of the compressed physical memory page, the index number is used to indicate a page number of the physical memory page corresponding to the virtual memory page, and the compressed physical memory page is a physical memory page obtained by compressing the physical memory page corresponding to the virtual memory page.
  30. The terminal device according to claim 29, wherein the terminal device further comprises a creation module;
    the creation module is configured to: after the filling module fills the content of each decompressed physical memory page into the at least one physical memory page, modify the compressed index table obtained by the acquisition module, wherein the modified compressed index table comprises a plurality of second entries, each second entry comprises the index number, the address range of the virtual memory page, an address range of a restored physical memory page, and a content type of the restored physical memory page, the restored physical memory page is a physical memory page, among the at least one physical memory page, that is filled with decompressed content, and the decompressed content is content obtained by decompressing the compressed physical memory page.
  31. The terminal device according to any one of claims 28 to 30, wherein a flag bit is set in a first page table for each compressed physical memory page, the flag bit is used to indicate that a physical memory page for which the flag bit is set has been compressed, and the first page table is a page table of a process of the first application;
    the filling module is further configured to: after filling the content of each decompressed physical memory page into the at least one physical memory page, perform the following for each physical memory page for which the flag bit is set in the first page table, so as to restore the first page table:
    overwriting an address range of one physical memory page for which the flag bit is set in the first page table with an address range of one reallocated physical memory page.
  32. The terminal device according to claim 31, wherein the terminal device further comprises a detection module;
    the detection module is configured to: before the acquisition module obtains the compressed index table, detect a page fault, wherein the page fault is an interrupt generated when the flag bit set for a compressed physical memory page in the first page table is detected as an invalid physical address; and
    the acquisition module is specifically configured to invoke a page fault handler in response to the page fault detected by the detection module, and obtain the compressed index table according to a process number of a process of a first application running on the terminal device.
  33. A terminal device, comprising a memory and one or more processors coupled to the memory;
    wherein the memory is configured to store one or more programs, the one or more programs comprise computer instructions, and when the one or more processors execute the computer instructions, the terminal device is caused to perform the method according to any one of claims 1 to 20.
  34. A computer-readable storage medium, comprising computer instructions, wherein when the computer instructions run on a terminal device, the terminal device is caused to perform the method according to any one of claims 1 to 20.
  35. A computer program product comprising computer instructions, wherein when the computer program product runs on a terminal device, the terminal device is caused to perform the method according to any one of claims 1 to 20.
PCT/CN2017/106173 2017-10-13 2017-10-13 Method and apparatus for compressing and decompressing memory occupied by a processor WO2019071610A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP17928601.8A EP3674846B1 (en) 2017-10-13 2017-10-13 Method and apparatus for compressing and decompressing memory occupied by processor
CN201780074040.2A Method and apparatus for compressing and decompressing memory occupied by a processor
PCT/CN2017/106173 WO2019071610A1 (zh) Method and apparatus for compressing and decompressing memory occupied by a processor


Publications (1)

Publication Number Publication Date
WO2019071610A1 true WO2019071610A1 (zh) 2019-04-18

Family

ID=66101271


Country Status (3)

Country Link
EP (1) EP3674846B1 (zh)
CN (1) CN110023906A (zh)
WO (1) WO2019071610A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111367828A (zh) * 2020-02-27 2020-07-03 Oppo广东移动通信有限公司 Memory compression method and device, terminal, and storage medium
CN111400052A (zh) * 2020-04-22 2020-07-10 Oppo广东移动通信有限公司 Decompression method and device, electronic equipment, and storage medium
CN113296940A (zh) * 2021-03-31 2021-08-24 阿里巴巴新加坡控股有限公司 Data processing method and device
CN113722087A (zh) * 2021-06-10 2021-11-30 荣耀终端有限公司 Virtual memory management method and electronic device

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113835872A (zh) * 2020-06-24 2021-12-24 北京小米移动软件有限公司 Data processing method and device for reducing memory overhead, and storage medium
CN112114965A (zh) * 2020-09-15 2020-12-22 深圳市欢太科技有限公司 Application running method and device, terminal, and storage medium
US11861395B2 (en) * 2020-12-11 2024-01-02 Samsung Electronics Co., Ltd. Method and system for managing memory for applications in a computing system
CN115712500A (zh) * 2022-11-10 2023-02-24 阿里云计算有限公司 Memory release and memory recovery methods and devices, computer equipment, and storage medium
CN115794413B (zh) * 2023-01-09 2024-05-14 荣耀终端有限公司 Memory processing method and related device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104503740A (zh) * 2014-12-01 2015-04-08 小米科技有限责任公司 Memory management method and device
CN106803860A (zh) * 2017-01-23 2017-06-06 努比亚技术有限公司 Storage processing method and device for a terminal application
CN106844033A (zh) * 2017-01-23 2017-06-13 努比亚技术有限公司 Application quick-start method and terminal
CN106843450A (zh) * 2017-01-23 2017-06-13 努比亚技术有限公司 Storage processing method and device for a terminal application
CN106844032A (zh) * 2017-01-23 2017-06-13 努比亚技术有限公司 Storage processing method and device for a terminal application

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6879266B1 (en) * 1997-08-08 2005-04-12 Quickshift, Inc. Memory module including scalable embedded parallel data compression and decompression engines
JP3729759B2 (ja) * 2001-08-07 2005-12-21 株式会社ルネサステクノロジ Microcontroller that reads compressed instruction code, and program memory that compresses and stores instruction code
CN101315602B (zh) * 2008-05-09 2011-01-26 浙江大学 Method for a hardware-implemented process memory management core
US9007239B1 (en) * 2012-07-02 2015-04-14 Amazon Technologies, Inc. Reduction of memory consumption
CN105468542B (zh) * 2014-09-03 2019-03-26 杭州华为数字技术有限公司 Address allocation method and device
CN105740303B (zh) * 2014-12-12 2019-09-06 国际商业机器公司 Improved object storage method and device
CN105357260B (zh) * 2015-09-28 2019-03-26 深信服科技股份有限公司 System for implementing virtual desktops, VDI data caching method, and VDI caching device
CN105468426A (zh) * 2016-01-05 2016-04-06 珠海市魅族科技有限公司 Application freezing method and terminal


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3674846A4 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111367828A (zh) * 2020-02-27 2020-07-03 Oppo广东移动通信有限公司 Memory compression method and device, terminal, and storage medium
CN111367828B (zh) * 2020-02-27 2023-10-20 Oppo广东移动通信有限公司 Memory compression method and device, terminal, and storage medium
CN111400052A (zh) * 2020-04-22 2020-07-10 Oppo广东移动通信有限公司 Decompression method and device, electronic equipment, and storage medium
CN113296940A (zh) * 2021-03-31 2021-08-24 阿里巴巴新加坡控股有限公司 Data processing method and device
CN113296940B (zh) * 2021-03-31 2023-12-08 阿里巴巴新加坡控股有限公司 Data processing method and device
CN113722087A (zh) * 2021-06-10 2021-11-30 荣耀终端有限公司 Virtual memory management method and electronic device
CN113722087B (zh) * 2021-06-10 2023-01-31 荣耀终端有限公司 Virtual memory management method and electronic device

Also Published As

Publication number Publication date
EP3674846A1 (en) 2020-07-01
EP3674846B1 (en) 2023-12-06
EP3674846A4 (en) 2020-09-30
CN110023906A (zh) 2019-07-16


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17928601

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2017928601

Country of ref document: EP

Effective date: 20200323

NENP Non-entry into the national phase

Ref country code: DE