CN113722023A - Application data processing method and device - Google Patents

Application data processing method and device

Info

Publication number
CN113722023A
CN113722023A (application CN202010439953.3A)
Authority
CN
China
Prior art keywords
target interface
application data
interface
data
cache pool
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010439953.3A
Other languages
Chinese (zh)
Inventor
李刚
陈亮
陈寒冰
乔永红
彭军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority claimed from application CN202010439953.3A
Published as CN113722023A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0877 Cache access modes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/445 Program loading or initiating
    • G06F 9/44505 Configuring for program initiating, e.g. using registry, configuration files

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

An embodiment of this application discloses an application data processing method and apparatus. In the method, a target interface is determined according to the display frequency of each interface of an APP and/or the time required to load the application data of each interface, and the application data of the target interface is cached in a cache pool in memory. When the target interface needs to be displayed, the cache pool is queried first for its application data, so that data already in the cache pool does not need to be loaded again; only if the required application data is absent from the cache pool is it obtained by loading. The scheme therefore reduces the I/O resources consumed when loading application data, which in turn reduces thread congestion and processor stutter.

Description

Application data processing method and device
Technical Field
The application relates to the technical field of terminal equipment, in particular to an application data processing method and device.
Background
To meet users' diverse needs, terminal devices typically have many applications (APPs) installed. For example, a smartphone commonly carries an instant messaging APP, a shopping APP, a game APP, and so on.
While an APP runs, the terminal device often needs to display an interface according to the APP's application data. For example, when the APP is started, the terminal device displays the APP's initial interface according to its application data, and when the APP switches interfaces, the device likewise displays the new interface according to that data. Application data generally includes library files, such as those in the Application Package (APK), and resource files, such as Extensible Markup Language (XML) files and pictures.
At present, so that the terminal device can display the interface corresponding to the APP, the processor either loads the application data into memory while the APP is running, or preloads it into memory before the APP is started.
However, loading application data during APP execution requires a large number of input/output (I/O) operations, which may cause thread congestion and may even stall the processor. Preloading, in turn, is constrained by limited memory: only the application data of some APPs can be preloaded, so the remaining APPs must still load their data through many I/O operations at run time, with the same risk of thread congestion and processor stalls.
Disclosure of Invention
To address the thread congestion and processor stalls caused by the large number of I/O operations involved in processing application data in the prior art, embodiments of the present application provide an application data processing method and apparatus.
In a first aspect, an embodiment of the present application discloses an application data processing method, including:
determining a target interface according to the display frequency of each interface of an application program APP and/or the loading time of application data of each interface;
and caching the application data of the target interface into a cache pool of a memory.
In this embodiment, the application data of the target interface is cached in a cache pool in memory. When the target interface needs to be displayed, the cache pool is queried first for the application data, which then does not need to be loaded again. This reduces the I/O resources consumed when loading application data and, in turn, reduces thread congestion and processor stutter.
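The query-first behavior described above can be sketched as follows. The names (`InterfaceCache`, `load_from_storage`) are illustrative assumptions, not taken from the patent; a real implementation would live inside the device's OS or application runtime.

```python
class InterfaceCache:
    """Minimal sketch of the cache-pool lookup: query the pool first,
    fall back to loading (with its I/O cost) only on a miss."""

    def __init__(self):
        self._pool = {}  # interface id -> cached application data

    def precache(self, interface_id, app_data):
        # Cache the application data of a target interface in the pool.
        self._pool[interface_id] = app_data

    def get(self, interface_id, load_from_storage):
        if interface_id in self._pool:
            return self._pool[interface_id]     # hit: no load, no I/O
        return load_from_storage(interface_id)  # miss: load as usual
```

On a hit, no I/O is performed at all; on a miss, the loader runs exactly as it would without the cache.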
In an optional design, the determining a target interface according to a display frequency of each interface of an application program APP and/or a loading time of application data of each interface includes:
determining the target interface according to a first frequency threshold and a first time threshold respectively corresponding to the storage capacity of the memory;
the display frequency of the target interface is greater than the first frequency threshold, and/or the loading time of the application data of the target interface is greater than the first time threshold.
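As a sketch of this optional design: an interface qualifies as a target if its display frequency exceeds the first frequency threshold and/or its loading time exceeds the first time threshold. How the two thresholds are derived from the memory's storage capacity is not specified, so they are simply passed in as parameters here.

```python
def select_target_interfaces(stats, first_freq_threshold, first_time_threshold):
    """stats maps interface name -> (display_frequency, load_time_ms).
    Returns the target interfaces: those above the frequency threshold
    and/or above the loading-time threshold."""
    return [
        name
        for name, (freq, load_ms) in stats.items()
        if freq > first_freq_threshold or load_ms > first_time_threshold
    ]
```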
In an optional design, the caching the application data of the target interface into a cache pool of a memory includes:
dividing the application data of the target interface into at least one data block;
determining a target data block of the at least one data block, wherein the loading time of the target data block is greater than a second time threshold;
and caching the target data block into a cache pool of the memory.
With this scheme, only the target data blocks of the application data are cached in the cache pool, which reduces the memory space occupied.
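A minimal sketch of this block-level variant, assuming the application data can be sliced positionally and that a per-block loading-time measurement (`load_ms`, hypothetical here) is available:

```python
def cache_target_blocks(app_data, block_size, second_time_threshold, load_ms):
    """Split the application data of a target interface into blocks and keep
    only the target blocks whose measured loading time exceeds the second
    time threshold."""
    blocks = [app_data[i:i + block_size]
              for i in range(0, len(app_data), block_size)]
    return [b for b in blocks if load_ms(b) > second_time_threshold]
```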
In an alternative design, the cache pool includes a first cache pool for caching library files and/or a second cache pool for caching resource files;
the capacity of the first cache pool and/or the capacity of the second cache pool each correspond to the storage capacity of the memory.
With this scheme, the capacities of the first and/or second cache pools are matched to the storage capacity of the memory, preventing the cache pools from occupying so much memory that other data cannot be stored.
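The patent only says the pool capacities "correspond to" the memory's storage capacity; one simple reading is a fixed fraction per pool, as in this sketch (the fractions are illustrative assumptions):

```python
def cache_pool_capacities(memory_bytes, lib_fraction=0.02, res_fraction=0.03):
    """Size the first (library-file) and second (resource-file) cache pools
    in proportion to the memory's storage capacity. The default fractions
    are assumptions for illustration only."""
    return {
        "first_pool": int(memory_bytes * lib_fraction),
        "second_pool": int(memory_bytes * res_fraction),
    }
```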
In an optional design, the caching the application data of the target interface into a cache pool of a memory includes:
determining the priority of the target interface according to the display frequency of the target interface and/or the loading time of the application data of the target interface;
caching the application data of the target interfaces in the first cache pool and/or the second cache pool in descending order of priority.
With this scheme, the application data of high-priority target interfaces is cached in the cache pool first.
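A sketch of priority-ordered caching; the weighted-sum priority is an assumption, since the patent only says priority is determined from display frequency and/or loading time:

```python
def priority(display_freq, load_ms, w_freq=1.0, w_time=0.01):
    # Assumed weighting of display frequency against loading time.
    return w_freq * display_freq + w_time * load_ms

def cache_in_priority_order(interfaces, pool, capacity):
    """interfaces maps name -> (display_freq, load_ms, size_bytes).
    Caches high-priority interfaces first until the pool capacity is used."""
    used = 0
    order = sorted(interfaces,
                   key=lambda n: priority(*interfaces[n][:2]),
                   reverse=True)
    for name in order:
        size = interfaces[name][2]
        if used + size <= capacity:
            pool[name] = interfaces[name]
            used += size
    return pool
```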
In an optional design, the caching the application data of the target interface into a cache pool of a memory includes:
decoding the picture data;
and caching the decoded picture data into a shared cache space in the second cache pool.
With this scheme, decoded picture data is cached in the cache pool in memory. When a corresponding interface needs to be displayed, the decoded picture data can be fetched from the cache pool directly, without decoding the picture data again. This reduces processor resource consumption, correspondingly reduces thread congestion and processor stutter, and improves interface display efficiency.
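The decode-once idea can be sketched as follows; `decode` stands in for a real image decoder, and the shared cache space of the second cache pool is modeled as a plain dict:

```python
def get_decoded_picture(name, shared_cache, decode):
    """Return decoded picture data, decoding at most once: the first
    request decodes and caches, later requests reuse the cached pixels."""
    if name not in shared_cache:
        shared_cache[name] = decode(name)  # decode only on first use
    return shared_cache[name]
```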
In an alternative design, the method further comprises:
determining whether the decoded picture data is preset picture data, or whether it is picture data used by at least two interfaces;
when the decoded picture data is determined to be preset picture data, or to be picture data used by the at least two interfaces, storing the decoded picture data in a shared cache space in the second cache pool.
In an alternative design, the method further comprises:
when application data of a new target interface needs to be stored in the cache pool of the memory, and the free capacity of the cache pool is insufficient to cache the application data of the new target interface, determining a target interface to be processed, wherein the application data of the target interface to be processed is cached in the cache pool and its priority is lower than that of the new target interface;
removing the application data of the target interface to be processed from the cache pool;
and caching the application data of the new target interface in the cache pool of the memory.
With this scheme, the application data of a low-priority target interface is removed from the cache pool to make room for that of a higher-priority target interface, so that high-priority application data is cached preferentially.
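A sketch of this eviction rule: when the pool lacks free space for a new target interface, cached entries with lower priority than the newcomer are removed first, and the newcomer is cached only if enough room was freed. Sizes are modeled as `len(data)` for simplicity; the function and parameter names are illustrative.

```python
def cache_with_eviction(pool, priorities, capacity, new_name, new_data, new_priority):
    """Evict lower-priority cached interfaces to make room for a new,
    higher-priority target interface; never evict entries whose priority
    is at least that of the newcomer."""
    def used():
        return sum(len(d) for d in pool.values())

    # Evict lowest-priority entries first, but only those below the newcomer.
    for name in sorted(pool, key=lambda n: priorities[n]):
        if used() + len(new_data) <= capacity:
            break
        if priorities[name] < new_priority:
            del pool[name]
    if used() + len(new_data) <= capacity:
        pool[new_name] = new_data
        priorities[new_name] = new_priority
    return pool
```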
In a second aspect, an embodiment of the present application provides an application data processing apparatus, including:
the determining unit is used for determining a target interface according to the display frequency of each interface of the application program APP and/or the loading time of application data of each interface;
and the processing unit is used for caching the application data of the target interface into a cache pool of a memory.
In an optional design, the determining unit is specifically configured to determine the target interface according to a first frequency threshold and a first time threshold, each corresponding to the storage capacity of the memory;
the display frequency of the target interface is greater than the first frequency threshold, and/or the loading time of the application data of the target interface is greater than the first time threshold.
In an alternative design, the processing unit is specifically configured to:
dividing the application data of the target interface into at least one data block;
determining a target data block of the at least one data block, wherein the loading time of the target data block is greater than a second time threshold;
and caching the target data block into a cache pool of the memory.
In an alternative design, the cache pool includes a first cache pool for caching library files and/or a second cache pool for caching resource files;
the capacity of the first cache pool and/or the capacity of the second cache pool each correspond to the storage capacity of the memory.
In an alternative design, the processing unit is specifically configured to:
determining the priority of the target interface according to the display frequency of the target interface and/or the loading time of the application data of the target interface;
caching the application data of the target interfaces in the first cache pool and/or the second cache pool in descending order of priority.
In an alternative design, the processing unit is specifically configured to:
decoding the picture data;
and caching the decoded picture data into a shared cache space in the second cache pool.
In an alternative design, the processing unit is further configured to:
determining whether the decoded picture data is preset picture data, or whether it is picture data used by at least two interfaces;
when the decoded picture data is determined to be preset picture data, or to be picture data used by the at least two interfaces, storing the decoded picture data in a shared cache space in the second cache pool.
In an alternative design, the processing unit is further configured to:
when application data of a new target interface needs to be stored in the cache pool of the memory, and the free capacity of the cache pool is insufficient to cache the application data of the new target interface, determining a target interface to be processed, wherein the application data of the target interface to be processed is cached in the cache pool and its priority is lower than that of the new target interface;
removing the application data of the target interface to be processed from the cache pool;
and caching the application data of the new target interface into a cache pool of the memory.
In a third aspect, an embodiment of the present application provides an application data processing apparatus, including:
the system comprises a resource loading module, a resource cache identification module and a resource cache module;
the resource loading module is used for loading application data of an application program APP;
the resource cache identification module is used for determining a target interface according to the display frequency of each interface of the APP and/or the loading time of application data of each interface;
the resource caching module is used for caching the application data of the target interface into a caching pool of a memory.
In a fourth aspect, an embodiment of the present application provides a terminal apparatus, where the apparatus includes a processor and a memory, where the memory is used to store program instructions, and the processor is configured to call the program instructions stored in the memory to perform all or part of the steps in the embodiment corresponding to the first aspect.
In a fifth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium is configured to store instructions, and when the instructions are executed on a computer or a processor, the computer or the processor may implement all or part of the steps in the corresponding embodiment of the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product including instructions, which, when run on an electronic device, enable the electronic device to implement all or part of the steps in the corresponding embodiments of the first aspect.
With the scheme of the embodiments of this application, a target interface is determined according to the display frequency of each interface of the APP and/or the time required to load the application data of each interface, and the application data of the target interface is cached in a cache pool in memory. A target interface is thus an interface that is displayed frequently and/or whose application data takes long to load, and the scheme caches exactly the application data of such interfaces. When a target interface needs to be displayed, the cache pool is queried first for its application data, which then does not need to be loaded again; only if the required application data is absent from the cache pool is it obtained by loading.
Caching the application data of frequently displayed interfaces reduces how often application data is loaded, and caching application data that takes long to load reduces how much data is loaded. The scheme of the embodiments therefore reduces the I/O resources consumed when loading application data and, in turn, reduces thread congestion and processor stutter.
Drawings
Fig. 1 is a schematic structural diagram of a terminal device;
FIG. 2 is a diagram of a network architecture of a terminal device;
fig. 3 is a schematic workflow diagram of an application data processing method disclosed in an embodiment of the present application;
fig. 4 is a schematic workflow diagram of an application data processing method disclosed in an embodiment of the present application;
fig. 5 is a schematic workflow diagram of another application data processing method disclosed in the embodiment of the present application;
fig. 6 is a schematic structural diagram of an application data processing apparatus according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of another application data processing apparatus disclosed in an embodiment of the present application;
fig. 8 is a schematic structural diagram of a terminal device disclosed in an embodiment of the present application.
Detailed Description
The terms "first", "second", "third", and so on in the description, claims, and drawings of this application are used to distinguish different objects, not to indicate a particular order.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
For clarity and conciseness of the following descriptions of the various embodiments, a brief introduction to the related art is first given:
the application data processing method provided in the embodiment of the present application may be applied to various terminal devices capable of installing an APP, such as a mobile phone, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a handheld computer, a netbook, a Personal Digital Assistant (PDA), a wearable device, and a virtual reality device, and the method is not limited in this embodiment.
Taking the mobile phone 100 as an example of the terminal device, fig. 1 shows a schematic structural diagram of the mobile phone.
The mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a radio frequency module 150, a communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a screen 301, a Subscriber Identity Module (SIM) card interface 195, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the mobile phone 100. In other embodiments of the present application, the handset 100 may include more or fewer components than shown, or some components may be combined, some components may be separated, or a different arrangement of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include a Central Processing Unit (CPU), an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be a neural center and a command center of the cell phone 100, among others. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, this memory is a cache. It may hold instructions or data that the processor 110 has just used or reuses frequently; when the processor 110 needs them again, it can fetch them directly from this memory. Avoiding repeated accesses reduces the processor's waiting time and thus improves system efficiency.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement the touch function of the mobile phone 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the communication module 160. For example: the processor 110 communicates with a bluetooth module in the communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the communication module 160 through the UART interface, so as to realize the function of playing music through the bluetooth headset.
The MIPI interface may be used to connect the processor 110 with peripheral devices such as the screen 301, the camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, the processor 110 and the camera 193 communicate through a CSI interface to implement the camera function of the handset 100. The processor 110 and the screen 301 communicate through the DSI interface to implement the display function of the mobile phone 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the screen 301, the communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the mobile phone 100, to transfer data between the mobile phone 100 and peripheral devices, or to connect earphones and play audio through them. The interface may also connect other terminal devices, such as AR devices.
It should be understood that the interface connection relationship between the modules illustrated in the embodiment of the present application is only an exemplary illustration, and does not constitute a limitation on the structure of the mobile phone 100. In other embodiments of the present application, the mobile phone 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the cell phone 100. The charging management module 140 may also supply power to the terminal device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the screen 301, the camera 193, the communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the mobile phone 100 can be implemented by the antenna 1, the antenna 2, the rf module 150, the communication module 160, the modem processor, and the baseband processor.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the handset 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The rf module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the mobile phone 100. The rf module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The rf module 150 may receive the electromagnetic wave from the antenna 1, and filter, amplify, etc. the received electromagnetic wave, and transmit the filtered electromagnetic wave to the modem processor for demodulation. The rf module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the rf module 150 may be disposed in the processor 110. In some embodiments, at least some functional modules of the rf module 150 may be disposed in the same device as at least some modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the screen 301. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be separate from the processor 110 and may be disposed in the same device as the rf module 150 or other functional modules.
The communication module 160 may provide solutions for wireless communication applied to the mobile phone 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The communication module 160 may be one or more devices integrating at least one communication processing module. The communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The communication module 160 may also receive a signal to be transmitted from the processor 110, frequency-modulate it, amplify it, and convert it into electromagnetic waves via the antenna 2 to radiate it.
In some embodiments, the antenna 1 of the handset 100 is coupled to the radio frequency module 150 and the antenna 2 is coupled to the communication module 160, so that the handset 100 can communicate with networks and other devices via wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The mobile phone 100 implements a display function through the GPU, the screen 301, and the application processor. The GPU is a microprocessor for image processing, connecting the screen 301 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information. In the embodiment of the present application, the screen 301 may include a display and a touch device therein. The display is used for outputting display contents to a user, and the touch device is used for receiving a touch event input by the user on the screen 301.
A plurality of APPs are typically installed in the terminal device. When an APP is started, and when the interfaces of an APP are switched, the terminal device needs to display the corresponding interface. For example, when the APP is started, the terminal device needs to display the initial interface of the APP; when receiving an operation of switching the interface of the APP, the terminal device needs to display the interface to which the APP has switched.
When the terminal device displays an interface of the APP, it determines the content to be displayed in the interface and the layout of the interface according to the application data of the APP, draws the corresponding view accordingly and fills the view with the corresponding content, draws the corresponding graphics according to the view, and displays the drawn graphics on the display screen of the terminal device. In this way, the terminal device displays the interface of the APP.
From the above process, it can be seen that the terminal device needs the application data of the APP when displaying an interface of the APP.
The application data of the APP generally comprises a library file and a resource file. The library file is generally the application package of the APP; when the APP runs on the Android system, the application package of the APP is generally an APK. The resource file comprises XML data, picture data, and the like.
In a commonly used scheme at present, the terminal device loads the application data of the APP while the APP is running, loading the application data from the disk into the memory of the terminal device, so that the terminal device can display the corresponding interface according to the application data in the memory. For example, when the current interface of an APP needs to be switched to another interface, the terminal device loads the application data of the APP.
In this case, a system framework diagram of the terminal device may be as shown in fig. 2. In fig. 2, a resource loading module, an application processing platform, and an application platform including each application are provided.
The application processing platform receives relevant operations aiming at each APP, such as application starting, interface switching, interface sliding, application quitting and the like, controls the corresponding APP to execute the corresponding operation according to the received relevant operations, and obtains application data of the corresponding interface from the resource loading module according to the relevant operations.
The resource loading module is configured to load application data of an APP, such as a library file and a resource file, and in addition, when the resource file includes image data, the resource loading module is further configured to decode the image data. Under the condition, the application processing platform can acquire the application data of the APP from the resource loading module and trigger the terminal device to display a corresponding interface according to the application data of the APP and the related operation.
However, in this scheme, a large number of I/O operations need to be performed while the terminal device loads the application data into the memory, which consumes I/O resources. In this case, threads may be blocked; for example, in some cases a thread may be blocked for more than 10 milliseconds. In some cases the processor may even stall: for example, while the interface is being rendered, the rendering thread and the thread loading the application data run simultaneously, which easily causes the processor to stall.
In another common scheme, preloading is adopted: before the APP is started, the processor of the terminal device loads the application data of each interface of the APP into the memory, so that while the APP is running, the terminal device can display the corresponding interface of the APP according to the application data preloaded into the memory.
However, since the storage space of the memory is limited, not all application data can be preloaded into the memory; in this scheme, only the application data of some APPs can be preloaded. In this case, when other APPs run, their application data still needs to be loaded through a large number of I/O operations, which may still cause thread blocking and may even cause processor stalls.
In addition, in this scheme, since only part of the application data of the APPs can be preloaded into the memory, a least recently used (LRU) memory management method is generally adopted for the preloaded application data. Under LRU management, if an APP has been used recently and its application data occupies a large amount of storage space, the application data of that APP is often kept preloaded. Moreover, because that application data occupies a large amount of memory, the processor no longer preloads the application data of other APPs.
In this case, while other APPs are running, the processor still needs a large number of I/O operations to load their application data; therefore, thread blocking may still occur while other APPs run, and the processor may even stall.
In particular, when the other APPs are frequently used APPs, the processor needs to perform I/O operations frequently to load their application data, which consumes a large amount of I/O resources and further increases the possibility of thread blocking and processor stalling.
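The LRU preloading behaviour described above can be illustrated with a minimal sketch; this is not the terminal device's actual implementation, and the APP names and data sizes are hypothetical:

```python
from collections import OrderedDict

class LruPreload:
    """Minimal LRU preload cache: least recently used APP data is evicted first."""

    def __init__(self, capacity_mb):
        self.capacity_mb = capacity_mb
        self.cache = OrderedDict()  # app -> preloaded size (MB), most recent last
        self.used_mb = 0

    def preload(self, app, size_mb):
        if app in self.cache:
            self.cache.move_to_end(app)  # mark as recently used
            return True
        # Evict least recently used APPs until the new data fits.
        while self.cache and self.used_mb + size_mb > self.capacity_mb:
            _, evicted_mb = self.cache.popitem(last=False)
            self.used_mb -= evicted_mb
        if self.used_mb + size_mb > self.capacity_mb:
            return False  # larger than the whole preload budget
        self.cache[app] = size_mb
        self.used_mb += size_mb
        return True

# A recently used APP with large application data crowds out the
# preloaded data of the other APPs:
pool = LruPreload(capacity_mb=500)
pool.preload("app_a", 100)
pool.preload("app_b", 100)
pool.preload("big_app", 450)
```

After the last call, only `big_app` remains preloaded; the data of `app_a` and `app_b` has been evicted, so displaying their interfaces again requires fresh I/O operations, which is the thread-blocking scenario described above.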
In order to solve the above technical problem, an embodiment of the present application discloses an application data processing method and apparatus.
Embodiments of the present application will be described below with reference to the drawings, in order to clarify the application data processing method and apparatus provided by the present application.
In an embodiment of the present application, an application data processing method is provided. Referring to the workflow diagram shown in fig. 3, the application data processing method includes the following steps:
Step S11: determine a target interface according to the display frequency of each interface of the application program (APP) and/or the loading time of the application data of each interface. The target interface is typically an interface that is displayed more frequently and/or whose application data takes longer to load.
The application data of the APP generally includes at least one of a library file and a resource file of the APP. The library file generally comprises the application package of the APP; when the APP runs on the Android system, the application package of the APP is generally an Android application package (APK). The resource file typically includes XML data, picture data of the APP, and the like.
In addition, the application data of the interface of the APP refers to application data required when the interface of the APP needs to be displayed. For example, when the APP is a mail sending and receiving APP and an interface of an inbox of a mail needs to be displayed, the application data of the interface refers to application data required when the interface of the inbox is displayed.
In the terminal equipment, at least one APP is usually installed, each APP can usually display a plurality of interfaces, and when a user needs to realize different functions of the APP, the APP can switch to display different interfaces. In this embodiment of the application, when a plurality of APPs are installed in the terminal device, a target interface of the plurality of APPs may be determined according to display frequency of each interface of the plurality of APPs and/or time required for loading application data of each interface. Correspondingly, when a plurality of target interfaces are determined according to step S11, the plurality of target interfaces may be interfaces of the same APP or interfaces of different APPs, which is not limited in this embodiment of the application.
In addition, in the embodiment of the present application, the target interface is determined according to the display frequency of each interface of the APP and/or the loading time of the application data of each interface. In this case, each time an interface of the APP is displayed, a count for that interface may be incremented once, so as to determine the display frequency of each interface; the larger the count, the more frequently the interface is displayed. When the application data of each interface is loaded, the loading process may be timed, so as to determine the loading time of the application data.
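As a minimal sketch of this bookkeeping (the class and method names are illustrative, not part of the embodiment), the per-interface counting and load timing can be written as:

```python
import time
from collections import defaultdict

class InterfaceStats:
    """Tracks per-interface display counts and application-data load times."""

    def __init__(self):
        self.display_count = defaultdict(int)  # interface id -> times displayed
        self.load_time_ms = {}                 # interface id -> last load time (ms)

    def record_display(self, interface_id):
        # Each time an interface is displayed, its count is incremented once;
        # a larger count indicates a more frequently displayed interface.
        self.display_count[interface_id] += 1

    def timed_load(self, interface_id, load_fn):
        # Time the loading process to determine the application-data load time.
        start = time.monotonic()
        data = load_fn()
        self.load_time_ms[interface_id] = (time.monotonic() - start) * 1000.0
        return data
```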
Step S12: cache the application data of the target interface into a cache pool of the memory.
In the embodiment of the application, a cache pool is set in a memory, and the application data of the target interface is cached in the cache pool.
According to the scheme of the embodiment of the application, the target interface can be determined according to the display frequency of each interface of the APP and/or the time required for loading the application data of each interface, and the application data of the target interface is cached to the cache pool of the memory. The target interface is an interface which is displayed more frequently and/or an interface which has a longer loading time of application data. Therefore, by the scheme disclosed by the embodiment of the application, the application data of the interface which is displayed more frequently and/or the interface which has longer application data loading time can be cached in the cache pool. In this case, when the target interface needs to be displayed, the cache pool can be preferentially queried to obtain the application data of the target interface, and the application data in the cache pool does not need to be loaded again. And if the required application data does not exist in the cache pool, acquiring the required application data in a loading mode.
Caching the application data of frequently displayed interfaces in the cache pool reduces the frequency with which application data has to be loaded; caching application data whose loading time is long reduces the amount of application data that has to be loaded. Therefore, with the scheme of the embodiment of the application, the I/O resources consumed in loading application data can be reduced, and accordingly thread blocking and processor stalling can be reduced.
In addition, in the first scheme of the prior art, during the running process of the APP, the processor generally loads all application data of the APP into the memory; in a second scheme in the prior art, each time before the APP starts, the processor preloads all application data of the APP to the memory in advance. The method of loading all data of the APP into the memory may cause that the application data that is not commonly used and/or the application data with a short loading time may also be loaded into the memory, thereby occupying a large space of the memory.
In the solution of the embodiment of the present application, after the target interface is determined, only the application data of the target interface may be cached in the cache pool. The APP can comprise a plurality of interfaces, wherein the target interface can be an interface which is displayed more frequently by the APP and/or an interface which is loaded with longer application data in the plurality of interfaces. That is to say, according to the scheme of the embodiment of the application, only the application data of part of the interfaces of the APP can be cached, so that the occupation of the memory space is effectively reduced.
In the embodiment of the application, a target interface is determined according to the display frequency of each interface of an APP and/or the loading time of application data of each interface, and the operation can be specifically realized through the following steps:
and determining the target interface according to a first frequency threshold and a first time threshold which respectively correspond to the storage capacity of the memory.
The display frequency of the target interface is greater than the first frequency threshold, and/or the loading time of the application data of the target interface is greater than the first time threshold.
In this embodiment of the application, a first frequency threshold and a first time threshold may be respectively set according to a storage capacity of a memory, so as to determine a target interface according to the first frequency threshold and/or the first time threshold. Different terminal devices are often configured with memories of different specifications, and correspondingly, for storage capacities of different sizes, the first frequency threshold and the first time threshold corresponding to the storage capacity may be different.
In general, the larger the storage capacity of the memory, the smaller the first frequency threshold and the first time threshold corresponding to that storage capacity, so that more application data is cached when the memory is larger.
In one example, the display frequency of different interfaces and the loading time of the application data of the interfaces are shown in table 1:
TABLE 1
                    Interface 1   Interface 2   Interface 3   Interface 4
Display frequency   30            20            15            5
Loading time        200ms         100ms         50ms          10ms
Table 1 shows that the display frequency of the interface 1 is 30 times, and the loading time of the application data of the interface 1 is 200 ms; the display frequency of the interface 2 is 20 times, and the loading time of the application data of the interface 2 is 100 ms; the display frequency of the interface 3 is 15 times, and the loading time of the application data of the interface 3 is 50 ms; the display frequency of the interface 4 is 5 times, and the loading time of the application data of the interface 4 is 10 ms. In addition, the interface 1, the interface 2, the interface 3, and the interface 4 may be interfaces of the same APP, or interfaces of different APPs.
In this example, the storage capacity of the memory of the terminal device is set to be 4G, the first frequency threshold is 20 times, and the first time threshold is 80ms, in which case, the interface 1 and the interface 2 are both target interfaces.
Or, the storage capacity of the memory of the terminal device is set to be 6G, the first frequency threshold is 10 times, and the first time threshold is 50ms, in which case, the interface 1, the interface 2, and the interface 3 are all target interfaces.
In addition, through the above steps, an interface whose display frequency is greater than the first frequency threshold is determined to be a target interface, so that the more frequently displayed interfaces are selected; and/or an interface whose application data loading time is greater than the first time threshold is determined to be a target interface, so that the interfaces whose application data takes longer to load are selected.
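Using the figures from Table 1, the threshold-based selection of step S11 can be sketched as follows (the function and interface names are illustrative):

```python
def select_target_interfaces(stats, freq_threshold, time_threshold_ms):
    """Select interfaces whose display frequency exceeds the first frequency
    threshold and/or whose load time exceeds the first time threshold."""
    return [iface for iface, (freq, load_ms) in stats.items()
            if freq > freq_threshold or load_ms > time_threshold_ms]

# Display frequencies and loading times from Table 1:
table1 = {
    "interface1": (30, 200),
    "interface2": (20, 100),
    "interface3": (15, 50),
    "interface4": (5, 10),
}

# 4G memory (thresholds: 20 times, 80 ms) -> interfaces 1 and 2
# 6G memory (thresholds: 10 times, 50 ms) -> interfaces 1, 2 and 3
```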
In this embodiment of the application, through the operation in step S12, the application data of the target interface is cached in the cache pool of the memory. The operation can be implemented in various ways, and in one feasible way, all the application data of the target interface can be cached in the cache pool of the memory.
Alternatively, in another feasible implementation manner, referring to the workflow diagram shown in fig. 4, the caching the application data of the target interface into a cache pool of a memory includes the following steps:
Step S121: divide the application data of the target interface into at least one data block.
Step S122: determine a target data block in the at least one data block, where the loading time of the target data block is greater than a second time threshold.
In an embodiment of the present application, the application data of the target interface may be divided into at least one data block, and the loading time of each data block may be compared with the second time threshold. When the loading time of a data block is greater than the second time threshold, that data block is determined to be a target data block; the target data blocks are therefore the data blocks with longer loading times.
Step S123: cache the target data block into the cache pool of the memory.
Through the operations of steps S121 to S123, another method for caching the application data of the target interface into the cache pool of the memory is disclosed. In this method, the application data is divided into at least one data block, and the target data blocks with long loading times are cached. In this case, when the application data of the target interface is needed, the target data blocks do not need to be loaded; the target data blocks in the cache pool are used directly, which reduces the I/O operations of the loading process and thereby reduces thread blocking and processor stalling.
In addition, in the scheme, only the target data block in the application data needs to be cached in the cache pool of the memory, so that the occupation of the storage space of the memory can be reduced.
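Steps S121 to S123 can be sketched as follows; the block identifiers and the cache-pool dictionary are illustrative stand-ins for the memory cache pool:

```python
def cache_target_blocks(blocks, block_load_times_ms, second_time_threshold_ms, cache_pool):
    """Given application data already divided into blocks (step S121), cache
    only the target blocks whose loading time exceeds the threshold."""
    for block_id, load_ms in block_load_times_ms.items():
        if load_ms > second_time_threshold_ms:       # step S122: target block
            cache_pool[block_id] = blocks[block_id]  # step S123: cache it
    return cache_pool
```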
The cache pool of the memory provided in the embodiment of the present application is configured to cache the application data of the target interface. The application data generally includes at least one of a library file and a resource file; correspondingly, the cache pool includes a first cache pool for caching the library file and/or a second cache pool for caching the resource file. The capacities of the first cache pool and/or the second cache pool each correspond to the storage capacity of the memory.
In order to avoid that the memory occupied by the cache pool is too large and the memory is affected to store other data, in the embodiment of the present application, the capacity of the first cache pool and/or the second cache pool is set according to the storage capacity of the memory. The capacity of the first cache pool and/or the capacity of the second cache pool correspond to the storage capacity of the memory respectively. Accordingly, generally, the larger the storage capacity of the memory is, the larger the capacity of the first cache pool and/or the second cache pool is.
In addition, the resource file usually includes picture data, and in this case, the second cache pool may be further divided into a portion for caching the picture data and a portion for caching other resource files. The sizes of the two parts also correspond to the storage capacity of the memory, and in general, the larger the storage capacity of the memory is, the larger the sizes of the two parts are, so that more picture data and other resource files can be cached.
For example, the size of the cache space of the first cache pool and the second cache pool may be set by table 2:
TABLE 2
                      4G      6G      8G      12G
Library file          300MB   500MB   800MB   1000MB
Picture data          80MB    120MB   160MB   240MB
Other resource files  50MB    80MB    120MB   180MB
Table 2 shows that, when the storage capacity of the memory is 4G, the capacity of the memory for caching the library file is 300MB, the capacity for caching the picture data is 80MB, and the capacity for caching other resource files is 50 MB; when the storage capacity of the memory is 6G, the capacity for caching various application data is increased, wherein the capacity for caching the library file in the memory is 500MB, the capacity for caching the picture data is 120MB, and the capacity for caching other resource files is 80 MB; when the storage capacity of the memory is 8G, the capacity for caching various application data is further increased, wherein the capacity for caching the library file in the memory is 800MB, the capacity for caching the picture data is 160MB, and the capacity for caching other resource files is 120 MB; further, when the storage capacity of the memory is 12G, the capacity for caching various application data is increased, wherein the capacity for caching the library file in the memory is 1000MB, the capacity for caching the picture data is 240MB, and the capacity for caching other resource files is 180 MB.
Of course, when the storage capacity of the memory is 4G, 6G, 8G and 12G, respectively, a capacity different from that of table 2 may also be allocated to the cache space for caching various application data, which is not limited in this embodiment of the present application.
According to the scheme of the embodiment of the application, the cache space with the corresponding capacity can be configured for different types of application data according to the storage capacity of the memory, so that a cache pool is prevented from occupying a large cache space, and the memory is further prevented from being influenced to store other data except the application data.
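Table 2 can be expressed as a simple lookup from memory size to per-pool capacities; the dictionary layout below is illustrative, while the figures are those of Table 2:

```python
# Cache-pool capacities in MB, keyed by memory size in GB (figures from Table 2).
CACHE_BUDGETS_MB = {
    4:  {"library": 300,  "picture": 80,  "other": 50},
    6:  {"library": 500,  "picture": 120, "other": 80},
    8:  {"library": 800,  "picture": 160, "other": 120},
    12: {"library": 1000, "picture": 240, "other": 180},
}

def cache_budget(memory_gb):
    """Return the cache-space sizes configured for the given memory capacity."""
    return CACHE_BUDGETS_MB[memory_gb]
```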
In this case, when the application data of the target interface is small and the first cache pool and the second cache pool are sufficient to store it, the application data of the target interface may be cached directly in the corresponding cache pool.
In another possible design, the caching the application data of the target interface into a cache pool of a memory includes the following steps:
firstly, determining the priority of the target interface according to the display frequency of the target interface and/or the loading time of the application data of the target interface;
and then sequentially caching the application data of the target interfaces in the first cache pool and/or the second cache pool in descending order of the priority of the target interfaces.
In this embodiment, the priority of the target interface is determined according to the display frequency of the target interface and/or the loading time of the application data of the target interface. Specifically, the higher the display frequency, the higher the priority of the target interface generally is; and the longer the loading time of the application data, the higher the priority generally is.
Through the above steps, the priority of each target interface can be determined, and the application data of higher-priority target interfaces can be stored preferentially. In this case, when the capacity of the cache pool cannot hold the application data of all the target interfaces, the application data of the lower-priority target interfaces may be left unstored.
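The priority-ordered caching can be sketched as follows, with priorities and application-data sizes given as plain numbers (all names are illustrative):

```python
def cache_by_priority(priorities, sizes, pool_capacity):
    """Cache target interfaces from highest to lowest priority; an interface
    whose application data no longer fits in the pool is simply skipped."""
    cached, used = [], 0
    for iface in sorted(priorities, key=priorities.get, reverse=True):
        if used + sizes[iface] <= pool_capacity:
            cached.append(iface)
            used += sizes[iface]
    return cached
```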
In the embodiment of the application, the application data of the APP includes at least one of a library file and a resource file, and the resource file generally includes picture data. When the application data of the target interface includes the picture data, caching the application data of the target interface into a cache pool of a memory, including the following operations:
decoding the picture data;
and caching the decoded picture data into a shared cache space in the second cache pool.
In the embodiment of the present application, after the picture data is loaded, it is further decoded, and the decoded picture data is cached in the cache pool of the memory.
In the prior art, a terminal device loads picture data before decoding, and decodes the picture data in a process of displaying a corresponding interface according to application data. Specifically, in the first scheme in the prior art, when the interface of the APP needs to be displayed, the terminal device loads the application data of the APP, and then decodes the picture data included in the application data, so as to generate the interface of the APP through the decoded picture data.
In addition, in a second scheme in the prior art, application data of an APP is preloaded into a memory, and when an interface of the APP needs to be displayed, the terminal device decodes the preloaded picture data, so that the interface of the APP is generated through the decoded picture data.
That is, in the prior art, picture data is usually decoded while the interface of the APP is being displayed. In the process of generating the interface to be displayed, the view corresponding to the interface needs to be drawn and rendered, which consumes a large amount of processor resources; decoding the picture data also consumes a large amount of processor resources. Therefore, if the picture data is decoded while the interface of the APP is being displayed, and the processing capacity of the processor is insufficient or scheduling is not timely, thread blocking and processor stalls often result, affecting the display efficiency of the interface.
In the solution provided in the embodiment of the present application, the decoded picture data is cached in the cache pool of the memory. In this case, when the corresponding interface needs to be displayed, the decoded picture data can be extracted directly from the cache pool without decoding the picture data again, which reduces the consumption of processor resources; accordingly, thread blocking and processor stalling can be reduced, and the display efficiency of the interface is improved.
In addition, in the prior art, application data of different interfaces may be loaded into different storage spaces of the memory, and correspondingly, picture data of each interface may be loaded into a storage space corresponding to each interface. In this case, even if the application data of different interfaces includes the same picture data, the loading operation is performed a plurality of times, and the same picture data is loaded into the storage spaces corresponding to the different interfaces, respectively.
In the solution disclosed in the embodiment of the present application, the decoded picture data is cached in the shared cache space in the second cache pool, in this case, if the application data of different interfaces includes the same picture data, the same picture data corresponding to the different interfaces may be cached in the shared cache space only by one loading operation, and when one of the interfaces needs to be displayed, the picture data cached in the shared cache space may be applied. Compared with the prior art, the scheme of the application can reduce the loading operation of the picture data, thereby reducing the I/O operation and reducing the occupation of the storage space of the memory.
In this embodiment of the present application, the decoded picture data may be cached in the shared cache space, wherein if the two picture data are the same, only one of the picture data may be loaded and decoded, and then the decoded picture data is cached in the shared cache space.
In addition, in some cases, only part of the decoded picture data may be buffered in the shared buffer space, and the other decoded picture data may be buffered in the non-shared buffer space of the second buffer pool. For example, some APPs have a high requirement on security of their corresponding picture data, and in such a case, it is often desirable to cache their corresponding picture data in an unshared cache space.
In this case, in the embodiment of the present application, the following operations may be further included:
determining whether the decoded picture data is preset picture data or not, or determining whether the decoded picture data is picture data applied to at least two interfaces or not;
when the decoded picture data is determined to be preset picture data, or when the decoded picture data is determined to be picture data applied to the at least two interfaces, storing the decoded picture data into a shared cache space in the second cache pool.
In this embodiment, the picture data that can be cached in the shared cache space may be preset. In this case, when it is determined that the decoded picture data is preset picture data, the decoded picture data is stored in the shared buffer space in the second buffer pool.
For example, a protocol may be set in advance with each APP developer, and by the protocol, the picture data that can be cached in the shared cache space is set. In this case, when the decoded picture data is the picture data that can be cached in the shared cache space and is indicated by the protocol, it may be determined that the decoded picture data is the preset picture data.
In addition, in this embodiment of the application, the terminal device may further determine whether the decoded picture data is applied to at least two interfaces. When it is, the decoded picture data needs to be shared by those interfaces, and in this case it may be stored in the shared cache space in the second cache pool.
That is to say, when it is determined that the decoded picture data is the preset picture data or the decoded picture data is picture data applied to at least two interfaces, the decoded picture data is stored in the shared cache space in the second cache pool, and other decoded picture data may be cached in the unshared space of the second cache pool, so as to improve the confidentiality of other decoded picture data and prevent other decoded picture data from being applied to other interfaces.
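The decision between shared and non-shared cache space described above can be sketched as follows. This is an illustrative reading only; the preset list, function name, and identifiers are hypothetical, not from this application:

```python
# Hypothetical sketch: decide whether decoded picture data goes into the
# shared or the non-shared space of the second cache pool. Per the text,
# data is shared if it is preset (e.g. agreed with the APP developer) or
# used by at least two interfaces; everything else stays private.

PRESET_SHAREABLE = {"common_icon.png", "app_logo.png"}  # assumed preset list

def choose_cache_space(picture_id, using_interfaces):
    """Return 'shared' or 'private' for a decoded picture.

    picture_id       -- identifier of the decoded picture data
    using_interfaces -- set of interface names that use this picture
    """
    if picture_id in PRESET_SHAREABLE:
        return "shared"          # preset picture data -> shared space
    if len(using_interfaces) >= 2:
        return "shared"          # used by at least two interfaces -> shared
    return "private"             # keep confidential in non-shared space
```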
Further, referring to the workflow diagram shown in fig. 5, in the embodiment of the present application, the following operations are further included:
Step S131: when the application data of a new target interface needs to be stored in the cache pool of the memory and the free capacity of the cache pool is insufficient to cache that data, determine a target interface to be processed. The application data of the target interface to be processed is already cached in the cache pool, and its priority is lower than that of the new target interface.
The priority of each target interface may be determined from the display frequency of the target interface and/or the loading time of its application data. Generally, the higher the display frequency or the longer the loading time, the higher the priority of the target interface.
Step S132: remove the application data of the target interface to be processed from the cache pool.
In this embodiment, the application data of higher-priority target interfaces is cached preferentially. Since the target interface to be processed has a lower priority than the new target interface, its application data is removed.
Step S133: cache the application data of the new target interface in the cache pool of the memory.
Through these operations, the application data of a low-priority target interface is removed from the cache pool to make room for the application data of a high-priority target interface, so that high-priority application data is cached preferentially.
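Steps S131 to S133 can be sketched as a priority-based eviction routine. This is a minimal illustration under assumptions: the application does not prescribe a concrete priority scale or eviction order, so the numeric priorities and the lowest-priority-first victim choice here are hypothetical:

```python
class CachePool:
    """Fixed-capacity pool keyed by target interface (illustrative sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}  # interface -> (priority, size)

    def used(self):
        return sum(size for _priority, size in self.entries.values())

    def insert(self, interface, priority, size):
        """Cache `interface`, evicting lower-priority interfaces if needed.

        Returns True if the data was cached, False if even after evicting
        every lower-priority interface there is still not enough room.
        """
        # S131: candidate victims are cached interfaces whose priority is
        # lower than the newcomer's, tried lowest-priority first.
        victims = sorted(
            (name for name, (p, _s) in self.entries.items() if p < priority),
            key=lambda name: self.entries[name][0],
        )
        for victim in victims:
            if self.used() + size <= self.capacity:
                break
            del self.entries[victim]            # S132: remove victim's data
        if self.used() + size <= self.capacity:
            self.entries[interface] = (priority, size)  # S133: cache new data
            return True
        return False
```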
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Corresponding to the above method embodiments, the present application embodiment discloses an application data processing apparatus. Referring to the schematic structural diagram shown in fig. 6, the application data processing apparatus includes: a determination unit 610 and a processing unit 620.
The determining unit 610 is configured to determine a target interface according to display frequency of each interface of an application program APP and/or loading time of application data of each interface;
the processing unit 620 is configured to cache the application data of the target interface into a cache pool of a memory.
According to this embodiment, the target interface can be determined from the display frequency of each interface of the APP and/or the time required to load each interface's application data, and the application data of the target interface is cached in a cache pool of the memory. The target interface is an interface that is displayed frequently and/or whose application data takes a long time to load, so this scheme caches the application data of exactly those interfaces. When the target interface needs to be displayed, the cache pool is queried first; application data already in the cache pool need not be loaded again, and only application data missing from the cache pool is obtained by loading.
Caching the application data of frequently displayed interfaces reduces how often application data must be loaded, and caching application data with long loading times reduces the volume of data that must be loaded. The scheme of this embodiment therefore reduces the I/O resources consumed when loading application data and, in turn, reduces thread congestion and processor stalls.
Further, in the apparatus disclosed in the embodiment of the present application, the processing unit is specifically configured to determine the target interface according to a first frequency threshold and a first time threshold respectively corresponding to a storage capacity of the memory;
the display frequency of the target interface is greater than the first frequency threshold, and/or the loading time of the application data of the target interface is greater than the first time threshold.
In this embodiment of the application, a first frequency threshold and a first time threshold may be respectively set according to a storage capacity of a memory, so as to determine a target interface according to the first frequency threshold and/or the first time threshold. Different terminal devices are often configured with memories of different specifications, and correspondingly, for storage capacities of different sizes, the first frequency threshold and the first time threshold corresponding to the storage capacity may be different.
In addition, in this embodiment, an interface whose display frequency exceeds the first frequency threshold is determined to be a target interface, so that frequently displayed interfaces become target interfaces; and/or an interface whose application-data loading time exceeds the first time threshold is determined to be a target interface, so that interfaces whose application data takes a long time to load become target interfaces.
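As an illustration of how the first frequency threshold and first time threshold might be applied, the sketch below uses invented capacity-to-threshold values; the application only states that the thresholds correspond to the memory's storage capacity:

```python
# Assumed rule for illustration: larger memory -> lower thresholds, so more
# interfaces qualify as targets. The concrete numbers are hypothetical.

def thresholds_for(memory_gb):
    """Return (first frequency threshold, first time threshold in ms)."""
    if memory_gb >= 8:
        return 5, 100
    return 10, 200

def select_targets(interfaces, memory_gb):
    """interfaces: dict of name -> (display_freq, load_time_ms).

    An interface is a target if its display frequency exceeds the first
    frequency threshold and/or its loading time exceeds the first time
    threshold.
    """
    freq_th, time_th = thresholds_for(memory_gb)
    return [name for name, (freq, load_ms) in interfaces.items()
            if freq > freq_th or load_ms > time_th]
```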
Further, in the apparatus according to the embodiment of the present application, the processing unit is specifically configured to:
dividing the application data of the target interface into at least one data block;
determining a target data block of the at least one data block, wherein the load time of the target data block is greater than a second time threshold;
and caching the target data block into a cache pool of the memory.
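The block-level caching just described can be sketched in a few lines; the block representation and the millisecond timings are assumptions for illustration:

```python
# Sketch: split an interface's application data into blocks and select only
# those blocks whose load time exceeds the second time threshold, since only
# the target blocks are cached in the pool.

def blocks_to_cache(blocks, second_time_threshold_ms):
    """blocks: list of (block_id, load_time_ms); returns the target block ids."""
    return [block_id for block_id, load_ms in blocks
            if load_ms > second_time_threshold_ms]
```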
Further, in the apparatus according to the embodiment of the present application, the cache pool includes: the first cache pool is used for caching the library file and/or the second cache pool is used for caching the resource file;
the capacity of the first cache pool and/or the capacity of the second cache pool correspond to the storage capacity of the memory respectively.
To prevent the cache pool from occupying so much memory that other data cannot be stored, in this embodiment the capacity of the first cache pool and/or the second cache pool is set according to the storage capacity of the memory, each capacity corresponding to that storage capacity.
Further, in the apparatus according to the embodiment of the present application, the processing unit is specifically configured to:
determining the priority of the target interface according to the display frequency of the target interface and/or the loading time of the application data of the target interface;
according to the sequence of the priority of the target interface from high to low, sequentially caching the application data of the target interface in the first cache pool and/or the second cache pool.
In this embodiment, the priority of the target interface is determined from the display frequency of the target interface and/or the loading time of its application data. Generally, the higher the display frequency or the longer the loading time of the application data, the higher the priority of the target interface.
By the embodiment, the priority of the target interface can be determined, and the application data of the target interface with higher priority can be stored preferentially.
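One plausible sketch of caching target interfaces in descending priority order follows. Note that modeling priority as the sum of display frequency and loading time is an assumption; the application only states that both factors raise the priority:

```python
# Sketch: rank target interfaces by an assumed priority score and cache
# their application data highest-priority first, as long as capacity allows.

def cache_in_priority_order(targets, pool_capacity):
    """targets: list of (name, display_freq, load_time_ms, size).

    Returns the names cached, in descending priority order, skipping any
    entry that no longer fits in the remaining capacity.
    """
    ranked = sorted(targets, key=lambda t: t[1] + t[2], reverse=True)
    cached, used = [], 0
    for name, _freq, _load_ms, size in ranked:
        if used + size <= pool_capacity:
            cached.append(name)
            used += size
    return cached
```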
Further, in the apparatus according to the embodiment of the present application, the processing unit is specifically configured to:
decoding the picture data;
and caching the decoded picture data into a shared cache space in the second cache pool.
In the scheme provided by this embodiment, the decoded picture data is cached in a cache pool of the memory. When a corresponding interface needs to be displayed, the decoded picture data can be extracted directly from the cache pool without decoding the picture data again. This reduces the consumption of processor resources and, accordingly, reduces thread congestion and processor stalls while improving the display efficiency of the interface.
Further, in the apparatus according to the embodiment of the present application, the processing unit is further configured to:
determining whether the decoded picture data is preset picture data or not, or determining whether the decoded picture data is picture data applied to at least two interfaces or not;
when the decoded picture data is determined to be preset picture data, or when the decoded picture data is determined to be picture data applied to the at least two interfaces, storing the decoded picture data into a shared cache space in the second cache pool.
Further, in the apparatus according to the embodiment of the present application, the processing unit is further configured to:
when application data of a new target interface needs to be stored in a cache pool of the memory, and the free storage capacity in the cache pool is not enough to cache the application data of the new target interface, determining a target interface to be processed, wherein the application data of the target interface to be processed is cached in the cache pool, and the priority of the target interface to be processed is lower than that of the new target interface;
removing the application data of the target interface to be processed from the cache pool;
and caching the application data of the new target interface into a cache pool of the memory.
Through these operations, the application data of a low-priority target interface is removed from the cache pool to make room for the application data of a high-priority target interface, so that high-priority application data is cached preferentially.
Corresponding to the above method embodiments, the present application embodiment discloses an application data processing apparatus. Referring to the schematic structural diagram shown in fig. 7, the application data processing apparatus includes: resource loading module 710, resource cache identification module 720, and resource cache module 730.
The resource loading module 710 is configured to load application data of an application program APP;
the resource cache identification module 720 is configured to determine a target interface according to the display frequency of each interface of the APP and/or the loading time of the application data of each interface;
the resource caching module 730 is configured to cache the application data of the target interface into a cache pool of a memory.
In the solution disclosed in the embodiment of the present application, the resource loading module 710 may load application data of an APP. The application data loaded by the resource loading module 710 includes at least one of a library file and a resource file, and the resource file typically includes picture data.
In addition, while the resource loading module 710 loads the APP's application data, the resource cache identification module 720 may record the loading time of the application data of each interface. The resource cache identification module 720 may also track the display frequency of each interface. It then determines a target interface according to the display frequency of each interface of the APP and/or the loading time of the application data of each interface.
Further, the resource cache identification module 720 may determine the target interface according to a first frequency threshold and a first time threshold respectively corresponding to the storage capacity of the memory. The display frequency of the target interface is greater than the first frequency threshold, and/or the loading time of the application data of the target interface is greater than the first time threshold.
After the resource cache identification module 720 determines the target interface, the resource cache module 730 caches the application data of the target interface in a cache pool of a memory.
In addition, the resource cache identification module 720 may further determine a target data block in the application data of the target interface, where a load time of the target data block is greater than a second time threshold. In this case, the resource caching module 730 may cache the target data block in the application data when the application data of the target interface is cached.
Further, the resource cache identification module 720 may also determine the priority of the target interface according to the display frequency of the target interface and/or the loading time of the application data of the target interface. In this case, the resource caching module 730 may sequentially cache the application data of the target interface in the first cache pool and/or the second cache pool according to the priority of the target interface from high to low.
The resource loading module 710 may also decode the loaded picture data. When the application data of the target interface includes picture data, the resource caching module 730 may cache the decoded picture data in the shared cache space in the second cache pool.
Further, the resource cache identification module 720 and/or the resource cache module 730 may further determine whether the decoded picture data is preset picture data, or determine whether the decoded picture data is picture data applied by at least two interfaces. When it is determined that the decoded picture data is the preset picture data, or when it is determined that the decoded picture data is the picture data applied to the at least two interfaces, the resource cache module 730 may store the decoded picture data in the shared cache space in the second cache pool.
Further, when the application data of the new target interface needs to be stored in the cache pool of the memory, and the free storage capacity in the cache pool is not enough to cache the application data of the new target interface, the resource cache identification module 720 and/or the resource cache module 730 may further determine the target interface to be processed. The application data of the target interface to be processed is cached in the cache pool, and the priority of the target interface to be processed is lower than that of the new target interface.
Then, the resource caching module 730 removes the application data of the target interface to be processed from the cache pool and caches the application data of the new target interface in the cache pool of the memory.
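The cooperation of the three modules might be wired roughly as below. All class names, the timing model, and the threshold values are assumptions made for this sketch, not details from this application:

```python
import time

class ResourceLoader:
    """Stands in for resource loading module 710: loads and times data."""
    def load(self, interface):
        start = time.perf_counter()
        data = f"data-for-{interface}"            # stand-in for real loading
        elapsed_ms = (time.perf_counter() - start) * 1000
        return data, elapsed_ms

class ResourceCacheIdentifier:
    """Stands in for resource cache identification module 720."""
    def __init__(self, freq_threshold, time_threshold_ms):
        self.freq_threshold = freq_threshold
        self.time_threshold_ms = time_threshold_ms
        self.display_counts = {}

    def record_display(self, interface):
        self.display_counts[interface] = self.display_counts.get(interface, 0) + 1

    def is_target(self, interface, load_ms):
        freq = self.display_counts.get(interface, 0)
        return freq > self.freq_threshold or load_ms > self.time_threshold_ms

class ResourceCacher:
    """Stands in for resource cache module 730: holds the cache pool."""
    def __init__(self):
        self.pool = {}

    def cache(self, interface, data):
        self.pool[interface] = data

def process(interface, loader, identifier, cacher):
    """Load an interface's data and cache it only if it is a target."""
    data, load_ms = loader.load(interface)
    if identifier.is_target(interface, load_ms):
        cacher.cache(interface, data)
    return data
```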
Accordingly, this embodiment discloses a terminal device. Referring to the schematic structural diagram shown in fig. 8, the device includes a processor 1101 and a memory. The memory stores program instructions, and the processor is configured to call those instructions to execute the application data processing method disclosed in the above embodiments.
Further, the terminal device may also include a transceiver 1102, a bus 1103, a random access memory 1104, and a read-only memory 1105.
The processor is coupled to the transceiver, the random access memory, and the read-only memory through the bus. When the terminal device starts, it is booted into a normal operation state by a basic input/output system stored in the read-only memory or, in an embedded system, by a bootloader. After the device enters the normal operation state, an application program and the operating system run in the random access memory, causing the terminal device to perform all or part of the steps of the application data processing method disclosed in the above embodiments.
The apparatus according to the embodiment of the present invention may correspond to the application data processing apparatus in the embodiment corresponding to fig. 6, and a processor in the apparatus may implement the functions of the apparatus and/or various steps and methods implemented in the embodiment corresponding to fig. 6, which are not described herein again for brevity.
In particular implementations, an embodiment of the present application further provides a computer-readable storage medium containing instructions. When the instructions are executed on a computer, all or part of the steps of the embodiments corresponding to figs. 3 to 5 may be performed. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).
In addition, another embodiment of the present application further provides a computer program product containing instructions, which when run on an electronic device, enables the electronic device to implement all or part of the steps in the embodiments corresponding to fig. 3 to 5.
Those of skill in the art will further appreciate that the various illustrative logical blocks and steps (step) set forth in the embodiments of the present application may be implemented in electronic hardware, computer software, or combinations of both. Whether such functionality is implemented as hardware or software depends upon the particular application and design requirements of the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
The various illustrative logical units and circuits described in this application may be implemented or operated upon by design of a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in the embodiments herein may be embodied directly in hardware, in a software unit executed by a processor, or in a combination of the two. The software unit may be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be located in a UE. In the alternative, the processor and the storage medium may reside in different components of the UE.
It should be understood that, in the various embodiments of the present application, the size of the serial number of each process does not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example from one website, computer, server, or data center to another via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)).
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, since the apparatus and system embodiments are substantially similar to the method embodiments, their description is relatively brief, and reference may be made to the description of the method embodiments where relevant.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented as software plus a required general purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
Identical and similar parts of the various embodiments in this specification may be referred to one another. In particular, since the …… embodiment is basically similar to the method embodiment, its description is brief; for relevant details, refer to the description in the method embodiment.
The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention.

Claims (19)

1. An application data processing method, comprising:
determining a target interface according to the display frequency of each interface of an application program APP and/or the loading time of application data of each interface;
and caching the application data of the target interface into a cache pool of a memory.
2. The method according to claim 1, wherein the determining a target interface according to the display frequency of each interface of the application program APP and/or the loading time of the application data of each interface comprises:
determining the target interface according to a first frequency threshold and a first time threshold respectively corresponding to the storage capacity of the memory;
the display frequency of the target interface is greater than the first frequency threshold, and/or the loading time of the application data of the target interface is greater than the first time threshold.
3. The method of claim 1, wherein caching the application data of the target interface into a cache pool of a memory comprises:
dividing the application data of the target interface into at least one data block;
determining a target data block of the at least one data block, wherein the load time of the target data block is greater than a second time threshold;
and caching the target data block into a cache pool of the memory.
4. The method according to any one of claims 1 to 3,
the cache pool comprises: the first cache pool is used for caching the library file and/or the second cache pool is used for caching the resource file;
the capacity of the first cache pool and/or the capacity of the second cache pool correspond to the storage capacity of the memory respectively.
5. The method of claim 4, wherein caching the application data of the target interface into a cache pool of a memory comprises:
determining the priority of the target interface according to the display frequency of the target interface and/or the loading time of the application data of the target interface;
according to the sequence of the priority of the target interface from high to low, sequentially caching the application data of the target interface in the first cache pool and/or the second cache pool.
6. The method according to claim 4 or 5, wherein the application data of the target interface includes picture data, and the caching the application data of the target interface into a cache pool of a memory includes:
decoding the picture data;
and caching the decoded picture data into a shared cache space in the second cache pool.
7. The method of claim 6, further comprising:
determining whether the decoded picture data is preset picture data or not, or determining whether the decoded picture data is picture data applied to at least two interfaces or not;
when the decoded picture data is determined to be preset picture data, or when the decoded picture data is determined to be picture data applied to the at least two interfaces, storing the decoded picture data into a shared cache space in the second cache pool.
8. The method of claim 5, further comprising:
when application data of a new target interface needs to be stored in a cache pool of the memory, and the free storage capacity in the cache pool is not enough to cache the application data of the new target interface, determining a target interface to be processed, wherein the application data of the target interface to be processed is cached in the cache pool, and the priority of the target interface to be processed is lower than that of the new target interface;
removing the application data of the target interface to be processed from the cache pool;
and caching the application data of the new target interface into a cache pool of the memory.
9. An application data processing apparatus, comprising:
the determining unit is used for determining a target interface according to the display frequency of each interface of the application program APP and/or the loading time of application data of each interface;
and the processing unit is used for caching the application data of the target interface into a cache pool of a memory.
10. The apparatus of claim 9,
the processing unit is specifically configured to determine the target interface according to a first frequency threshold and a first time threshold respectively corresponding to the storage capacity of the memory;
the display frequency of the target interface is greater than the first frequency threshold, and/or the loading time of the application data of the target interface is greater than the first time threshold.
11. The apparatus according to claim 9, wherein the processing unit is specifically configured to:
dividing the application data of the target interface into at least one data block;
determining a target data block of the at least one data block, wherein the load time of the target data block is greater than a second time threshold;
and caching the target data block into a cache pool of the memory.
12. The apparatus according to any one of claims 9 to 11,
the cache pool comprises: the first cache pool is used for caching the library file and/or the second cache pool is used for caching the resource file;
the capacity of the first cache pool and/or the capacity of the second cache pool correspond to the storage capacity of the memory respectively.
13. The apparatus according to claim 12, wherein the processing unit is specifically configured to:
determining the priority of the target interface according to the display frequency of the target interface and/or the loading time of the application data of the target interface;
according to the sequence of the priority of the target interface from high to low, sequentially caching the application data of the target interface in the first cache pool and/or the second cache pool.
14. The apparatus according to claim 12 or 13, wherein the processing unit is specifically configured to:
decoding the picture data;
and caching the decoded picture data into a shared cache space in the second cache pool.
15. The apparatus of claim 14, wherein the processing unit is further configured to:
determining whether the decoded picture data is preset picture data or not, or determining whether the decoded picture data is picture data applied to at least two interfaces or not;
when the decoded picture data is determined to be preset picture data, or when the decoded picture data is determined to be picture data applied to the at least two interfaces, storing the decoded picture data into a shared cache space in the second cache pool.
16. The apparatus of claim 15, wherein the processing unit is further configured to:
when application data of a new target interface needs to be stored in the cache pool of the memory and the free storage capacity of the cache pool is insufficient to cache the application data of the new target interface, determine a to-be-processed target interface, wherein the application data of the to-be-processed target interface is cached in the cache pool and the priority of the to-be-processed target interface is lower than that of the new target interface;
remove the application data of the to-be-processed target interface from the cache pool; and
cache the application data of the new target interface in the cache pool of the memory.
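The eviction in claim 16 can be sketched as a priority-aware cache pool: when a new, higher-priority interface does not fit, lower-priority entries are removed first. The capacity accounting, the lowest-priority-first victim order, and all names are illustrative assumptions.

```python
# Hypothetical sketch of claim 16: evict cached interfaces whose priority
# is lower than the incoming interface's until its data fits.

class CachePool:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries: dict[str, tuple[int, int]] = {}  # name -> (priority, size)

    def free(self) -> int:
        """Remaining capacity after accounting for all cached entries."""
        return self.capacity - sum(size for _, size in self.entries.values())

    def insert(self, name: str, priority: int, size: int) -> bool:
        # Candidate victims: cached entries with strictly lower priority,
        # evicted lowest-priority first.
        victims = sorted(
            (n for n, (p, _) in self.entries.items() if p < priority),
            key=lambda n: self.entries[n][0],
        )
        while self.free() < size and victims:
            del self.entries[victims.pop(0)]
        if self.free() < size:
            return False  # even after eviction there is no room
        self.entries[name] = (priority, size)
        return True

pool = CachePool(capacity=100)
pool.insert("settings", priority=1, size=60)
pool.insert("home", priority=5, size=80)  # evicts "settings" to make room
```

Unlike plain LRU, this policy never evicts an entry to admit lower-priority data, so a burst of cheap interfaces cannot flush an expensive one.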
17. An application data processing apparatus, comprising:
a resource loading module, a resource cache identification module, and a resource caching module;
the resource loading module is configured to load application data of an application program APP;
the resource cache identification module is configured to determine a target interface according to a display frequency of each interface of the APP and/or a loading time of application data of each interface; and
the resource caching module is configured to cache the application data of the target interface in a cache pool of a memory.
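The three-module split in claim 17 can be sketched as below. The class names mirror the claim, but the data layout, the frequency/load-time thresholds, and the selection rule are all assumptions made for illustration.

```python
# Hypothetical sketch of claim 17: a loader that fetches an APP's
# application data, an identifier that selects target interfaces by
# display frequency and/or loading time, and a cacher that stores the
# selected interfaces' data in a memory cache pool.

class ResourceLoader:
    def load(self, app: dict) -> dict:
        """Return the APP's per-interface data: name -> {freq, load_ms, data}."""
        return app["interfaces"]

class ResourceCacheIdentifier:
    def __init__(self, min_freq: float = 5.0, min_load_ms: float = 100.0):
        self.min_freq, self.min_load_ms = min_freq, min_load_ms

    def targets(self, interfaces: dict) -> list[str]:
        """Select interfaces shown often OR expensive to load."""
        return [
            name for name, info in interfaces.items()
            if info["freq"] >= self.min_freq or info["load_ms"] >= self.min_load_ms
        ]

class ResourceCacher:
    def __init__(self):
        self.pool: dict[str, bytes] = {}  # stand-in for the memory cache pool

    def cache(self, interfaces: dict, targets: list[str]) -> None:
        for name in targets:
            self.pool[name] = interfaces[name]["data"]

app = {"interfaces": {
    "home":  {"freq": 9.0, "load_ms": 50.0,  "data": b"home"},
    "about": {"freq": 0.5, "load_ms": 20.0,  "data": b"about"},
    "feed":  {"freq": 2.0, "load_ms": 250.0, "data": b"feed"},
}}
loader, ident, cacher = ResourceLoader(), ResourceCacheIdentifier(), ResourceCacher()
interfaces = loader.load(app)
cacher.cache(interfaces, ident.targets(interfaces))
```

Separating identification from caching lets the selection policy change (thresholds, priority formulas) without touching how data is stored.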
18. A terminal device, wherein the device comprises a processor and a memory configured to store program instructions, and the processor is configured to invoke the program instructions stored in the memory to perform the method according to any one of claims 1 to 8.
19. A computer-readable storage medium storing instructions which, when executed on a computer or processor, cause the computer or processor to perform the method according to any one of claims 1 to 8.
CN202010439953.3A 2020-05-22 2020-05-22 Application data processing method and device Pending CN113722023A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010439953.3A CN113722023A (en) 2020-05-22 2020-05-22 Application data processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010439953.3A CN113722023A (en) 2020-05-22 2020-05-22 Application data processing method and device

Publications (1)

Publication Number Publication Date
CN113722023A true CN113722023A (en) 2021-11-30

Family

ID=78671400

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010439953.3A Pending CN113722023A (en) 2020-05-22 2020-05-22 Application data processing method and device

Country Status (1)

Country Link
CN (1) CN113722023A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114385257A (en) * 2021-12-02 2022-04-22 广州歌神信息科技有限公司 Program preheating method, device, electronic equipment and computer readable storage medium
CN114185621A (en) * 2021-12-17 2022-03-15 广东德生科技股份有限公司 Application program interface picture loading method, device, equipment and storage medium
CN114489551A (en) * 2022-02-09 2022-05-13 广东乐心医疗电子股份有限公司 Data display method and device and electronic equipment
CN114489551B (en) * 2022-02-09 2024-09-03 广东乐心医疗电子股份有限公司 Data display method and device and electronic equipment
CN116679900A (en) * 2022-12-23 2023-09-01 荣耀终端有限公司 Audio service processing method, firmware loading method and related devices
CN116679900B (en) * 2022-12-23 2024-04-09 荣耀终端有限公司 Audio service processing method, firmware loading method and related devices

Similar Documents

Publication Publication Date Title
CN113722023A (en) Application data processing method and device
WO2021185105A1 (en) Method for switching between sim card and esim card, and electronic device
EP4024948A1 (en) Method and apparatus for switching sim card, and electronic device
CN113204377B (en) Method and device for loading dynamic link library
CN113993226B (en) Service processing method and device in terminal equipment supporting double cards
CN111078376A (en) Process management method and device
CN114449576A (en) Application data sending method, device and equipment
CN114498028B (en) Data transmission method, device, equipment and storage medium
CN114079642B (en) Mail processing method and electronic equipment
CN115904297A (en) Screen display detection method, electronic device and storage medium
CN116679900B (en) Audio service processing method, firmware loading method and related devices
CN116048772B (en) Method and device for adjusting frequency of central processing unit and terminal equipment
CN116049535B (en) Information recommendation method, device, terminal device and storage medium
CN115729684B (en) Input/output request processing method and electronic equipment
CN111147861A (en) Image compression method, device, user equipment and computer readable storage medium
CN115562772B (en) Scene recognition and preprocessing method and electronic equipment
CN117130541A (en) Storage space configuration method and related equipment
CN113760192B (en) Data reading method, data reading apparatus, storage medium, and program product
CN116009763A (en) Storage method, device, equipment and storage medium
CN116489736B (en) Redirection control method, terminal equipment and storage medium
CN117081949B (en) Information detection method and electronic equipment
CN117082170B (en) On-off test method, test system and shared host
WO2022143165A1 (en) Method and apparatus for determining network standard
EP4137973A1 (en) Method and apparatus for applying file
CN115686765A (en) Data processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination