Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all embodiments consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "at the time of", "when", or "in response to a determination", depending on the context.
The present application presents a solution to improve system performance of a device. In some embodiments, the device may be a desktop computer, and in some embodiments, the device may be portable (e.g., a notebook computer, a tablet computer, or a handheld device). In some embodiments, the device may be wearable (e.g., a smart watch, smart glasses, etc.), and in some embodiments, the device may be mounted on other devices (e.g., an in-vehicle terminal, a navigator, etc.).
In some examples, the logical architecture inside the device may be as shown in fig. 1. The device has an Operating System 100; the type of the Operating System (OS) is not limited in this application and may be, for example, Windows OS, iOS, MacOS, Android OS, Linux OS, YunOS (cloud OS), and the like. Some operating systems, e.g., Android OS and YunOS, rely on virtual machines, so the layer above the operating system 100 may be the Virtual Machine 110, which generally refers to a complete computer system that is simulated by software, has complete hardware system functions, and operates in a completely isolated environment. It will be readily appreciated that in some instances the logical framework within the device may not include the virtual machine 110. The system Framework 120 provides a framework for software development, making development more engineering-oriented, simpler, and more stable. The device may be loaded with a variety of applications, which are installed in the application layer 130. When an application or a service provided inside the operating system is run, the operating system 100 starts a corresponding process and allocates an independent memory space for the process.
The conventional technology generally compresses memory by using a conventional GC (Garbage Collection) strategy. Because the fragmentation rate of the heap area is high, the data generated by a process in the heap area is usually compressed once when the process is in a non-sensitive period (for example, when an application switches from the foreground to the background and is not in use). Since a process related to the interface (referred to in this application as a process of the Activity attribute) can more easily be determined to have entered a non-sensitive period from a sensitive period, the conventional GC policy generally compresses and defragments the data in the memory occupied by processes of the Activity attribute, and in some OSs the time of compression may be automatically controlled by the virtual machine. However, the applicant finds that compressing the data in the memory occupied by processes of the Activity attribute cannot solve the problem of insufficient memory resources. The reason is that a large number of background processes exist while the device runs; for example, processes of the Service attribute and processes of the persistence attribute occupy a high proportion of memory, and such processes are usually background resident processes that have no opportunity to switch to a non-sensitive period, so their heap areas are never defragmented and optimized by the GC policy, and the longer the system runs, the more memory is consumed. Therefore, the applicant proposes a scheme for compressing the data in the memory occupied by background processes. Part of the process flow in some examples can be seen in fig. 2a.
S201a, the operating system sends a trigger instruction of memory compression to the virtual machine;
S202a, the virtual machine responds to the trigger instruction of memory compression and compresses the data in the memory occupied by the specified background process. It should be noted that the specified background process includes a process of the Service attribute and/or a process of the persistence attribute, which does not preclude the possibility of compressing background processes of other attributes when the design requires.
The applicant finds that many fragments can be cleaned up when compression of the memory of a certain background process is forcibly triggered. In order to compress the data in the memory occupied by background processes without affecting the user's normal use, in the example shown in fig. 2a, a proper time is selected and the compression of background processes is actively triggered by a trigger instruction of memory compression.
The example illustrated in fig. 2a may be implemented by a virtual machine, but in other examples the memory compression may be implemented by the operating system or another executing entity, depending on the operating system. In some examples, the entity performing the memory compression may perform the steps shown in fig. 2b:
S201b, receiving a trigger instruction of memory compression;
S202b, compressing the data in the memory occupied by the specified background process; the specified background process may include a process of the Service attribute and/or a process of the persistence attribute, which does not preclude the possibility of compressing background processes of other attributes when the design requires.
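The two steps S201b and S202b can be sketched as follows. This is a minimal illustrative model in Python, not the actual virtual machine implementation; all names (Process, MemoryCompressor, compress_heap) are hypothetical.

```python
# Hypothetical sketch of S201b/S202b: receive a trigger instruction, then
# compress the heap of each specified background process.

class Process:
    def __init__(self, name, attribute):
        self.name = name
        self.attribute = attribute   # e.g. "Service", "persistence", "Activity"
        self.heap_compressed = False

class MemoryCompressor:
    # Attributes that designate a background process for compression (S202b).
    TARGET_ATTRIBUTES = {"Service", "persistence"}

    def __init__(self, processes):
        self.processes = processes

    def on_trigger(self):
        """S201b: a trigger instruction of memory compression was received.
        S202b: compress each specified background process and report which."""
        compressed = []
        for proc in self.processes:
            if proc.attribute in self.TARGET_ATTRIBUTES:
                self.compress_heap(proc)
                compressed.append(proc.name)
        return compressed

    def compress_heap(self, proc):
        # Placeholder for a real GC compaction pass (e.g. mark-compact).
        proc.heap_compressed = True

procs = [Process("music", "Service"), Process("launcher", "Activity"),
         Process("telephony", "persistence")]
vm = MemoryCompressor(procs)
print(vm.on_trigger())   # → ['music', 'telephony']
```

Note that the Activity-attribute process is skipped here, mirroring the point above that this scheme specifically targets resident background processes.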
Taking the executing entity being a virtual machine as an example, the trigger instruction may come from outside the virtual machine; alternatively, a trigger condition may be preset, and the trigger instruction may be issued from within the virtual machine when the trigger condition is met. In some examples, the trigger instruction may be one that actively triggers memory compression when the operating system calls an interface provided by the virtual machine, or when the operating system sends a specified signal to the virtual machine. As an example, the interface provided by the virtual machine may be a newly added interface extended at the virtual machine layer, for example, an extended interface of the AMS (Activity Manager Service) module; the AMS module may manage application programs, covering, for example, downloading, moving, deleting, upgrading, purchasing rights, and the like.
The timing of memory compression has a great influence on whether a user can use the functions of the device normally. Through extensive experimental research on the sensitive periods of resident background processes, the applicant has determined several timings for actively triggering background process compression. For example, the active trigger event may be one or more of the following:
1. when the device is started, the startup is used as an event that actively triggers background process compression;
2. a specified control can be added, for example, a "one-key acceleration" button; when the user triggers the specified control, the event of background process compression is actively triggered;
3. the event of background process compression can also be triggered when an existing function is triggered through an existing control; for example, the timing may be when the background process cleaning function is triggered.
It will be readily understood that the several timings listed above are merely illustrative and do not preclude other timings.
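The three active trigger timings above can be sketched as a simple event dispatcher. This is an illustration under assumed names; send_trigger stands in for issuing the trigger instruction of memory compression to the virtual machine.

```python
# Hypothetical sketch of the three active trigger events; none of these
# handler names belong to a real OS API.

class TriggerSource:
    def __init__(self):
        self.sent = []

    def send_trigger(self, reason):
        # In a real system this would call the virtual machine interface.
        self.sent.append(reason)

    def on_device_boot(self):                # timing 1: device startup
        self.send_trigger("boot")

    def on_control_pressed(self, control):   # timing 2: a specified control
        if control == "one-key-acceleration":
            self.send_trigger("one-key-acceleration")

    def on_background_clean(self):           # timing 3: existing cleanup function
        self.send_trigger("background-clean")

src = TriggerSource()
src.on_device_boot()
src.on_control_pressed("one-key-acceleration")
src.on_background_clean()
print(src.sent)  # → ['boot', 'one-key-acceleration', 'background-clean']
```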
For example, the compression process may refer to the manner of performing GC compression on processes of the Activity attribute; the compression algorithm adopted may be mark-copy, mark-compact, or the like. After the background process memory compression flow is triggered, all processes of the Service attribute and the persistence attribute may be traversed, and a trigger instruction of memory compression is sent to the virtual machine. To reduce memory usage, the request may be an asynchronous request. After receiving the trigger instruction of memory compression, the virtual machine attempts to compress the data in the memory occupied by the background processes to be compressed.
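The asynchronous nature of the request described above can be sketched with a simple producer-consumer model: the traversal side enqueues one compression request per background process and does not block on the compression itself, while a worker standing in for the virtual machine consumes the requests. This is an illustration, not the actual AMS or virtual machine code.

```python
# Hypothetical sketch: traverse background processes and dispatch
# asynchronous compression requests to a virtual-machine worker.
import queue
import threading

request_queue = queue.Queue()
compressed = []

def vm_worker():
    # The virtual machine side: consume compression requests asynchronously.
    while True:
        proc = request_queue.get()
        if proc is None:          # sentinel: no more requests
            break
        compressed.append(proc)   # stand-in for a mark-compact pass
        request_queue.task_done()

worker = threading.Thread(target=vm_worker)
worker.start()

# The traversal side: enqueue a request for each Service/persistence
# process without waiting for the compression to complete.
for proc in ["music:Service", "telephony:persistence"]:
    request_queue.put(proc)

request_queue.put(None)
worker.join()
print(compressed)  # → ['music:Service', 'telephony:persistence']
```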
In some examples, after receiving the trigger instruction of memory compression, the virtual machine may record the timestamp of the trigger instruction, and may record the time of each memory compression after it is performed. If the interval between the time of receiving the current trigger instruction and the time of the last memory compression meets a preset condition, the step of compressing the data in the memory occupied by the specified background process is executed; otherwise, the trigger instruction of memory compression is ignored. The preset condition may be determined according to the designer's requirements; for example, the interval may be set to 5 minutes. The applicant has fully considered that if the interval between two memory compressions is too short, few new heap fragments have been generated, and performing heap compression again consumes more system resources; this interval check by the virtual machine therefore ensures a high benefit from memory compression even when trigger instructions of memory compression arrive frequently.
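The interval check described above can be sketched as follows; the 5-minute figure comes from the example in the text, and all names are illustrative rather than part of a real API.

```python
# Hypothetical sketch of the timestamp-based throttle: a trigger is honored
# only if at least MIN_INTERVAL seconds have passed since the last compression.

MIN_INTERVAL = 5 * 60  # seconds; the 5-minute example from the text

class ThrottledCompressor:
    def __init__(self):
        self.last_compress_time = None

    def on_trigger(self, now):
        """Return True if compression runs, False if the trigger is ignored."""
        if (self.last_compress_time is not None
                and now - self.last_compress_time < MIN_INTERVAL):
            return False                # too soon: few new heap fragments
        self.last_compress_time = now   # record the timestamp of this pass
        return True

c = ThrottledCompressor()
print(c.on_trigger(0))     # → True   (first trigger always runs)
print(c.on_trigger(120))   # → False  (2 minutes later: ignored)
print(c.on_trigger(360))   # → True   (6 minutes after the last run)
```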
To ensure security, in some examples, the method may further include performing permission verification on the process that sends the memory compression trigger instruction. A system process may be set as the default secure process: if the process that sends the trigger instruction is that system process, the permission verification passes; otherwise, a result of failed permission verification is returned.
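The permission verification step can be sketched as follows; the process name and return strings are hypothetical illustrations of the pass/fail outcomes described above.

```python
# Hypothetical sketch: only a configured system process may trigger
# memory compression; any other sender fails permission verification.

TRUSTED_SYSTEM_PROCESS = "system_server"   # the configured secure process

def verify_permission(sender):
    """Pass only when the sender is the configured system process."""
    return sender == TRUSTED_SYSTEM_PROCESS

def handle_trigger(sender):
    if not verify_permission(sender):
        return "permission verification failed"   # trigger discarded
    return "compression triggered"

print(handle_trigger("system_server"))  # → compression triggered
print(handle_trigger("some_app"))       # → permission verification failed
```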
In some examples, processes of the Activity attribute may also be compressed at the time of background process compression; the compression mode may refer to the process in the conventional scheme in which the virtual machine receives a memory compression request through the interface of the AMS module and compresses the data in the memory occupied by processes of the Activity attribute.
In some examples, in addition to compressing the data in the memory occupied by processes of the Activity attribute when background processes are compressed, the method may also follow the conventional scheme in which the virtual machine automatically compresses processes of the Activity attribute according to a preset rule when the interface of the AMS module is called.
FIG. 3a is an example of an application scenario in which a mobile terminal has multiple applications. The AMS module of the virtual machine 320 provides a first interface for the virtual machine to automatically compress processes of the Activity attribute, and provides an extended second interface for the virtual machine to receive an externally and actively triggered compression instruction to compress background processes (such as processes of the Service attribute and processes of the persistence attribute). In this example, the device being turned on and the user triggering the "one-key acceleration" button serve as events for actively triggering background process compression. Fig. 3b shows part of the process of compressing the memory data occupied by background processes when the device is powered on.
Stage 1, triggering the memory compression process: when the mobile terminal is started, a specific system process is started to call the second interface of the AMS module, and a trigger instruction of memory compression is sent to the virtual machine through the second interface;
Stage 2, an identity verification process: the second interface checks whether the caller is the configured system process. If so, the AMS module traverses all background processes of the Service attribute and the persistence attribute and notifies each background process that is currently in the active state; each such process first releases its processing resources, and then an asynchronous request for memory compression is sent to the virtual machine interface through the background process to notify the virtual machine to perform memory compression. If the check finds that the caller is not the configured system process, the received trigger instruction of memory compression is discarded;
Stage 3, a memory compression process: after receiving the request, the virtual machine attempts to perform memory compression on the process and records the timestamp of the compression request.
Stage 4, the processing flow of triggering the memory compression process again: when the user triggers memory compression again through the "one-key acceleration" button while using the mobile device, the specific system process is started again to call the second interface and execute the above steps. If, when the request is received, the time elapsed since the virtual machine last performed compression is less than the specified interval, the virtual machine ignores the request (not shown in the figure); if it is not less than the specified interval, the virtual machine performs compression again.
It should be noted that, when the second interface is called to compress the memory, all processes of the Activity attribute may also be traversed for memory compression. In addition, when the virtual machine automatically calls the first interface, compression of processes of the Activity attribute can also be triggered.
For the present example, the following test data are compared with test data obtained without background process compression, to show the advantages of background process compression for system performance. Take a mobile terminal under test with 1G of memory as an example.
Step 1, recording of memory data is started 5 seconds after the mobile device 300 is unlocked. This data is taken as "boot memory data 1";
Step 2, on the basis of step 1, the device is left idle for 3 minutes and the memory data is recorded. This data is taken as "boot memory data 2";
Step 3, recording of memory data is started 10 minutes after unlocking. This data is taken as "boot memory data 3" and, at the same time, as "memory data before one-key acceleration";
Step 4, on the basis of step 2, the one-key acceleration button is triggered, and the memory data is recorded after the device is left idle for 10 seconds. This data is taken as "memory data 1 after one-key acceleration";
Step 5, the device is left idle for 10 minutes after unlocking, the one-key acceleration button is then triggered, and the memory data is recorded after the device is left idle for 10 seconds. This data is taken as "memory data 2 after one-key acceleration".
the optimization effect of the one-key acceleration on the memory can mainly refer to the memory data 2 after the one-key acceleration and the memory data before the one-key acceleration.
Through the above test steps, the following test results were obtained:
1. "boot memory data 1" shows that, compared with the version without memory compression, the memory-compressed version at boot has: used memory reduced by 8.9M and free memory increased by 8.4M;
2. "boot memory data 2" shows that, compared with the version without memory compression, the memory-compressed version at boot has: used memory reduced by 9.5M and free memory increased by 8.6M;
3. "boot memory data 3" shows that, compared with the version without memory compression, the memory-compressed version at boot has: used memory reduced by 3.9M and free memory increased by 5.3M;
4. "memory data before one-key acceleration" and "memory data 1 after one-key acceleration" show that, compared with the version without memory compression, one-key acceleration on the memory-compressed version results in: used memory reduced by 16.3M and free memory increased by 15.3M;
5. "memory data before one-key acceleration" and "memory data 2 after one-key acceleration" show that, compared with the version without memory compression, one-key acceleration results in: used memory reduced by 19.6M and free memory increased by 18.5M.
Therefore, it can be estimated that about 2% of the memory can be saved each time the memory compression process is triggered during normal use of the terminal device 300.
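The ~2% estimate can be checked against the step-5 figures above: a 19.6M reduction in used memory on the 1G (taken here as 1024M) test terminal.

```python
# Reproducing the ~2% saving estimate from the test results above.

used_memory_saved_mb = 19.6     # used-memory reduction from test result 5
total_memory_mb = 1024          # the 1G test terminal

saving = used_memory_saved_mb / total_memory_mb
print(f"{saving:.1%}")          # → 1.9%, i.e. about 2%
```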
Corresponding to the foregoing embodiments of the method for memory compression, the present application further provides embodiments of an apparatus for memory compression.
The embodiments of the memory compression apparatus can be applied to electronic devices. The apparatus embodiments may be implemented by software, by hardware, or by a combination of hardware and software. Taking a software implementation as an example, as a logical apparatus, it is formed by the processor of the electronic device where it is located reading corresponding computer program instructions from the nonvolatile memory into memory for execution. From a hardware perspective, fig. 4 is a hardware structure diagram of the electronic device in which the memory compression apparatus is located. In addition to the processor, memory, network interface, and nonvolatile memory shown in fig. 4 (where the memory stores processor-executable instructions), the electronic device in this embodiment may also include other hardware according to its actual functions, which is not described again. The processor is configured to perform the related actions described above; for example, in some examples, the processor is configured to receive a trigger instruction of memory compression and compress, through the virtual machine, the data in the memory occupied by the specified background process, where the specified background process includes a process of the Service attribute and/or a process of the persistence attribute.
Referring to fig. 5, the apparatus 500 for memory compression may include:
the interface module 501 is configured to receive a trigger instruction of memory compression;
a memory compression module 502, configured to compress the data in the memory occupied by a specified background process; the specified background process may include a process of the Service attribute and/or a process of the persistence attribute.
As an example, the trigger instruction for memory compression may be sent when an active trigger event is satisfied; the active triggering event may include at least one of:
the device is started, the user triggers the designated control, and the background process cleaning function is triggered, without excluding other active triggering conditions.
In some examples, the apparatus 500 may further include a checking module 503, configured to perform permission verification on the process that sends the memory compression trigger instruction, where the permission verification specifically includes:
if the process that sends the trigger instruction is the configured system process, the permission verification passes and the memory compression module 502 is notified; otherwise, a result of failed permission verification is returned.
In some examples, a determining module 504 may further be configured to determine, according to the recorded time of the currently received trigger instruction of memory compression, whether the time interval from the last memory compression meets a predetermined condition; if so, the memory compression module is notified to perform the step of compressing the data in the memory occupied by the specified background process, and if not, the trigger instruction of memory compression is ignored.
As an example, the apparatus 500 may be located in a virtual machine, and the interface module 501 may receive a trigger instruction of memory compression from outside the virtual machine.
The interface module 501 may be an extended interface of the application management service AMS module.
In addition, the memory compression module 502 may also be configured to compress the data in the memory occupied by processes of the Activity attribute.
FIG. 6 is a partial logic block diagram of an operating system 600, comprising:
a process management module 601, configured to manage multiple processes, where the processes include a background process;
the virtual machine-oriented interface 602 is configured to send a memory compression trigger instruction to the virtual machine, where the memory compression trigger instruction is used to trigger the virtual machine to compress data in a memory occupied by a specified background process; the specified background process comprises a process of a Service attribute and/or a process of a Persistent attribute.
The operating system 600 may further include a configured system process for invoking the virtual machine-oriented interface to send the trigger instruction of memory compression to the virtual machine, so that the virtual machine-oriented interface is invoked in response to an active trigger event.
As an example, the active triggering event may include at least one of:
the method comprises the steps of starting equipment, triggering a specified control by a user and triggering a background process cleaning function.
The trigger instruction of memory compression may be an instruction for calling an interface provided by the virtual machine, or may be a specified signal sent to the virtual machine.
As an example, the interface provided by the virtual machine may be an extended interface of the AMS module.
The implementation processes of the functions and effects of the units in the above apparatus are described in detail in the implementation processes of the corresponding steps in the above method, and are not repeated here.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.