CN112131009B - Memory scheduling method and device and computer readable storage medium - Google Patents


Info

Publication number
CN112131009B
CN112131009B (application number CN202011062616.3A)
Authority
CN
China
Prior art keywords
memory
page
target
task process
target task
Prior art date
Legal status
Active
Application number
CN202011062616.3A
Other languages
Chinese (zh)
Other versions
CN112131009A (en)
Inventor
李培锋
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202011062616.3A
Publication of CN112131009A
Application granted
Publication of CN112131009B

Classifications

    • G06F9/5016 — Allocation of resources, e.g. of the central processing unit [CPU], the resource being the memory
    • G06F9/5038 — Allocation of resources considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F9/544 — Interprogram communication: buffers; shared memory; pipes
    • G06F9/546 — Interprogram communication: message passing systems or structures, e.g. queues
    • G06F2209/5018 — Indexing scheme relating to G06F9/50: thread allocation
    • G06F2209/545 — Indexing scheme relating to G06F9/54: GUI
    • G06F2209/548 — Indexing scheme relating to G06F9/54: queue
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application provides a memory scheduling method and device and a computer readable storage medium. The memory scheduling method includes: acquiring a target task process from a process waiting queue; locking, in a page cache, the target file pages corresponding to the target task process; and unlocking the target file pages in the page cache when the memory lock period corresponding to the target task process arrives. By locking pages specifically for the target task processes in the process waiting queue, the scheme effectively ensures the accuracy of page locking, and by unlocking pages in the page cache in a timely manner, it improves the effective utilization of the page cache.

Description

Memory scheduling method and device and computer readable storage medium
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to a memory scheduling method, apparatus, and computer readable storage medium.
Background
With the rapid development of terminal technology, electronic devices such as mobile phones and tablet computers are used ever more frequently in users' work and life. In practical applications, an electronic device meets a user's business requirements by executing task processes, and executing a task process requires calling the corresponding file pages.
In the related art, file pages corresponding to specific task processes are usually pre-locked in the page cache during development, so that when a corresponding task process executes, the file pages can be called directly from the page cache without being read from disk. However, on one hand, it is often difficult for a developer to predict which task processes a user will execute at high frequency, so some pre-cached file pages may never be used effectively while the file pages of some high-frequency task processes are not pre-cached at all; that is, the accuracy of file page locking is low. On the other hand, since the locked file pages in the related art stay locked for the entire life cycle of the system, the effective utilization of the page cache is low when its storage space is tight.
Disclosure of Invention
The embodiments of the application provide a memory scheduling method, a memory scheduling device, and a computer readable storage medium, which can at least solve the problems in the related art of low accuracy of file page locking in the page cache and low effective utilization of the page cache.
A first aspect of the embodiments of the present application provides a memory scheduling method, including:
acquiring a target task process from a process waiting queue;
locking a target file page corresponding to the target task process in a page cache;
and unlocking the target file page in the page cache when a memory locking period corresponding to the target task process arrives.
A second aspect of the embodiments of the present application provides a memory scheduling device, including:
the acquisition module is used for acquiring a target task process from the process waiting queue;
the locking module is used for locking the target file page corresponding to the target task process in the page cache;
and the unlocking module is used for unlocking the target file page in the page cache when the memory locking period corresponding to the target task process arrives.
A third aspect of the embodiments of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the memory scheduling method provided in the first aspect are implemented.
A fourth aspect of the embodiments of the present application provides a computer readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the memory scheduling method provided in the first aspect.
As can be seen from the above, the memory scheduling method, device, and computer readable storage medium provided by this application acquire a target task process from a process waiting queue, lock the target file pages corresponding to the target task process in the page cache, and unlock those pages when the memory lock period corresponding to the target task process arrives. By locking pages specifically for target task processes in the process waiting queue, the scheme effectively ensures the accuracy of page locking, and by unlocking pages in the page cache in a timely manner, it improves the effective utilization of the page cache.
Drawings
Fig. 1 is a basic flow chart of a memory scheduling method according to a first embodiment of the present application;
fig. 2 is a flow chart of a page locking method according to a first embodiment of the present application;
fig. 3 is a detailed flowchart of a memory scheduling method according to a second embodiment of the present application;
fig. 4 is a schematic diagram of the program modules of a memory scheduling device according to a third embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present application.
Detailed Description
To make the objects, features, and advantages of the present application more apparent and understandable, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the present application.
To remedy the low accuracy of file page locking in the page cache and the low effective utilization of the page cache in the related art, a first embodiment of the present application provides a memory scheduling method. As shown in the basic flowchart of fig. 1, the memory scheduling method includes the following steps:
step 101, obtaining a target task process from a process waiting queue.
Specifically, in this embodiment, the waiting queue in process management contains the task processes to be executed, and when there are multiple task processes in the process waiting queue, there may be one or more target task processes. If the memory scheduling operation of this embodiment were performed for every process during process scheduling, it would incur a large amount of CPU overhead; this embodiment therefore determines only the task processes with a comparatively strong need as the target task processes for memory scheduling.
On one hand, this embodiment may determine the target task process based on the process type; for example, specific task processes such as UI processes may be determined as target task processes. In practical applications, a UI process is responsible for presenting the display screen to the user; while the user operates the electronic device, this process is invoked frequently and requires a high response speed. If its file pages had to be read from disk, the thread would block and the interface would stutter during rendering, so a UI process in the process waiting queue is preferably determined as a target task process.
On the other hand, this embodiment may also determine the target task process based on the process state of each process in the process waiting queue. Taking Linux system processes as an example, process states include the ready state, interruptible waiting state, uninterruptible waiting state, stopped state, suspended state, and so on. The state flag of the ready state is TASK_RUNNING: the process has been placed in the run queue and is ready to run, and it enters the running state once it obtains the processor. When the process actually runs on the processor, the state value remains TASK_RUNNING; Linux points the pointer `current`, dedicated to the currently running task, at it to indicate that it is the running process. In practice, the Linux native kernel divides TASK_RUNNING processes into two kinds: RUNNING processes, which are currently executing, and RUNNING_READY processes, which are ready but waiting for the processor. The target task process whose memory is to be locked in advance in this embodiment may be a RUNNING_READY process.
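As a minimal user-space sketch of the two selection criteria above (process type and process state): the patent describes kernel behavior, so the descriptor fields and names below are illustrative, not the kernel's `task_struct`.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    is_ui: bool   # UI processes are preferred targets for page locking
    state: str    # "RUNNING" (on CPU) or "RUNNING_READY" (ready, waiting)

def pick_target_tasks(wait_queue):
    """Select only ready UI tasks as targets for pre-locking,
    mirroring the type criterion and the state criterion."""
    return [t for t in wait_queue
            if t.is_ui and t.state == "RUNNING_READY"]

queue = [Task("launcher_ui", True, "RUNNING_READY"),
         Task("log_daemon", False, "RUNNING_READY"),
         Task("render_ui", True, "RUNNING")]
targets = pick_target_tasks(queue)
# Only launcher_ui qualifies: UI type AND ready-but-not-yet-running.
```

A process already in the RUNNING state gains nothing from pre-locking, which is why only RUNNING_READY candidates are kept.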
In some implementations of this embodiment, before the target task process is acquired from the process waiting queue, the method further includes: detecting the memory state of the page cache in real time.
Specifically, in this embodiment, when the memory state is a memory-pressure state, the step of acquiring a target task process from the process waiting queue is performed. It should be noted that the memory of the page cache only needs to be actively scheduled under memory pressure, so this embodiment first determines whether memory is tight and, if so, triggers the memory scheduling flow of this embodiment, which improves the rationality of memory scheduling.
Further, in some implementations of this embodiment, detecting the memory state of the page cache in real time includes: detecting the duration of the current memory reclaim when the memory of the page cache is insufficient; comparing the memory reclaim duration with a preset duration threshold; and determining the corresponding memory state from the comparison result. When the memory reclaim duration is greater than the duration threshold, the memory state is the memory-pressure state; when the memory reclaim duration is less than the duration threshold, the memory state is the memory-sufficient state.
Specifically, when a process allocates memory, the system first checks whether memory is sufficient; if so, the allocation proceeds, and if not, the allocation is retried once enough memory has been reclaimed, so the duration of a single memory reclaim represents the memory allocation delay. This embodiment adds a judgment of the memory reclaim duration to the reclaim flow: if the reclaim duration exceeds the threshold, the memory state is recorded as under pressure; if the resulting allocation delay is below the threshold, the memory state is recorded as sufficient.
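The reclaim-duration check above can be sketched as a simple classifier; the 50 ms threshold is an illustrative assumption, not a value from the patent.

```python
MEMORY_PRESSURE = "pressure"
MEMORY_SUFFICIENT = "sufficient"

def classify_memory_state(reclaim_duration_ms, threshold_ms=50):
    """A single reclaim pass that runs long signals high allocation
    latency, i.e. memory pressure; a fast pass signals sufficient
    memory. Threshold value is illustrative."""
    if reclaim_duration_ms > threshold_ms:
        return MEMORY_PRESSURE
    return MEMORY_SUFFICIENT
```

The scheduling flow is triggered only when this returns the pressure state.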
Step 102, locking a target file page corresponding to the target task process in the page cache.
Specifically, this embodiment avoids the situation in which file pages about to be accessed by a process are flushed out of the cache due to memory pressure or the like, which would cause a page fault when the process accesses them, or block the process while the pages are read back from disk. To this end, the file pages required by the target task process are locked in the page cache, ensuring that they are not moved out of the page cache.
Because the file pages corresponding to the target task process are locked in the page cache, the corresponding file pages can be called directly from the page cache when the target task process executes, which improves process execution efficiency.
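A toy model of the "locked pages survive eviction" behavior, assuming an LRU-style page cache; this is a user-space simulation, not the kernel's actual page cache.

```python
from collections import OrderedDict

class PageCache:
    """Toy LRU page cache in which locked (pinned) pages are
    skipped by eviction, modelling the page locking above."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # page_id -> data, oldest first
        self.locked = set()          # pinned page ids

    def lock(self, page_id):
        self.locked.add(page_id)

    def unlock(self, page_id):
        self.locked.discard(page_id)

    def insert(self, page_id, data):
        self.pages[page_id] = data
        self.pages.move_to_end(page_id)          # most recently used
        while len(self.pages) > self.capacity:
            # evict the oldest page that is not pinned
            victim = next((p for p in self.pages
                           if p not in self.locked), None)
            if victim is None:                   # everything pinned
                break
            del self.pages[victim]

cache = PageCache(capacity=2)
cache.insert("ui_page", b"ui")
cache.lock("ui_page")                # pin the target task's file page
cache.insert("a", b"1")
cache.insert("b", b"2")              # forces eviction; "ui_page" survives
```

Even under cache pressure the pinned page remains resident, so the target task process never takes the disk-read path for it.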
In some implementations of this embodiment, before locking the target file pages corresponding to the target task process in the page cache, the method further includes: acquiring the task response speed required by the target task process.
Specifically, since the space of the page cache and the performance of the CPU are both limited, the rationality of page cache scheduling should be ensured as far as possible. In practical applications, different task processes require different task response speeds; for example, a high response speed is generally required when executing a UI process to guarantee real-time output of the display screen. This embodiment may therefore execute the step of locking the target file pages in the page cache only when the required task response speed is higher than a preset speed threshold. Task processes that are not latency-sensitive can be handled as in the prior art: if their file pages have been moved out of the page cache when the process executes, the pages are moved back into the page cache before the process runs.
It should be noted that, in practical applications, when the target file pages are locked in the page cache, backup locking may be applied: both a main copy and a backup copy of each page are locked, the cached main copy is called preferentially when the target task process executes, and if an error occurs during execution, the backup copy can be called instead to continue the process.
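The backup-locking fallback can be sketched as a simple prefer-primary read; the patent does not specify the failure detection mechanism, so catching an exception here is an assumption, and all names are illustrative.

```python
def read_locked_page(primary_read, backup_read):
    """With backup locking, two copies of the target page are
    pinned: the primary is used normally, and the backup copy is
    read only if using the primary fails mid-execution."""
    try:
        return primary_read()
    except Exception:
        return backup_read()

ok = read_locked_page(lambda: "primary-data", lambda: "backup-data")

def failing_primary():
    raise IOError("primary copy corrupted")   # simulated failure

recovered = read_locked_page(failing_primary, lambda: "backup-data")
```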
Step 103, unlocking the target file pages in the page cache when the memory lock period corresponding to the target task process arrives.
Specifically, unlike the related art, in which locked file pages remain locked for the whole system life cycle and thereby strain memory, in this embodiment a file page is automatically unlocked and moved out of the page cache when its memory lock period arrives, which improves memory utilization. In practical applications, the memory lock period may be a fixed value, may be set dynamically according to the currently executing target task process, or may be tied to process execution behavior, i.e. the memory lock period is the interval between the start and the end of process execution.
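A sketch of period-based unlocking: each locked page carries an unlock deadline, and a periodic sweep releases the pages whose memory lock period has arrived. The table layout and timestamps are illustrative.

```python
def sweep_expired_locks(locks, now):
    """locks maps page_id -> unlock deadline; return the pages whose
    memory lock period has arrived and drop them from the table."""
    expired = [p for p, deadline in locks.items() if now >= deadline]
    for p in expired:
        del locks[p]          # page is unlocked and may be evicted
    return expired

locks = {"page_a": 10.0, "page_b": 25.0}
released = sweep_expired_locks(locks, now=12.0)
# page_a's period has arrived; page_b stays locked
```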
In some implementations of this embodiment, before unlocking the target file pages in the page cache, the method further includes: acquiring the process execution frequency corresponding to the target task process; and determining the corresponding memory lock period based on the process execution frequency.
Specifically, different task processes execute at different frequencies. A task process that executes at high frequency will run again shortly after it finishes, and frequently locking and unlocking its memory would incur considerable CPU overhead. This embodiment therefore determines the memory lock period from the process execution frequency, with the lock period positively correlated with the frequency: the higher the execution frequency, the longer the lock period that may be set, so that the file pages stay locked for a while and can be called repeatedly.
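One possible positively-correlated mapping from execution frequency to lock period; the linear form and the base/slope/cap constants are illustrative assumptions, since the patent only requires positive correlation.

```python
def lock_period_seconds(exec_per_minute, base=5.0, per_exec=2.0,
                        cap=120.0):
    """Lock period grows with process execution frequency so that
    hot processes keep their pages pinned across repeated runs;
    a cap prevents unbounded pinning."""
    return min(base + per_exec * exec_per_minute, cap)
```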
In some implementations of this embodiment, after the target task process is acquired from the process waiting queue, the method further includes: acquiring the associated calling processes of the target task process; and synchronously locking the file pages corresponding to the associated calling processes in the page cache.
Specifically, in practical applications, executing some task processes also requires pulling up other processes; for example, a ticket-purchasing application may temporarily call a payment application so that the two processes cooperate to deliver the service the user needs. This embodiment therefore also determines the associated calling processes that the target task process will invoke during its execution. When the file pages of the target task process are locked, the file pages of the associated calling processes are obtained and locked in the page cache at the same time, so that those pages are available in the page cache while the associated calling processes run, which effectively improves the execution efficiency and smoothness of the group of associated processes.
It should also be noted that page unlocking for an associated calling process may be kept consistent with the target task process: when the target task process finishes executing, the file pages of the target task process and of the associated calling process are unlocked in the page cache at the same time.
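Collecting the lock set for a target plus its associated calling processes can be sketched as follows; the ticket/payment names echo the example above and the table shapes are illustrative.

```python
def pages_to_lock(target, associations, page_table):
    """Gather the target's file pages plus those of every process
    it is known to pull up, so that all are locked together and
    can later be unlocked together."""
    procs = [target] + associations.get(target, [])
    return {page for p in procs for page in page_table.get(p, [])}

page_table = {"ticket_app": ["t1", "t2"], "payment_app": ["p1"]}
associations = {"ticket_app": ["payment_app"]}
locked = pages_to_lock("ticket_app", associations, page_table)
# ticket_app's pages and payment_app's pages are locked as one set
```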
Fig. 2 is a schematic flowchart of the page locking method provided in this embodiment. In some implementations of this embodiment, locking the target file pages corresponding to the target task process in the page cache specifically includes the following steps:
step 201, determining subdivision task types of corresponding responses required by a target task process according to a current system operation scene;
step 202, determining a target file page from all file pages corresponding to the target task process based on the subdivision task type;
and 203, locking the target file page in the page cache.
Specifically, in this embodiment a single task process may respond to multiple task types, and the kinds of file pages that must be called during execution differ between operation scenes. For example, suppose the target task process supports responding to task types A, B, and C: in system operation scene a, the process only needs to respond to task type A, while in scene b it needs to respond to task types A and B. Since the subdivided task types differ, the file pages that need to be called during execution differ accordingly.
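The scene-to-task-type-to-pages narrowing of steps 201-202 can be sketched with two lookup tables; the scene names, task types, and page ids below are the hypothetical ones from the example.

```python
# Which task types the process must respond to in each scene,
# and which file pages each task type needs (illustrative data).
SCENE_TASKS = {"scene_a": {"A"}, "scene_b": {"A", "B"}}
TASK_PAGES = {"A": {"page_a1"},
              "B": {"page_b1", "page_b2"},
              "C": {"page_c1"}}

def target_pages_for_scene(scene):
    """Lock only the pages needed by the task types active in this
    scene, instead of every page the process could ever use."""
    return set().union(*(TASK_PAGES[t] for t in SCENE_TASKS[scene]))
```

In scene a only type A's page is locked; type C's pages are never pinned unless some scene requires them.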
Based on the technical scheme of the embodiments of the application, a target task process is acquired from the process waiting queue, the target file pages corresponding to the target task process are locked in the page cache, and the target file pages are unlocked in the page cache when the memory lock period corresponding to the target task process arrives. By locking pages specifically for target task processes in the process waiting queue, the scheme effectively ensures the accuracy of page locking, and by unlocking pages in a timely manner it improves the effective utilization of the page cache.
Fig. 3 shows a refined memory scheduling method provided in the second embodiment of the present application. The memory scheduling method includes:
step 301, detecting a current memory recovery duration when the memory of the page cache is insufficient.
Step 302, determining the memory state of the page cache according to the memory reclamation time.
Specifically, in this embodiment, when the memory reclaim duration is greater than the duration threshold, the memory state is the memory-pressure state; when the memory reclaim duration is less than the duration threshold, the memory state is the memory-sufficient state.
Step 303, when the memory state is the memory-pressure state, acquiring the target task process to be executed from the process waiting queue.
Specifically, since the memory of the page cache only needs to be actively scheduled under memory pressure, this embodiment first determines whether memory is tight and, if so, triggers the subsequent page locking and unlocking flow, which improves the rationality of memory scheduling.
In addition, if the memory scheduling operation of this embodiment were performed for every process during process scheduling, the subsequent memory locking and unlocking would execute at high frequency and incur a large amount of CPU overhead, so this embodiment determines only the task processes with a strong need (e.g. UI processes) as the target task processes for memory scheduling.
Step 304, determining, according to the current system operation scene, the subdivided task types that the target task process needs to respond to.
Specifically, a single task process may respond to multiple task types, and the subdivided task types it must respond to differ between operation scenes.
Step 305, determining the target file pages, based on the subdivided task types, from all the file pages corresponding to the target task process.
Step 306, locking the target file pages in the page cache.
Specifically, since the subdivided task types differ, the file pages that must be called during process execution also differ. This embodiment therefore locks file pages according to the subdivided task types, avoiding the large memory footprint and cumbersome locking/unlocking operations that would result from locking all of the target task process's file pages every time.
Step 307, unlocking the target file pages in the page cache when the target task process finishes executing.
Specifically, when the target task process finishes executing, its file pages can be automatically unlocked and moved out of the page cache, which avoids the pages occupying the page cache unnecessarily and improves memory utilization.
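Steps 301-307 can be condensed into one end-to-end sketch. This is a user-space model of the kernel flow the embodiment describes; the `_ui` suffix convention, the 50 ms threshold, and the table shapes are all illustrative assumptions, and the scene-based page narrowing of steps 304-305 is elided.

```python
def schedule(cache_locked, reclaim_ms, wait_queue, page_map,
             threshold_ms=50):
    """Under memory pressure only, pick a UI target process, lock
    its pages, and return a callback that unlocks them when the
    process finishes executing (step 307)."""
    if reclaim_ms <= threshold_ms:       # steps 301-302: memory sufficient
        return None
    # step 303: pick the target task process (UI processes here)
    target = next((t for t in wait_queue if t.endswith("_ui")), None)
    if target is None:
        return None
    pages = page_map[target]             # steps 304-305 collapsed
    cache_locked.update(pages)           # step 306: lock in page cache
    def on_finish():                     # step 307: unlock at process end
        cache_locked.difference_update(pages)
    return on_finish

locked = set()
finish = schedule(locked, reclaim_ms=80,
                  wait_queue=["logd", "launcher_ui"],
                  page_map={"launcher_ui": {"p1", "p2"}})
# locked holds {"p1", "p2"} while the process runs
finish()   # process finished -> pages unlocked again
```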
It should be understood that the sequence numbers of the steps in this embodiment do not imply an execution order; the execution order of the steps is determined by their functions and internal logic and should not be construed as any limitation on the implementation of the embodiments of the present application.
The embodiments of the application disclose a memory scheduling method that acquires a target task process from a process waiting queue, locks the target file pages corresponding to the target task process in the page cache, and unlocks the target file pages in the page cache when the memory lock period corresponding to the target task process arrives. By locking pages specifically for target task processes in the process waiting queue, the scheme effectively ensures the accuracy of page locking, and by unlocking pages in a timely manner it improves the effective utilization of the page cache.
Fig. 4 shows a memory scheduling device according to a third embodiment of the present application. The memory scheduling device can be used to implement the memory scheduling method of the preceding embodiments. As shown in fig. 4, the memory scheduling device mainly includes:
an obtaining module 401, configured to obtain a target task process from a process waiting queue;
the locking module 402 is configured to lock, in the page cache, a target file page corresponding to a target task process;
and the unlocking module 403 is configured to unlock the target file page in the page cache when the memory lock period corresponding to the target task process arrives.
In some implementations of this embodiment, the memory scheduling device further includes a detection module configured to detect the memory state of the page cache in real time before the target task process is acquired from the process waiting queue. Correspondingly, the obtaining module 401 is specifically configured to acquire the target task process from the process waiting queue when the memory state is the memory-pressure state.
Further, in some implementations of this embodiment, the detection module is specifically configured to: detect the duration of the current memory reclaim when the memory of the page cache is insufficient; compare the memory reclaim duration with a preset duration threshold; and determine the corresponding memory state from the comparison result. When the memory reclaim duration is greater than the duration threshold, the memory state is the memory-pressure state; when the memory reclaim duration is less than the duration threshold, the memory state is the memory-sufficient state.
In some implementations of the present embodiment, the obtaining module 401 is further configured to: and acquiring the task response speed required by the target task process before locking the target file page corresponding to the target task process in the page cache. Correspondingly, the locking module 402 is specifically configured to: and when the task response speed is higher than a preset speed threshold, locking a target file page corresponding to the target task process in the page cache.
In some implementations of this embodiment, the memory scheduling device further includes a determining module configured to acquire, before the target file page is unlocked in the page cache, the process execution frequency corresponding to the target task process, and to determine the corresponding memory lock period based on that process execution frequency.
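One plausible mapping from execution frequency to lock period is sketched below; the linear rule, the base period, and the cap are all illustrative assumptions (the patent only says the period is determined from the frequency):

```python
def lock_period_for(exec_freq_per_min, base_period_s=60, cap_s=600):
    """Derive a memory lock period from the process execution frequency:
    the more often a process runs, the longer its pages stay pinned,
    bounded by a cap so pages are eventually released."""
    return min(cap_s, base_period_s * max(1, exec_freq_per_min))
```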
In some implementations of this embodiment, the obtaining module 401 is further configured to acquire the associated calling processes of the target task process. Correspondingly, the locking module 402 is further configured to synchronously lock, in the page cache, the file pages corresponding to the associated calling processes.
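Synchronous locking of the callers can be sketched as follows; the `caller_graph` mapping (pid to its associated calling processes) is an assumed representation:

```python
def lock_with_callers(lock_fn, target_pid, caller_graph):
    """Lock the target process's pages together with the file pages of
    its associated calling processes; returns every pid locked."""
    locked = []
    for pid in [target_pid, *caller_graph.get(target_pid, [])]:
        lock_fn(pid)   # lock_fn stands in for the locking module's per-process lock
        locked.append(pid)
    return locked
```

Locking the callers as well avoids the case where the target process responds quickly but stalls waiting on a caller whose pages were reclaimed.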
In some implementations of this embodiment, the locking module 402 is specifically configured to: determine, according to the current system operation scene, the subdivision task type to which the target task process needs to respond; determine the target file page from all file pages corresponding to the target task process based on the subdivision task type; and lock the target file page in the page cache.
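The scene-driven page selection can be sketched with two lookup tables; both mappings are illustrative stand-ins for the patent's scene analysis, and the scene and task-type names are invented for the example:

```python
def select_target_pages(scene, pages_by_task_type, scene_to_task):
    """Pick only the file pages for the subdivision task type that the
    current system operation scene requires; those pages are then locked
    instead of all pages belonging to the process."""
    task_type = scene_to_task.get(scene)
    return pages_by_task_type.get(task_type, [])
```

Locking only the scene-relevant subset keeps the page cache footprint of the lock small while still protecting the pages the imminent response actually needs.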
It should be noted that the memory scheduling methods of the first and second embodiments may be implemented by the memory scheduling device provided in this embodiment. Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working process of the memory scheduling device described in this embodiment may refer to the corresponding process in the foregoing method embodiments, and is not repeated herein.
The memory scheduling device provided by this embodiment obtains a target task process from a process waiting queue; locks, in the page cache, a target file page corresponding to the target task process; and unlocks the target file page in the page cache when the memory lock period corresponding to the target task process arrives. Through this scheme, page locking is applied in a targeted manner to the target task process in the process waiting queue, which effectively ensures the accuracy of page locking, and pages are unlocked in the page cache in a timely manner, which improves the effective utilization of the page cache.
Referring to Fig. 5, Fig. 5 shows an electronic device according to a fourth embodiment of the present application. The electronic device may be used to implement the memory scheduling method of the foregoing embodiments. As shown in Fig. 5, the electronic device mainly includes:
a memory 501, a processor 502, a bus 503, and a computer program stored in the memory 501 and executable on the processor 502, the memory 501 and the processor 502 being connected by the bus 503. When the processor 502 executes the computer program, the memory scheduling method of the foregoing embodiments is implemented. There may be one or more processors.
The memory 501 may be a high-speed random access memory (RAM) or a non-volatile memory, such as a disk memory. The memory 501 is used for storing executable program code, and the processor 502 is coupled to the memory 501.
Further, an embodiment of the present application provides a computer-readable storage medium, which may be provided in the electronic device of the foregoing embodiments; the computer-readable storage medium may be the memory of the embodiment shown in Fig. 5.
The computer-readable storage medium stores a computer program which, when executed by a processor, implements the memory scheduling method of the foregoing embodiments. Further, the computer-readable medium may be any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a RAM, a magnetic disk, or an optical disk.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules is merely a division by logical function, and in actual implementation there may be other division manners; for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices, or modules, and may be electrical, mechanical, or in other forms.
Modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
If the integrated modules are implemented in the form of software functional modules and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a readable storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned readable storage medium includes: a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disk, or the like.
It should be noted that, for simplicity of description, the foregoing method embodiments are described as a series of action combinations; however, those skilled in the art should understand that the present application is not limited by the order of actions described, as some steps may, in accordance with the present application, be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily all required by the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The foregoing describes the memory scheduling method, the memory scheduling device, and the computer-readable storage medium provided in the present application. Those skilled in the art may, based on the ideas of the embodiments of the present application, make changes to the specific implementations and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (9)

1. A memory scheduling method, characterized by comprising:
detecting the memory state of the page cache in real time;
obtaining a target task process from a process waiting queue when the memory state is a memory tension state, the target task process being determined according to the process type and the process state of the task process;
locking a target file page corresponding to the target task process in the page cache;
and unlocking the target file page in the page cache when a memory locking period corresponding to the target task process arrives.
2. The memory scheduling method according to claim 1, wherein the detecting, in real time, the memory state of the page cache comprises:
detecting the current memory recovery duration when the memory of the page cache is insufficient;
comparing the memory recovery duration with a preset duration threshold; and
determining the corresponding memory state according to the comparison result, wherein, when the memory recovery duration is greater than the duration threshold, the memory state is the memory tension state, and when the memory recovery duration is less than the duration threshold, the memory state is the memory sufficient state.
3. The memory scheduling method according to claim 1, wherein, before the target file page corresponding to the target task process is locked in the page cache, the method further comprises:
acquiring a task response speed required by the target task process;
and executing the step of locking the target file page corresponding to the target task process in the page cache when the task response speed is higher than a preset speed threshold.
4. The memory scheduling method according to claim 1, wherein, before the target file page is unlocked in the page cache, the method further comprises:
acquiring a process execution frequency corresponding to the target task process;
and determining a corresponding memory locking period based on the process execution frequency.
5. The memory scheduling method according to claim 1, wherein after the target task process is obtained from the process waiting queue, the method further comprises:
acquiring an associated calling process of the target task process;
and synchronously locking the file pages corresponding to the associated calling processes in the page cache.
6. The memory scheduling method according to any one of claims 1 to 5, wherein the locking, in the page cache, of the target file page corresponding to the target task process comprises:
determining, according to the current system operation scene, the subdivision task type to which the target task process needs to respond;
determining the target file page from all file pages corresponding to the target task process based on the subdivision task type; and
locking the target file page in the page cache.
7. A memory scheduling apparatus, comprising:
an acquisition module, configured to detect, in real time, the memory state of the page cache, and to obtain a target task process from a process waiting queue when the memory state is a memory tension state, the target task process being determined according to the process type and the process state of the task process;
a locking module, configured to lock, in the page cache, a target file page corresponding to the target task process; and
an unlocking module, configured to unlock the target file page in the page cache when the memory lock period corresponding to the target task process arrives.
8. An electronic device, comprising: memory, processor, and bus;
the bus is used for realizing connection communication between the memory and the processor;
the processor is used for executing the computer program stored on the memory;
the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 6.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
CN202011062616.3A 2020-09-30 2020-09-30 Memory scheduling method and device and computer readable storage medium Active CN112131009B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011062616.3A CN112131009B (en) 2020-09-30 2020-09-30 Memory scheduling method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112131009A CN112131009A (en) 2020-12-25
CN112131009B (en) 2024-04-02

Family

ID=73843563


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1825283A (en) * 2006-03-31 2006-08-30 浙江大学 Method for implementing hardware image starting optimizing of embedded operating system
CN108205472A (en) * 2017-08-15 2018-06-26 珠海市魅族科技有限公司 Memory release method, release device, computer installation and readable storage medium storing program for executing
CN109446799A (en) * 2018-11-14 2019-03-08 深圳市腾讯网络信息技术有限公司 Internal storage data guard method, security component and computer equipment and storage medium
CN110895515A (en) * 2018-09-12 2020-03-20 中兴通讯股份有限公司 Memory cache management method, multimedia server and computer storage medium
CN111400052A (en) * 2020-04-22 2020-07-10 Oppo广东移动通信有限公司 Decompression method, decompression device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10540098B2 (en) * 2016-07-19 2020-01-21 Sap Se Workload-aware page management for in-memory databases in hybrid main memory systems




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant