WO2024037428A1 - Process processing method and device - Google Patents

Process processing method and device

Info

Publication number
WO2024037428A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2023/112336
Inventor
汤中睿
张胜举
罗一帆
顾志峰
徐峥
Original Assignee
中移(苏州)软件技术有限公司
中国移动通信集团有限公司
Application filed by 中移(苏州)软件技术有限公司 and 中国移动通信集团有限公司
Publication of WO2024037428A1

Classifications

    • G06F 9/5016 — Allocation of resources to service a request, the resource being the memory (under G06F 9/50, Allocation of resources, e.g. of the central processing unit [CPU])
    • G06F 12/0646 — Addressing a physical block of locations: configuration or reconfiguration (under G06F 12/06, base addressing, module addressing, memory dedication)
    • G06F 9/50 — Allocation of resources, e.g. of the central processing unit [CPU]
    • G06N 3/044 — Neural networks, architecture: recurrent networks, e.g. Hopfield networks
    • G06N 3/08 — Neural networks: learning methods
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • This application relates to the field of cloud computing technology, and relates, but is not limited, to a process processing method and device.
  • Embodiments of the present application provide a process processing method and device, which solve the problems in the related art that self-configuring memory pages is complicated, error-prone, and inefficient.
  • A process processing method includes: when it is detected that a memory page needs to be allocated to a target process, determining the target memory size of the target memory page required by the target process; determining the target memory page from candidate memory pages of multiple different memory sizes based on the target memory size, in descending order of memory page size; and processing the target process based on the target memory page.
  • In some embodiments, determining the target memory page from the candidate memory pages of multiple different memory sizes based on the target memory size, in descending order of memory page size, includes:
  • when a target memory page matching the target memory size exists among the candidate large memory pages, determining, in descending order of memory page size, the memory page of the target memory size from the candidate large memory pages as the target memory page;
  • when no target memory page matching the target memory size exists among the candidate large memory pages, determining, in descending order of memory page size, the memory page of the target memory size from the candidate large memory pages and the candidate small memory pages as the target memory page; wherein the candidate large memory pages include large memory pages of multiple different memory sizes, the candidate small memory pages include small memory pages of a fixed memory size, and the memory of a candidate large memory page is larger than that of a candidate small memory page.
  • In some embodiments, the memory page of the target memory size is determined, in descending order of memory page size, from the candidate large memory pages as the target memory page.
  • In some embodiments, the memory page of the target memory size is determined from the candidate large memory pages and the candidate small memory pages as the target memory page.
  • In some embodiments, processing the target process based on the target memory page includes:
  • determining a target process type of the target process, and processing the target process based on the processing method corresponding to the target memory page and the target process type.
  • In some embodiments, determining the target process type of the target process includes:
  • using a heat value prediction model to determine the heat value of each process to be processed based on the number of times the process to be processed accesses memory pages within a target period, the processing data of the process to be processed, and the priority of the process to be processed; wherein the processing data represents the processing status of the process to be processed, and the heat value represents the importance of the process to be processed;
  • dividing the processes to be processed based on their heat values to obtain a first process set and a second process set; wherein the heat value of a process in the first process set is higher than the heat value of a process in the second process set;
  • determining the target process type of the target process based on the relationship between the target process and the first process set and the second process set.
  • In some embodiments, determining the target process type of the target process based on the relationship between the target process and the first process set and the second process set includes:
  • determining that the target process type is a first process type; wherein the first process type indicates that the addressing of the memory page corresponding to the target process needs to be accelerated separately;
  • determining that the target process type is a second process type; wherein the second process type indicates that the addressing of the memory page corresponding to the target process needs to be accelerated;
  • determining that the target process type is a third process type; wherein the third process type indicates that the addressing of the memory page corresponding to the target process is not accelerated.
  • In some embodiments, processing the target process based on the processing method corresponding to the target memory page and the target process type includes:
  • setting the page flag bit of the target process to a first value; wherein the first value indicates that the addressing of the memory page corresponding to the target process needs to be accelerated;
  • processing the target process based on the target memory page and the first target memory page mapping table of the first memory.
  • In some embodiments, processing the target process based on the processing method corresponding to the target memory page and the target process type includes:
  • setting the page flag bit corresponding to the target process to a first value; wherein the first value indicates that the addressing of the memory page corresponding to the target process needs to be accelerated;
  • processing the target process based on the target memory page and the second target memory page mapping table of the first memory; wherein the addressing speed of memory pages in the second target memory page mapping table is lower than the addressing speed of memory pages in the first target memory page mapping table.
  • In some embodiments, processing the target process based on the processing method corresponding to the target memory page and the target process type includes:
  • setting the page flag bit corresponding to the target process to a second value; wherein the second value indicates that the addressing of the memory page corresponding to the target process does not need to be accelerated;
  • processing the target process based on the target memory page and the second memory; wherein the speed of accessing memory pages in the first memory is faster than the speed of accessing memory pages in the second memory.
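  The three processing branches above can be summarized as a small dispatch table. The sketch below is purely illustrative: the flag constants, the hot/warm/cold naming (borrowed from the description later in this document), and the backing-store labels are assumptions, not identifiers from the patent.

```python
# Hypothetical sketch of the flag-based dispatch described above. All names
# are illustrative assumptions, not taken from the patent text.

FLAG_ACCELERATE = 1      # "first value": addressing must be accelerated
FLAG_NO_ACCELERATE = 2   # "second value": no acceleration needed

def dispatch(process_type: str) -> tuple[int, str]:
    """Map a process type to (page flag, backing store) per the scheme above."""
    if process_type == "hot":
        # First process type: separately accelerated addressing via the
        # first (fastest) target memory page mapping table of the first memory.
        return FLAG_ACCELERATE, "first_memory/first_mapping_table"
    if process_type == "warm":
        # Second process type: accelerated addressing via the slower
        # second target memory page mapping table, still in the first memory.
        return FLAG_ACCELERATE, "first_memory/second_mapping_table"
    # Third process type (cold): no acceleration; slower second memory.
    return FLAG_NO_ACCELERATE, "second_memory"
```

  The key design point the claims describe is that the flag bit alone distinguishes "accelerate" from "do not accelerate", while the choice of mapping table within the first memory further separates the first and second process types.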
  • A process processing device includes: a processor, a memory and a communication bus;
  • the communication bus is used to implement the communication connection between the processor and the memory;
  • the processor is configured to execute a process processing program in the memory to implement the steps of the above process processing method.
  • The process processing method and device provided by the embodiments of the present application determine, when it is detected that a memory page needs to be allocated to a target process, the target memory size of the target memory page required by the target process; then, in descending order of memory page size, determine the target memory page from candidate memory pages of multiple different memory sizes based on the target memory size; and then process the target process based on the target memory page.
  • In this way, the dynamic allocation method of first allocating memory pages with larger memory and then allocating memory pages with smaller memory, based on the target memory size, does not require the user to perform complex configuration.
  • Because larger memory pages are allocated first, fewer target memory pages are determined; and because the candidate memory pages include a variety of memory pages of different memory sizes, the needs of different processes for different memory sizes can be met, improving the speed and success rate of memory page allocation. This helps improve system performance and solves the problems in the related art that self-configuring memory pages is complicated, error-prone, and inefficient.
  • Figure 1 is a schematic flowchart of a process processing method provided by an embodiment of the present application;
  • Figure 2 is a schematic flowchart of another process processing method provided by an embodiment of the present application;
  • Figure 3 is a schematic structural diagram of candidate memory pages in a process processing method provided by an embodiment of the present application;
  • Figure 4 is a schematic structural diagram of an initial heat value prediction model in a process processing method provided by an embodiment of the present application;
  • Figure 5 is a schematic flowchart of yet another process processing method provided by an embodiment of the present application;
  • Figure 6 is a schematic diagram of the migration of warm processes and cold processes in a process processing method provided by an embodiment of the present application;
  • Figure 7 is a schematic flowchart of the CPU accessing the TLB table in a process processing method provided by an embodiment of the present application;
  • Figure 8 is a schematic flowchart of processing a hot process and a warm process based on the first memory in a process processing method provided by an embodiment of the present application;
  • Figure 9 is a schematic structural diagram of a process processing device provided by an embodiment of the present application;
  • Figure 10 is a schematic structural diagram of another process processing device provided by an embodiment of the present application.
  • The processor of the electronic device may perform the steps. It is also worth noting that the embodiments of the present application do not limit the order in which the electronic device performs the following steps.
  • The methods used to process data in different embodiments may be the same method or different methods. It should also be noted that any step in the embodiments of the present application can be executed independently by the electronic device; that is, when the electronic device executes any step in the following embodiments, it may not depend on the execution of other steps.
  • An embodiment of the present application provides a process processing method, which can be applied to a process processing device. Referring to Figure 1, the method includes the following steps:
  • Step 101: When it is detected that a memory page needs to be allocated to the target process, determine the target memory size of the target memory page required by the target process.
  • The target process may be a process that currently needs to obtain a memory page; further, the target process may be a process that is currently being processed and needs to obtain a memory page.
  • The target memory page is the memory page required by the target process to store the data generated by the target process during processing; the target memory size is the memory size of the target memory page.
  • The target memory size of the target memory page required by the target process can be set in advance, so that when it is detected that a memory page needs to be allocated to the target process, the target memory size of the target memory page required by the target process can be determined, thereby allocating the required memory page to the target process.
  • Step 102: Determine the target memory page from the candidate memory pages of multiple different memory sizes based on the target memory size, in descending order of memory page size.
  • The candidate memory pages may include memory pages of multiple different memory sizes; in a feasible manner, the candidate memory pages may include memory pages of multiple different memory sizes such as 4KB (kilobytes), 2MB (megabytes), 128MB, 512MB, 1GB (gigabytes), 2GB, and 5GB.
  • The embodiments of this application only give examples of the various memory sizes included in the candidate memory pages; the memory sizes of the candidate memory pages can be set according to actual business needs.
  • Determining the target memory page from the candidate memory pages of multiple different memory sizes based on the target memory size, in descending order of memory page size, means that based on the target memory size, memory pages with larger memory among the candidate memory pages are allocated first, and memory pages with smaller memory among the candidate memory pages are allocated afterwards.
  • With this dynamic allocation, the user does not need to perform complex configuration; and because larger memory pages are allocated preferentially, the determined target memory pages are fewer in number. Because the candidate memory pages include a variety of memory pages of different memory sizes, the needs of different processes for different memory sizes can be met, improving the speed and success rate of memory page allocation and helping subsequent processing of the target process based on the target memory page.
  • In a feasible manner, the candidate memory pages can be arranged in descending order of size. After the target memory size is determined, a memory page matching the target memory size is determined from the candidate memory pages. If the memory size of this memory page equals the target memory size, this memory page can be directly determined as the target memory page; if the memory size of this memory page is smaller than the target memory size, the difference between this memory page's size and the target memory size can be determined, and a memory page matching the difference is then determined from the candidate memory pages, continuing until the target memory page is obtained.
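  As a minimal sketch of the descending-order allocation just described (the size ladder follows the example sizes given earlier; the free-page counts below are hypothetical):

```python
# Sketch of "allocate larger pages first, fill the remainder with smaller
# pages". Not the patent's implementation; a greedy illustration only.

KB, MB, GB = 1024, 1024**2, 1024**3
SIZES = [5*GB, 2*GB, 1*GB, 512*MB, 128*MB, 2*MB, 4*KB]  # descending ladder

def allocate(target: int, free: dict[int, int]) -> list[int]:
    """Return the page sizes chosen to cover `target` bytes, preferring the
    largest available candidate pages and filling the remainder with
    progressively smaller ones. Empty list signals allocation failure."""
    chosen, remaining = [], target
    for size in SIZES:
        while remaining >= size and free.get(size, 0) > 0:
            free[size] -= 1
            chosen.append(size)
            remaining -= size
    # A sub-4KB tail still needs one smallest page, if any is free.
    if remaining > 0 and free.get(4*KB, 0) > 0:
        free[4*KB] -= 1
        chosen.append(4*KB)
        remaining = 0
    return chosen if remaining == 0 else []

free_pages = {1*GB: 1, 128*MB: 4, 2*MB: 64, 4*KB: 1024}
pages = allocate(1*GB + 130*MB, free_pages)
# One 1 GB page, one 128 MB page, one 2 MB page: 3 pages rather than
# hundreds of thousands of 4 KB pages.
```

  The point the description makes is visible here: preferring large pages keeps the number of target memory pages, and hence the size of the page mapping table, small.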
  • Step 103: Process the target process based on the target memory page.
  • After the target memory page is determined, the target memory page can be allocated to the target process, and the target process is processed based on the target memory page. Because the target memory pages are fewer in number, the memory space occupied by the memory page mapping table that stores the mapping relationships of the target memory pages can be saved, and the number of subsequent address translations and cache misses is reduced, thereby improving system performance.
  • The process processing method provided by the embodiment of the present application determines, when it is detected that a memory page needs to be allocated to a target process, the target memory size of the target memory page required by the target process; then, in descending order of memory page size, determines the target memory page from candidate memory pages of multiple different memory sizes based on the target memory size; and then processes the target process based on the target memory page.
  • In this way, the dynamic allocation method of first allocating larger memory pages and then smaller memory pages based on the target memory size does not require complex configuration by the user. Because larger memory pages are allocated first, fewer target memory pages are determined; and because the candidate memory pages include a variety of memory pages of different memory sizes, the needs of different processes for different memory sizes can be met. This improves the speed and success rate of memory page allocation, thereby helping to improve system performance and solving the problems in the related art that self-configuring memory pages is complicated, error-prone, and inefficient.
  • embodiments of the present application provide a process processing method. Referring to Figure 2, the method includes the following steps:
  • Step 201: When detecting that a memory page needs to be allocated to the target process, the process processing device determines the target memory size of the target memory page required by the target process.
  • Step 202: If there is a target memory page matching the target memory size among the candidate large memory pages, the process processing device determines, in descending order of memory page size, the memory page of the target memory size from the candidate large memory pages as the target memory page.
  • The candidate memory pages include candidate large memory pages and candidate small memory pages; a candidate large memory page is a memory page with a larger memory size, and a candidate small memory page is a memory page with a smaller memory size. Since the default size of a memory page is 4KB, a memory page with a memory size of 4KB can be called a small memory page, and a memory page with a memory size larger than 4KB can be called a large memory page.
  • The candidate large memory pages include large memory pages of multiple memory sizes, and the candidate small memory pages include multiple small memory pages.
  • When a target memory page matching the target memory size exists among the candidate large memory pages, the target memory page can be determined directly from the candidate large memory pages. The operation of determining the target memory page from the candidate large memory pages can be: first, arrange the candidate large memory pages in descending order of size, then determine from them a large memory page matching the target memory size.
  • If the memory size of the determined large memory page equals the target memory size, this large memory page can be directly determined as the target memory page; if its memory size is smaller than the target memory size, the difference between its memory size and the target memory size is determined, and a large memory page matching the difference is then determined from the candidate large memory pages, continuing until the target memory page is obtained.
  • Step 203: When there is no target memory page matching the target memory size among the candidate large memory pages, the process processing device determines, in descending order of memory page size, the memory page of the target memory size from the candidate large memory pages and the candidate small memory pages as the target memory page.
  • Wherein, the candidate large memory pages include large memory pages of multiple different memory sizes;
  • the candidate small memory pages include small memory pages of a fixed memory size;
  • and the memory of a candidate large memory page is larger than the memory of a candidate small memory page.
  • When there is no target memory page matching the target memory size among the candidate large memory pages, the target memory page cannot be determined directly from the candidate large memory pages. In this case, the target memory page can be determined from the candidate large memory pages and the candidate small memory pages: if the memory space of the candidate large memory pages is insufficient, the shortfall can be filled from the candidate small memory pages.
  • In a feasible manner, the difference between the memory size that the candidate large memory pages can allocate and the target memory size can be determined, memory space of that difference can be determined from the candidate small memory pages, and the memory allocated from the candidate large memory pages together with the memory space of the difference determined from the candidate small memory pages is determined as the target memory space. Here, small memory pages with consecutive addresses can be determined from the candidate small memory pages and pieced together into a memory space of the difference. In this way, memory pages are allocated using a combination of large and small memory pages, the allocation is more flexible, and the success rate and speed of page allocation can be improved, thereby improving overall system performance.
  • In a feasible manner, the candidate memory pages include allocated candidate large memory pages, unallocated candidate large memory pages, allocated candidate small memory pages, and unallocated candidate small memory pages. When it is detected that a process is applying for memory, the memory to be allocated can first be determined from the unallocated candidate large memory pages, in descending order of memory page size.
  • Among the large memory pages, the large memory page with the larger memory size is also prioritized: only when no memory can be allocated from large memory pages of the current memory size is allocation attempted from large memory pages of the next largest memory size.
  • When a page combination scheme is needed, the combination scheme of large memory pages and small memory pages is determined, and the memory to be allocated is determined from the unallocated candidate large memory pages and the unallocated candidate small memory pages.
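  The "small memory pages with consecutive addresses" step above can be sketched as a scan over a free list of 4 KB pages. The addresses and the free list below are hypothetical, used only to illustrate the idea:

```python
# Illustrative sketch (not the patent's implementation): find a run of free
# 4 KB small pages at consecutive physical addresses covering the shortfall.

PAGE = 4 * 1024  # small memory page size, per the description

def find_contiguous(free_addrs: list[int], need_bytes: int) -> list[int]:
    """Return the start addresses of a run of free 4 KB pages with
    consecutive addresses whose total size covers `need_bytes`, or []
    if no such run exists."""
    need = -(-need_bytes // PAGE)  # pages needed, rounded up
    addrs = sorted(free_addrs)
    run: list[int] = addrs[:1]
    for a in addrs[1:]:
        if a == run[-1] + PAGE:
            run.append(a)          # extend the current consecutive run
        else:
            run = [a]              # gap: restart the run
        if len(run) >= need:
            return run[:need]
    return run[:need] if len(run) >= need else []

free_list = [0x1000, 0x2000, 0x4000, 0x5000, 0x6000, 0x7000]
picked = find_contiguous(free_list, 3 * PAGE)
# 0x4000, 0x5000, 0x6000 form the first run of three consecutive free pages
```

  An empty result corresponds to the allocation-failure case in which the shortfall cannot be pieced together, in which case the combination scheme would have to be revised.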
  • Step 204: The process processing device determines the target process type of the target process.
  • The target process type represents the importance of the target process.
  • In a feasible manner, target process types may include cold processes, warm processes, and hot processes. Among them, hot processes access memory pages with the highest frequency and have the highest importance; warm processes access memory pages with higher frequency and have higher importance; cold processes access memory pages the least frequently and have the lowest importance.
  • In a feasible manner, processes can be divided into the three process types hot process, warm process, and cold process based on the historical access status of the process and the heat value of the process at a future point in time.
  • Step 204 can be implemented through the following steps:
  • Step 204a: The process processing device uses a heat value prediction model to determine the heat value of the process to be processed based on the number of times the process to be processed accesses memory pages within the target period, the processing data of the process to be processed, and the priority of the process to be processed.
  • The processing data represents the processing status of the process to be processed;
  • the heat value represents the importance of the process to be processed.
  • The heat value prediction model can be a pre-trained model used to predict the heat value of a process to be processed; in a feasible manner, the heat value prediction model can be used to predict the heat values of the process to be processed at multiple future moments.
  • the target period can be set in advance, and the target period can be set to one day.
  • the target period can be set according to actual business requirements, which is not limited in the embodiments of this application.
  • the process to be processed may include multiple processes, and the process to be processed may be all processes running in the device.
  • The processing data is the data generated by each process to be processed during processing. In a feasible manner, the processing data can include the central processing unit (CPU) usage occupied by the process to be processed and the user's satisfaction with the process to be processed, and may also include the processing rate of the process to be processed, etc., which is not limited in the embodiments of the present application.
  • The priority of a process to be processed can be set by the user. Generally, the higher the priority of the process, the higher its importance. In a feasible manner, the priority of process i to be processed can be a positive integer; by default, the priority of a process to be processed is set to 1.
  • the priority of the process to be processed can be modified to a higher value.
  • The heat value is obtained from a comprehensive analysis of various data such as the number of times the process to be processed accesses memory pages, the processing data of the process to be processed, and the priority of the process to be processed, so as to comprehensively reflect the importance of the process to be processed.
  • In a feasible manner, the heat value prediction model can be trained based on the number of times the process to be processed accesses memory pages in a historical period, the processing data of the process to be processed, and the priority of the process to be processed.
  • The operation of training the heat value prediction model based on the number of times the process to be processed accesses memory pages in the historical period, the processing data of the process to be processed, and the priority of the process to be processed can be implemented through the following steps:
  • Step A: The process processing device determines the heat value of the memory pages corresponding to the process to be processed based on the number of times the process to be processed accesses each memory page in the historical period and the memory size corresponding to each memory page.
  • the historical period can be set in advance and can be set according to actual business requirements, which is not limited in the embodiment of the present application.
  • In a feasible manner, the operation of determining the number of times a certain process to be processed accesses memory page j can be: before the start of the historical period, the access value of memory page j is set to 0; then, during the processing of this process, every time memory page j is detected being accessed by the CPU, the access value of memory page j is increased by 1. In this way, the access value of memory page j at the end of the historical period is the number of times the process to be processed accessed memory page j.
  • The heat value of the memory pages corresponding to process i to be processed can be expressed as page_hot_sum(i), where page_hot(j) is the heat value of memory page j and page_size(j) is the memory size of memory page j. If memory page j is a large memory page, page_size(j) is the actual memory size of the large memory page; if memory page j is a small memory page, page_size(j) is 4KB.
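  The actual formula for page_hot_sum(i) did not survive this text extraction. The sketch below shows one plausible reading under an assumed per-page weighting (access count scaled by page size in 4 KB units); both the weighting and the sample numbers are assumptions, not the patent's formula.

```python
# Hypothetical reconstruction only: the real page_hot(j) definition is not
# available here; access-count-times-size weighting is an assumption.

KB = 1024

def page_hot(access_count: int, page_size: int) -> float:
    # Assumed per-page heat: accesses weighted by page size in 4 KB units,
    # so page_size(j) matters for large pages and is 4 KB for small pages.
    return access_count * (page_size / (4 * KB))

def page_hot_sum(pages: list[tuple[int, int]]) -> float:
    """Sum per-page heat over the (access_count, page_size) pairs of the
    memory pages belonging to process i."""
    return sum(page_hot(c, s) for c, s in pages)

# Process i touched one 2 MB large page 10 times and two 4 KB small pages
# 5 times each:
hot = page_hot_sum([(10, 2 * KB * KB), (5, 4 * KB), (5, 4 * KB)])
```

  Whatever the exact weighting, the structure matches the description: the heat of process i aggregates per-page heat values that depend on both access counts and page sizes.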
  • Step B: The process processing device determines the historical processing data of the process to be processed within the historical period and the historical priority of the process to be processed.
  • the historical processing data is the data generated by the process to be processed in the historical period.
  • In a feasible manner, the historical processing data includes the CPU usage rate of the process to be processed in the historical period and the user's satisfaction with the process to be processed. The CPU usage of process i can be expressed as cpu_usage(i), and the user's satisfaction with process i can be expressed as satisf(i).
  • The degree of satisfaction can be expressed on a scale from 1 to 10, with a default setting of 10; the lower the user's satisfaction, the more the process needs to be optimized and accelerated.
  • the historical priority is the priority of the process to be processed during the historical period, which can be expressed as priority(i).
  • Step C: The process processing device determines the data to be trained based on the heat value of the memory pages corresponding to the process to be processed, the historical processing data of the process to be processed, and the historical priority of the process to be processed.
  • the popularity value of the memory pages corresponding to the process to be processed at each moment in the historical period, the CPU usage of the process to be processed in the historical period, the user's satisfaction with the process to be processed, and the historical priority of the process to be processed can be used as the data to be trained.
  • Step D The process processing device trains the initial popularity value prediction model based on the data to be trained, and obtains the popularity value prediction model.
  • the initial popularity value prediction model may be a Long Short-Term Memory (LSTM) neural network, which has the ability to memorize both long- and short-term information.
  • the data to be trained can be divided into a training set and a test set; the LSTM model is trained on the training set, and the trained LSTM model is tested and verified on the test set. If the error of the test result is within the acceptable range, model training is stopped and the popularity value prediction model is obtained; if the error is large, the training set can be expanded and training of the LSTM model continued until the error is within the acceptable range. Here, the error can be the mean absolute percentage error (MAPE): the smaller the MAPE value, the better the accuracy of the trained LSTM model.
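The MAPE-based stopping criterion can be sketched as follows; the 10% acceptable-error threshold is an illustrative assumption, as the text only says the error must fall within an acceptable range:

```python
# Hedged sketch of the MAPE stopping check for the LSTM training loop.
# acceptable_mape is an assumed example value, not taken from the patent.

def mape(actual, predicted):
    """Mean absolute percentage error, in percent; smaller means the
    trained model is more accurate."""
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def should_stop_training(actual, predicted, acceptable_mape=10.0):
    """Stop training when the test-set MAPE is within the acceptable range;
    otherwise expand the training set and continue training."""
    return mape(actual, predicted) <= acceptable_mape
```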
  • Step 204b The process processing device divides the processes to be processed based on the popularity values of the processes to be processed to obtain a first process set and a second process set.
  • the popularity value of the process in the first process set is higher than the popularity value of the process in the second process set.
  • the first process set may be a warm process set
  • the second process set may be a cold process set.
  • the heat values of the processes in the first process set are higher than the heat values of the processes in the second process set.
  • the processes to be processed can be sorted based on the heat value, and divided based on the sorting result to obtain the first process set and the second process set; for example, the first f processes to be processed in the sorted order can be divided into the warm process set and the remaining processes to be processed into the cold process set; alternatively, the f processes whose heat values are the highest for n consecutive moments can be divided into the warm process set and the remaining processes to be processed into the cold process set.
  • the processes to be processed can also be divided based on the heat value and a hotspot threshold to obtain the first process set and the second process set; for example, processes to be processed whose heat values are higher than the hotspot threshold for n consecutive moments can be divided into the warm process set, and the remaining processes to be processed into the cold process set; the hotspot threshold can be set in advance according to actual business needs and business experience.
  • processes to be processed whose heat values are higher than the first hotspot threshold for n consecutive moments can be divided into the warm process set, and processes to be processed whose heat values are lower than the second hotspot threshold for n consecutive moments into the cold process set; the first hotspot threshold is used to determine warm processes, and the second hotspot threshold is used to determine cold processes.
  • both the first hotspot threshold and the second hotspot threshold can be set in advance; this is not limited in the embodiments of this application.
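The threshold-based division described above can be sketched as follows; the function signature and the choice to examine the last n recorded moments are illustrative assumptions:

```python
# Hedged sketch of dividing processes into warm and cold sets by checking
# whether a process's heat value exceeds the hotspot threshold for n
# consecutive moments. threshold and n would come from business experience.

def split_processes(heats, threshold, n):
    """heats: {pid: [heat value at each moment]}. A process whose heat
    value is above the hotspot threshold for the last n consecutive
    moments goes into the warm set; all others go into the cold set."""
    warm, cold = set(), set()
    for pid, series in heats.items():
        if len(series) >= n and all(h > threshold for h in series[-n:]):
            warm.add(pid)
        else:
            cold.add(pid)
    return warm, cold
```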
  • Step 204c The process processing device determines the target process type of the target process based on the relationship between the target process and the first process set and the second process set.
  • the first process set and the second process set are divided based on the popularity values of the processes to be processed, and the processes to be processed are all processes in the device, so the processes to be processed also include the target process; the target process type can therefore be determined based on the relationship between the target process and the first process set and the second process set.
  • the target process type of the target process can also be set in advance; alternatively, when the target process is not included in the processes to be processed, the popularity value of the target process can be predicted to determine the target process type.
  • Step 205 The process processing device processes the target process based on the processing method corresponding to the target memory page and the target process type.
  • the target process is processed based on the processing method corresponding to the target process type. Using different processing methods for different process types in this way is more targeted, can improve the process processing speed, and ensures system performance.
  • the process processing method provided by the embodiment of this application is a dynamic allocation method that, based on the target memory size, preferentially allocates memory pages with larger memory and then allocates memory pages with smaller memory. The user does not need to perform complex configuration, and because memory pages with larger memory are allocated first, the determined target memory page contains fewer pages; moreover, because the candidate memory pages include memory pages of multiple different memory sizes, the method can meet the needs of different processes for different memory sizes and can improve the speed and success rate of memory page allocation, which helps improve system performance and solves the problems in related technologies that self-configuring memory pages is complicated, error-prone, and inefficient.
  • embodiments of the present application provide a process processing method. Referring to Figure 5, the method includes the following steps:
  • Step 301 When detecting that a memory page needs to be allocated to the target process, the process processing device determines the target memory size of the target memory page required by the target process.
  • Step 302 In the case where the candidate large memory pages of the candidate memory pages contain a target memory page that matches the target memory size, the process processing device determines, from the candidate large memory pages in the idle state, in descending order of memory page size, the memory page of the target memory size as the target memory page.
  • the candidate large memory page is in an idle state, that is, the candidate large memory page is not allocated; in other words, the candidate large memory pages contain a target memory page that matches the target memory size.
  • the memory page with the target memory size can be determined as the target memory page from the unallocated large memory pages to be selected in order of the memory pages from large to small.
  • a greedy algorithm can be used to determine the memory page with the target memory size as the target memory page from the unallocated large memory pages to be selected in order from large to small memory pages.
  • Step 303 When there is no target memory page matching the target memory size among the candidate large memory pages, the process processing device determines, in descending order of memory page size, from the candidate large memory pages in the idle state and the candidate small memory pages in the idle state, the memory page of the target memory size as the target memory page.
  • the large memory page to be selected includes a variety of large memory pages with different memory sizes
  • the small memory page to be selected includes small memory pages with a fixed memory size
  • the memory of the large memory page to be selected is larger than the memory of the small memory page to be selected.
  • the small memory page to be selected is in an idle state, that is, the small memory page to be selected is not allocated; that is, there is no target memory page that matches the target memory size among the large memory pages to be selected.
  • the memory page with the target memory size can be determined as the target memory page from the unallocated large memory pages to be selected and the unallocated small memory pages to be selected in the order of memory pages from large to small.
  • a greedy algorithm can be used to determine, in descending order of memory page size, the memory page of the target memory size from the unallocated candidate large memory pages and the unallocated candidate small memory pages as the target memory page.
  • Step C Continue to allocate memory from 1G large pages, 512M large pages, 128M large pages, and 2M large pages according to the memory size; if a sufficient number of large pages can be allocated, allocate directly, otherwise proceed to Step D. Step D Allocate from the candidate 4KB small memory pages.
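Steps C and D can be sketched as a greedy, largest-first loop; the pool representation (a dict of free-page counts keyed by page size in KB) is an assumption made for illustration:

```python
# Hedged sketch of the greedy, largest-first allocation from Steps C and D.
# Pool sizes follow the example: 1G, 512M, 128M, 2M large pages, 4KB small pages.

PAGE_SIZES_KB = [1024 * 1024, 512 * 1024, 128 * 1024, 2 * 1024, 4]

def allocate(target_kb, free_pages):
    """Return the list of page sizes (KB) allocated to cover target_kb,
    taking pages from large to small, or None if the pool cannot cover it.
    free_pages maps page size (KB) to the number of idle pages of that size."""
    remaining = target_kb
    allocation = []
    for size in PAGE_SIZES_KB:
        while remaining >= size and free_pages.get(size, 0) > 0:
            free_pages[size] -= 1
            allocation.append(size)
            remaining -= size
    # Step D fallback: cover any leftover with extra 4KB small pages
    while remaining > 0 and free_pages.get(4, 0) > 0:
        free_pages[4] -= 1
        allocation.append(4)
        remaining -= 4
    return allocation if remaining <= 0 else None
```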
  • Step 304 The process processing device uses a heat value prediction model to determine the heat value of the process to be processed based on the number of times the process to be processed accesses the memory page within the target period, the processing data of the process to be processed, and the priority of the process to be processed.
  • the processing data represents the processing status of the process to be processed
  • the heat value represents the importance of the process to be processed
  • the target process type represents the importance of the target process
  • Step 305 The process processing device divides the processes to be processed based on the popularity values of the processes to be processed to obtain a first process set and a second process set.
  • steps 306 to 309, or steps 310 to 313, or steps 314 to 317 can be performed.
  • Step 306 When the target process matches the first process set and the target process is the process that accesses the memory page the most times within the detection period, the process processing device determines that the target process type is the first process type.
  • the first process type representation needs to separately accelerate the addressing of the memory page corresponding to the target process.
  • the detection period can be set in advance and can be set according to actual business requirements.
  • the target process matches the first process set, indicating that the target process and the processes in the first process set are of the same process type.
  • the target process may be a process with a similar popularity value to the process in the first process set, or it may be a process in the first process set. This is not limited in the embodiment of the present application.
  • the number of times the target process accesses each memory page during the detection period can be counted, and the process that accesses memory pages the most times during the detection period can then be determined; if the largest access values of the memory pages corresponding to multiple processes are equal, the second-largest access values of the memory pages corresponding to those processes can be compared, and so on, until the unique process that accesses memory pages the most times is determined; in addition, a process whose memory page access values are all 0 can be determined to be a cold process and placed in the cold process set.
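The tie-breaking rule above (compare the largest access values, then the second largest, and so on) can be sketched by sorting each process's access values in descending order and letting Python's lexicographic sequence comparison do the rest; the names are illustrative:

```python
# Hedged sketch of selecting the process that accesses memory pages the most,
# with ties broken by the second-largest access value, then the third, etc.

def hottest_process(access_counts):
    """access_counts: {pid: [per-page access values]}. Sorting each
    process's values in descending order makes list comparison compare the
    largest values first, then the second largest, and so on."""
    return max(access_counts, key=lambda pid: sorted(access_counts[pid], reverse=True))
```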
  • the target process type can be determined as a hot process, that is, the target process is a hot process.
  • Step 307 When the target process type is the first process type, the process processing device sets the page flag bit of the target process to the first value.
  • the first numerical value indicates that the addressing of the memory page corresponding to the target process needs to be accelerated.
  • the first value is used to indicate that the addressing of the memory page corresponding to the target process needs to be accelerated; by accelerating the addressing of the memory page corresponding to a certain process, the processing speed of the process can be improved.
  • a page flag bit can be added to each memory page to quickly distinguish which memory pages need accelerated processing and which do not; the page flag bit can be set to 1 to indicate that the memory page is a memory page that needs accelerated processing (i.e., a warm process memory page or a hot process memory page), and set to 0 to indicate that the memory page is a memory page that does not require accelerated processing (i.e., a cold process memory page); further, the page flag bit can also be set to another value to further distinguish between hot process memory pages and cold process memory pages.
  • Step 308 The process processing device loads the target process into the first memory.
  • the first memory is used to run hot processes and warm processes, that is, both hot processes and warm processes run on the first memory; the speed of accessing memory pages corresponding to the first memory is faster than the speed of accessing memory pages corresponding to the second memory; in a feasible way, the first memory can be Dynamic Random Access Memory (DRAM); if the target process is a hot process, it can be loaded into DRAM to increase the speed at which the target process accesses memory pages, thereby increasing the processing rate of the target process.
  • Step 309 The process processing device processes the target process based on the target memory page and the first target memory page mapping table of the first memory.
  • the first target memory page mapping table is a page table cache in the first memory, which is used to store the mapping relationship between the virtual address and the physical address of the memory page corresponding to the hot process, so that the addressing of the memory pages corresponding to the hot process is accelerated, thereby increasing the processing rate of the hot process.
  • the first target memory page mapping table can be a Translation Lookaside Buffer (TLB) to speed up the conversion of virtual addresses to physical addresses; as shown in Figure 7, each row of the TLB table stores a page table entry (PTE).
  • each page table entry records the correspondence between a virtual address and a physical address, that is, the mapping relationship between the virtual address and the physical address of the memory page to be accessed.
  • when addressing, the CPU first checks the TLB table; if the required page table entry is stored in the TLB table, this is called a TLB hit, and the CPU then checks whether the data at the physical memory address given by that page table entry is already in the cache; if not, the data stored at the corresponding address is fetched from main memory; if there is no hit in the TLB table, the CPU continues to check the conventional page table in memory.
  • the TLB table can be used to quickly find the physical address pointed to by a virtual address without requesting memory to obtain the mapping relationship between the virtual address and the physical address; therefore, addressing is faster.
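The addressing flow just described can be sketched as follows; the data structures (plain dicts for the TLB and the conventional page table) are simplifying assumptions:

```python
# Hedged sketch of the TLB-first addressing flow: check the TLB; on a hit,
# use the cached mapping; on a miss, fall back to the conventional page table
# and cache the entry for later lookups.

def translate(vpage, tlb, page_table):
    """Return (physical_frame, 'tlb' | 'page_table'); raises KeyError if
    the virtual page is mapped nowhere."""
    if vpage in tlb:            # TLB hit: no memory access for the mapping
        return tlb[vpage], "tlb"
    frame = page_table[vpage]   # TLB miss: walk the conventional page table
    tlb[vpage] = frame          # cache the page table entry
    return frame, "page_table"
```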
  • the target process is a hot process
  • the target process is run through the first memory, and the data corresponding to the target process is stored through the first target memory page mapping table to process the target process. Since there is only one hot process, the first target memory page mapping table contains only the mapping relationship between the virtual address and the physical address of the memory pages corresponding to the target process; the probability that the data of the memory pages corresponding to the target process is successfully loaded into the cache is then 1, while the probability that the data of the memory pages corresponding to other processes is successfully loaded into the cache is 0, which effectively guarantees the addressing speed of the memory pages corresponding to the target process and thereby improves the processing rate of the target process.
  • Step 310 When the target process matches the first process set and the target process is not the process that accesses the memory page the most times within the detection period, the process processing device determines that the target process type is the second process type.
  • the second process type indicates that the target process needs to be accelerated.
  • the target process type can be determined as a warm process, that is, the target process is a warm process.
  • Step 311 When the target process type is the second process type, the process processing device sets the page flag corresponding to the target process to the first value.
  • the first numerical value indicates that the addressing of the memory page corresponding to the target process needs to be accelerated.
  • the page flag bit corresponding to the warm process can be set to the first value, that is, set to 1, to reflect that the addressing of the current page memory needs to be accelerated.
  • Step 312 The process processing device loads the target process into the first memory.
  • the number of times the warm process accesses memory pages is less than the number of times the hot process accesses memory pages, but more than the number of times the cold process accesses memory pages.
  • the warm process can also be loaded into the first memory for processing, so as to accelerate the addressing of the memory pages corresponding to the warm process.
  • Step 313 The process processing device processes the target process based on the target memory page and the second target memory page mapping table of the first memory.
  • the addressing speed of the memory page in the second target memory page mapping table is lower than the addressing speed of the memory page in the first target memory page mapping table.
  • the second target memory mapping table is a page table cache in the first memory, which is used to store the mapping relationship between the virtual address and the physical address of the memory page corresponding to the warm process, so as to accelerate the addressing of the memory pages corresponding to the warm process.
  • the second target memory mapping table can be a TLB table, but this TLB table mainly stores the mapping relationship between the virtual address and the physical address of the memory page corresponding to the warm process; that is, if the target process is Warm process, the mapping relationship between the virtual address and the physical address of the memory page corresponding to the target process will be stored in this TLB table.
  • the target process is run through the first memory, and the data corresponding to the target process is stored in the target memory page to process the target process; since there can be multiple warm processes, the second target memory mapping table stores not only the mapping relationship between the virtual address and the physical address of the memory pages corresponding to the target process, but also the mapping relationships between the virtual addresses and the physical addresses of the memory pages corresponding to other warm processes; therefore, compared with the first target memory mapping table, the addressing speed of memory pages in the second target memory mapping table is lower.
  • the probability that the data of the memory page corresponding to each warm process is successfully loaded into the cache can be determined based on the heat value of the warm process; if the warm processes include process a and process b, and the heat value of process a is the largest, then the probability of successfully loading the data of the memory page corresponding to process a into the cache can be set to 1.
  • the probability of successfully loading the data of the memory page corresponding to process b into the cache is process_hot(b)/process_hot(a);
  • the probability of successfully caching the data of the memory page corresponding to the warm process into the cache is distinguished based on the heat value of the warm process, so that the warm process with the larger heat value has a higher probability of caching the data of the corresponding memory page into the cache.
  • most of the data cached in the cache is then the memory page data of processes with high heat values, which improves the hit rate of loading the data of high-heat processes into the cache, thereby ensuring the CPU's efficiency in accessing the data of processes with high heat values.
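The heat-proportional probability assignment above (process a gets 1; process b gets process_hot(b)/process_hot(a)) can be sketched as follows; the function name and dict interface are illustrative assumptions:

```python
# Hedged sketch of assigning cache-load probabilities to warm processes in
# proportion to their heat values, per the example with processes a and b.

def cache_load_probability(heats):
    """heats: {pid: process_hot value} for the warm processes. The hottest
    process gets probability 1; every other process p gets
    process_hot(p) / process_hot(hottest)."""
    peak = max(heats.values())
    return {pid: hot / peak for pid, hot in heats.items()}
```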
  • Step 314 When the target process matches the second process set, the process processing device determines that the target process type is the third process type.
  • the third process type indicates that the addressing of the memory page corresponding to the target process is not accelerated.
  • the target process matches the second process set, indicating that the target process and the processes in the second process set are of the same process type.
  • the target process may have a similar popularity value to the process in the second process set.
  • the process may also be a process in the second process set, which is not limited in the embodiment of the present application.
  • the target process type can be determined as a cold process, that is, the target process is a cold process.
  • Step 315 When the target process type is the third process type, the process processing device sets the page flag corresponding to the target process to the second value.
  • the second numerical value indicates that the addressing of the memory page corresponding to the target process does not require accelerated processing.
  • the second value is used to indicate that the addressing of the memory page corresponding to the target process does not require accelerated processing; if the target process is a cold process, the page flag bit of the memory page corresponding to the target process can be set to 0 to indicate that the memory page is a cold process memory page.
  • Step 316 The process processing device loads the target process into the second memory.
  • the second memory is used to run cold processes, and the speed of accessing memory pages corresponding to the first memory is faster than the speed of accessing memory pages corresponding to the second memory; in a feasible way, the second memory can be non-volatile memory (ApachePass, AEP) or a swap partition (SWAP); that is, when the target process is a cold process, the target process is loaded into the second memory for processing.
  • Step 317 The process processing device processes the target process based on the target memory page and the second memory.
  • the speed of accessing the memory page corresponding to the first memory is faster than the speed of accessing the memory page corresponding to the second memory.
  • the target process is run through the second memory, and the data corresponding to the target process is stored in the target memory page, so as to realize the processing of the target process.
  • the processes are divided into three process types, hot process, warm process, and cold process, based on the number of times the process accesses memory pages and the heat value of the process, and different processing methods are used for different process types; this method of hierarchical process optimization can increase the processing rate of processes, thereby improving system performance.
  • the extended TLB table is the first target memory page mapping table, used to specifically store the mapping relationship between the virtual address and the physical address of the memory pages corresponding to the hot process; the TLB table is the second target memory page mapping table, used to store the mapping relationship between the virtual address and the physical address of the memory pages corresponding to warm processes; the page number and frame number in Figure 8 correspond to the virtual page in the virtual address and the physical page frame in the physical address; in this way, the hot process's access to memory pages is separately accelerated through the extended TLB table, and warm processes' access to memory pages is accelerated through the TLB table.
  • processes are divided into cold processes, warm processes and hot processes through the process dimension, and hierarchical processing optimization is performed on different process types to improve the processing rate of the process.
  • migrating cold processes to AEP memory for running, and migrating hot processes and warm processes to DRAM memory for running can effectively expand the memory space, ensure that system memory performance is not reduced, and meet the needs of large memory business scenarios.
  • the warm process can be optimized through the cache, which effectively improves the hit rate of loading warm process data into the cache; the hot process can be optimized by configuring a cache-exclusive mode, so that dual-channel acceleration is achieved through the TLB table: the memory page data that the hot process will access within a certain future period is successfully loaded into the cache, and the hot process's lookup of the address mapping relationship in the TLB is guaranteed a 100% TLB hit, thereby achieving the goal of hierarchically accelerating warm and hot processes' access to memory.
  • the process processing method provided by the embodiment of this application is a dynamic allocation method that, based on the target memory size, preferentially allocates memory pages with larger memory and then allocates memory pages with smaller memory. The user does not need to perform complex configuration, and because memory pages with larger memory are allocated first, the determined target memory page contains fewer pages; moreover, because the candidate memory pages include memory pages of multiple different memory sizes, the method can meet the needs of different processes for different memory sizes and can improve the speed and success rate of memory page allocation, which helps improve system performance and solves the problems in related technologies that self-configuring memory pages is complicated, error-prone, and inefficient.
  • embodiments of the present application provide a process processing device, which can be applied to the process processing methods provided by the embodiments corresponding to Figures 1 to 2 and 5.
  • the process processing device 4 can include:
  • the determining part 41 is configured to determine the target memory size of the target memory page required by the target process when it is detected that a memory page needs to be allocated to the target process;
  • the processing part 42 is configured to determine the target memory page from the candidate memory pages with multiple different memory sizes based on the target memory size in order of the memory pages from large to small;
  • the processing part 42 is also configured to process the target process based on the target memory page.
  • processing part 42 is also configured to perform the following steps:
  • the memory page of the target memory size is determined from the candidate large memory pages, in descending order of memory page size, as the target memory page;
  • when there is no target memory page matching the target memory size among the candidate large memory pages, the memory page of the target memory size is determined from the candidate large memory pages and the candidate small memory pages as the target memory page; among them, the candidate large memory pages include large memory pages of multiple different memory sizes, the candidate small memory pages include small memory pages of a fixed memory size, and the memory of a candidate large memory page is larger than the memory of a candidate small memory page.
  • the processing part 42 is also configured to determine the memory page of the target memory size as the target memory page from the large memory pages to be selected in the idle state in order of the memory pages from large to small;
  • the processing part 42 is also configured to determine, in descending order of memory page size, from the candidate large memory pages in the idle state and the candidate small memory pages in the idle state, the memory page of the target memory size as the target memory page.
  • processing part 42 is also configured to perform the following steps:
  • the target process is processed based on the processing method corresponding to the target memory page and the target process type.
  • processing part 42 is also configured to perform the following steps:
  • the heat value prediction model is used to determine the heat value of the process to be processed based on the number of times the process to be processed accesses memory pages within the target period, the processing data of the process to be processed, and the priority of the process to be processed; where the processing data represents the processing status of the process to be processed, and the heat value represents the importance of the process to be processed;
  • the process to be processed is divided to obtain a first process set and a second process set; wherein, the heat value of the process in the first process set is higher than the heat value of the process in the second process set;
  • a target process type of the target process is determined based on a relationship between the target process and the first process set and the second process set.
  • processing part 42 is also configured to perform the following steps:
  • the target process type is determined to be the first process type; wherein the first process type indicates that the addressing of the memory pages corresponding to the target process needs to be separately accelerated;
  • the target process type is determined to be the second process type; wherein the second process type indicates that the addressing of the memory pages corresponding to the target process needs to be accelerated;
  • the target process type is determined to be a third process type; wherein, the third process type indicates that the addressing of the memory page corresponding to the target process will not be accelerated.
  • processing part 42 is also configured to perform the following steps:
  • the page flag bit of the target process is set to a first value; wherein, the first value indicates that the addressing of the memory page corresponding to the target process needs to be accelerated;
  • the target process is processed based on the target memory page and the first target memory page mapping table of the first memory.
  • processing part 42 is also configured to perform the following steps:
  • the page flag bit corresponding to the target process is set to a first value; wherein, the first value indicates that the addressing of the memory page corresponding to the target process needs to be accelerated;
  • the target process is processed based on the target memory page and the second target memory page mapping table of the first memory; wherein the addressing speed of memory pages in the second target memory page mapping table is lower than the addressing speed of memory pages in the first target memory page mapping table.
  • processing part 42 is also configured to perform the following steps:
  • in a case where the target process type is the third process type, the page flag bit corresponding to the target process is set to a second value; wherein, the second value indicates that the addressing of the memory pages corresponding to the target process does not need to be accelerated;
  • the target process is loaded into a second memory;
  • the target process is processed based on the target memory page and the second memory; wherein, the speed of accessing memory pages corresponding to the first memory is higher than the speed of accessing memory pages corresponding to the second memory.
  • the process processing device uses a dynamic allocation method that, based on the target memory size, first allocates memory pages with larger memory and then allocates memory pages with smaller memory, so users do not need to perform complex configuration themselves. Because larger memory pages are allocated first, the determined target memory page consists of fewer pages, and because the candidate memory pages include memory pages of multiple different memory sizes, the needs of different processes for different memory sizes can be met. This improves the speed and success rate of memory page allocation, which in turn helps improve system performance, and solves the problem in the related art that manually configuring memory pages is complicated, error-prone, and inefficient.
  • the process processing device 5 may include: a processor 51, a memory 52, and a communication bus 53, wherein:
  • the communication bus 53 is used to realize the communication connection between the processor 51 and the memory 52;
  • the processor 51 is used to execute the process processing program in the memory 52 to implement the following steps:
  • the target memory page is determined from the candidate memory pages with multiple different memory sizes
  • the target process is processed based on the target memory page.
  • the processor 51 is configured to execute the process processing program in the memory 52 to determine the target memory page from candidate memory pages with multiple different memory sizes, in descending order of memory page size and based on the target memory size, to implement the following steps:
  • in a case where the candidate large memory pages of the candidate memory pages include the target memory page matching the target memory size, a memory page of the target memory size is determined as the target memory page from the candidate large memory pages in descending order of memory page size;
  • in a case where the candidate large memory pages do not include the target memory page matching the target memory size, a memory page of the target memory size is determined as the target memory page from the candidate large memory pages and the candidate small memory pages of the candidate memory pages in descending order of memory page size; wherein, the candidate large memory pages include large memory pages of multiple different memory sizes, the candidate small memory pages include small memory pages of a fixed memory size, and the memory of a candidate large memory page is larger than the memory of a candidate small memory page.
  • the processor 51 is configured to execute the process processing program in the memory 52 to determine, in descending order of memory page size, a memory page of the target memory size from the candidate large memory pages as the target memory page, to implement the following steps:
  • in descending order of memory page size, a memory page of the target memory size is determined as the target memory page from the candidate large memory pages in an idle state;
  • correspondingly, the processor 51 is configured to execute the process processing program in the memory 52 to determine, in descending order of memory page size, a memory page of the target memory size from the candidate large memory pages and the candidate small memory pages as the target memory page, to implement the following steps:
  • in descending order of memory page size, a memory page of the target memory size is determined as the target memory page from the candidate large memory pages in an idle state and the candidate small memory pages in an idle state.
  • the processor 51 is configured to execute the process processing program in the memory 52 to process the target process based on the target memory page to achieve the following steps:
  • the target process type of the target process is determined; wherein, the target process type represents the importance of the target process;
  • the target process is processed based on the target memory page and the processing method corresponding to the target process type.
  • the processor 51 is configured to execute the process processing program in the memory 52 to determine the target process type of the target process, to implement the following steps:
  • a heat value prediction model is used to determine a heat value of each to-be-processed process based on the number of times the to-be-processed process accesses memory pages within a target period, the processing data of the to-be-processed process, and the priority of the to-be-processed process; wherein, the processing data represents the processing situation of the to-be-processed process, and the heat value represents the importance of the to-be-processed process;
  • based on the heat values of the to-be-processed processes, the to-be-processed processes are divided to obtain a first process set and a second process set; wherein, the heat values of the processes in the first process set are higher than the heat values of the processes in the second process set;
  • a target process type of the target process is determined based on a relationship between the target process and the first process set and the second process set.
  • the processor 51 is configured to execute the process processing program in the memory 52 and determine the target process type of the target process based on the relationship between the target process and the first process set and the second process set, to Implement the following steps:
  • in a case where the target process matches the first process set and the target process is the process that accesses memory pages the most times within a detection period, the target process type is determined to be a first process type; wherein, the first process type indicates that the addressing of the memory pages corresponding to the target process needs to be accelerated separately;
  • in a case where the target process matches the first process set and the target process is not the process that accesses memory pages the most times within the detection period, the target process type is determined to be a second process type; wherein, the second process type indicates that the addressing of the memory pages corresponding to the target process needs to be accelerated;
  • in a case where the target process matches the second process set, the target process type is determined to be a third process type; wherein, the third process type indicates that the addressing of the memory pages corresponding to the target process is not accelerated.
  • the processor 51 is configured to execute the process processing program in the memory 52 to process the target process based on the target memory page and the processing method corresponding to the target process type, to implement the following steps:
  • in a case where the target process type is the first process type, the page flag bit of the target process is set to a first value; wherein, the first value indicates that the addressing of the memory pages corresponding to the target process needs to be accelerated;
  • the target process is loaded into a first memory;
  • the target process is processed based on the target memory page and a first target memory page mapping table of the first memory.
  • the processor 51 is configured to execute the process processing program in the memory 52 to process the target process based on the target memory page and the processing method corresponding to the target process type, to implement the following steps:
  • in a case where the target process type is the second process type, the page flag bit corresponding to the target process is set to a first value; wherein, the first value indicates that the addressing of the memory pages corresponding to the target process needs to be accelerated;
  • the target process is loaded into the first memory;
  • the target process is processed based on the target memory page and a second target memory page mapping table of the first memory; wherein, the addressing speed of memory pages in the second target memory page mapping table is lower than the addressing speed of memory pages in the first target memory page mapping table.
  • the processor 51 is configured to execute the process processing program in the memory 52 to process the target process based on the target memory page and the processing method corresponding to the target process type, to implement the following steps:
  • in a case where the target process type is the third process type, the page flag bit corresponding to the target process is set to a second value; wherein, the second value indicates that the addressing of the memory pages corresponding to the target process does not need to be accelerated;
  • the target process is loaded into a second memory;
  • the target process is processed based on the target memory page and the second memory; wherein, the speed of accessing memory pages corresponding to the first memory is higher than the speed of accessing memory pages corresponding to the second memory.
  • the process processing device uses a dynamic allocation method that, based on the target memory size, first allocates memory pages with larger memory and then allocates memory pages with smaller memory, so users do not need to perform complex configuration themselves. Because larger memory pages are allocated first, the determined target memory page consists of fewer pages, and because the candidate memory pages include memory pages of multiple different memory sizes, the needs of different processes for different memory sizes can be met. This improves the speed and success rate of memory page allocation, which in turn helps improve system performance, and solves the problem in the related art that manually configuring memory pages is complicated, error-prone, and inefficient.
  • embodiments of the present application provide a computer-readable storage medium that stores one or more programs, and the one or more programs can be executed by one or more processors to implement the steps of the process processing method provided by the embodiments corresponding to Figures 1 to 2 and 5.
  • embodiments of the present application may be provided as methods, systems, or computer program products. Accordingly, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, magnetic disk storage and optical storage, etc.) embodying computer-usable program code therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means, the instruction means implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus, causing a series of operational steps to be performed on the computer or other programmable apparatus to produce computer-implemented processing, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • the process processing method and device provided by the embodiments of the present application can, upon detecting that a memory page needs to be allocated to a target process, determine the target memory size of the target memory page required by the target process, then determine the target memory page from candidate memory pages with multiple different memory sizes in descending order of memory page size based on the target memory size, and then process the target process based on the target memory page. In this way, the dynamic allocation method that first allocates memory pages with larger memory and then allocates memory pages with smaller memory based on the target memory size does not require users to perform complex configuration themselves. Because larger memory pages are allocated first, the determined target memory page consists of fewer pages, and because the candidate memory pages include memory pages of multiple different memory sizes, the needs of different processes for different memory sizes can be met; this improves the speed and success rate of memory page allocation, which in turn helps improve system performance, and solves the problem in the related art that manually configuring memory pages is complicated, error-prone, and inefficient.


Abstract

Embodiments of the present application disclose a process processing method, the method including: upon detecting that a memory page needs to be allocated to a target process, determining the target memory size of the target memory page required by the target process; determining the target memory page from candidate memory pages with multiple different memory sizes in descending order of memory page size, based on the target memory size; and processing the target process based on the target memory page. In this way, users do not need to perform complex configuration themselves, the determined target memory page consists of fewer pages, and the speed and success rate of memory page allocation can be improved, which in turn helps improve system performance. Embodiments of the present application also disclose a process processing device.

Description

Process processing method and device

Cross-reference to related applications

This application is filed on the basis of Chinese patent application No. 202210991914.3, filed on August 17, 2022, and claims priority to that Chinese patent application, the entire contents of which are incorporated herein by reference.

Technical field

This application relates to the field of cloud computing technology, and in particular, but not exclusively, to a process processing method and device.

Background

Big data analysis and machine learning place an increasing demand on large-capacity memory pages. At present, memory pages are of small and fixed sizes; in this case, when a large memory page is needed to process a process, the user has to configure the large memory page manually. However, configuring memory pages manually is complicated, error-prone, and inefficient.
Summary

To solve the above technical problem, embodiments of the present application are expected to provide a process processing method and device, which solve the problem in the related art that manually configuring memory pages is complicated, error-prone, and inefficient.

The technical solution of the present application is implemented as follows:

A process processing method, the method including:

upon detecting that a memory page needs to be allocated to a target process, determining the target memory size of the target memory page required by the target process;

determining the target memory page from candidate memory pages with multiple different memory sizes in descending order of memory page size, based on the target memory size;

processing the target process based on the target memory page.
In the above solution, determining the target memory page from candidate memory pages with multiple different memory sizes in descending order of memory page size based on the target memory size includes:

in a case where the candidate large memory pages of the candidate memory pages include the target memory page matching the target memory size, determining, in descending order of memory page size, a memory page of the target memory size from the candidate large memory pages as the target memory page;

in a case where the candidate large memory pages do not include the target memory page matching the target memory size, determining, in descending order of memory page size, a memory page of the target memory size from the candidate large memory pages and the candidate small memory pages of the candidate memory pages as the target memory page; wherein, the candidate large memory pages include large memory pages of multiple different memory sizes, the candidate small memory pages include small memory pages of a fixed memory size, and the memory of a candidate large memory page is larger than the memory of a candidate small memory page.

In the above solution, determining, in descending order of memory page size, a memory page of the target memory size from the candidate large memory pages as the target memory page includes:

determining, in descending order of memory page size, a memory page of the target memory size from the candidate large memory pages in an idle state as the target memory page;

correspondingly, determining, in descending order of memory page size, a memory page of the target memory size from the candidate large memory pages and the candidate small memory pages of the candidate memory pages as the target memory page includes:

determining, in descending order of memory page size, a memory page of the target memory size from the candidate large memory pages in an idle state and the candidate small memory pages in an idle state as the target memory page.
In the above solution, processing the target process based on the target memory page includes:

determining the target process type of the target process; wherein, the target process type represents the importance of the target process;

processing the target process based on the target memory page and the processing method corresponding to the target process type.

In the above solution, determining the target process type of the target process includes:

using a heat value prediction model to determine a heat value of each to-be-processed process based on the number of times the to-be-processed process accesses memory pages within a target period, the processing data of the to-be-processed process, and the priority of the to-be-processed process; wherein, the processing data represents the processing situation of the to-be-processed process, and the heat value represents the importance of the to-be-processed process;

based on the heat values of the to-be-processed processes, dividing the to-be-processed processes to obtain a first process set and a second process set; wherein, the heat values of the processes in the first process set are higher than the heat values of the processes in the second process set;

determining the target process type of the target process based on the relationship between the target process and the first process set and the second process set.
In the above solution, determining the target process type of the target process based on the relationship between the target process and the first process set and the second process set includes:

in a case where the target process matches the first process set and the target process is the process that accesses memory pages the most times within a detection period, determining the target process type to be a first process type; wherein, the first process type indicates that the addressing of the memory pages corresponding to the target process needs to be accelerated separately;

in a case where the target process matches the first process set and the target process is not the process that accesses memory pages the most times within the detection period, determining the target process type to be a second process type; wherein, the second process type indicates that the addressing of the memory pages corresponding to the target process needs to be accelerated;

in a case where the target process matches the second process set, determining the target process type to be a third process type; wherein, the third process type indicates that the addressing of the memory pages corresponding to the target process is not accelerated.
In the above solution, processing the target process based on the target memory page and the processing method corresponding to the target process type includes:

in a case where the target process type is the first process type, setting the page flag bit of the target process to a first value; wherein, the first value indicates that the addressing of the memory pages corresponding to the target process needs to be accelerated;

loading the target process into a first memory;

processing the target process based on the target memory page and a first target memory page mapping table of the first memory.

In the above solution, processing the target process based on the target memory page and the processing method corresponding to the target process type includes:

in a case where the target process type is the second process type, setting the page flag bit corresponding to the target process to a first value; wherein, the first value indicates that the addressing of the memory pages corresponding to the target process needs to be accelerated;

loading the target process into the first memory;

processing the target process based on the target memory page and a second target memory page mapping table of the first memory; wherein, the addressing speed of memory pages in the second target memory page mapping table is lower than the addressing speed of memory pages in the first target memory page mapping table.

In the above solution, processing the target process based on the target memory page and the processing method corresponding to the target process type includes:

in a case where the target process type is the third process type, setting the page flag bit corresponding to the target process to a second value; wherein, the second value indicates that the addressing of the memory pages corresponding to the target process does not need to be accelerated;

loading the target process into a second memory;

processing the target process based on the target memory page and the second memory; wherein, the speed of accessing memory pages corresponding to the first memory is higher than the speed of accessing memory pages corresponding to the second memory.
A process processing device, the device including: a processor, a memory, and a communication bus;

the communication bus is configured to implement a communication connection between the processor and the memory;

the processor is configured to execute a process processing program in the memory to implement the steps of the above process processing method.

The process processing method and device provided by the embodiments of the present application can, upon detecting that a memory page needs to be allocated to a target process, determine the target memory size of the target memory page required by the target process, then determine the target memory page from candidate memory pages with multiple different memory sizes in descending order of memory page size based on the target memory size, and then process the target process based on the target memory page. In this way, the dynamic allocation method that first allocates memory pages with larger memory and then allocates memory pages with smaller memory based on the target memory size does not require users to perform complex configuration themselves. Because larger memory pages are allocated first, the determined target memory page consists of fewer pages, and because the candidate memory pages include memory pages of multiple different memory sizes, the needs of different processes for different memory sizes can be met; this improves the speed and success rate of memory page allocation, which in turn helps improve system performance, and solves the problem in the related art that manually configuring memory pages is complicated, error-prone, and inefficient.
Brief description of the drawings

The accompanying drawings are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present application and, together with the specification, serve to explain the technical solution of the present application.

Figure 1 is a schematic flowchart of a process processing method provided by an embodiment of the present application;

Figure 2 is a schematic flowchart of another process processing method provided by an embodiment of the present application;

Figure 3 is a schematic structural diagram of candidate memory pages in a process processing method provided by an embodiment of the present application;

Figure 4 is a schematic structural diagram of an initial heat value prediction model in a process processing method provided by an embodiment of the present application;

Figure 5 is a schematic flowchart of yet another process processing method provided by an embodiment of the present application;

Figure 6 is a schematic diagram of the migration of warm processes and cold processes in a process processing method provided by an embodiment of the present application;

Figure 7 is a schematic flowchart of a CPU accessing a TLB table in a process processing method provided by an embodiment of the present application;

Figure 8 is a schematic flowchart of processing hot processes and warm processes based on a first memory in a process processing method provided by an embodiment of the present application;

Figure 9 is a schematic structural diagram of a process processing apparatus provided by an embodiment of the present application;

Figure 10 is a schematic structural diagram of a process processing device provided by an embodiment of the present application.
Detailed description

The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application.

It should be understood that references throughout the specification to "embodiments of the present application" or "the aforementioned embodiments" mean that particular features, structures, or characteristics related to the embodiments are included in at least one embodiment of the present application. Therefore, occurrences of "in an embodiment of the present application" or "in the aforementioned embodiments" in various places throughout the specification do not necessarily refer to the same embodiment. Furthermore, these particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the various embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application. The above embodiment numbers of the present application are for description only and do not represent the superiority or inferiority of the embodiments.

Unless otherwise specified, when an electronic device performs any step in the embodiments of the present application, the step may be performed by a processor of the electronic device. It is also worth noting that the embodiments of the present application do not limit the order in which the electronic device performs the following steps. In addition, the methods used to process data in different embodiments may be the same or different. It should also be noted that any step in the embodiments of the present application can be performed independently by the electronic device; that is, when the electronic device performs any step in the following embodiments, it may not depend on the execution of other steps.

It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit the present application.
An embodiment of the present application provides a process processing method, which can be applied to a process processing device. As shown in Figure 1, the method includes the following steps:

Step 101: upon detecting that a memory page needs to be allocated to a target process, determine the target memory size of the target memory page required by the target process.

In this embodiment of the present application, the target process may be a process that currently needs to obtain a memory page; further, the target process may be a process that is currently being processed and needs to obtain a memory page. The target memory page may be the memory page required by the target process, used to store the data generated by the target process during processing; the target memory size is the memory size of the target memory page.

In one feasible approach, the target memory size of the target memory page required by the target process may be set in advance, so that when it is detected that a memory page needs to be allocated to the target process, the target memory size of the target memory page required by the target process can be determined, and the required memory page can then be allocated to the target process.

Step 102: determine the target memory page from candidate memory pages with multiple different memory sizes in descending order of memory page size, based on the target memory size.

In this embodiment of the present application, the candidate memory pages may include memory pages of multiple different memory sizes. In one feasible approach, the multiple different memory sizes of the candidate memory pages may be 4KB (kilobytes), 2M (megabytes), 128M, 512M, 1G (gigabytes), 2G, and 5G. These sizes are only examples; the memory pages of multiple different memory sizes included in the candidate memory pages can be configured according to actual business needs.

In this embodiment of the present application, the target memory page is determined from candidate memory pages with multiple different memory sizes in descending order of memory page size based on the target memory size; that is, based on the target memory size, the larger memory pages among the candidate memory pages are allocated first, and then the smaller memory pages among the candidate memory pages are allocated. With this dynamic allocation method, users do not need to perform complex configuration themselves; because larger memory pages are allocated first, the determined target memory page consists of fewer pages, and because the candidate memory pages include memory pages of multiple different memory sizes, the needs of different processes for different memory sizes can be met. This improves the speed and success rate of memory page allocation and also facilitates the subsequent processing of the target process based on the target memory page.

In one feasible approach, the memory pages among the candidate memory pages can be arranged in descending order of size; then, after the target memory size is determined, a memory page matching the target memory size is determined from the candidate memory pages. If the memory size of this memory page is equal to the target memory size, this memory page can be directly determined as the target memory page; if the memory size of this memory page is smaller than the target memory size, the difference between the memory size of this memory page and the target memory size can be determined, and memory pages matching this difference continue to be determined from the candidate memory pages until the target memory page is obtained.

Step 103: process the target process based on the target memory page.

In this embodiment of the present application, after the target memory page is determined, the target memory page can be allocated to the target process so that the target process is processed based on the target memory page. Since the target memory page consists of fewer pages, the memory space occupied by the memory page mapping table that stores the mapping relationships of the target memory page can be saved, and the number of subsequent address translations and cache misses is reduced, thereby improving system performance.

With the process processing method provided by this embodiment of the present application, upon detecting that a memory page needs to be allocated to a target process, the target memory size of the target memory page required by the target process is determined; then the target memory page is determined from candidate memory pages with multiple different memory sizes in descending order of memory page size based on the target memory size; and then the target process is processed based on the target memory page. In this way, the dynamic allocation method that first allocates memory pages with larger memory and then allocates memory pages with smaller memory based on the target memory size does not require users to perform complex configuration themselves. Because larger memory pages are allocated first, the determined target memory page consists of fewer pages, and because the candidate memory pages include memory pages of multiple different memory sizes, the needs of different processes for different memory sizes can be met; this improves the speed and success rate of memory page allocation, which in turn helps improve system performance, and solves the problem in the related art that manually configuring memory pages is complicated, error-prone, and inefficient.
Based on the foregoing embodiments, an embodiment of the present application provides a process processing method. As shown in Figure 2, the method includes the following steps:

Step 201: upon detecting that a memory page needs to be allocated to a target process, the process processing device determines the target memory size of the target memory page required by the target process.

Step 202: in a case where the candidate large memory pages of the candidate memory pages include the target memory page matching the target memory size, the process processing device determines, in descending order of memory page size, a memory page of the target memory size from the candidate large memory pages as the target memory page.

In this embodiment of the present application, the candidate memory pages include candidate large memory pages and candidate small memory pages; a candidate large memory page is a memory page with a larger memory size, and a candidate small memory page is a memory page with a smaller memory size. Since the default size of a memory page is 4KB, a memory page with a memory size of 4KB can be called a small memory page, and a memory page with a memory size above 4KB can be called a large memory page. The candidate large memory pages include large memory pages of multiple memory sizes, and the candidate small memory pages include multiple small memory pages.

In this embodiment of the present application, if the candidate large memory pages of the candidate memory pages include the target memory page matching the target memory size, the target memory page can be determined from the candidate large memory pages; in this case, the target memory page can be determined directly from the candidate large memory pages. The operation of determining the target memory page from the candidate large memory pages may be as follows: first arrange the candidate large memory pages in descending order of memory page size, then determine a large memory page matching the target memory size from the candidate large memory pages. If the memory size of the determined large memory page is equal to the target memory size, this large memory page can be directly determined as the target memory page; if the memory size of this large memory page is smaller than the target memory size, the difference between the memory size of this large memory page and the target memory size is determined, and large memory pages matching this difference continue to be determined from the candidate large memory pages until the target memory page is obtained.

Step 203: in a case where the candidate large memory pages do not include the target memory page matching the target memory size, the process processing device determines, in descending order of memory page size, a memory page of the target memory size from the candidate large memory pages and the candidate small memory pages of the candidate memory pages as the target memory page.

Here, the candidate large memory pages include large memory pages of multiple different memory sizes, the candidate small memory pages include small memory pages of a fixed memory size, and the memory of a candidate large memory page is larger than the memory of a candidate small memory page.

In this embodiment of the present application, if the candidate large memory pages do not include the target memory page matching the target memory size, the memory space in the candidate large memory pages is insufficient, and the target memory page cannot be determined directly from the large memory pages. In this case, the target memory page can be determined from the candidate large memory pages and the candidate small memory pages; that is, if the memory space of the candidate large memory pages is insufficient, the shortfall can be made up from the candidate small memory pages.

In one feasible approach, if the memory space of the candidate large memory pages is insufficient, the difference between the memory size that the candidate large memory pages can allocate and the target memory size can be determined; memory space of this difference is determined from the candidate small memory pages, and the memory that the candidate large memory pages can allocate, together with the memory space of this difference determined from the candidate small memory pages, is determined as the target memory page. Here, small memory pages with contiguous addresses can be determined from the candidate small memory pages and pieced together into memory space of this difference. Allocating memory pages by combining large and small memory pages in this way makes allocation more flexible and further improves the success rate and speed of page allocation, thereby improving overall system performance.

In this embodiment of the present application, as shown in Figure 3, the candidate memory pages include allocated candidate large memory pages, unallocated candidate large memory pages, allocated candidate small memory pages, and unallocated candidate small memory pages. When it is detected that a process applies for memory, the memory to be allocated can be determined preferentially from the unallocated candidate large memory pages in descending order of memory page size; during allocation, the large memory pages with larger memory sizes are allocated first, and only when no memory can be allocated from large memory pages of the current memory size are large memory pages of the next smaller memory size allocated, and so on. When there is no suitable combination scheme of large memory pages, a combination scheme of large memory pages and small memory pages is determined, and the memory to be allocated is determined from the unallocated candidate large memory pages and the unallocated candidate small memory pages.
Step 204: the process processing device determines the target process type of the target process.

Here, the target process type represents the importance of the target process.

In this embodiment of the present application, the target process type may include cold process, warm process, and hot process. A hot process accesses memory pages most frequently and has the highest importance; a warm process accesses memory pages relatively frequently and has relatively high importance; a cold process accesses memory pages least frequently and has the lowest importance.

In one feasible approach, processes can be divided into the three process types of hot process, warm process, and cold process based on the processes' historical access situations and the processes' heat values at future time points.

Here, step 204 can be implemented through the following steps:

Step 204a: the process processing device uses a heat value prediction model to determine the heat value of each to-be-processed process based on the number of times the to-be-processed process accesses memory pages within a target period, the processing data of the to-be-processed process, and the priority of the to-be-processed process.

Here, the processing data represents the processing situation of the to-be-processed process, and the heat value represents the importance of the to-be-processed process.

In this embodiment of the present application, the heat value prediction model may be a pre-trained model used to predict the heat values of to-be-processed processes; in one feasible approach, the heat value prediction model can predict the heat values of a to-be-processed process in multiple future time periods. The target period can be set in advance; it can be set to one day, and can be configured according to actual business needs, which is not limited in this embodiment of the present application. The to-be-processed processes may include multiple processes and may be all processes running on the device. The processing data is the data generated by each to-be-processed process during processing; in one feasible approach, the processing data may include the Central Processing Unit (CPU) usage occupied by the to-be-processed process, the user's satisfaction with the to-be-processed process, and so on, and may of course also include the processing rate of the to-be-processed process, etc., which is not limited in this embodiment of the present application. The priority of a to-be-processed process can be set by the user; generally, the higher the priority of a to-be-processed process, the higher its importance. In one feasible approach, the priority of to-be-processed process i may be a positive integer; by default, the priority of a to-be-processed process is set to 1, and if a certain to-be-processed process is of higher importance, its priority can be modified to a higher value. The heat value is obtained through a comprehensive analysis of multiple kinds of data, including the number of times the to-be-processed process accesses memory pages, the processing situation of the to-be-processed process, and the priority of the to-be-processed process, so as to comprehensively reflect the importance of the to-be-processed process.
In one feasible approach, the heat value prediction model can be trained based on the number of times the to-be-processed processes access memory pages within a historical period, the processing data of the to-be-processed processes, and the priorities of the to-be-processed processes. Specifically, this training operation can be implemented through the following steps:

Step A: the process processing device determines the heat value of the memory pages corresponding to a to-be-processed process based on the number of times the to-be-processed process accesses each memory page within the historical period and the memory size corresponding to each memory page.

In this embodiment of the present application, the historical period can be set in advance according to actual business needs, which is not limited in this embodiment of the present application.

In one feasible approach, the operation of determining the number of times a to-be-processed process accesses memory page j may be as follows: before the historical period starts, the access value of memory page j is set to 0; then, during the processing of this to-be-processed process, each time memory page j is detected to be accessed by the CPU, the access value of memory page j is increased by 1. In this way, the access value of memory page j after the historical period ends is the number of times this to-be-processed process accessed memory page j.

If to-be-processed process i accessed k memory pages within historical period T1, the heat value of the memory pages corresponding to to-be-processed process i can be expressed as page_hot_sum(i) = Σ_{j=1}^{k} page_hot(j) × page_size(j), where page_hot(j) is the heat value of memory page j and page_size(j) is the memory size of memory page j; if memory page j is a large memory page, page_size(j) is the actual memory size of the large memory page; if memory page j is a small memory page, page_size(j) is 4KB.
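As a hedged sketch of the per-process page heat aggregation just described, the snippet below assumes page_hot_sum(i) is the access-weighted sum of page sizes over the pages a process touched in the period; the function name and tuple layout are illustrative, not taken from the patent text.

```python
# Sketch: aggregate per-page heat into a per-process heat value.
# Each page is represented as (page_hot, page_size_bytes); page_hot is
# assumed here to be the page's access count for the period.

def page_hot_sum(pages):
    """pages: list of (page_hot, page_size_bytes) tuples for one process."""
    return sum(hot * size for hot, size in pages)

# Example: one 2M huge page, one 128M huge page, and one 4KB small page.
example = [(10, 2 * 1024**2), (3, 128 * 1024**2), (7, 4 * 1024)]
```

A usage like `page_hot_sum(example)` then weights frequently accessed large pages most heavily, matching the intent that page size participates in the heat value.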
Step B: the process processing device determines the historical processing data of the to-be-processed process within the historical period and the historical priority of the to-be-processed process.

In this embodiment of the present application, the historical processing data is the data generated by the to-be-processed process within the historical period, including the CPU usage of the to-be-processed process within the historical period and the user's satisfaction with the to-be-processed process. The CPU usage of to-be-processed process i can be expressed as cpu_usage(i), and the user's satisfaction with to-be-processed process i can be expressed as satisf(i); satisfaction can be expressed on a scale of 1 to 10 and is set to 10 by default. The lower the user's satisfaction, the more the process needs to be optimized and accelerated. The historical priority is the priority of the to-be-processed process during processing within the historical period and can be expressed as priority(i).

Step C: the process processing device determines the to-be-trained data based on the heat value of the memory pages corresponding to the to-be-processed process, the historical processing data of the to-be-processed process, and the historical priority of the to-be-processed process.

In this embodiment of the present application, the to-be-trained data can be constructed based on the heat value of the memory pages corresponding to the to-be-processed process at each moment within the historical period, the CPU usage of the to-be-processed process within the historical period, the user's satisfaction with the to-be-processed process, and the historical priority of the to-be-processed process. In one feasible approach, the to-be-trained data of the to-be-processed process at time t can be expressed as xt: xt = [page_hot_sum(i,t), cpu_usage(i,t), priority(i,t), satisf(i,t)].

Step D: the process processing device trains an initial heat value prediction model based on the to-be-trained data to obtain the heat value prediction model.

In this embodiment of the present application, the initial heat value prediction model may be a neural network with the ability to memorize long- and short-term information (Long Short Term Memory, LSTM). As shown in Figure 4, the architecture of this LSTM model may include two LSTM hidden layers and one fully connected layer. Specifically, the formulas for processing the to-be-trained data based on the LSTM model may be: ft = δ(Wf[ht-1, xt] + bf), it = δ(Wi[ht-1, xt] + bi), ot = δ(Wo[ht-1, xt] + bo), c̃t = tanh(Wc[ht-1, xt] + bc), ct = ft * ct-1 + it * c̃t, ht = ot * tanh(ct), page_hot(i,j) = xt · WT, W = [w1, w2, w3], yt+1 = process_hot(i,t); where ft is the output of the forget gate of the LSTM model at time t, it is the output of the input gate at time t, ot is the output of the output gate at time t, c̃t is the obtained context memory information, ct is the memory content of the network unit in the LSTM model, ht is the output of the hidden layer of the LSTM model at time t, and ht-1 is the output of the hidden layer at time t-1; δ is the activation function, tanh is the hyperbolic tangent activation function, Wf, Wi, and Wo are the weight matrices of the forget gate, input gate, and output gate respectively, and bf, bi, bo, and bc are the bias values of the forget gate, input gate, output gate, and hidden layer respectively. The LSTM model can update ct through ct = ft * ct-1 + it * c̃t, so as to continuously update the model. In the weight vector W, w1, w2, and w3 are the weights of the heat value of the memory pages corresponding to the to-be-processed process, the historical processing data of the to-be-processed process, and the historical priority of the to-be-processed process respectively, with values between 0 and 1; by default w1 is 0.7, w2 is 0.2, and w3 is 0.1. yt+1 is the heat value at time t+1.
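The gate equations above can be sketched as a minimal single-step LSTM cell. Everything below is an illustrative assumption: the weight shapes, random initialisation, and the 4-feature input layout mirror the text, but a real model would be trained within an ML framework rather than hand-initialised.

```python
# Minimal one-step LSTM cell: sigmoid forget/input/output gates, tanh
# candidate, cell-state update c_t = f*c_{t-1} + i*c_hat, h_t = o*tanh(c_t).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """W/b hold per-gate parameters; [h_prev, x_t] is the concatenated input."""
    z = np.concatenate([h_prev, x_t])
    f = sigmoid(W["f"] @ z + b["f"])       # forget gate f_t
    i = sigmoid(W["i"] @ z + b["i"])       # input gate i_t
    o = sigmoid(W["o"] @ z + b["o"])       # output gate o_t
    c_hat = np.tanh(W["c"] @ z + b["c"])   # candidate cell state
    c = f * c_prev + i * c_hat             # cell state update
    h = o * np.tanh(c)                     # hidden state
    return h, c

# 4 input features (page heat sum, CPU usage, priority, satisfaction),
# hidden size 8; weights randomly initialised for illustration only.
n_in, n_h = 4, 8
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((n_h, n_h + n_in)) * 0.1 for k in "fioc"}
b = {k: np.zeros(n_h) for k in "fioc"}
h, c = lstm_step(np.ones(n_in), np.zeros(n_h), np.zeros(n_h), W, b)
```

Stacking two such layers and a fully connected readout would mirror the Figure 4 architecture described in the text.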
In this embodiment of the present application, the to-be-trained data can be divided into a training set and a test set; the LSTM model is trained with the training set, and the trained LSTM model is tested and verified with the test set. If the error of the test results is within an acceptable range, model training is stopped and the heat value prediction model is obtained; if the error is large, the training set can be expanded and training of the LSTM model continued until the error is within an acceptable range. Here, the error can be expressed by the mean absolute percentage error (MAPE); the smaller the MAPE value, the better the accuracy of the trained LSTM model. The MAPE calculation formula is MAPE = (1/n) Σ_{t=1}^{n} |Yt − yt| / yt, where n is the number of predicted values, that is, the number of predictions, Yt is the predicted value output by the LSTM model, and yt is the actual value. Then, the standard deviation δMAPE is calculated to evaluate the fluctuation of the model's prediction error and thereby check the stability of the model: δMAPE = sqrt((1/n) Σ_{t=1}^{n} (kt − k̄)²), where n is the number of predictions, kt is the absolute percentage error at time t, and k̄ is the mean of the absolute percentage errors over the n predictions. After the heat value prediction model is trained, the value yt+1 at time t+1 is predicted based on the input feature value xt at time t; furthermore, the heat value page_hot of each user's corresponding process at future time t+1 and within a certain future time period T1 can also be predicted according to the heat value prediction model.
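The two evaluation metrics described above can be computed directly; this is a small self-contained sketch, with function names chosen here for illustration.

```python
# MAPE over n predictions, and the standard deviation of the per-step
# absolute percentage errors as a stability (error-fluctuation) measure.

def mape(predicted, actual):
    errs = [abs(p - a) / a for p, a in zip(predicted, actual)]
    return sum(errs) / len(errs)

def ape_std(predicted, actual):
    errs = [abs(p - a) / a for p, a in zip(predicted, actual)]
    mean = sum(errs) / len(errs)
    return (sum((e - mean) ** 2 for e in errs) / len(errs)) ** 0.5
```

For example, predictions of 110 and 90 against actual values of 100 and 100 give a MAPE of 0.1 with zero fluctuation, since both steps have the same absolute percentage error.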
Step 204b: the process processing device divides the to-be-processed processes based on their heat values to obtain a first process set and a second process set.

Here, the heat values of the processes in the first process set are higher than the heat values of the processes in the second process set.

In this embodiment of the present application, the first process set may be the warm process set and the second process set may be the cold process set; the heat values of the processes in the first process set are all higher than the heat values of the processes in the second process set.

In one feasible approach, the to-be-processed processes can be sorted based on their heat values and divided into the first process set and the second process set based on the sorting result. For example, the top f to-be-processed processes in the sorting can be assigned to the warm process set and the remaining to-be-processed processes to the cold process set; alternatively, the f processes whose heat values rank at the top at n consecutive moments can be assigned to the warm process set and the remaining to-be-processed processes to the cold process set.

In another feasible approach, the to-be-processed processes can be divided into the first process set and the second process set based on their heat values and a heat threshold. For example, to-be-processed processes whose heat values are higher than the hotspot threshold at n consecutive moments can be assigned to the warm process set and the remaining to-be-processed processes to the cold process set; the hotspot threshold can be set in advance according to actual business needs and business experience, which is not limited in this embodiment of the present application. Alternatively, to-be-processed processes whose heat values are higher than a first hotspot threshold at n consecutive moments can be assigned to the warm process set, and to-be-processed processes whose heat values are lower than a second hotspot threshold at n consecutive moments can be assigned to the cold process set; the first hotspot threshold is used to determine warm processes, the second hotspot threshold is used to determine cold processes, and both can be set in advance, which is not limited in this embodiment of the present application.
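The threshold-based variant above can be sketched as follows; the threshold value and window length n are assumed tunables, and the data layout is illustrative rather than anything specified by the patent.

```python
# Partition processes into warm/cold sets: a process whose heat value stays
# above the hotspot threshold for n consecutive recent moments goes to the
# warm set; everything else goes to the cold set.

def partition(heat_series, threshold, n):
    """heat_series: dict pid -> list of heat values (most recent last)."""
    warm, cold = set(), set()
    for pid, series in heat_series.items():
        recent = series[-n:]
        if len(recent) == n and all(v > threshold for v in recent):
            warm.add(pid)
        else:
            cold.add(pid)
    return warm, cold
```

With a two-threshold scheme, the same loop would simply test against the first threshold for the warm set and the second for the cold set.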
Step 204c: the process processing device determines the target process type of the target process based on the relationship between the target process and the first process set and the second process set.

In this embodiment of the present application, the first process set and the second process set are divided based on the heat values of the to-be-processed processes, and the to-be-processed processes are all processes on the device, so the to-be-processed processes also include the target process; therefore, the target process type can be determined based on the relationship between the target process and the first process set and the second process set.

In other embodiments of the present application, the target process type of the target process can be set in advance; alternatively, in a case where the to-be-processed processes do not include the target process, the target process type of the target process can also be determined by predicting the heat value of the target process.

Step 205: the process processing device processes the target process based on the target memory page and the processing method corresponding to the target process type.

In this embodiment of the present application, after the target process type of the target process is determined, the target process is processed based on the processing method corresponding to the target process type. Using different processing methods for different process types in this way is more targeted and can increase the speed of process processing, thereby guaranteeing system performance.

It should be noted that for descriptions of steps and content in this embodiment that are the same as in other embodiments, reference can be made to the descriptions in the other embodiments, which will not be repeated here.

With the process processing method provided by this embodiment of the present application, the dynamic allocation method that first allocates memory pages with larger memory and then allocates memory pages with smaller memory based on the target memory size does not require users to perform complex configuration themselves. Because larger memory pages are allocated first, the determined target memory page consists of fewer pages, and because the candidate memory pages include memory pages of multiple different memory sizes, the needs of different processes for different memory sizes can be met; this improves the speed and success rate of memory page allocation, which in turn helps improve system performance, and solves the problem in the related art that manually configuring memory pages is complicated, error-prone, and inefficient.
Based on the foregoing embodiments, an embodiment of the present application provides a process processing method. As shown in Figure 5, the method includes the following steps:

Step 301: upon detecting that a memory page needs to be allocated to a target process, the process processing device determines the target memory size of the target memory page required by the target process.

Step 302: in a case where the candidate large memory pages of the candidate memory pages include the target memory page matching the target memory size, the process processing device determines, in descending order of memory page size, a memory page of the target memory size from the candidate large memory pages in an idle state as the target memory page.

In this embodiment of the present application, candidate large memory pages in an idle state are unallocated candidate large memory pages; that is, in a case where the candidate large memory pages include the target memory page matching the target memory size, a memory page of the target memory size can be determined as the target memory page from the unallocated candidate large memory pages in descending order of memory page size.

In one feasible approach, a greedy algorithm can be used to determine, in descending order of memory page size, a memory page of the target memory size from the unallocated candidate large memory pages as the target memory page.

Step 303: in a case where the candidate large memory pages do not include the target memory page matching the target memory size, the process processing device determines, in descending order of memory page size, a memory page of the target memory size from the candidate large memory pages in an idle state and the candidate small memory pages in an idle state as the target memory page.

Here, the candidate large memory pages include large memory pages of multiple different memory sizes, the candidate small memory pages include small memory pages of a fixed memory size, and the memory of a candidate large memory page is larger than the memory of a candidate small memory page.

In this embodiment of the present application, candidate small memory pages in an idle state are unallocated candidate small memory pages; that is, in a case where the candidate large memory pages do not include the target memory page matching the target memory size, a memory page of the target memory size can be determined as the target memory page from the unallocated candidate large memory pages and the unallocated candidate small memory pages in descending order of memory page size.

In one feasible approach, a greedy algorithm can be used to determine, in descending order of memory page size, a memory page of the target memory size from the unallocated candidate large memory pages and the unallocated candidate small memory pages as the target memory page.

If the target process applies for x G of memory, and the memory sizes of the candidate large memory pages include 5G, 2G, 1G, 512M, and 128M, and 2M, while the memory size of the candidate small memory pages is 4KB, then: Step A: determine whether x is greater than or equal to 5; if x >= 5, first apply for 5G huge pages; if a sufficient number of 5G huge pages can be allocated, allocate them directly and the allocation ends; otherwise, update the value of x to the remaining amount and perform step B. Step B: determine whether x is greater than or equal to 2; if x >= 2, apply for 2G huge pages; if a sufficient number of 2G huge pages can be allocated, allocate them directly and the allocation ends; otherwise, update the value of x to the remaining amount and perform step C. Step C: continue to allocate memory in descending order of memory size from 1G huge pages, 512M huge pages, 128M huge pages, and 2M huge pages; if a sufficient number of huge pages can be allocated, allocate them directly; otherwise, go to step D. Step D: allocate from the 4KB candidate small memory pages.
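Steps A through D above amount to a greedy, largest-first walk over the free pools with a small-page fallback. The sketch below works in bytes; the pool layout, return shape, and failure handling are illustrative assumptions, not details given in the patent.

```python
# Greedy largest-first huge-page allocation with 4KB small-page fallback.
# free_pool maps page size (bytes) -> number of free pages of that size.
KB, MB, GB = 1024, 1024**2, 1024**3
LARGE_SIZES = [5 * GB, 2 * GB, 1 * GB, 512 * MB, 128 * MB, 2 * MB]
SMALL = 4 * KB

def allocate(request, free_pool):
    """Return a list of (page_size, count) picks, or None if it cannot fit."""
    picks, remaining = [], request
    for size in LARGE_SIZES:                 # descending order of page size
        take = min(remaining // size, free_pool.get(size, 0))
        if take:
            picks.append((size, take))
            remaining -= take * size
    if remaining:                            # make up shortfall from 4KB pages
        need = -(-remaining // SMALL)        # ceiling division
        if free_pool.get(SMALL, 0) < need:
            return None                      # allocation fails
        picks.append((SMALL, need))
    return picks
```

For a 7 GB request against a pool with one free 5G page and two free 2G pages, this picks one 5G page and one 2G page, matching the walk-through in steps A and B.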
Step 304: the process processing device uses a heat value prediction model to determine the heat value of each to-be-processed process based on the number of times the to-be-processed process accesses memory pages within the target period, the processing data of the to-be-processed process, and the priority of the to-be-processed process.

Here, the processing data represents the processing situation of the to-be-processed process, the heat value represents the importance of the to-be-processed process, and the target process type represents the importance of the target process.

Step 305: the process processing device divides the to-be-processed processes based on their heat values to obtain a first process set and a second process set.

In this embodiment of the present application, after step 305, steps 306 to 309, or steps 310 to 313, or steps 314 to 317 can be performed.

Step 306: in a case where the target process matches the first process set and the target process is the process that accesses memory pages the most times within the detection period, the process processing device determines the target process type to be the first process type.

Here, the first process type indicates that the addressing of the memory pages corresponding to the target process needs to be accelerated separately.

In this embodiment of the present application, the detection period can be set in advance according to actual business needs. The target process matching the first process set means that the target process is of the same process type as the processes in the first process set; in this case, the target process may be a process whose heat value is similar to those of the processes in the first process set, or may be a process in the first process set, which is not limited in this embodiment of the present application.

In one feasible approach, the number of times the target process accesses each memory page within the detection period can be counted, and then the process that accesses memory pages the most times within the detection period is determined. If the largest memory page access values of multiple processes are equal, the second-largest access values of the memory pages corresponding to these processes can be compared, and so on, until the unique process that accesses memory pages the most times is determined. Processes whose memory page access value is 0 can also be identified, determined to be cold processes, and placed into the cold process set.
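The tie-breaking rule just described (largest per-page access count, then second largest, and so on) can be sketched as a lexicographic comparison over descending-sorted access counts; the representation below is an illustrative assumption.

```python
# Pick the single most-active process: compare processes by their per-page
# access counts sorted descending, lexicographically (max, then 2nd max, ...).

def most_active(access_counts):
    """access_counts: dict pid -> list of per-page access counts."""
    return max(access_counts,
               key=lambda pid: sorted(access_counts[pid], reverse=True))
```

Python's built-in sequence comparison handles the "and so on" part automatically: two processes tied on their largest count are ordered by their next-largest counts.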
In this embodiment of the present application, if the target process is a process in the first process set and is the process that accesses memory pages the most times within the detection period, the target process type can be determined to be hot process; that is, the target process is a hot process.

Step 307: in a case where the target process type is the first process type, the process processing device sets the page flag bit of the target process to a first value.

Here, the first value indicates that the addressing of the memory pages corresponding to the target process needs to be accelerated.

In this embodiment of the present application, the first value is used to indicate that the addressing of the memory pages corresponding to the target process needs to be accelerated; by accelerating the addressing of the memory pages corresponding to a process, the processing speed of that process can be improved. In one feasible approach, a page flag bit can be added to each memory page to quickly distinguish which memory pages do not need accelerated processing and which do. The page flag bit can be set to 1 to indicate that the memory page needs accelerated processing (i.e., a warm process memory page or a hot process memory page), and set to 0 to indicate that the memory page does not need accelerated processing (i.e., a cold process memory page). Further, the page flag bit can also be set to other values to further distinguish hot process memory pages from cold process memory pages.

Step 308: the process processing device loads the target process into the first memory.

In this embodiment of the present application, the first memory is used to run hot processes and warm processes; that is, hot processes and warm processes all run on the first memory. The speed of accessing memory pages corresponding to the first memory is higher than that corresponding to the second memory. In one feasible approach, the first memory may be Dynamic Random Access Memory (DRAM); if the target process is a hot process, it can be loaded into DRAM to increase the speed at which the target process accesses memory pages, thereby increasing the processing rate of the target process.

Step 309: the process processing device processes the target process based on the target memory page and the first target memory page mapping table of the first memory.

In this embodiment of the present application, the first target memory page mapping table is a page table cache in the first memory, used to store the mapping relationships between the virtual addresses and physical addresses of the memory pages corresponding to the hot process, so as to accelerate the addressing of the memory pages corresponding to the hot process and thereby increase its processing rate. In one feasible approach, the first target memory page mapping table may be a Translation-Lookaside Buffer (TLB) to speed up virtual-to-physical address translation. As shown in Figure 7, each row of the TLB table holds a Page Table Entry (PTE); each page table entry records the correspondence between a virtual address and physical data, corresponding to the mapping relationship between the virtual address and physical address of the memory page to be accessed. With the TLB, when addressing, the CPU first checks the TLB table; if the TLB table happens to hold the required page table entry, this is called a TLB Hit, and the CPU then checks whether the data at the physical memory address corresponding to the page table entry in the TLB is already in the cache; if not, the data stored at the corresponding address is fetched from main memory. If there is no TLB hit, the regular page table in memory is consulted. Since the CPU accesses the TLB table much faster than it accesses memory, using the TLB table makes it possible to quickly find the physical address pointed to by a virtual address without requesting the virtual-to-physical mapping from memory, so addressing is faster.
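The TLB/cache/memory path just described can be modeled as a toy lookup chain; the dict-based structures below are purely illustrative of the control flow, not of real hardware.

```python
# Toy address-translation path: TLB first; on a miss, walk the page table
# and refill the TLB; then check the cache before going to main memory.

def translate(vpage, tlb, page_table, cache, memory):
    if vpage in tlb:                 # TLB hit: fast path
        paddr = tlb[vpage]
    else:                            # TLB miss: consult the regular page table
        paddr = page_table[vpage]
        tlb[vpage] = paddr           # refill the TLB for next time
    if paddr in cache:               # cache hit: data already loaded
        return cache[paddr]
    data = memory[paddr]             # fetch from main memory
    cache[paddr] = data              # load into the cache
    return data
```

A second lookup of the same virtual page then hits both the TLB and the cache, which is exactly the speedup the text attributes to the TLB.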
In this embodiment of the present application, if the target process is a hot process, the target process is run through the first memory, and the data corresponding to the target process is stored through the first target memory page mapping table, so as to implement the processing of the target process. Since there is only one hot process, the first target memory page mapping table contains only the virtual-to-physical address mappings of the memory pages corresponding to the target process, so the probability that the data of the memory pages corresponding to the target process is successfully loaded into the cache is 1, while the probability that the data of the memory pages corresponding to other processes is successfully loaded into the cache is 0. This effectively guarantees the addressing speed of the memory pages corresponding to the target process and thereby increases its processing rate.

Step 310: in a case where the target process matches the first process set and the target process is not the process that accesses memory pages the most times within the detection period, the process processing device determines the target process type to be the second process type.

Here, the second process type indicates that the target process needs accelerated processing.

In this embodiment of the present application, if the target process is a process in the first process set and is not the process that accesses memory pages the most times within the detection period, the target process type can be determined to be warm process; that is, the target process is a warm process.

Step 311: in a case where the target process type is the second process type, the process processing device sets the page flag bit corresponding to the target process to the first value.

Here, the first value indicates that the addressing of the memory pages corresponding to the target process needs to be accelerated.

In this embodiment of the present application, the page flag bit corresponding to a warm process can be set to the first value, that is, to 1, to indicate that the addressing of this page memory currently needs accelerated processing.

Step 312: the process processing device loads the target process into the first memory.

In this embodiment of the present application, a warm process accesses memory pages fewer times than a hot process but more times than a cold process; the warm process can also be loaded into the first memory for processing, so as to accelerate the addressing of the memory pages corresponding to the warm process.

Step 313: the process processing device processes the target process based on the target memory page and the second target memory page mapping table of the first memory.

Here, the addressing speed of memory pages in the second target memory page mapping table is lower than the addressing speed of memory pages in the first target memory page mapping table.

In this embodiment of the present application, the second target memory page mapping table is a page table cache in the first memory, used to store the mapping relationships between the virtual addresses and physical addresses of the memory pages corresponding to warm processes and to accelerate the addressing of the memory pages corresponding to warm processes. In one feasible approach, the second target memory page mapping table may be a TLB table, but this TLB table mainly stores the virtual-to-physical address mappings of the memory pages corresponding to warm processes; that is, if the target process is a warm process, the virtual-to-physical address mappings of the memory pages corresponding to the target process are stored in this TLB table.

In this embodiment of the present application, if the target process is a warm process, the target process is run through the first memory, and the data corresponding to the target process is stored in the target memory page to implement the processing of the target process. Since there can be multiple warm processes, the second target memory page mapping table stores not only the virtual-to-physical address mappings of the memory pages corresponding to the target process but also those of the memory pages corresponding to other warm processes; therefore, compared with the first target memory page mapping table, the addressing speed of memory pages in the second target memory page mapping table is lower.

In other embodiments of the present application, the probability that the data of the memory pages corresponding to each warm process is successfully loaded into the cache can be determined according to the heat values of the warm processes. If the warm processes include process a and process b, and the heat value of process a is the largest, the probability that the data of the memory pages corresponding to process a is successfully loaded into the cache can be set to 1, and the probability for process b is process_hot(b)/process_hot(a). Distinguishing the cache-admission probability of warm processes according to their heat values in this way means that a warm process with a larger heat value has a higher probability of caching the data of its corresponding memory pages. As time goes on, most of the data in the cache is data of the memory pages of processes with high heat values, which improves the cache hit rate for the data of high-heat processes, thereby ensuring the CPU's access efficiency for the data of high-heat processes.
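The heat-proportional admission rule above normalizes every warm process's heat value against the hottest one; the sketch below shows that normalization, with names chosen for illustration.

```python
# Heat-proportional cache-admission probabilities for warm processes:
# the hottest process gets probability 1, every other process b gets
# process_hot(b) / process_hot(max).

def cache_probabilities(heat):
    """heat: dict pid -> heat value; returns pid -> admission probability."""
    top = max(heat.values())
    return {pid: h / top for pid, h in heat.items()}
```

For example, with heat values a=10 and b=5, process a is admitted with probability 1 and process b with probability 0.5, matching the process_hot(b)/process_hot(a) rule in the text.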
Step 314: in a case where the target process matches the second process set, the process processing device determines the target process type to be the third process type.

Here, the third process type indicates that the addressing of the memory pages corresponding to the target process is not accelerated.

In this embodiment of the present application, the target process matching the second process set means that the target process is of the same process type as the processes in the second process set; in this case, the target process may be a process whose heat value is similar to those of the processes in the second process set, or may be a process in the second process set, which is not limited in this embodiment of the present application.

In this embodiment of the present application, if the target process is a process in the second process set, the target process type can be determined to be cold process; that is, the target process is a cold process.

Step 315: in a case where the target process type is the third process type, the process processing device sets the page flag bit corresponding to the target process to a second value.

Here, the second value indicates that the addressing of the memory pages corresponding to the target process does not need accelerated processing.

In this embodiment of the present application, the second value is used to indicate that the addressing of the memory pages corresponding to the target process does not need accelerated processing; if the target process is a cold process, the page flag bit of the memory pages corresponding to the target process can be set to 0 to indicate that these memory pages are cold process memory pages.

Step 316: the process processing device loads the target process into the second memory.

In this embodiment of the present application, the second memory is used to run cold processes, and the speed of accessing memory pages corresponding to the first memory is higher than that corresponding to the second memory. In one feasible approach, the second memory may be non-volatile memory (Apache Pass, AEP), or a swap partition (SWAP); that is, in a case where the target process is a cold process, the target process is loaded into the second memory for processing.

Step 317: the process processing device processes the target process based on the target memory page and the second memory.

Here, the speed of accessing memory pages corresponding to the first memory is higher than the speed of accessing memory pages corresponding to the second memory.

In this embodiment of the present application, the target process is run through the second memory, and the data corresponding to the target process is stored in the target memory page to implement the processing of the target process.

In this embodiment of the present application, processes are divided into the three process types of hot process, warm process, and cold process based on the number of times a process accesses memory pages and the process's heat value, and different processing methods are used for different process types. This tiered optimization of processes can increase the processing rate of processes and thereby improve system performance.
In other embodiments of the present application, as shown in Figure 6, after the warm process set and the cold process set are determined, all the processes determined to be in the warm process set can be loaded into DRAM memory, and the page flag bits of the corresponding memory pages are set to 1; if a process in the warm process set previously ran in AEP memory, it is swapped into DRAM memory. All the processes determined to be in the cold process set are loaded into AEP memory, and the page flag bits of the corresponding memory pages are set to 0; if a process in the cold process set previously ran in DRAM memory, it is swapped out to AEP memory.

In this embodiment of the present application, two acceleration modes are provided in the first memory: one is a mode that accelerates according to heat values, and the other is a hot-process-exclusive cache mode. As shown in Figure 8, the extended TLB table is the first target memory page mapping table, used exclusively to store the mapping relationships between the virtual addresses and physical addresses of the memory pages corresponding to the hot process; the TLB table is the second target memory page mapping table, used to store the mapping relationships between the virtual addresses and physical addresses of the memory pages corresponding to warm processes. In Figure 8, page number and frame number correspond to the virtual page in the virtual address and the physical page frame in the physical address. In this way, separate acceleration of the hot process's memory page accesses is achieved through the extended TLB table, and acceleration of warm processes' memory page accesses is achieved through the TLB table.

In this embodiment of the present application, along the process dimension, processes are divided into cold processes, warm processes, and hot processes, and different process types are optimized with tiered processing, which increases the processing rate of processes. Moreover, migrating cold processes to AEP memory to run and migrating hot and warm processes to DRAM memory to run can effectively expand memory space, ensures that system memory performance does not degrade, and meets the needs of large-memory business scenarios. In addition, warm processes can be optimized through the cache, effectively improving the hit rate of loading warm process data into the cache, while hot processes can be optimized by configuring the cache-exclusive mode. Dual-channel acceleration is thus achieved through the TLB tables: the memory page data accessed by the hot process within a certain future time period can be successfully loaded into the cache, and at the same time the hot process's address-mapping lookups are guaranteed to hit 100% in the TLB, thereby achieving tiered acceleration of memory accesses by warm processes and hot processes.

It should be noted that for descriptions of steps and content in this embodiment that are the same as in other embodiments, reference can be made to the descriptions in the other embodiments, which will not be repeated here.

With the process processing method provided by this embodiment of the present application, the dynamic allocation method that first allocates memory pages with larger memory and then allocates memory pages with smaller memory based on the target memory size does not require users to perform complex configuration themselves. Because larger memory pages are allocated first, the determined target memory page consists of fewer pages, and because the candidate memory pages include memory pages of multiple different memory sizes, the needs of different processes for different memory sizes can be met; this improves the speed and success rate of memory page allocation, which in turn helps improve system performance, and solves the problem in the related art that manually configuring memory pages is complicated, error-prone, and inefficient.
Based on the foregoing embodiments, an embodiment of the present application provides a process processing apparatus, which can be applied to the process processing methods provided by the embodiments corresponding to Figures 1-2 and 5. As shown in Figure 9, the process processing apparatus 4 may include:

a determining part 41, configured to, upon detecting that a memory page needs to be allocated to a target process, determine the target memory size of the target memory page required by the target process;

a processing part 42, configured to determine the target memory page from candidate memory pages with multiple different memory sizes in descending order of memory page size, based on the target memory size;

the processing part 42 is further configured to process the target process based on the target memory page.

In other embodiments of the present application, the processing part 42 is further configured to perform the following steps:

in a case where the candidate large memory pages of the candidate memory pages include the target memory page matching the target memory size, determining, in descending order of memory page size, a memory page of the target memory size from the candidate large memory pages as the target memory page;

in a case where the candidate large memory pages do not include the target memory page matching the target memory size, determining, in descending order of memory page size, a memory page of the target memory size from the candidate large memory pages and the candidate small memory pages of the candidate memory pages as the target memory page; wherein, the candidate large memory pages include large memory pages of multiple different memory sizes, the candidate small memory pages include small memory pages of a fixed memory size, and the memory of a candidate large memory page is larger than the memory of a candidate small memory page.

In other embodiments of the present application, the processing part 42 is further configured to determine, in descending order of memory page size, a memory page of the target memory size from the candidate large memory pages in an idle state as the target memory page;

correspondingly, the processing part 42 is further configured to determine, in descending order of memory page size, a memory page of the target memory size from the candidate large memory pages in an idle state and the candidate small memory pages in an idle state as the target memory page.

In other embodiments of the present application, the processing part 42 is further configured to perform the following steps:

determining the target process type of the target process; wherein, the target process type represents the importance of the target process;

processing the target process based on the target memory page and the processing method corresponding to the target process type.
In other embodiments of the present application, the processing part 42 is further configured to perform the following steps:
using a heat value prediction model to determine the heat value of a to-be-processed process based on the number of times the to-be-processed process accesses memory pages within a target period, the processing data of the to-be-processed process, and the priority of the to-be-processed process; the processing data characterizing the processing situation of the to-be-processed process, and the heat value characterizing the importance of the to-be-processed process;
dividing the to-be-processed processes based on their heat values into a first process set and a second process set; the heat values of processes in the first process set being higher than the heat values of processes in the second process set;
determining the target process type of the target process based on the relationship between the target process and the first process set and the second process set.
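The application does not disclose the internals of the heat value prediction model, so the sketch below substitutes a simple weighted sum over the three disclosed inputs (access count, processing data, priority); the weights, the threshold, and the dictionary representation of a process are all illustrative assumptions:

```python
def heat_value(accesses, cpu_time, priority, w=(0.5, 0.3, 0.2)):
    """Hypothetical linear heat-value predictor: a weighted sum of the page
    access count, the processing data (here approximated by CPU time), and
    the process priority. Higher means more important."""
    return w[0] * accesses + w[1] * cpu_time + w[2] * priority

def partition(procs, threshold):
    """Split processes into a first set (heat at or above the threshold)
    and a second set (heat below it), so every process in the first set
    is hotter than every process in the second set."""
    first = [p for p in procs if p["heat"] >= threshold]
    second = [p for p in procs if p["heat"] < threshold]
    return first, second
```

Any monotone model over the same three inputs would slot into `heat_value` without changing the partitioning step.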
In other embodiments of the present application, the processing part 42 is further configured to perform the following steps:
when the target process matches the first process set and the target process is the process that accesses memory pages the most times within the detection period, determining the target process type to be the first process type; the first process type characterizing that the addressing of the memory pages corresponding to the target process needs dedicated acceleration;
when the target process matches the first process set and the target process is not the process that accesses memory pages the most times within the detection period, determining the target process type to be the second process type; the second process type characterizing that the addressing of the memory pages corresponding to the target process needs acceleration;
when the target process matches the second process set, determining the target process type to be the third process type; the third process type characterizing that the addressing of the memory pages corresponding to the target process is not accelerated.
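The three classification rules above reduce to a short decision function. The set-membership test and the `top_accessor` parameter are illustrative stand-ins for "matches the first process set" and "accessed memory pages the most times in the detection period":

```python
FIRST, SECOND, THIRD = "hot", "warm", "cold"

def classify(target, first_set, top_accessor):
    """Map a process to one of the three types:
    in the first set and the period's top page accessor -> first type (hot),
    in the first set but not the top accessor           -> second type (warm),
    otherwise, i.e. in the second set                   -> third type (cold)."""
    if target in first_set:
        return FIRST if target == top_accessor else SECOND
    return THIRD
```

Only the hot process gets the dedicated (extended-TLB) acceleration; warm processes get ordinary TLB acceleration; cold processes get none.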
In other embodiments of the present application, the processing part 42 is further configured to perform the following steps:
when the target process type is the first process type, setting the page flag bit of the target process to a first value; the first value characterizing that the addressing of the memory pages corresponding to the target process needs acceleration;
loading the target process into the first memory;
processing the target process based on the target memory pages and the first target memory page mapping table of the first memory.
In other embodiments of the present application, the processing part 42 is further configured to perform the following steps:
when the target process type is the second process type, setting the page flag bit corresponding to the target process to the first value; the first value characterizing that the addressing of the memory pages corresponding to the target process needs acceleration;
loading the target process into the first memory;
processing the target process based on the target memory pages and the second target memory page mapping table of the first memory; the addressing speed of memory pages in the second target memory page mapping table being lower than the addressing speed of memory pages in the first target memory page mapping table.
In other embodiments of the present application, the processing part 42 is further configured to perform the following steps:
when the target process type is the third process type, setting the page flag bit corresponding to the target process to a second value; the second value characterizing that the addressing of the memory pages corresponding to the target process does not need acceleration;
loading the target process into the second memory;
processing the target process based on the target memory pages and the second memory; the speed at which the first memory accesses memory pages being faster than the speed at which the second memory accesses memory pages.
It should be noted that, for specific descriptions of the steps performed by each part, reference may be made to the process processing methods provided by the embodiments corresponding to Figs. 1-2 and 5, which are not repeated here.
With the process processing apparatus provided by the embodiments of the present application, the dynamic allocation approach of first allocating larger memory pages and then smaller memory pages based on the target memory size spares the user complex manual configuration. Because larger memory pages are allocated first, fewer target memory pages are needed, and because the candidate memory pages include pages of multiple different memory sizes, the different memory-size requirements of different processes can be met. This increases the speed and success rate of memory page allocation and in turn helps improve system performance, solving the problems in the related art that manually configuring memory pages is complex, error-prone and inefficient.
Based on the foregoing embodiments, an embodiment of the present application provides a process processing device, which can be applied in the process processing methods provided by the embodiments corresponding to Figs. 1-2 and 5. Referring to Fig. 10, the process processing device 5 may include a processor 51, a memory 52 and a communication bus 53, wherein:
the communication bus 53 is used to implement the communication connection between the processor 51 and the memory 52;
the processor 51 is used to execute the process processing program in the memory 52 to implement the following steps:
upon detecting that memory pages need to be allocated to a target process, determining the target memory size of the target memory pages required by the target process;
determining, in descending order of memory page size and based on the target memory size, the target memory pages from candidate memory pages having multiple different memory sizes;
processing the target process based on the target memory pages.
In other embodiments of the present application, when the processor 51 executes, in the process processing program in the memory 52, the step of determining the target memory pages from candidate memory pages having multiple different memory sizes in descending order of memory page size and based on the target memory size, the following steps are implemented:
when the candidate large memory pages among the candidate memory pages include target memory pages matching the target memory size, determining, in descending order of memory page size, memory pages of the target memory size from the candidate large memory pages as the target memory pages;
when the candidate large memory pages include no target memory pages matching the target memory size, determining, in descending order of memory page size, memory pages of the target memory size from the candidate large memory pages and the candidate small memory pages among the candidate memory pages as the target memory pages; the candidate large memory pages comprising large memory pages of multiple different memory sizes, the candidate small memory pages comprising small memory pages of a fixed memory size, and the memory of the candidate large memory pages being larger than the memory of the candidate small memory pages.
In other embodiments of the present application, when the processor 51 executes, in the process processing program in the memory 52, the step of determining memory pages of the target memory size from the candidate large memory pages as the target memory pages in descending order of memory page size, the following steps are implemented:
determining, in descending order of memory page size, memory pages of the target memory size from the candidate large memory pages that are in an idle state as the target memory pages;
correspondingly, when the processor 51 executes, in the process processing program in the memory 52, the step of determining memory pages of the target memory size from the candidate large memory pages and the candidate small memory pages as the target memory pages in descending order of memory page size, the following steps are implemented:
determining, in descending order of memory page size, memory pages of the target memory size from the candidate large memory pages in an idle state and the candidate small memory pages in an idle state as the target memory pages.
In other embodiments of the present application, when the processor 51 executes, in the process processing program in the memory 52, the step of processing the target process based on the target memory pages, the following steps are implemented:
determining the target process type of the target process; the target process type characterizing the importance of the target process;
processing the target process based on the target memory pages and the processing approach corresponding to the target process type.
In other embodiments of the present application, when the processor 51 executes, in the process processing program in the memory 52, the step of determining the target process type of the target process, the following steps are implemented:
using a heat value prediction model to determine the heat value of a to-be-processed process based on the number of times the to-be-processed process accesses memory pages within a target period, the processing data of the to-be-processed process, and the priority of the to-be-processed process; the processing data characterizing the processing situation of the to-be-processed process, and the heat value characterizing the importance of the to-be-processed process;
dividing the to-be-processed processes based on their heat values into a first process set and a second process set; the heat values of processes in the first process set being higher than the heat values of processes in the second process set;
determining the target process type of the target process based on the relationship between the target process and the first process set and the second process set.
In other embodiments of the present application, when the processor 51 executes, in the process processing program in the memory 52, the step of determining the target process type of the target process based on the relationship between the target process and the first process set and the second process set, the following steps are implemented:
when the target process matches the first process set and the target process is the process that accesses memory pages the most times within the detection period, determining the target process type to be the first process type; the first process type characterizing that the addressing of the memory pages corresponding to the target process needs dedicated acceleration;
when the target process matches the first process set and the target process is not the process that accesses memory pages the most times within the detection period, determining the target process type to be the second process type; the second process type characterizing that the addressing of the memory pages corresponding to the target process needs acceleration;
when the target process matches the second process set, determining the target process type to be the third process type; the third process type characterizing that the target process is not accelerated.
In other embodiments of the present application, when the processor 51 executes, in the process processing program in the memory 52, the step of processing the target process based on the target memory pages and the processing approach corresponding to the target process type, the following steps are implemented:
when the target process type is the first process type, setting the page flag bit of the target process to a first value; the first value characterizing that the addressing of the memory pages corresponding to the target process needs acceleration;
loading the target process into the first memory;
processing the target process based on the target memory pages and the first target memory page mapping table of the first memory.
In other embodiments of the present application, when the processor 51 executes, in the process processing program in the memory 52, the step of processing the target process based on the target memory pages and the processing approach corresponding to the target process type, the following steps are implemented:
when the target process type is the second process type, setting the page flag bit corresponding to the target process to the first value; the first value characterizing that the addressing of the memory pages corresponding to the target process needs acceleration;
loading the target process into the first memory;
processing the target process based on the target memory pages and the second target memory page mapping table of the first memory; the addressing speed of memory pages in the second target memory page mapping table being lower than the addressing speed of memory pages in the first target memory page mapping table.
In other embodiments of the present application, when the processor 51 executes, in the process processing program in the memory 52, the step of processing the target process based on the target memory pages and the processing approach corresponding to the target process type, the following steps are implemented:
when the target process type is the third process type, setting the page flag bit corresponding to the target process to a second value; the second value characterizing that the addressing of the memory pages corresponding to the target process does not need acceleration;
loading the target process into the second memory;
processing the target process based on the target memory pages and the second memory; the speed at which the first memory accesses memory pages being faster than the speed at which the second memory accesses memory pages.
It should be noted that, for specific descriptions of the steps performed by the processor, reference may be made to the process processing methods provided by the embodiments corresponding to Figs. 1-2 and 5, which are not repeated here.
With the process processing device provided by the embodiments of the present application, the dynamic allocation approach of first allocating larger memory pages and then smaller memory pages based on the target memory size spares the user complex manual configuration. Because larger memory pages are allocated first, fewer target memory pages are needed, and because the candidate memory pages include pages of multiple different memory sizes, the different memory-size requirements of different processes can be met. This increases the speed and success rate of memory page allocation and in turn helps improve system performance, solving the problems in the related art that manually configuring memory pages is complex, error-prone and inefficient.
Based on the foregoing embodiments, an embodiment of the present application provides a computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of the process processing methods provided by the embodiments corresponding to Figs. 1-2 and 5.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system or a computer program product. Accordingly, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the methods, devices (systems) and computer program products according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above are merely preferred embodiments of the present application and are not intended to limit the scope of protection of the present application.
Industrial Applicability
The process processing method and device provided by the embodiments of the present application can, upon detecting that memory pages need to be allocated to a target process, determine the target memory size of the target memory pages required by the target process, then determine the target memory pages from candidate memory pages having multiple different memory sizes in descending order of memory page size and based on the target memory size, and then process the target process based on the target memory pages. In this way, the dynamic allocation approach of first allocating larger memory pages and then smaller memory pages based on the target memory size spares the user complex manual configuration; because larger memory pages are allocated first, fewer target memory pages are needed, and because the candidate memory pages include pages of multiple different memory sizes, the different memory-size requirements of different processes can be met. This increases the speed and success rate of memory page allocation, in turn helps improve system performance, and solves the problems in the related art that manually configuring memory pages is complex, error-prone and inefficient.

Claims (10)

  1. A process processing method, the method comprising:
    upon detecting that memory pages need to be allocated to a target process, determining a target memory size of target memory pages required by the target process;
    determining, in descending order of memory page size and based on the target memory size, the target memory pages from candidate memory pages having multiple different memory sizes;
    processing the target process based on the target memory pages.
  2. The method according to claim 1, wherein determining, in descending order of memory page size and based on the target memory size, the target memory pages from candidate memory pages having multiple different memory sizes comprises:
    when candidate large memory pages among the candidate memory pages include the target memory pages matching the target memory size, determining, in descending order of memory page size, memory pages of the target memory size from the candidate large memory pages as the target memory pages;
    when the candidate large memory pages include no target memory pages matching the target memory size, determining, in descending order of memory page size, memory pages of the target memory size from the candidate large memory pages and candidate small memory pages among the candidate memory pages as the target memory pages; wherein the candidate large memory pages comprise large memory pages of multiple different memory sizes, the candidate small memory pages comprise small memory pages of a fixed memory size, and the memory of the candidate large memory pages is larger than the memory of the candidate small memory pages.
  3. The method according to claim 2, wherein determining, in descending order of memory page size, memory pages of the target memory size from the candidate large memory pages as the target memory pages comprises:
    determining, in descending order of memory page size, memory pages of the target memory size from the candidate large memory pages that are in an idle state as the target memory pages;
    correspondingly, determining, in descending order of memory page size, memory pages of the target memory size from the candidate large memory pages and the candidate small memory pages among the candidate memory pages as the target memory pages comprises:
    determining, in descending order of memory page size, memory pages of the target memory size from the candidate large memory pages in an idle state and the candidate small memory pages in an idle state as the target memory pages.
  4. The method according to claim 1, wherein processing the target process based on the target memory pages comprises:
    determining a target process type of the target process; wherein the target process type characterizes an importance of the target process;
    processing the target process based on the target memory pages and a processing approach corresponding to the target process type.
  5. The method according to claim 4, wherein determining the target process type of the target process comprises:
    using a heat value prediction model to determine a heat value of a to-be-processed process based on a number of times the to-be-processed process accesses memory pages within a target period, processing data of the to-be-processed process and a priority of the to-be-processed process; wherein the processing data characterizes a processing situation of the to-be-processed process, and the heat value characterizes an importance of the to-be-processed process;
    dividing to-be-processed processes based on their heat values into a first process set and a second process set; wherein heat values of processes in the first process set are higher than heat values of processes in the second process set;
    determining the target process type of the target process based on a relationship between the target process and the first process set and the second process set.
  6. The method according to claim 5, wherein determining the target process type of the target process based on the relationship between the target process and the first process set and the second process set comprises:
    when the target process matches the first process set and the target process is the process that accesses memory pages the most times within a detection period, determining the target process type to be a first process type; wherein the first process type characterizes that addressing of memory pages corresponding to the target process needs dedicated acceleration;
    when the target process matches the first process set and the target process is not the process that accesses memory pages the most times within the detection period, determining the target process type to be a second process type; wherein the second process type characterizes that the addressing of the memory pages corresponding to the target process needs acceleration;
    when the target process matches the second process set, determining the target process type to be a third process type; wherein the third process type characterizes that the addressing of the memory pages corresponding to the target process is not accelerated.
  7. The method according to claim 4, wherein processing the target process based on the target memory pages and the processing approach corresponding to the target process type comprises:
    when the target process type is a first process type, setting a page flag bit of the target process to a first value; wherein the first value characterizes that addressing of memory pages corresponding to the target process needs acceleration;
    loading the target process into a first memory;
    processing the target process based on the target memory pages and a first target memory page mapping table of the first memory.
  8. The method according to claim 4, wherein processing the target process based on the target memory pages and the processing approach corresponding to the target process type comprises:
    when the target process type is a second process type, setting a page flag bit corresponding to the target process to a first value; wherein the first value characterizes that addressing of memory pages corresponding to the target process needs acceleration;
    loading the target process into a first memory;
    processing the target process based on the target memory pages and a second target memory page mapping table of the first memory; wherein an addressing speed of memory pages in the second target memory page mapping table is lower than an addressing speed of memory pages in a first target memory page mapping table.
  9. The method according to claim 4, wherein processing the target process based on the target memory pages and the processing approach corresponding to the target process type comprises:
    when the target process type is a third process type, setting a page flag bit corresponding to the target process to a second value; wherein the second value characterizes that addressing of memory pages corresponding to the target process does not need acceleration;
    loading the target process into a second memory;
    processing the target process based on the target memory pages and the second memory; wherein a speed at which a first memory accesses memory pages is faster than a speed at which the second memory accesses memory pages.
  10. A process processing device, the device comprising a processor, a memory and a communication bus;
    the communication bus being used to implement a communication connection between the processor and the memory;
    the processor being used to execute a process processing program in the memory to implement the steps of the process processing method according to any one of claims 1 to 9.
PCT/CN2023/112336 2022-08-17 2023-08-10 Process processing method and device WO2024037428A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210991914.3 2022-08-17
CN202210991914.3A CN116821007A (zh) 2022-08-17 Process processing method and device

Publications (1)

Publication Number Publication Date
WO2024037428A1 true WO2024037428A1 (zh) 2024-02-22

Family

ID=88111468

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/112336 WO2024037428A1 (zh) 2023-08-10 Process processing method and device

Country Status (2)

Country Link
CN (1) CN116821007A (zh)
WO (1) WO2024037428A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6182089B1 (en) * 1997-09-23 2001-01-30 Silicon Graphics, Inc. Method, system and computer program product for dynamically allocating large memory pages of different sizes
CN102446136A (zh) * 2010-10-14 2012-05-09 无锡江南计算技术研究所 自适应的大页分配方法及装置
CN109918193A (zh) * 2019-01-11 2019-06-21 维沃移动通信有限公司 一种资源分配方法及终端设备
CN111078410A (zh) * 2019-12-11 2020-04-28 Oppo(重庆)智能科技有限公司 内存分配方法、装置、存储介质及电子设备
CN111381953A (zh) * 2020-03-19 2020-07-07 Oppo广东移动通信有限公司 进程管理方法、装置、存储介质及电子设备
WO2022089452A1 (zh) * 2020-10-31 2022-05-05 华为终端有限公司 内存管理方法、装置、电子设备以及计算机可读存储介质


Also Published As

Publication number Publication date
CN116821007A (zh) 2023-09-29

Similar Documents

Publication Publication Date Title
JP2858795B2 (ja) Real memory allocation method
KR101761301B1 (ko) Memory resource optimization method and apparatus
US20210191765A1 (en) Method for static scheduling of artificial neural networks for a processor
US20150234669A1 (en) Memory resource sharing among multiple compute nodes
JP7164733B2 (ja) Data storage
CN108959113B (zh) Method and system for flash-aware heap memory management
US20150363326A1 (en) Identification of low-activity large memory pages
TW202236081A (zh) Performance counters for computer memory
JP7311981B2 (ja) Slab-based memory management for machine learning training
US20110153978A1 (en) Predictive Page Allocation for Virtual Memory System
KR101587579B1 (ko) Memory balancing method for virtualization system
US20160170906A1 (en) Identification of page sharing opportunities within large pages
WO2022178869A1 (zh) Cache replacement method and apparatus
US10922147B2 (en) Storage system destaging based on synchronization object with watermark
WO2024099448A1 (zh) Memory release and memory recovery methods and apparatuses, computer device, and storage medium
CN116501249A (zh) Method for reducing duplicate data reads and writes in GPU memory, and related device
JP6042170B2 (ja) Cache control device and cache control method
CN116954929A (zh) Dynamic GPU scheduling method and system for live migration
CN109597771B (zh) Method and apparatus for controlling a hierarchical memory system
JP2009015509A (ja) Cache memory device
WO2024037428A1 (zh) Process processing method and device
US10019164B2 (en) Parallel computer, migration program and migration method
CN112965921A (zh) TLB management method and system in a multi-task GPU
CN117215485A (zh) ZNS SSD management method, data writing method, storage device, and controller
WO2021008552A1 (zh) Data reading method and apparatus, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23854329

Country of ref document: EP

Kind code of ref document: A1