WO2022057749A1 - Method and apparatus for handling a memory page fault exception, device, and storage medium - Google Patents

Method and apparatus for handling a memory page fault exception, device, and storage medium

Info

Publication number
WO2022057749A1
WO2022057749A1 PCT/CN2021/117898 CN2021117898W WO2022057749A1 WO 2022057749 A1 WO2022057749 A1 WO 2022057749A1 CN 2021117898 W CN2021117898 W CN 2021117898W WO 2022057749 A1 WO2022057749 A1 WO 2022057749A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
memory
prefetch
page
page fault
Application number
PCT/CN2021/117898
Other languages
English (en)
Chinese (zh)
Inventor
王义彬
王龙
杨栋
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司
Publication of WO2022057749A1

Classifications

    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0604 Improving or facilitating administration, e.g. storage management
    • G06F3/0608 Saving storage space on storage systems
    • G06F3/0611 Improving I/O performance in relation to response time
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/073 Error or fault processing not based on redundancy, the processing taking place in a memory management context, e.g. virtual memory or cache management
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/023 Free address space management
    • G06F16/21 Design, administration or maintenance of databases
    • G06F16/219 Managing data history or versioning

Definitions

  • the embodiments of the present application relate to the field of computer technologies, and in particular, to a method, apparatus, device, and storage medium for processing a memory page fault exception.
  • the memory includes multiple consecutive memory pages.
  • the processor needs to access memory data, if the memory data is not in the memory pages included in the memory, a page fault exception will occur.
  • a disk includes a SWAP partition, and the SWAP partition is used to store data on memory pages that are rarely accessed by a processor, that is, data on cold pages.
  • the processor reads the corresponding data from the SWAP partition, and loads the read data into the corresponding memory page in the memory.
  • In the related art, when a page fault exception occurs, the processor starts from the memory page where the currently accessed memory data is located and pre-reads the data on multiple consecutive memory pages from the SWAP partition into the corresponding memory pages, for subsequent continuous access.
  • In other words, the data on multiple consecutive memory pages is blindly pre-read into the memory.
  • The pre-read data therefore includes a lot of data that will never actually be accessed. As a result, memory resources become tight and page fault exceptions are triggered again later, which increases memory access latency.
  • Embodiments of the present application provide a method, apparatus, device, and storage medium for processing memory page fault exceptions, which can effectively reduce the number of page fault exceptions, reduce memory access latency, and reduce the consumption of memory resources.
  • the technical solution is as follows:
  • In a first aspect, a method for processing a memory page fault exception is provided. The method includes: determining the information of a target memory page to obtain first information, where the target memory page is the memory page where the page fault exception occurs this time; predicting a plurality of prefetch information corresponding to the first information according to historical memory access information, where the historical memory access information is used to characterize the regularities of historical memory access; and reading the data corresponding to the plurality of prefetch information into the corresponding memory pages in the memory.
  • In the embodiments of the present application, the prefetch information is predicted according to the historical memory access information, so the data corresponding to the prefetch information is read into the memory in a targeted way rather than by blindly prefetching data at multiple consecutive memory page addresses. This means the solution has a higher prefetch hit rate, which can effectively reduce the number of subsequent page fault exceptions and effectively reduce memory access latency; it also prefetches data more efficiently, consumes fewer memory resources, and does not leave memory resources tight.
  • When the memory data to be read by the computer device is not in the memory pages included in the memory, a page fault exception occurs. When a page fault exception occurs, the information of the memory page where the page fault exception occurs this time is determined, that is, the information of the target memory page is determined, and the first information is obtained.
  • The information of the memory page is any information that can identify the memory page, such as the address of the memory page or the number of the memory page. In the embodiments of the present application, the address of the memory page is taken as the memory page information by way of example.
  • the computer device converts the virtual address of the memory data to be read this time into the address of the target memory page to obtain the first address.
  • the first address is the start address of the target memory page.
  • the historical memory access information is determined according to the sequence relationship between memory pages in which a page fault exception occurs during historical memory access.
  • Optionally, the computer device predicts the plurality of prefetch information corresponding to the first information according to the historical memory access information by: obtaining multiple pieces of prefetch information corresponding to the first information according to the association relationship, stored in the historical memory access information, between the information of memory pages in which page fault exceptions occurred and the prefetch information.
  • The historical memory access information includes serial numbers and the correspondence between page fault information and prefetch information. The page fault information refers to the information of the memory page where a page fault exception occurred, and the serial number is obtained by performing a hash operation on the page fault information.
  • The computer device obtains the plurality of prefetch information corresponding to the first information according to this association relationship by: performing a hash operation on the first information to obtain a first serial number, and then, according to the first serial number and the first information, searching the historical memory access information for the corresponding pieces of prefetch information.
  • The page fault address is the address of a memory page, stored in the historical memory access information, that triggered a page fault exception. The historical memory access information can store multiple serial numbers and their corresponding records; multiple page fault addresses can be stored in the record corresponding to each serial number, and multiple prefetch addresses can be stored in the record corresponding to each page fault address.
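  • Purely as an illustration, the table layout described above can be sketched in C as follows. The fixed sizes MLCT_ROWS, MLCT_ASSOC and MLCT_PD stand in for the configurable row number threshold, first quantity threshold and prefetch depth, and the flat prefetch array ignores the related-group organisation described further below; none of these names or sizes come from the text itself.

      #include <stdint.h>

      /* Illustrative capacities; the real thresholds (ROW, ASSOC, PD) are
       * configurable parameters, not the fixed values used here. */
      #define MLCT_ROWS  16  /* row number threshold: bounds the serial number   */
      #define MLCT_ASSOC  4  /* first quantity threshold: fault entries per row   */
      #define MLCT_PD     8  /* prefetch depth: prefetch addresses kept per entry */

      /* One page fault entry: the faulting page address plus the prefetch
       * addresses that historically followed it. */
      struct fault_entry {
          uint64_t fault_addr;              /* page address that triggered the fault */
          uint64_t prefetch_addr[MLCT_PD];  /* candidate prefetch page addresses     */
          int      prefetch_count;
      };

      /* One row of the table, selected by hashing the faulting page address. */
      struct mlct_row {
          struct fault_entry entries[MLCT_ASSOC];
          int                entry_count;
      };

      /* The multi-level correlation table: rows indexed by serial number. */
      struct mlct {
          struct mlct_row rows[MLCT_ROWS];
      };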
  • a row number threshold (ROW) is configured in the computer device, and the row number threshold is used to indicate the maximum value of the hash result of the memory page information, that is, to limit the maximum value of the sequence number.
  • The historical memory access information may already store multiple pieces of prefetch information corresponding to the first information, or it may not yet store any prefetch information corresponding to the first information.
  • Based on this, the computer device searches the historical memory access information for the corresponding pieces of prefetch information by: searching the historical memory access information for the record where the first serial number and the first information are located; and, if that record is found, searching that record for the corresponding pieces of prefetch information.
  • Optionally, the computer device searches that record for the corresponding prefetch information according to the prefetch depth. That is, after finding the record in which the first serial number and the first information are located, the computer device takes, from that record, a plurality of pieces of prefetch information whose total number does not exceed the prefetch depth as the found prefetch information.
  • It should be noted that a prefetch depth (PD) is also configured in the computer device; the prefetch depth indicates the maximum number of pieces of prefetch information acquired each time.
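  • A minimal lookup sketch consistent with the description above, reusing the illustrative struct mlct layout from the previous sketch; the hash (page address modulo the number of rows) and the linear scan within a row are assumptions made for clarity, not the algorithm prescribed by the text.

      #include <stddef.h>
      #include <stdint.h>
      /* Uses struct mlct, struct fault_entry and MLCT_ROWS from the layout sketch above. */

      /* Look up prefetch addresses for a faulting page address (the first
       * information). Returns the number of prefetch addresses written to
       * 'out' (at most 'prefetch_depth'), or 0 if no record is found. */
      size_t mlct_lookup(const struct mlct *t, uint64_t first_addr,
                         uint64_t *out, size_t prefetch_depth)
      {
          /* First serial number: hash of the first information. */
          size_t row_idx = (size_t)(first_addr % MLCT_ROWS);
          const struct mlct_row *row = &t->rows[row_idx];

          for (int i = 0; i < row->entry_count; i++) {
              const struct fault_entry *e = &row->entries[i];
              if (e->fault_addr != first_addr)
                  continue;
              /* Record found: return at most 'prefetch_depth' prefetch addresses. */
              size_t n = (size_t)e->prefetch_count;
              if (n > prefetch_depth)
                  n = prefetch_depth;
              for (size_t k = 0; k < n; k++)
                  out[k] = e->prefetch_addr[k];
              return n;
          }
          return 0;  /* no record: the caller falls back to updating the table */
      }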
  • Optionally, after the computer device searches the historical memory access information for the record where the first serial number and the first information are located, the method further includes: if that record is not found in the historical memory access information, updating the historical memory access information according to the first serial number and the first information.
  • The computer device updates the historical memory access information according to the first serial number and the first information as follows: when the historical memory access information stores neither the first serial number nor the first information, a record for the first serial number and the first information is created in the historical memory access information to update it; when the historical memory access information already stores the first serial number but not the first information, the first information is stored in the record of the first serial number to update the historical memory access information.
  • The computer device stores the first information in the record of the first serial number as follows: if the number of pieces of page fault information stored in the record of the first serial number has not reached the first quantity threshold, the first information is stored in the record of the first serial number; if the number of pieces of page fault information stored in the record of the first serial number has reached the first quantity threshold, the page fault information with the earliest storage time and its corresponding prefetch information are deleted from the record of the first serial number, and then the first information is stored in that record.
  • the computer device is further configured with a first quantity threshold (ASSOC), and the first quantity threshold is used to indicate the maximum quantity of page fault information that can be stored in a record of the same serial number.
  • In other words, the computer device deletes the earliest stored page fault information and its corresponding prefetch information from the record of the first serial number and stores the first information in the record corresponding to the first serial number, that is, it eliminates old information and keeps the latest information in the historical memory access information.
  • the above-mentioned method of storing the first information in the record of the first serial number can be understood as a least recently used (LRU) method, in which the earliest stored page fault information is eliminated.
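  • A sketch of this LRU-style insertion, again using the illustrative struct mlct layout from the earlier sketch; it assumes entries in a row are stored in arrival order, so index 0 always holds the earliest stored page fault information.

      #include <stdint.h>
      /* Uses struct mlct, struct fault_entry, MLCT_ROWS and MLCT_ASSOC from the layout sketch above. */

      /* Insert the first information (a faulting page address) into the row
       * selected by the first serial number, evicting the earliest stored
       * entry when the row already holds ASSOC entries. */
      void mlct_insert_fault(struct mlct *t, uint64_t first_addr)
      {
          struct mlct_row *row = &t->rows[first_addr % MLCT_ROWS];

          /* Already present: nothing to create. */
          for (int i = 0; i < row->entry_count; i++)
              if (row->entries[i].fault_addr == first_addr)
                  return;

          if (row->entry_count == MLCT_ASSOC) {
              /* Row full: drop the oldest entry (index 0) and shift the rest,
               * i.e. LRU-style elimination of the earliest stored fault. */
              for (int i = 1; i < MLCT_ASSOC; i++)
                  row->entries[i - 1] = row->entries[i];
              row->entry_count--;
          }

          /* Store the new fault entry with no prefetch information yet. */
          struct fault_entry *e = &row->entries[row->entry_count++];
          e->fault_addr = first_addr;
          e->prefetch_count = 0;
      }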
  • Optionally, the historical memory access information can also be updated according to the page fault queue. That is, the method further includes: updating the historical memory access information according to the page fault queue, where the page fault queue is used to store, in chronological order, the information of memory pages in which page fault exceptions occurred. It should be noted that a page fault queue (miss queue, MQ) is also stored in the computer device.
  • The computer device updates the historical memory access information according to the page fault queue as follows: the first information is stored in the page fault queue; the memory page information located before the first information in the page fault queue, up to a second quantity threshold, is acquired to obtain one or more pieces of second information; and the first information is stored in the historical memory access information as prefetch information corresponding to each of the one or more pieces of second information.
  • It should be noted that the computer device is also configured with a page fault queue length (MQ length, MQ_L); the page fault queue length indicates the maximum amount of memory page information that can be stored in the page fault queue, so as to ensure the timeliness of the stored memory page information.
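  • The page fault queue might look like the sketch below; the fixed MQ_L value, the shift-based eviction and the helper names are assumptions chosen for clarity.

      #include <stddef.h>
      #include <stdint.h>

      /* A bounded page fault queue (miss queue, MQ): faulting page addresses
       * in chronological order, at most MQ_L of them. */
      #define MQ_L 32

      struct miss_queue {
          uint64_t addr[MQ_L];
          size_t   count;   /* valid entries; the newest is at index count-1 */
      };

      /* Append the first information; drop the oldest entry when full so the
       * queue keeps only the most recent MQ_L faults (data timeliness). */
      static void mq_push(struct miss_queue *q, uint64_t first_addr)
      {
          if (q->count == MQ_L) {
              for (size_t i = 1; i < MQ_L; i++)
                  q->addr[i - 1] = q->addr[i];
              q->count--;
          }
          q->addr[q->count++] = first_addr;
      }

      /* Collect up to 'limit' (the second quantity threshold) entries located
       * immediately before the newly pushed first information; these are the
       * pieces of second information whose records will gain the new fault as
       * a prefetch candidate. Returns how many were copied to 'out'. */
      static size_t mq_preceding(const struct miss_queue *q, size_t limit,
                                 uint64_t *out)
      {
          if (q->count == 0)
              return 0;
          size_t last = q->count - 1;          /* position of the first information */
          size_t n = last < limit ? last : limit;
          for (size_t i = 0; i < n; i++)
              out[i] = q->addr[last - 1 - i];  /* nearest predecessor first */
          return n;
      }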
  • The computer device stores the first information in the historical memory access information as prefetch information corresponding to each of the one or more pieces of second information as follows: according to the positional relationship in the page fault queue between the first information and each piece of second information, the association relationship between the first information and each piece of second information is stored in the historical memory access information.
  • Each of the one or more pieces of second information corresponds to one or more related groups; the number of these related groups is the second quantity threshold, each related group corresponds to one or more information positions, each related group corresponds to a relevance level, and each related group is used to store prefetch information. Accordingly, the computer device stores, according to the positional relationship in the page fault queue between the first information and each piece of second information, the association relationship between the first information and each piece of second information in the historical memory access information as follows: one piece of second information is selected from the one or more pieces of second information, and the following operations are performed on the selected second information until they have been performed on every piece of the one or more pieces of second information: the relevance level between the first information and the selected second information is determined according to their positions in the page fault queue, obtaining a reference level; and the first information is stored in the first information position of the target related group, where the target related group is the related group, among those corresponding to the selected second information, whose relevance level is the reference level.
  • It should be noted that prefetch information (that is, memory page information) may already be stored in the first information position of the target related group. In that case, the computer device first needs to move and/or delete the prefetch information stored in the related groups of the corresponding second information, and then store the first information in the first information position of the target related group.
  • It should be noted that the computer device is further configured with a third quantity threshold (SUCC), which indicates the maximum quantity of prefetch information that can be stored in each related group.
  • Further, the number of the one or more information positions is the third quantity threshold, and the one or more related groups are arranged in order of relevance level. The computer device stores the first information in the first information position of the target related group as follows:
  • If the target related group is the last related group corresponding to the selected second information and it has a free information position, the computer device moves each piece of memory page information stored in the target related group back by one information position and then stores the first information in the first information position;
  • If the target related group is the last related group corresponding to the selected second information and it has no free information position, the computer device deletes the last piece of memory page information in the target related group, moves the remaining memory page information back by one information position, and then stores the first information in the first information position;
  • If the target related group is not the last related group corresponding to the selected second information and there is a free information position in a related group after the target related group, the computer device moves the target related group and each piece of memory page information before the first free information position in the related groups after the target related group back by one information position, and then stores the first information in the first information position;
  • If the target related group is not the last related group corresponding to the selected second information and there is no free information position in the related groups after the target related group, the computer device deletes the last piece of memory page information in the last related group corresponding to the selected second information, moves the rest of the memory page information in the target related group and the related groups after it back by one information position, and then stores the first information in the first information position.
  • In other words, the computer device stores the first information in the first information position of the target related group by shifting the existing entries back in sequence, that is, it uses a most recently used (MRU) method to insert the first information into the target related group corresponding to each piece of second information.
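  • The cascading shift described by the cases above can be pictured with the following simplified sketch, which flattens the related groups of one fault entry into a single array of information positions and uses the address value 0 to mark a free position; the constants and the flattening itself are assumptions, not a layout prescribed by the text.

      #include <stdint.h>

      #define N_GROUPS 2   /* related groups per fault entry (second quantity threshold) */
      #define SUCC     4   /* information positions per group (third quantity threshold) */

      /* slot[g * SUCC + p] is information position p of related group g;
       * group 0 has the highest relevance level, and 0 marks a free position. */
      struct related_groups {
          uint64_t slot[N_GROUPS * SUCC];
      };

      /* MRU-style insertion of 'first_addr' at the first information position
       * of the target related group 'g'. Existing entries from that position
       * up to the first free slot (or the very last slot, which is then
       * dropped) are shifted back by one information position. */
      void groups_insert_mru(struct related_groups *r, int g, uint64_t first_addr)
      {
          uint64_t *flat  = &r->slot[g * SUCC];     /* flattened view starting at group g */
          int       total = (N_GROUPS - g) * SUCC;

          /* Find the first free position at or after the insertion point;
           * if there is none, the last slot is overwritten (dropped). */
          int end = total - 1;
          for (int i = 0; i < total; i++) {
              if (flat[i] == 0) { end = i; break; }
          }

          /* Shift everything before 'end' back by one position. */
          for (int i = end; i > 0; i--)
              flat[i] = flat[i - 1];

          flat[0] = first_addr;   /* store at the first information position */
      }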
  • In other words, after each page fault exception occurs, the computer device obtains prefetch information from the stored historical memory access information and updates the historical memory access information.
  • In the embodiments of the present application, a prefetching algorithm is used to predict the prefetch information. That is, the computer device is configured with a prefetching algorithm; by running it, the computer device continuously updates the historical memory access information and records the regularities of historical memory access in that information.
  • The computer device may store the historical memory access information in any data storage format, for example, in table form.
  • The historical memory access information in tabular form may be referred to as a multi-level correlation table (MLCT).
  • In this way, each time a page fault exception occurs, a plurality of prefetch information corresponding to the memory page of the current page fault exception is obtained, and the historical memory access information (such as the multi-level correlation table) is updated. Since the historical memory access information is gradually built up according to the sequence relationship between the memory pages in which page fault exceptions occurred during historical memory access, the multiple pieces of prefetch information obtained from it are very likely to correspond to memory data that was accessed consecutively in the past; that is, the data corresponding to the multiple pieces of prefetch information read by this solution is very likely to be the memory data that the processor will access next. In other words, this solution pre-reads memory data more accurately, which avoids serious waste of memory resources, reduces the probability of further page fault exceptions, improves the prefetch hit rate, and effectively reduces memory access latency.
  • Non-sequential memory access modes include the strided mode and the mixed mode. By learning the regularities of historical memory access through the above method and establishing historical memory access information such as the MLCT, instead of blindly prefetching data at consecutive memory page addresses into memory, this solution achieves good results for these access modes as well.
  • Optionally, the computer device reads the data corresponding to the plurality of prefetch information into the corresponding memory pages in the memory by: reading the corresponding data from a specified storage space into the corresponding memory pages in the memory according to the plurality of prefetch information.
  • the designated storage space is the storage space of the SWAP partition divided on the disk included in the device, or the storage space of the XL-FLASH memory included in the device, or the storage space of the remote storage.
  • That is, a designated storage space is set in the computer device for storing the data of cold pages, that is, data that is not currently stored in the memory pages of the memory.
  • The read and write speed of an XL-FLASH device is faster than that of a SWAP partition on disk, its price is lower than that of a memory module (such as dynamic random access memory (DRAM)), and its capacity is large, providing several times more space than memory. In this case, the accessible memory space includes the DRAM and the XL-FLASH device; that is, by adding XL-FLASH devices the accessible memory space is increased several times, so the memory space visible to users grows considerably. The remote storage is, for example, a storage device such as a disk or XL-FLASH included in a remote computer device; if the device wants to access the storage space of the remote storage, it can do so over the network, for example over a high-speed Internet connection.
  • Optionally, the method further includes: determining cold pages in the memory according to the access time and the number of accesses of the memory pages in the memory within a first time period; and moving the data on the cold pages from the memory to the designated storage space. That is, in addition to prefetching memory data from the designated storage space through the above method, the processor can also scan for and eliminate cold pages in the memory, moving the data on those cold pages to the designated storage space. In this way, more memory space can be freed up for hot memory data, improving memory resource utilization.
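  • A rough sketch of such a cold-page scan follows; the per-page statistics structure, the two thresholds and the evict callback are all assumptions, since the text only says that cold pages are determined from the access time and the number of accesses within the first time period.

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>

      /* Per-page access statistics collected over the first time period. */
      struct page_stats {
          uint64_t last_access;   /* timestamp of the most recent access          */
          uint64_t access_count;  /* number of accesses in the first time period  */
      };

      /* A page is treated as cold if it was neither accessed recently nor
       * accessed often during the observation window. */
      static bool is_cold_page(const struct page_stats *s, uint64_t now,
                               uint64_t idle_threshold, uint64_t count_threshold)
      {
          return (now - s->last_access) > idle_threshold &&
                 s->access_count < count_threshold;
      }

      /* Scan all pages; cold pages are handed to the caller-supplied 'evict'
       * callback, which stands in for writing the page out to the designated
       * storage space (SWAP partition, XL-FLASH or remote storage) and
       * freeing the memory page. */
      static void scan_and_evict(const struct page_stats *pages, size_t npages,
                                 uint64_t now, uint64_t idle_threshold,
                                 uint64_t count_threshold,
                                 void (*evict)(size_t page_index))
      {
          for (size_t i = 0; i < npages; i++)
              if (is_cold_page(&pages[i], now, idle_threshold, count_threshold))
                  evict(i);
      }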
  • Optionally, the method further includes: receiving a prefetch algorithm performance query instruction; and displaying prefetch algorithm performance information, where the prefetch algorithm performance information includes a prefetch accuracy rate and a prefetch coverage rate. The prefetch accuracy rate is determined by the total number of prefetches and the number of prefetch hits, and the prefetch coverage rate is determined by the total number of prefetches and the total number of accesses. The total number of prefetches refers to the total amount of prefetch information acquired in a second time period, the number of prefetch hits refers to the number of memory pages, among the memory pages corresponding to all prefetch information acquired in the second time period, that were actually accessed, and the total number of accesses refers to the total number of memory pages accessed in the second time period.
  • the prefetch accuracy rate can represent the accuracy of the prefetch algorithm to a certain extent
  • the prefetch coverage rate can represent the effectiveness of the prefetch algorithm for applications running on the device to a certain extent.
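  • The text states only which counters each metric is determined by, not the exact formulas; the sketch below shows one plausible reading (accuracy as prefetch hits over total prefetches, coverage as total prefetches over total accesses) and should be read as an assumption rather than the definition used by the solution.

      #include <stdio.h>

      /* Counters collected over the second time period. */
      struct prefetch_stats {
          unsigned long total_prefetches;  /* all prefetch information acquired            */
          unsigned long prefetch_hits;     /* prefetched pages that were actually accessed */
          unsigned long total_accesses;    /* all memory pages accessed                    */
      };

      static double prefetch_accuracy(const struct prefetch_stats *s)
      {
          return s->total_prefetches
                     ? (double)s->prefetch_hits / (double)s->total_prefetches
                     : 0.0;
      }

      static double prefetch_coverage(const struct prefetch_stats *s)
      {
          return s->total_accesses
                     ? (double)s->total_prefetches / (double)s->total_accesses
                     : 0.0;
      }

      int main(void)
      {
          struct prefetch_stats s = { .total_prefetches = 800,
                                      .prefetch_hits    = 600,
                                      .total_accesses   = 2000 };
          printf("accuracy = %.2f, coverage = %.2f\n",
                 prefetch_accuracy(&s), prefetch_coverage(&s));
          return 0;
      }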
  • Optionally, the method further includes: receiving a prefetch parameter adjustment instruction, where the prefetch parameter adjustment instruction is determined by user feedback on the prefetch algorithm performance information; and updating the historical memory access information according to the prefetch parameter adjustment instruction. This can be understood as follows: based on the prefetching algorithm, the user can configure the prefetch parameters included in the prefetching algorithm, such as the row number threshold, the first quantity threshold, the second quantity threshold, the third quantity threshold, the prefetch depth and the page fault queue length, and can also adjust the prefetch parameters already configured in the computer device.
  • Generally, the larger the multi-level correlation table that results after the user adjusts the prefetch parameters, the more historical memory access information it can record, and the better the performance of the prefetching algorithm.
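  • Gathered in one place, the tunable parameters named above might be represented as follows; the field names and default values are illustrative assumptions, since the text names the parameters but gives no concrete values.

      /* User-tunable prefetch parameters. */
      struct prefetch_params {
          unsigned row_threshold;   /* ROW: maximum value of the serial number                 */
          unsigned assoc;           /* first quantity threshold: fault entries per row         */
          unsigned mq_window;       /* second quantity threshold: predecessors taken from MQ   */
          unsigned succ;            /* SUCC, third quantity threshold: slots per related group */
          unsigned prefetch_depth;  /* PD: maximum prefetch information returned per fault     */
          unsigned mq_length;       /* MQ_L: capacity of the page fault queue                  */
      };

      /* Example defaults: enlarging row_threshold and assoc grows the
       * multi-level correlation table, letting it record more history at the
       * cost of more memory. */
      static const struct prefetch_params default_params = {
          .row_threshold = 16, .assoc = 4, .mq_window = 2,
          .succ = 4, .prefetch_depth = 8, .mq_length = 32,
      };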
  • In a second aspect, an apparatus for processing a memory page fault exception is provided; the apparatus has the function of implementing the behavior of the method for processing a memory page fault exception in the first aspect.
  • the apparatus for processing a memory page fault exception includes one or more modules, and the one or more modules are used to implement the processing method for a memory page fault exception provided in the first aspect.
  • a device for processing a memory page fault exception includes:
  • the first determining module is used to determine the information of the target memory page, and obtain the first information, and the target memory page is the memory page where the page fault exception occurs this time;
  • a prediction module configured to predict a plurality of prefetch information corresponding to the first information according to the historical memory access information, and the historical memory access information is used to characterize the law of historical memory access;
  • the reading module is used for reading the data corresponding to the plurality of prefetch information to the corresponding memory page in the memory.
  • the historical memory access information is determined according to the sequence relationship between memory pages in which a page fault exception occurs during historical memory access;
  • the prediction module includes:
  • an obtaining unit configured to obtain a plurality of prefetch information corresponding to the first information according to the association relationship between the information of the memory page in which the page fault exception occurs in the historical memory access information and the prefetch information;
  • Optionally, the historical memory access information includes serial numbers and the correspondence between page fault information and prefetch information, the page fault information refers to the information of the memory page where a page fault exception occurred, and the serial number is obtained by performing a hash operation on the page fault information;
  • the acquisition unit includes:
  • a hash subunit configured to perform a hash operation on the first information to obtain the first serial number
  • the search subunit is configured to search for a plurality of corresponding pieces of prefetch information from the historical memory access information according to the first sequence number and the first information.
  • Optionally, the search subunit is specifically configured to: search the historical memory access information for the record where the first serial number and the first information are located, and, if that record is found, search it for the corresponding pieces of prefetch information.
  • Optionally, the search subunit is further specifically configured to: search the record where the first serial number and the first information are located for the corresponding pieces of prefetch information according to the prefetch depth.
  • the device also includes:
  • a first update module, configured to update the historical memory access information according to the first serial number and the first information if multiple pieces of prefetch information corresponding to the first information are not obtained, where the first serial number is obtained by performing a hash operation on the first information.
  • the first update module includes:
  • a first update unit, configured to create, in the historical memory access information, the record where the first serial number and the first information are located, so as to update the historical memory access information, when the historical memory access information stores neither the first serial number nor the first information;
  • the second updating unit is configured to store the first information in the record of the first serial number to update the historical memory access information when the first serial number is stored in the historical memory access information but the first information is not stored.
  • the second update unit includes:
  • the first storage subunit is used to store the first information in the record of the first serial number if the quantity of the page fault information stored in the record of the first serial number does not reach the first quantity threshold;
  • a second storage subunit, configured to delete the page fault information with the earliest storage time and its corresponding prefetch information from the record of the first serial number if the quantity of page fault information stored in the record of the first serial number has reached the first quantity threshold, and to then store the first information in the record of the first serial number.
  • the device also includes:
  • the second update module is used to update historical memory access information according to the page fault queue, and the page fault queue is used to store the information of the memory pages in which the page fault exception occurs in chronological order.
  • the second update module includes:
  • a first storage unit for storing the first information in the page fault queue
  • an acquisition unit configured to acquire the memory page information that is located before the first information and whose quantity does not exceed the second quantity threshold in the page fault queue, and obtains one or more second information
  • the second storage unit is configured to store the first information as prefetch information corresponding to each of the one or more second information in the historical memory access information.
  • the second storage unit includes:
  • a third storage subunit, configured to store, in the historical memory access information, the association relationship between the first information and each piece of second information according to the positional relationship in the page fault queue between the first information and each piece of the one or more pieces of second information.
  • each of the one or more second information corresponds to one or more related groups, the number of the one or more related groups is the second quantity threshold, and each related group corresponds to one or multiple information locations, each related group corresponds to a related level, and each related group is used to store prefetch information;
  • Optionally, the third storage subunit is specifically configured to: select one piece of second information from the one or more pieces of second information, and perform the following operations on the selected second information until they have been performed on every piece of the one or more pieces of second information: determine the relevance level between the first information and the selected second information according to their positions in the page fault queue, obtaining a reference level; and store the first information in the first information position of the target related group, where the target related group is the related group, among those corresponding to the selected second information, whose relevance level is the reference level.
  • Optionally, the number of the one or more information positions is the third quantity threshold, and the one or more related groups are arranged in order of relevance level; the third storage subunit is specifically configured to:
  • if the target related group is the last related group corresponding to the selected second information and it has a free information position, move each piece of memory page information stored in the target related group back by one information position and then store the first information in the first information position;
  • if the target related group is the last related group corresponding to the selected second information and it has no free information position, delete the last piece of memory page information in the target related group, move the remaining memory page information back by one information position, and then store the first information in the first information position;
  • if the target related group is not the last related group corresponding to the selected second information and there is a free information position in a related group after the target related group, move the target related group and each piece of memory page information before the first free information position in the related groups after the target related group back by one information position, and then store the first information in the first information position;
  • if the target related group is not the last related group corresponding to the selected second information and there is no free information position in the related groups after the target related group, delete the last piece of memory page information in the last related group corresponding to the selected second information, move the rest of the memory page information in the target related group and the related groups after it back by one information position, and then store the first information in the first information position.
  • the reading module includes:
  • the reading unit is configured to read the corresponding data from the specified storage space to the corresponding memory page in the memory according to the plurality of prefetch information.
  • the designated storage space is the storage space of the SWAP partition divided on the disk included in the device, or the storage space of the XL-FLASH memory included in the device, or the storage space of the remote storage.
  • the device also includes:
  • the second determining module is configured to determine the cold page in the memory according to the access time and the access quantity of the memory page in the memory in the first time period;
  • the move module is used to move data on cold pages from memory to the specified storage space.
  • the device also includes:
  • a first receiving module configured to receive a prefetch algorithm performance query instruction
  • the display module is used to display the performance information of the prefetching algorithm, and the performance information of the prefetching algorithm includes the prefetching accuracy rate and the prefetching coverage rate;
  • wherein the prefetch accuracy rate is determined by the total number of prefetches and the number of prefetch hits, and the prefetch coverage rate is determined by the total number of prefetches and the total number of accesses; the total number of prefetches refers to the total amount of prefetch information obtained in the second time period, the number of prefetch hits refers to the number of memory pages, among the memory pages corresponding to all prefetch information obtained in the second time period, that were actually accessed, and the total number of accesses refers to the total number of memory pages accessed in the second time period.
  • the device also includes:
  • a second receiving module configured to receive a prefetch parameter adjustment instruction, where the prefetch parameter adjustment instruction is determined by user feedback on the performance information of the prefetch algorithm
  • a third update module, configured to update the historical memory access information according to the prefetch parameter adjustment instruction.
  • In a third aspect, a computer device is provided. The computer device includes a processor and a memory, where the memory is used to store a program for executing the method for processing a memory page fault exception provided in the first aspect and to store the data involved in implementing that method.
  • the processor is configured to execute programs stored in the memory.
  • The computer device may further include a communication bus, where the communication bus is used to establish a connection between the processor and the memory.
  • In another aspect, a computer-readable storage medium is provided, where instructions are stored in the computer-readable storage medium, and when the instructions run on a computer, the computer is caused to execute the method for processing a memory page fault exception described in the first aspect.
  • In another aspect, a computer program product containing instructions is provided which, when run on a computer, causes the computer to execute the method for processing a memory page fault exception described in the first aspect.
  • In the technical solutions above, the prefetch information is predicted according to the historical memory access information, so the data corresponding to the prefetch information is read into the memory in a targeted way rather than by blindly prefetching data at multiple consecutive memory page addresses. This means the solution has a higher prefetch hit rate, which can effectively reduce the number of subsequent page fault exceptions and effectively reduce memory access latency; it also prefetches data more efficiently, consumes fewer memory resources, and does not leave memory resources tight.
  • FIG. 1 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • FIG. 2 is a flowchart of a method for processing a memory page fault exception provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of a page fault queue provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of obtaining a prefetch address from a stored multi-level correlation table according to an embodiment of the present application
  • FIG. 5 is a flowchart of another method for processing a memory page fault exception provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a multi-level correlation table provided by an embodiment of the present application.
  • FIG. 7 is a flowchart of another method for processing a memory page fault exception provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a method for processing a memory page fault exception provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of another method for processing a memory page fault exception provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of an apparatus for processing a memory page fault exception provided by an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of another apparatus for processing a memory page fault exception provided by an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of another apparatus for processing a memory page fault exception provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of another apparatus for processing a memory page fault exception provided by an embodiment of the present application.
  • the network architecture and service scenarios described in the embodiments of the present application are for the purpose of illustrating the technical solutions of the embodiments of the present application more clearly, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application.
  • A person of ordinary skill in the art may know that, with the evolution of the network architecture and the emergence of new service scenarios, the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems.
  • FIG. 1 is a schematic structural diagram of a computer device according to an embodiment of the present application.
  • the computer device includes one or more processors 101 , a communication bus 102 , memory 103 , and one or more communication interfaces 104 .
  • the processor 101 is a general-purpose central processing unit (CPU), a network processor (NP), a microprocessor, or one or more integrated circuits for implementing the solution of the present application, for example, an application-specific integrated circuit ( application-specific integrated circuit, ASIC), programmable logic device (programmable logic device, PLD) or a combination thereof.
  • The above-mentioned PLD is a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a generic array logic (GAL), or any combination thereof.
  • the processor 101 has the function of implementing the method for processing a memory page fault exception provided by the embodiment of the present application.
  • For the specific implementation, refer to the detailed descriptions of the embodiments in FIG. 2 to FIG. 9 below.
  • the communication bus 102 is used to transfer information between the aforementioned components.
  • the communication bus 102 is divided into an address bus, a data bus, a control bus, and the like.
  • only one thick line is used in the figure, but it does not mean that there is only one bus or one type of bus.
  • The memory 103 is a read-only memory (ROM), a random access memory (RAM), an electrically erasable programmable read-only memory (EEPROM), an optical disc (including a compact disc read-only memory (CD-ROM), a compact disc, a laser disc, a digital versatile disc, a Blu-ray disc, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory 103 exists independently and is connected to the processor 101 through the communication bus 102, or the memory 103 is integrated with the processor 101.
  • the memory 103 includes a memory and a designated storage space, such as the storage space of an XL-FLASH device.
  • The communication interface 104 uses any transceiver-type device for communicating with other devices or a communication network.
  • the communication interface 104 includes a wired communication interface, and optionally, a wireless communication interface.
  • the wired communication interface is, for example, an Ethernet interface.
  • the Ethernet interface is an optical interface, an electrical interface or a combination thereof.
  • the wireless communication interface is a wireless local area network (wireless local area network, WLAN) interface, a cellular network communication interface, or a combination thereof.
  • the computer device includes multiple processors, such as processor 101 and processor 105 as shown in FIG. 1 .
  • Each of these processors is a single-core processor or a multi-core processor.
  • a processor herein refers to one or more devices, circuits, and/or processing cores for processing data (eg, computer program instructions).
  • the computer device further includes an output device 106 and an input device 107 .
  • the output device 106 communicates with the processor 101 and can display information in a variety of ways.
  • the output device 106 is a liquid crystal display (LCD), a light emitting diode (LED) display device, a cathode ray tube (CRT) display device, a projector, or the like.
  • the input device 107 communicates with the processor 101 and can receive user input in a variety of ways.
  • the input device 107 is a mouse, a keyboard, a touch screen device, a sensor device, or the like.
  • the memory 103 is further configured to store program codes 110 for executing the solutions of the present application, and the processor 101 can execute the program codes 110 stored in the memory 103 .
  • the program code 110 includes one or more software modules, and the computer device can use the processor 101 and the program code 110 in the memory 103 to implement the processing method for the memory page fault exception provided in the embodiment of FIG. 2 below.
  • For example, the program code 110 includes the first determining module, the prediction module, and the reading module shown in the embodiment of FIG. 10. When a page fault exception occurs in the memory, the processor 101 uses the first determining module, the prediction module, and the reading module to determine the information of the memory page where the page fault occurred, predict the prefetch information, and pre-read the memory data into the corresponding memory pages.
  • FIG. 2 is a flowchart of a method for processing a memory page fault exception provided by an embodiment of the present application, and the method is applied to a computer device. Please refer to FIG. 2 , the method includes the following steps.
  • Step 201 Determine the information of the target memory page, and obtain the first information, and the target memory page is the memory page where the page fault exception occurs this time.
  • the information of the memory page where the page fault exception occurs this time is determined, that is, the information of the target memory page is determined, and the first information is obtained.
  • The information of the memory page may be any kind of information that can identify the memory page, such as the address of the memory page or the number of the memory page. In the embodiments of the present application, the address of the memory page is used as the memory page information by way of example. That is, when a page fault exception occurs, the address of the memory page where the page fault exception occurs this time is determined, that is, the address of the target memory page is determined, and the first address is obtained.
  • the computer device converts the virtual address of the memory data to be read this time into the address of the target memory page to obtain the first address, and the first information is the first address.
  • the first address is the starting address of the target memory page, and the starting address is a virtual address.
  • For example, assume the computer device divides the memory into 25 consecutive memory pages of 4K each, where each memory page corresponds to a memory page address and each memory page address is the start address of the corresponding memory page. For example, the address of the first memory page is 0000 and the address of the second memory page is 0004. If the virtual address of the memory data that triggered this page fault exception is 0011, then the quotient of 0011 divided by 4, multiplied by 4, gives the start address of the memory page where this page fault exception occurred, namely 0008; that is, the first address is 0008.
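  • The conversion from a faulting virtual address to the first address amounts to rounding the address down to a page boundary, as in the small sketch below; PAGE_SIZE is set to 4 only to match the worked example, a real system would use 4096-byte pages.

      #include <stdint.h>
      #include <stdio.h>

      #define PAGE_SIZE 4   /* matches the example; typically 4096 (4 KB) */

      /* Round a virtual address down to the start address of its memory page. */
      static uint64_t page_start(uint64_t vaddr)
      {
          /* For power-of-two page sizes this equals vaddr & ~(PAGE_SIZE - 1). */
          return (vaddr / PAGE_SIZE) * PAGE_SIZE;
      }

      int main(void)
      {
          /* Faulting virtual address 0011 -> page start (first address) 0008. */
          printf("%04llu\n", (unsigned long long)page_start(11));
          return 0;
      }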
  • Step 202 Predict a plurality of prefetch information corresponding to the first information according to the historical memory access information, and the historical memory access information is used to represent the regularity of the historical memory access.
  • the computer device predicts a plurality of prefetch information corresponding to the first information according to the historical memory access information .
  • the historical memory access information is used to represent the law of historical memory access.
  • the first information is a first address
  • the prefetch information is a prefetch address
  • the computer device predicts a plurality of prefetch addresses corresponding to the first address according to historical memory access information.
  • the historical memory access information is determined according to the sequence relationship between memory pages in which a page fault exception occurs during historical memory access.
  • the computer device acquires a plurality of prefetch information corresponding to the first information from the stored historical memory access information.
  • the historical memory access information may be referred to as page fault associated information.
  • the computer device acquires a plurality of prefetch information corresponding to the first information according to the association relationship between the information of the memory page in which the page fault exception occurs in the historical memory access information and the prefetch information. That is, the association relationship is stored in the computer device in the embodiment of the present application, and the computer device can predict the prefetch information according to the association relationship.
  • In the embodiments of the present application, the historical memory access information includes serial numbers and the correspondence between page fault information and prefetch information, where the page fault information refers to the information of the memory page where a page fault exception occurred and the serial number is obtained by performing a hash operation on the page fault information.
  • Based on this, the implementation process by which the computer device obtains the plurality of prefetch information corresponding to the first information from the stored historical memory access information is as follows: the computer device performs a hash operation on the first information to obtain a first serial number and then, according to the first serial number and the first information, looks up the corresponding pieces of prefetch information in the historical memory access information. For example, the computer device performs a hash operation on the first address to obtain the first serial number and, according to the first serial number and the first address, finds the corresponding multiple prefetch addresses in the historical memory access information.
  • the page fault address is the address of the memory page that triggers the page fault exception stored in the historical memory access information.
  • The historical memory access information can store multiple serial numbers and corresponding records; the record corresponding to each serial number can store multiple page fault addresses, and multiple prefetch addresses can be stored in the record corresponding to each page fault address.
  • For example, assume the memory includes 100 memory pages, that is, there are 100 memory page addresses, and the parameter of the hash operation is 10. After hashing the memory page addresses, at most 10 serial numbers can be obtained, and a maximum of 10 memory page addresses can be stored in the record corresponding to each serial number, so the historical memory access information can store 10 records corresponding to serial numbers, and the record corresponding to each serial number can store a maximum of 10 page fault addresses.
  • For example, assume the first address is 0024. After the computer device performs a hash operation on the first address, the first serial number is 4, and the computer device then searches the stored historical memory access information for the corresponding multiple prefetch addresses according to the first serial number 4 and the first address 0024.
• a row number threshold is configured in the computer device, and the row number threshold is used to indicate the maximum value of the hash result of memory page information (such as addresses), that is, to limit the maximum value of the sequence number. For example, if the row number threshold is 10, the sequence number can be 0-9 (or 1-10, etc.), that is, the maximum value of the sequence number is 9 (or 10, etc.).
• it should be noted that the historical memory access information may store a plurality of prefetch information corresponding to the first information, but it is also possible that no prefetch information corresponding to the first information is stored.
• the computer device searches the historical memory access information for the record where the first sequence number and the first information are located; if that record is found in the historical memory access information, the corresponding multiple prefetch information is searched for in that record.
• for example, the computer device searches the historical memory access information for the record where the first sequence number and the first address are located; if that record is found, the corresponding multiple prefetch addresses are searched for in it.
  • the computer device uses all the prefetch information in the record as the acquired multiple prefetch information.
  • the prefetch information is a prefetch address, and the computer device uses all the prefetch addresses found as multiple acquired prefetch addresses.
• optionally, the computer device searches for the corresponding pieces of prefetch information from the record where the first sequence number and the first information are located according to the prefetch depth. That is, after finding the record, the computer device takes the prefetch information in the record whose total quantity does not exceed the prefetch depth as the obtained multiple prefetch information. For example, the computer device takes the prefetch addresses in the found record whose total quantity does not exceed the prefetch depth as the obtained multiple prefetch addresses.
• a prefetch depth (PD) is also configured in the computer device, and the prefetch depth is used to indicate the maximum quantity of prefetch information (for example, addresses) acquired each time. If the total quantity of prefetch information in the record where the first sequence number and the first information are located does not exceed the PD, the computer device takes all the prefetch information in the record as the acquired multiple prefetch information. If the quantity of prefetch information in the record exceeds the prefetch depth, the computer device takes PD pieces of prefetch information in the record as the acquired multiple prefetch information; the PD pieces of prefetch information are selected randomly from the record, or selected according to the storage time order or the position order of the prefetch information.
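• For illustration, the following minimal Python sketch shows how at most PD prefetch addresses could be selected from a found record; the record layout, the function name and the keep-most-recent policy are assumptions of the sketch, not details of the embodiment.

```python
# Minimal sketch: selecting at most PD prefetch addresses from a found record.
# The record layout (a list of prefetch addresses per page fault address) and the
# selection-by-storage-order policy are assumptions for illustration.

def select_prefetch_addresses(record, prefetch_depth=None):
    """Return the prefetch addresses to use from one record.

    record: list of prefetch addresses stored for the page fault address,
            ordered from earliest stored to most recently stored.
    prefetch_depth: optional PD; when None, all addresses are returned.
    """
    if prefetch_depth is None or len(record) <= prefetch_depth:
        return list(record)          # take everything in the record
    # Otherwise keep only PD entries; here we keep the most recently stored ones,
    # which is one of the selection orders mentioned (storage-time order).
    return record[-prefetch_depth:]

# Example: a record holding five prefetch addresses, PD = 3
print(select_prefetch_addresses(["0x10", "0x14", "0x20", "0x24", "0x30"], 3))
```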
• if a plurality of prefetch information corresponding to the first information is not obtained, the computer device updates the historical memory access information according to the first sequence number and the first information, where the first sequence number is obtained by performing a hash operation on the first information.
• for example, the computer device searches the historical memory access information for the record where the first sequence number and the first address are located; if that record is not found, the computer device updates the historical memory access information according to the first sequence number and the first address.
• when the historical memory access information does not store the first sequence number and the first information, the computer device creates, in the historical memory access information, the record where the first sequence number and the first information are located, to update the historical memory access information; when the historical memory access information stores the first sequence number but does not store the first information, the computer device stores the first information in the record of the first sequence number to update the historical memory access information.
• for example, when the computer device searches for a prefetch address according to the address of the memory page that triggers a page fault exception for the first time, no information is stored yet in the historical memory access information.
  • the computer device stores the first address of the current page fault exception and the first serial number obtained by hashing the first address in the historical memory access information, so as to update the historical memory access information.
  • new serial numbers and corresponding page fault addresses are continuously added to the historical memory access information.
• an implementation manner for the computer device to store the first information in the record of the first sequence number is as follows: if the quantity of page fault information stored in the record of the first sequence number does not reach the first quantity threshold, the computer device stores the first information in the record of the first sequence number; if the quantity of page fault information stored in the record of the first sequence number reaches the first quantity threshold, the computer device deletes the page fault information with the earliest storage time and the corresponding prefetch information from the record of the first sequence number, and then stores the first information in the record of the first sequence number.
• the computer device is also configured with a first quantity threshold (ASSOC), and the first quantity threshold is used to indicate the maximum quantity of page fault information (such as addresses) that can be stored in the record of the same sequence number.
• that is, if the record is full, the computer device deletes the earliest-stored page fault information and the corresponding prefetch information from the record of the first sequence number, and stores the first information in the record of the first sequence number.
• in this way, old information is eliminated and the latest information is recorded in the historical memory access information.
• for example, assume the memory includes 100 memory pages, that is, there are 100 memory page addresses, and the row number threshold ROW is 10. After the memory page addresses are hashed, at most 10 sequence numbers can be obtained, and a maximum of 10 memory page addresses can be stored in the record of each sequence number, so that the historical memory access information can store 10 sequence numbers and the corresponding records, and at most 10 page fault addresses can be stored in the record of each sequence number.
• assume further that the first quantity threshold is 4, that is, each sequence number stores at most 4 page fault addresses, and the record of the first sequence number already stores 4 page fault addresses; the computer device then deletes the earliest-stored one of these 4 page fault addresses together with its corresponding prefetch addresses, and stores the first address in the record corresponding to the first sequence number.
• it can be seen that the value of the first quantity threshold ASSOC may be smaller than the maximum number of memory page addresses that each sequence number can store. For example, each sequence number can store at most 10 memory page addresses, but the first quantity threshold is set to 4, which is less than 10.
• in this way, by hashing the memory page addresses and setting the first quantity threshold to a smaller value, the computer device can reduce the amount of stored data and keep only relatively recent information in the historical memory access information, which ensures the accuracy of prefetching while speeding up the rate at which prefetch information is obtained from the historical memory access information.
• the above-mentioned method of storing the first information in the record of the first sequence number can be understood as an LRU (least recently used) method, in which the earliest-stored page fault information is eliminated.
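• As a concrete illustration of this LRU update, the Python sketch below stores a new page fault address into the record of its sequence number and evicts the earliest-stored address once the first quantity threshold ASSOC is reached; the hash function, the data layout and the parameter values are assumptions of the sketch.

```python
from collections import OrderedDict

ROW = 10     # row number threshold: hash results fall in 0..ROW-1 (assumed value)
ASSOC = 4    # first quantity threshold: max page fault addresses per sequence number

# history[sequence_number] is an OrderedDict mapping a page fault address to the
# list of prefetch addresses recorded for it, ordered by storage time.
history = {}

def seq_number(page_address):
    # Assumed hash: the remainder of the page address divided by ROW.
    return page_address % ROW

def store_page_fault(page_address):
    row = history.setdefault(seq_number(page_address), OrderedDict())
    if page_address in row:
        return                       # already recorded under this sequence number
    if len(row) >= ASSOC:
        # LRU-style eviction: drop the earliest-stored page fault address
        # together with its prefetch addresses.
        row.popitem(last=False)
    row[page_address] = []           # new record, no prefetch addresses yet

# Example: with ASSOC = 4, storing a fifth address evicts the earliest one.
for addr in (4, 14, 24, 34, 44):
    store_page_fault(addr)
print(list(history[4].keys()))       # [14, 24, 34, 44]
```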
  • the historical memory access information can also be updated according to the page fault queue.
• a page fault queue is also stored in the computer device, and the page fault queue is used to store, in chronological order, the information of the memory pages in which page fault exceptions occur, that is, to store in chronological order the information of the memory pages that triggered page fault exceptions during historical memory access.
  • the implementation manner of the computer device updating historical memory access information according to the page fault queue MQ includes the following steps:
• in a first manner (Method 1), the computer device stores the first information (for example, an address) at the end of the page fault queue.
• the computer device deletes expired data from the page fault queue at regular intervals, for example by deleting the memory page information whose storage time is more than a time threshold before the current time, or by keeping only a specified quantity of the most recently stored memory page information in the page fault queue and deleting the memory page information with earlier storage times, so as to save storage space and ensure the timeliness of the memory page information stored in the page fault queue.
  • the newly stored first information (eg, address) is always located at the end of the page fault queue.
  • the computer device is further configured with a page fault queue length MQ_L, and the page fault queue length is used to indicate the maximum number of memory page information (eg addresses) that can be stored in the page fault queue.
• the computer device starts counting from the first time memory page information is recorded in the MQ, and the value obtained after each count is used as the total number of page faults; that is, the computer device accumulates the number of page faults to obtain the total number of page faults.
• in a second manner (Method 2), the computer device divides the total number of page faults by the length of the page fault queue to obtain a remainder, determines the storage location of the first information in the page fault queue according to the remainder, and stores the first information at that storage location.
  • the length of the page fault queue is 128, that is, the page fault queue stores information on at most 128 memory pages where page fault exceptions occur, and the storage locations of the page fault queue include 0-127 (or 1-128).
• for example, if the total number of page faults is 36, the computer device divides 36 by 128 to obtain a remainder of 36; it then determines from the remainder that the storage location of the first information is 35 (or 36), and stores the first information at that storage location in the page fault queue.
• similarly, if the total number of page faults is 139, the computer device divides 139 by 128 to obtain a remainder of 11; it then determines from the remainder that the storage location of the first information is 10 (or 11), and stores the first information at that storage location in the page fault queue, that is, the memory page information previously stored at that storage location is overwritten.
  • the first information (eg, address) is stored in the page fault queue in a circular storage manner, and the newly stored first information is not necessarily located at the end of the page fault queue.
  • a second quantity threshold (LEVEL) is further configured in the computer device, and the second quantity threshold is used to indicate the maximum quantity of the second information obtained from the page fault queue.
• if the computer device stores the first information in the page fault queue according to Method 1 above, that is, always stores the first information at the end of the page fault queue, the computer device directly obtains, from the page fault queue, the memory page information that is located before the first information and whose quantity does not exceed the second quantity threshold, thereby obtaining the one or more pieces of second information.
• for example, if the memory page addresses currently stored in the page fault queue include m1, m2, m3, m4 and m5, the first address is m5, and the second quantity threshold is 3, the computer device acquires m2, m3 and m4 as the 3 acquired second addresses.
  • the computer device stores the first information in the page fault queue according to the above method 2, that is, the first information is not necessarily stored at the end of the page fault queue.
• if the quantity of memory page information located before the first information in the page fault queue is not less than the second quantity threshold, the computer device directly obtains the memory page information that is located before the first information and whose quantity does not exceed the second quantity threshold, that is, obtains the one or more pieces of second information.
• otherwise, the computer device obtains the memory page information located before the first information in the page fault queue, and also obtains part of the memory page information from the end of the queue forward, so that the quantity of the obtained one or more pieces of second information equals the second quantity threshold. That is, when the computer device stores the first information in the page fault queue by circular storage, it also acquires the second information by circular lookup in the forward direction.
  • the memory page addresses currently stored in the page fault queue include m1, m2, m3, m4, m5, m6, m7, m8, the first address is m2, and the second quantity threshold is 3, then the computer device acquires m1, m8 and m7 as the acquired 3 second addresses, so that when the page fault queue stores memory page addresses that are not less than the second number threshold in addition to the first address, One or more second addresses are guaranteed to be acquired in a quantity equal to the second quantity threshold.
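• The circular storage of the page fault queue and the wrap-around retrieval of the second addresses can be sketched in Python as follows; the queue length, the counters and the helper names are illustrative assumptions rather than details of the embodiment.

```python
MQ_L = 8           # page fault queue length (assumed small value for illustration)
LEVEL = 3          # second quantity threshold: how many second addresses to fetch

mq = [None] * MQ_L # the page fault queue, used as a circular buffer
total_faults = 0   # running count of page fault exceptions

def store_in_queue(first_address):
    """Method 2: circular storage, slot chosen from the fault-count remainder."""
    global total_faults
    total_faults += 1
    pos = total_faults % MQ_L        # remainder decides the slot (0..MQ_L-1)
    mq[pos] = first_address          # may overwrite older memory page information
    return pos

def second_addresses(pos):
    """Walk backwards (wrapping around) to collect up to LEVEL earlier addresses."""
    result = []
    i = pos
    for _ in range(LEVEL):
        i = (i - 1) % MQ_L           # previous slot, wrapping to the queue end
        if mq[i] is None:            # not enough history stored yet
            break
        result.append(mq[i])
    return result

# Example: after faults on m1..m8 and then m9 (which overwrites m1), the three
# most recent earlier faults seen from m9's slot are m8, m7 and m6.
for name in ("m1", "m2", "m3", "m4", "m5", "m6", "m7", "m8"):
    store_in_queue(name)
pos = store_in_queue("m9")
print(second_addresses(pos))         # ['m8', 'm7', 'm6']
```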
  • the first information is stored in the historical memory access information as the prefetch information corresponding to each of the one or more second information.
• the computer device stores, in the historical memory access information, the relationship between the first information and each second information according to the positional relationship between the first information and the one or more pieces of second information in the page fault queue.
• each of the one or more pieces of second information corresponds to one or more related groups, the number of the one or more related groups is the second quantity threshold, each related group corresponds to one or more information positions, each related group corresponds to a correlation level, and each related group is used to store prefetch information.
• the computer device selects one second information from the one or more pieces of second information and performs the following operations on the selected second information, until the operations have been performed on each of the one or more pieces of second information: determine, according to the positions of the first information and the selected second information in the page fault queue, the correlation level of the first information and the selected second information, to obtain a reference level; and store the first information at the first information position of the target related group, where the target related group is the related group, among the related groups corresponding to the selected second information, whose correlation level is the reference level.
• in other words, the computer device determines the correlation level of the first information and each second information according to their positions in the page fault queue, obtains the corresponding reference level, and stores the first information, in the historical memory access information, at the first information position of the target related group corresponding to that second information.
• for example, assume the page fault queue includes m1, m2, m3, m4, m5, m6, m7 and m8, the first address is m4, the second quantity threshold is 3, m1, m2 and m3 are the three acquired second addresses, and each second address corresponds to three related groups. The computer device determines, according to the positions of m1 and m4 in the page fault queue, that the correlation level between m4 and m1 is 3, and stores m4 at the first address position in the level 3 related group corresponding to m1; determines, according to the positions of m2 and m4, that the correlation level between m4 and m2 is 2, and stores m4 at the first address position in the level 2 related group corresponding to m2; and determines, according to the positions of m3 and m4, that the correlation level between m4 and m3 is 1, and stores m4 at the first address position in the level 1 related group corresponding to m3.
• during specific implementation, the computer device performs a hash operation on each second information to obtain the sequence number corresponding to that second information, searches the historical memory access information for the corresponding sequence number, and then searches for the corresponding second information among the page fault information stored in the record of that sequence number.
• it should be noted that prefetch information (that is, memory page information) may already be stored at the first information position of the target related group; in that case, the computer device first moves and/or deletes the prefetch information stored in the related groups corresponding to that second information, and then stores the first information at the first information position of the target related group.
• the number of the one or more information positions corresponding to each related group is a third quantity threshold, and the one or more related groups are arranged in order of correlation level. That is, in this embodiment of the present application, the computer device is further configured with a third quantity threshold (SUCC), which is used to indicate the maximum quantity of prefetch information (such as addresses) that can be stored in each related group. Based on this, storing the first information at the first information position of the target related group includes the following cases:
  • the computer device stores the first information in the first information position of the target related group.
• Case 3: if the quantity of memory page information stored in the target related group reaches the third quantity threshold and the target related group is the last related group corresponding to the selected second information, the last memory page information in the target related group is deleted, the remaining memory page information in the target related group is moved back by one information position, and the first information is then stored at the first information position of the target related group.
• Case 4: if the quantity of memory page information stored in the target related group reaches the third quantity threshold, the target related group is not the last related group corresponding to the selected second information, and there is an idle information position in a related group, among the related groups corresponding to the selected second information, located after the target related group, then the target related group and each memory page information before the first idle information position in the related groups after the target related group are moved back by one information position, and the first information is stored at the first information position of the target related group.
• Case 5: if the quantity of memory page information stored in the target related group reaches the third quantity threshold, the target related group is not the last related group corresponding to the selected second information, and there is no idle information position in the related groups, corresponding to the selected second information, located after the target related group, then the last memory page information in the last related group corresponding to the selected second information is deleted, the rest of the memory page information in the target related group and the related groups after it is moved back by one information position, and the first information is stored at the first information position of the target related group.
• in other words, the computer device stores the first information at the first information position of the target related group by moving the existing entries backward in sequence. This implementation can be understood as inserting the first information into the target related group corresponding to each second information in an MRU (most recently used) manner.
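• The MRU-style insertion into the related groups (including the overflow handling of the cases above) can be approximated with the following Python sketch, in which an entry pushed out of a full group overflows into the next group and anything falling out of the last group is discarded; the group sizes, the names and the exact overflow policy are simplifying assumptions.

```python
LEVEL = 3   # number of related groups per page fault address (assumed)
SUCC = 2    # third quantity threshold: max prefetch addresses per group (assumed)

def new_groups():
    """LEVEL empty related groups, ordered by correlation level 1..LEVEL."""
    return [[] for _ in range(LEVEL)]

def mru_insert(groups, level, address):
    """Insert `address` at the front of the group for `level` (1-based).

    If a group grows beyond SUCC, its last entry overflows into the following
    group; whatever falls out of the last related group is discarded.
    """
    carry = address
    for g in groups[level - 1:]:
        if carry in g:
            g.remove(carry)          # avoid duplicates within a group
        g.insert(0, carry)           # MRU position: newest entry first
        if len(g) <= SUCC:
            return
        carry = g.pop()              # push the oldest entry into the next group
    # carry is dropped once the last related group has been exceeded

# Example: D's level 1 group receives E, then F, then G; E overflows to level 2.
groups_for_D = new_groups()
for successor in ("E", "F", "G"):
    mru_insert(groups_for_D, 1, successor)
print(groups_for_D)   # [['G', 'F'], ['E'], []]
```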
• optionally, a single third quantity threshold is configured in the computer device, or a plurality of third quantity thresholds are configured so that each related group corresponds to its own third quantity threshold, that is, different related groups may be configured to store different quantities of prefetch information. For example, a larger third quantity threshold is configured for related groups with a high correlation level (for example, level 1) and a smaller third quantity threshold is configured for related groups with a low correlation level (for example, level 3), which can improve prefetch accuracy to a certain extent.
  • the computer device obtains prefetch information from the stored historical memory access information and updates the historical memory access information after each page fault exception occurs.
• in other words, the computer device is configured with a prefetch algorithm that predicts prefetch information such as addresses.
• the computer device continuously updates the historical memory access information by running the prefetch algorithm, and records the pattern of historical memory access through the historical memory access information.
• a prefetch algorithm (which can be understood as a software module) is configured in the computer device, and the configured prefetch algorithm includes prefetch parameters: the row number threshold (ROW), the first quantity threshold (ASSOC), the second quantity threshold (LEVEL), the third quantity threshold (SUCC), the prefetch depth (PD) and the page fault queue length (MQ_L) introduced above, where the first quantity threshold, the prefetch depth and the page fault queue length may be configured or not configured.
• if the first quantity threshold is not configured, there is no fixed limit on how much page fault information (such as addresses) the record of each sequence number can store; if the prefetch depth is not configured, all the prefetch information in the record where the first sequence number and the first information are located is taken as the acquired prefetch information; and if the page fault queue length is not configured, the computer device stores the first information at the end of the page fault queue.
• the computer device can store the historical memory access information in any data storage form, for example in the form of a table.
• the historical memory access information in tabular form can be called a multi-level correlation table (MLCT).
• the following describes an example of the MLCT.
  • the information of the memory page is the address of the memory page.
  • the prefetch algorithm includes a prefetch parameter
  • the prefetch parameter includes a row number threshold, a first number threshold, a second number threshold, a third number threshold, a prefetch depth, and a page fault queue length
  • the computer device updates the stored MLCT according to the page fault queue.
  • a user may configure a prefetch algorithm through a computer device, including configuring a prefetch parameter, wherein:
• ROW (R): the maximum value of the hash result of the memory page address, that is, used to limit the maximum value of the sequence number;
• ASSOC: the maximum number of memory page addresses that can be recorded in the tags (T) corresponding to the same hash result, that is, the maximum number of memory page addresses stored in the record of the same sequence number;
• LEVEL (LEV): the number of related groups corresponding to the memory page address of each tag;
• SUCC: the maximum number of memory page addresses that can be stored in each related group;
• PD: the maximum number of prefetch addresses acquired each time;
• MQ_L: the page fault queue length.
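• For reference, these prefetch parameters could be grouped into a single configuration object as in the sketch below; the field names and default values are illustrative only and are not taken from the embodiment.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PrefetchConfig:
    """Illustrative container for the prefetch parameters described above."""
    row: int = 1024            # ROW: limits the maximum hash result / sequence number
    assoc: Optional[int] = 4   # ASSOC: max page fault addresses per sequence number (optional)
    level: int = 3             # LEVEL: related groups per recorded page fault address
    succ: int = 4              # SUCC: max prefetch addresses per related group
    pd: Optional[int] = 8      # PD: max prefetch addresses returned per lookup (optional)
    mq_l: Optional[int] = 128  # MQ_L: page fault queue length (optional)

config = PrefetchConfig()
print(config)
```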
• for example, the MQ stores, in chronological order, the addresses of the memory pages of the page fault exceptions, including A, B, C, D, E, F, G, H and I. Assume that the first address of the current page fault exception is E and the second quantity threshold (LEVEL) is 3; then B, C and D are obtained from the MQ as the 3 second addresses, where the correlation level between E and B is 3, the correlation level between E and C is 2, and the correlation level between E and D is 1. This can be understood as E being the level 3 successor of B (L3 SUCC), the level 2 successor of C (L2 SUCC) and the level 1 successor of D (L1 SUCC). The first address E is therefore stored in the level 3 related group of B, in the level 2 related group of C and in the level 1 related group of D, so that a multi-level correlation table storing B, C and D and the corresponding prefetch addresses is obtained.
• as another example, assume the multi-level correlation table shown in Figure 4 stores the first address E and the corresponding prefetch addresses: A and C are stored in the level 1 group corresponding to E (level 1 group, L1G), B and D are stored in the level 2 group (L2G), and H is stored in the level 3 group. Assuming the prefetch depth is 3, the computer device obtains A, C and B as the prefetch addresses.
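• The lookup just described can be sketched as follows in Python: walk the level groups of the first address in order and stop once the prefetch depth is reached. The dictionary layout and names are assumptions; the toy data reproduces the example above.

```python
# Minimal sketch of reading prefetch addresses for a page fault address out of the
# multi-level correlation table: walk the level groups in order (L1, L2, ...) and
# stop once the prefetch depth is reached. The dictionary layout is an assumption.

PD = 3  # prefetch depth

# level groups recorded for first address E, as in the example above
mlct_entry_for_E = {
    1: ["A", "C"],   # level 1 group (L1G)
    2: ["B", "D"],   # level 2 group (L2G)
    3: ["H"],        # level 3 group
}

def prefetch_addresses(entry, prefetch_depth):
    result = []
    for level in sorted(entry):              # L1 first, then L2, then L3
        for address in entry[level]:
            if len(result) == prefetch_depth:
                return result
            result.append(address)
    return result

print(prefetch_addresses(mlct_entry_for_E, PD))   # ['A', 'C', 'B']
```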
  • the computer device handles the page fault exception according to the following steps.
• the virtual address va of the missing memory data is determined, and according to the virtual address va, the starting address of the memory page of the current page fault exception is determined, and the first address va_p is obtained.
  • each memory page corresponds to a memory page address
• each memory page address is the starting address of the corresponding memory page. For example, if the address of the first memory page is 0000, the address of the second memory page is 0004, and the virtual address of the memory data of this page fault exception is 0011, then the quotient obtained by dividing 0011 by 4 is multiplied by 4, and the starting address of the memory page of this page fault exception is obtained as 0008, that is, the first address is 0008.
• assume the sequence number starts from 0 and the maximum sequence number that can be obtained by the hash operation is 9; for the first address 0008, the hash operation divides 0008 by 10 to obtain a quotient of 0, and 0 is used as the first sequence number.
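• The address handling in this step can be sketched as follows in Python; the page size of 4 and the divide-by-10 hash simply follow the numeric example above and are not meant as realistic values.

```python
PAGE_SIZE = 4   # page size used in the numeric example above (real pages are larger)
ROW = 10        # row number threshold from the example

def page_start(virtual_address):
    """Round the faulting virtual address down to the start of its memory page."""
    return (virtual_address // PAGE_SIZE) * PAGE_SIZE

def first_sequence_number(first_address):
    """Hash the first address into 0..ROW-1; the example uses the quotient of a
    division by 10, so that address 0008 maps to sequence number 0."""
    return (first_address // 10) % ROW

va = 11                       # faulting virtual address 0011 from the example
va_p = page_start(va)         # -> 8, i.e. first address 0008
r_p = first_sequence_number(va_p)
print(va_p, r_p)              # 8 0
```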
  • the computer device obtains a plurality of corresponding prefetch addresses according to the prefetch depth PD.
  • the column corresponding to tags is used to store the page fault address.
• the computer device adds the first sequence number r_p to the MLCT, and adds the first address va_p to a tag corresponding to the first sequence number r_p;
• the computer device stores the first address va_p in a tag corresponding to the first sequence number r_p in an LRU manner;
• storing the first address va_p under the first sequence number in the LRU manner is as follows: if the number of memory page addresses recorded in the tags corresponding to the first sequence number r_p reaches the first quantity threshold (ASSOC), the computer device deletes the memory page address with the earliest storage time from the tags and then adds the first address va_p to the tags; if the number of memory page addresses recorded in the tags corresponding to the first sequence number r_p does not reach the first quantity threshold, the computer device directly adds the first address va_p to the tags.
• storing the first address va_p in the related group in an MRU manner can be understood as the sequential backward-moving manner described above; for the specific implementation, refer to the foregoing embodiments, and details are not repeated here.
  • FIG. 6 is an exemplary MLCT shown in an embodiment of the present application.
  • the column where ROW(R) is located is used to store the serial number
• the column where ASSOC is located is used to store the sequence number of each TAG (T) within the record of the same serial number, and this column is optional; the column where TAG is located stores the page fault addresses (VA) corresponding to each serial number; the column where L1 is located is used to store the prefetch addresses (PVA) of the level 1 successors corresponding to each page fault address; and the column where L2 is located is used to store the prefetch addresses of the level 2 successors corresponding to each page fault address.
• in this way, multiple prefetch addresses corresponding to the memory page of the current page fault exception are obtained, and the multi-level correlation table is updated. Since the multi-level correlation table is gradually built according to the sequence relationship between the memory pages in which page fault exceptions occurred during historical memory access, the multiple prefetch addresses obtained from the multi-level correlation table are very likely to be the addresses of the memory pages whose data was, in the historical accesses, accessed immediately after the memory data corresponding to the first address. That is, the data corresponding to the multiple prefetch addresses read by this solution is very likely to be the memory data the processor will access next, so this solution pre-reads memory data more accurately, which avoids serious waste of memory resources and reduces the probability of further page fault exceptions, that is, it improves the prefetch hit rate and effectively reduces the memory access delay.
  • the non-sequential memory access mode includes strided mode and mixed mode.
• the MLCT is established by learning the pattern of historical memory access through the prefetch algorithm, instead of blindly prefetching the data corresponding to consecutive memory page addresses into the memory, so this solution works well for the sequential memory access mode, the strided mode and the mixed mode.
  • the computer device predicts the prefetch information corresponding to the first information based on a Markov model.
• specifically, the computer device constructs a Markov model based on the historical memory access information, which includes the historically accessed memory page information (such as addresses) arranged in chronological order, or the memory page information of the page fault exceptions arranged in chronological order. The computer device calculates, according to the Markov model, the measurement probability of each memory page information and the transition probabilities between memory page information, then calculates, according to the measurement probabilities and the transition probabilities, the probability that the first information transfers to each other memory page information, and uses the memory page information corresponding to the maximum calculated probability as one piece of prefetch information.
• the computer device then uses that prefetch information as the first information and continues to predict the next most probable prefetch information with the Markov model to obtain a second piece of prefetch information, and so on, until a quantity of prefetch information equal to the prefetch depth is obtained.
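• A simplified first-order sketch of this idea is shown below in Python: transition counts are built from a chronological sequence of page fault addresses and prediction repeatedly picks the most probable successor. The toy history, the names and the omission of the measurement probability are simplifying assumptions.

```python
from collections import Counter, defaultdict

# Minimal first-order Markov-chain sketch: transition counts are built from the
# chronological sequence of page fault addresses, and prediction repeatedly picks
# the most probable successor. All names and the toy history are illustrative.

history = ["A", "B", "C", "A", "B", "D", "A", "B", "C"]

transitions = defaultdict(Counter)
for current, nxt in zip(history, history[1:]):
    transitions[current][nxt] += 1

def predict(first_address, prefetch_depth):
    result, current = [], first_address
    for _ in range(prefetch_depth):
        successors = transitions.get(current)
        if not successors:
            break
        # pick the successor with the highest transition probability
        current = successors.most_common(1)[0][0]
        result.append(current)
    return result

print(predict("A", 3))   # ['B', 'C', 'A'] for this toy history
```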
• alternatively, the computer device predicts the prefetch information corresponding to the first information by using a deep learning model, for example an artificial intelligence (AI) model.
  • the historical memory access information includes historically accessed memory page information (such as addresses) arranged in chronological order, and the deep learning model is trained according to the historical memory access information.
• each training sample includes a sample input and an expected sample output; the computer device inputs the training samples into an initial model and trains it to obtain the deep learning model.
• after obtaining the memory page information of the current page fault exception, that is, after obtaining the first information, the computer device inputs the first information into the deep learning model, which outputs a plurality of prefetch information.
  • the computer device obtains training samples online, and gradually trains and updates the deep learning model by means of online training, or after the computer device obtains a certain amount of training samples, the deep learning model is obtained by training by means of offline training.
  • Step 203 Read the data corresponding to the plurality of prefetch information to the corresponding memory page in the memory.
• after predicting and obtaining multiple pieces of prefetch information (for example, addresses), the computer device reads the data corresponding to the multiple pieces of prefetch information to the corresponding memory pages in the memory.
  • the prefetch address is a virtual address
  • some of the data corresponding to the multiple prefetch addresses may already be on the corresponding memory page in the memory.
  • the mapping relationship between the virtual address and the physical address of the memory is used to determine whether the data corresponding to the multiple prefetch addresses are already on the corresponding memory page, and the data not on the memory page is read to the corresponding memory page.
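• The residency check can be sketched with a hypothetical mapping from virtual page addresses to physical frames, as below; the page_table structure is only a stand-in for the kernel's virtual-to-physical mapping and is an assumption of this sketch.

```python
# Sketch of filtering the predicted prefetch addresses: only addresses whose pages
# are not yet mapped in memory need to be read in. `page_table` is a hypothetical
# stand-in for the virtual-to-physical mapping maintained by the kernel.

page_table = {"0x1000": "frame_12", "0x3000": "frame_07"}   # already-resident pages

def addresses_to_read(prefetch_addresses, page_table):
    return [addr for addr in prefetch_addresses if addr not in page_table]

print(addresses_to_read(["0x1000", "0x2000", "0x3000", "0x4000"], page_table))
# ['0x2000', '0x4000'] -- only these pages still have to be read from the
# designated storage space
```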
  • the computer device reads the corresponding data from the designated storage space to the corresponding memory page in the memory according to the plurality of prefetch information.
  • the designated storage space is the storage space of the SWAP partition divided on the disk included in the device, or the storage space of the XL-FLASH memory included in the device, or the storage space of the remote storage.
  • a designated storage space is set in the computer device for storing the data of the cold page, that is, storing the data not stored in the memory page of the memory.
  • the read and write speed of XL-FLASH devices is faster than that of SWAP partitions of disks, and the price is lower than that of memory sticks (such as DRAM), and the capacity is large, which can provide several times more capacity than memory.
  • the memory space includes DRAM and XL-FLASH devices, that is, by adding XL-FLASH devices, the accessible memory space is increased several times, that is, the access memory space visible to the user is increased a lot.
• the remote storage is, for example, a storage device such as a disk or XL-FLASH included in a remote computer device; if the device wants to access the storage space of the remote storage, it can do so over the network, for example over a high-speed network.
  • the designated memory space includes the storage space of one or more of the SWAP partition, the XL-FLASH device and the remote storage.
• the computer device determines the cold pages in the memory according to the access times and the numbers of accesses of the memory pages in the memory in the first time period, and moves the data on the cold pages from the memory to the designated storage space. That is, in addition to prefetching memory data from the designated storage space through the prefetch algorithm, the processor can also scan for and eliminate cold pages in the memory, moving the data on the cold pages to the designated storage space, that is, eliminating cold pages from the memory. In this way, more memory space can be freed up for storing hot memory data, thereby improving memory resource utilization.
  • the computer device obtains the access time and the access quantity of the memory pages in the memory in the first time period, and determines the cold page in the memory according to the access time and the access quantity of the memory pages in the memory during the period of time .
  • the number of accesses of each memory page in the memory corresponds to the weight w1
  • the average of the durations between the respective access times of each memory page in the memory and the current time during this period is the first duration
  • the first duration corresponds to the weight w2.
• the processor calculates, for each memory page in the memory, the product of the number of accesses and the weight w1 plus the product of the first duration and the weight w2, to obtain the memory access statistic of that memory page; the processor then determines a memory page whose memory access statistic is less than a statistical threshold as a cold page, or sorts the access statistics of the memory pages in ascending order and determines the memory pages corresponding to a specified proportion of the smallest access statistics as cold pages.
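• The cold-page statistic can be sketched in Python as below; the weight values and the threshold are assumptions (w2 is taken negative here so that long-idle pages end up with a smaller statistic), since the embodiment does not fix concrete values.

```python
import time

# Sketch of the cold-page statistic: accesses * w1 + first_duration * w2, where
# first_duration is the average time since the page's accesses in the first time
# period. The weight values and the threshold below are assumptions; w2 is taken
# negative so that long-idle pages end up with a smaller statistic.

W1, W2 = 1.0, -0.1
STAT_THRESHOLD = 2.0

def memory_access_statistic(access_times, now=None):
    """access_times: timestamps (seconds) of the page's accesses in the period."""
    now = time.time() if now is None else now
    first_duration = sum(now - t for t in access_times) / len(access_times)
    return len(access_times) * W1 + first_duration * W2

def is_cold_page(access_times, now=None):
    return memory_access_statistic(access_times, now) < STAT_THRESHOLD

now = 1_000.0
hot_page = [now - 1, now - 2, now - 3, now - 4]     # accessed often and recently
cold_page = [now - 900.0]                           # one access, long ago
print(is_cold_page(hot_page, now), is_cold_page(cold_page, now))   # False True
```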
  • the computer device periodically scans and eliminates cold pages, the first period of time is a period of time before the current scan, and the duration of the first period of time may be greater than, equal to or less than the scan period.
• when the computer device eliminates cold pages, it compresses the data on the cold pages in the memory and stores it in the designated storage space; when pre-reading memory data, it decompresses the prefetched memory data from the designated storage space and reads it into the memory. In this way, data compression reduces the amount of data stored in the designated storage space, so more cold pages can be eliminated from the memory to the designated storage space and more hot memory data can be stored in the memory, further improving the resource utilization of the memory.
• the computer device combines the prefetch algorithm and memory page scanning to prefetch memory data and eliminate memory data: the computer device scans for cold pages in the memory through a page scan module, compresses the data of the cold pages and stores it in the designated storage space.
• the computer device prefetches memory data from the designated storage space through the prefetch algorithm: the corresponding data is decompressed from the designated storage space through a kernel compression module and then read into the memory (such as DRAM), which is equivalent to reading hot page data.
  • the computer device can also receive a prefetch algorithm performance query instruction, and display prefetch algorithm performance information, where the prefetch algorithm performance information includes a prefetch accuracy rate and a prefetch coverage rate.
  • the prefetch accuracy rate is determined by the total number of prefetches and the number of prefetch hits
  • the prefetch coverage rate is determined by the total number of prefetches and the total number of accesses
• the total number of prefetches refers to the total quantity of prefetch information obtained in the second time period;
• the number of prefetch hits refers to the total number of memory pages that are actually accessed among the memory pages corresponding to all the prefetch information obtained in the second time period;
• the total number of accesses refers to the total number of memory pages accessed in the second time period.
• the second time period refers to the period from when the computer device starts running the prefetch algorithm to when it receives the prefetch algorithm performance query instruction, or to a period of a specified duration immediately before the moment at which the prefetch algorithm performance query instruction is received.
  • the prefetch accuracy rate can represent the accuracy of the prefetch algorithm to a certain extent
  • the prefetch coverage rate can represent the effectiveness of the prefetch algorithm for applications running on the device to a certain extent.
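• For clarity, the two metrics can be computed as in the small sketch below; the exact ratios (hits over total prefetches, and total prefetches over total accesses) are an assumption consistent with the description, and the counter names are illustrative.

```python
# Sketch of the two reported metrics. Counter names are illustrative; the counts
# are gathered over the second time period described above.

def prefetch_accuracy(total_prefetches, prefetch_hits):
    """Assumed ratio: fraction of prefetched pages that were actually accessed."""
    return prefetch_hits / total_prefetches if total_prefetches else 0.0

def prefetch_coverage(total_prefetches, total_accesses):
    """Assumed ratio: fraction of all accessed pages covered by prefetching."""
    return total_prefetches / total_accesses if total_accesses else 0.0

print(prefetch_accuracy(total_prefetches=200, prefetch_hits=150))   # 0.75
print(prefetch_coverage(total_prefetches=200, total_accesses=1000)) # 0.2
```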
• in this way, the user obtains the performance information of the prefetch algorithm, including the prefetch accuracy rate, the prefetch coverage rate, the total number of prefetches, the number of prefetch hits, and so on. If the prefetch accuracy rate is low, the user can adjust the prefetch parameters by entering a command line on the computer device, for example by increasing LEVEL, increasing ASSOC or increasing SUCC.
• the computer device then updates the historical memory access information (such as the MLCT) according to the adjusted prefetch parameters, so as to record the association relationships of more historically accessed memory page information in the historical memory access information.
• that is, the computer device receives a prefetch parameter adjustment instruction, where the prefetch parameter adjustment instruction is determined by the user's feedback on the prefetch algorithm performance information, and the computer device updates the historical memory access information according to the prefetch parameter adjustment instruction.
• in other words, a user can query the prefetch algorithm performance information through the computer device; after receiving the prefetch algorithm performance query instruction, the computer device displays the prefetch algorithm performance information, such as the prefetch accuracy rate and the prefetch coverage rate, and optionally the total number of prefetches, the number of prefetch hits and so on, and the user can then choose to adjust the prefetch parameters through the computer device according to the prefetch accuracy rate and the prefetch coverage rate.
• taking the MLCT as the historical memory access information as an example, under normal circumstances, the larger the multi-level correlation table is after the user adjusts the prefetch parameters, the more association relationships of historically accessed memory page information the table can record and, to a certain extent, the better the performance of the prefetch algorithm.
  • the computer equipment includes a processor (CPU), a memory and a designated storage space (taking XL-FLASH as an example), and this method is understood to be realized through an abstract three-layer model of SMAP, which includes a perception layer, a decision layer and a physical layer.
  • SMAP can be understood to include functions corresponding to all the methods provided in the embodiments of the present application.
• the computer device counts the hot and cold pages in the memory (main memory) through the processor, for example by collecting statistics on hot and cold memory pages through the application (APP), the operating system (OS) or the virtual machine (such as Hyper-V), that is, hot and cold memory pages are perceived through the perception layer at the software level.
  • a computer device prefetches or eliminates memory pages by running a prefetching algorithm and a memory elimination algorithm, etc. on the processor, that is, prefetching or eliminating memory pages through a decision layer at the software and hardware levels.
  • the computer device performs media compression and decompression through the memory hardware module, that is, the media compression at the physical layer improves the access rate and saves storage space.
  • the processing method for the memory page fault exception provided by the embodiment of the present application is exemplarily described.
• the user configures the prefetch algorithm and other related algorithms (such as the memory elimination algorithm for scanning cold pages) through the command line or other means.
• the algorithms are then run to realize data prefetching for memory pages and the elimination of cold pages in the memory.
• in summary, in this solution the prefetch information is predicted according to the historical memory access information, and the data corresponding to the prefetch information is read into the memory, instead of blindly prefetching the data of multiple consecutive memory page addresses into the memory. The prefetch hit rate of this solution is therefore higher, which can effectively reduce the number of subsequent page fault exceptions and the memory access delay; moreover, the data prefetched by this solution is more useful, so the consumption of memory resources is lower and memory resources do not become overly tight.
  • FIG. 10 is a schematic structural diagram of an apparatus 1000 for processing a memory page fault exception provided by an embodiment of the present application.
  • the apparatus 1000 for processing a memory page fault exception may be implemented as part or all of a computer device by software, hardware, or a combination of the two.
  • the computer device may be the computer device shown in FIG. 1 .
  • the apparatus 1000 includes: a first determination module 1001 , a prediction module 1002 and a reading module 1003 .
• the first determination module 1001 is configured to determine the information of the target memory page to obtain the first information, where the target memory page is the memory page in which the current page fault exception occurs; for the specific implementation, refer to the detailed introduction of step 201 in the foregoing embodiment of FIG. 2, which is not repeated here.
• the prediction module 1002 is configured to predict a plurality of prefetch information corresponding to the first information according to the historical memory access information, where the historical memory access information is used to represent the pattern of historical memory access; for the specific implementation, refer to the detailed introduction of step 202 in the foregoing embodiment of FIG. 2, which is not repeated here.
  • the reading module 1003 is configured to read the data corresponding to the plurality of prefetch information to the corresponding memory page in the memory.
  • the historical memory access information is determined according to the sequence relationship between memory pages in which a page fault exception occurs during historical memory access;
  • Prediction module 1002 includes:
  • an obtaining unit configured to obtain a plurality of prefetch information corresponding to the first information according to the association relationship between the information of the memory page in which the page fault exception occurs in the historical memory access information and the prefetch information;
• the historical memory access information includes the sequence number and the correspondence between page fault information and prefetch information, where the page fault information refers to the information of the memory page in which a page fault exception occurs, and the sequence number is obtained by performing a hash operation on the page fault information;
  • the acquisition unit includes:
  • a hash subunit configured to perform a hash operation on the first information to obtain the first serial number
  • the search subunit is configured to search for a plurality of corresponding pieces of prefetch information from the historical memory access information according to the first sequence number and the first information.
  • lookup subunits are specifically used to:
  • the corresponding pieces of prefetch information are searched from the record where the first sequence number and the first information are located.
• optionally, the search subunit is specifically configured to:
• search, according to the prefetch depth, for the corresponding pieces of prefetch information from the record where the first sequence number and the first information are located.
  • the device also includes:
• the first update module is configured to update the historical memory access information according to the first sequence number and the first information if a plurality of prefetch information corresponding to the first information is not obtained, where the first sequence number is obtained by performing a hash operation on the first information.
  • the first update module includes:
  • the first update unit is used to create a record where the first serial number and the first information are located in the historical memory access information in the case that the historical memory access information does not store the first serial number and the first information, to update the historical memory access information;
  • the second updating unit is configured to store the first information in the record of the first serial number to update the historical memory access information when the first serial number is stored in the historical memory access information but the first information is not stored.
  • the second update unit includes:
  • the first storage subunit is used to store the first information in the record of the first serial number if the quantity of the page fault information stored in the record of the first serial number does not reach the first quantity threshold;
• the second storage subunit is configured to, if the quantity of page fault information stored in the record of the first sequence number reaches the first quantity threshold, delete the page fault information with the earliest storage time and the corresponding prefetch information from the record of the first sequence number, and store the first information in the record of the first sequence number.
  • the device also includes:
  • the second update module is used to update historical memory access information according to the page fault queue, and the page fault queue is used to store the information of the memory pages in which the page fault exception occurs in chronological order.
  • the second update module includes:
  • a first storage unit for storing the first information in the page fault queue
  • an acquisition unit configured to acquire the memory page information that is located before the first information and whose quantity does not exceed the second quantity threshold in the page fault queue, and obtains one or more second information
  • the second storage unit is configured to store the first information as prefetch information corresponding to each of the one or more second information in the historical memory access information.
  • the second storage unit includes:
• the third storage subunit is configured to store, in the historical memory access information, the relationship between the first information and each second information according to the positional relationship between the first information and each of the one or more pieces of second information in the page fault queue.
  • each of the one or more second information corresponds to one or more related groups, the number of the one or more related groups is the second quantity threshold, and each related group corresponds to one or multiple information locations, each related group corresponds to a related level, and each related group is used to store prefetch information;
  • the third storage subunit is specifically used for:
• determine, according to the positions of the first information and the selected second information in the page fault queue, the correlation level of the first information and the selected second information, to obtain a reference level;
• store the first information at the first information position of the target related group, where the target related group is the related group, among the related groups corresponding to the selected second information, whose correlation level is the reference level.
  • the number of the one or more information locations is a third number threshold, and the one or more related groups are arranged in order of related levels;
  • the third storage subunit is specifically used for:
• if the quantity of memory page information stored in the target related group reaches the third quantity threshold and the target related group is the last related group corresponding to the selected second information, delete the last memory page information in the target related group, move the remaining memory page information in the target related group back by one information position, and then store the first information at the first information position;
• if the quantity of memory page information stored in the target related group reaches the third quantity threshold, the target related group is not the last related group corresponding to the selected second information, and there is an idle information position in a related group, among the related groups corresponding to the selected second information, located after the target related group, move the target related group and each memory page information before the first idle information position in the related groups after the target related group back by one information position, and then store the first information at the first information position;
• if the quantity of memory page information stored in the target related group reaches the third quantity threshold, the target related group is not the last related group corresponding to the selected second information, and there is no idle information position in the related groups, corresponding to the selected second information, located after the target related group, delete the last memory page information in the last related group corresponding to the selected second information, move the rest of the memory page information in the target related group and the related groups after it back by one information position, and then store the first information at the first information position.
  • the reading module 1003 includes:
  • the reading unit is configured to read the corresponding data from the specified storage space to the corresponding memory page in the memory according to the plurality of prefetch information.
  • the designated storage space is the storage space of the SWAP partition divided on the disk included in the device, or the storage space of the XL-FLASH memory included in the device, or the storage space of the remote storage.
  • the apparatus 1000 further includes:
  • the second determination module 1004 is configured to determine the cold page in the memory according to the access time and the number of accesses of the memory pages in the memory in the first time period; for the specific implementation method, refer to the detailed introduction of step 203 in the foregoing embodiment of FIG. 2 , here No longer.
  • the moving module 1005 is configured to move the data on the cold page from the memory to the designated storage space.
  • the apparatus 1000 further includes:
  • a first receiving module 1006, configured to receive a prefetch algorithm performance query instruction
  • a display module 1007, configured to display prefetch algorithm performance information, where the prefetch algorithm performance information includes a prefetch accuracy rate and a prefetch coverage rate (both figures are sketched after this list);
  • the prefetch accuracy rate is determined by the total number of prefetches and the number of prefetch hits, and the prefetch coverage rate is determined by the total number of prefetches and the total number of accesses;
  • the total number of prefetches refers to the total number of pieces of prefetch information obtained in the second time period;
  • the number of prefetch hits refers to the total number of memory pages that are actually accessed among the memory pages corresponding to all the prefetch information obtained in the second time period;
  • the total number of accesses refers to the total number of memory pages accessed in the second time period.
  • the apparatus 1000 further includes:
  • the second receiving module 1008 is configured to receive a prefetch parameter adjustment instruction, where the prefetch parameter adjustment instruction is determined by user feedback on the performance information of the prefetch algorithm;
  • the third update module 1009 is configured to update the historical memory access information according to the prefetch parameter adjustment instruction.
  • the prefetch information is predicted according to the historical memory access information, and the data corresponding to the prefetch information is read into the memory, instead of blindly prefetching data at multiple consecutive memory page addresses into the memory; this solution therefore has a higher prefetch hit rate, can effectively reduce the number of subsequent page fault exceptions and the memory access latency, prefetches data more efficiently, and consumes fewer memory resources, so that memory resources are not put under great pressure.
  • when the apparatus for processing a memory page fault exception provided by the above embodiment processes a memory page fault exception, the division into the above functional modules is used only as an example for illustration; in practical applications, the above functions may be allocated to different functional modules as required, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above.
  • the apparatus for processing a memory page fault exception provided by the above embodiment belongs to the same concept as the embodiment of the method for processing a memory page fault exception; for the specific implementation process, refer to the method embodiment, which is not repeated here.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, special purpose computer, computer network or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) manner.
  • the computer-readable storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media.
  • the available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., digital versatile disc (DVD)), or semiconductor media (e.g., solid state disk (SSD)), among others.
  • the computer-readable storage medium mentioned in the embodiments of the present application may be a non-volatile storage medium, in other words, may be a non-transitory storage medium.
  • "at least one" mentioned herein refers to one or more, and "a plurality of" refers to two or more.
  • "/" herein means "or"; for example, A/B can mean A or B;
  • "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships can exist; for example, A and/or B can mean that A exists alone, A and B exist at the same time, or B exists alone.
  • words such as "first" and "second" are used to distinguish identical or similar items having substantially the same function and effect; those skilled in the art can understand that words such as "first" and "second" do not limit the quantity or execution order, and do not necessarily indicate a difference.
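
The related-group storage rule above can be pictured as a small, level-ordered history table per piece of second information. The following C sketch only illustrates that rule under stated assumptions: the structure names, the group and slot counts (GROUP_COUNT, SLOTS_PER_GROUP) and the use of page number 0 as a free-slot marker are hypothetical choices, not identifiers from this application or from any operating system kernel.

```c
/* Minimal sketch of the "related group" table described above; all names
 * and sizes are illustrative assumptions. */
#include <stdio.h>
#include <string.h>

#define GROUP_COUNT     4   /* second quantity threshold: related levels 0..3 */
#define SLOTS_PER_GROUP 3   /* third quantity threshold: positions per group  */
#define TOTAL_SLOTS     (GROUP_COUNT * SLOTS_PER_GROUP)
#define EMPTY           0   /* page number 0 marks a free information position */

/* All related groups belonging to one piece of "second information", stored
 * as one flat array ordered by related level. */
struct related_row {
    unsigned long slot[TOTAL_SLOTS];
};

/* Store "first information" (a faulting page number) at the first position of
 * the group whose related level equals `level`, shifting existing entries back
 * by one position and stopping at the first free slot; if no later slot is
 * free, the last entry of the row is dropped. */
static void row_insert(struct related_row *row, int level, unsigned long page)
{
    int start = level * SLOTS_PER_GROUP;    /* first position of target group */
    int end   = TOTAL_SLOTS - 1;            /* default: evict the last entry  */

    for (int i = start; i < TOTAL_SLOTS; i++) {
        if (row->slot[i] == EMPTY) {        /* stop the shift at a free slot  */
            end = i;
            break;
        }
    }
    /* Move entries in [start, end) back by one; slot[end] was free or is the
     * evicted last entry. memmove handles the overlapping copy. */
    memmove(&row->slot[start + 1], &row->slot[start],
            (size_t)(end - start) * sizeof(row->slot[0]));
    row->slot[start] = page;
}

int main(void)
{
    struct related_row row = { { 0 } };

    row_insert(&row, 0, 0x1001);  /* closely related page */
    row_insert(&row, 0, 0x1002);
    row_insert(&row, 2, 0x2001);  /* weakly related page  */

    for (int g = 0; g < GROUP_COUNT; g++) {
        printf("level %d:", g);
        for (int s = 0; s < SLOTS_PER_GROUP; s++)
            printf(" %#lx", row.slot[g * SLOTS_PER_GROUP + s]);
        printf("\n");
    }
    return 0;
}
```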
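For the reading unit, a userspace analogue of "read the data corresponding to each piece of prefetch information from the designated storage space into memory" is to map the backing store and ask the kernel to pull the chosen pages in before they are touched. The sketch below uses the standard mmap and madvise(MADV_WILLNEED) calls on an ordinary file that stands in for a swap partition, XL-FLASH device or remote store; it is an assumption-laden illustration, not the in-kernel mechanism of this application.

```c
/* Userspace read-ahead analogue of the reading unit; the backing file and the
 * page indexes passed on the command line play the role of the designated
 * storage space and the prefetch information. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s <backing-file> <page-index>...\n", argv[0]);
        return 1;
    }

    long page_size = sysconf(_SC_PAGESIZE);
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    off_t file_size = lseek(fd, 0, SEEK_END);
    char *base = mmap(NULL, (size_t)file_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    /* Each remaining argument is one piece of prefetch information: the index
     * of a page to bring into memory ahead of the access that would fault. */
    for (int i = 2; i < argc; i++) {
        long idx = strtol(argv[i], NULL, 10);
        if (madvise(base + idx * page_size, (size_t)page_size, MADV_WILLNEED) != 0)
            perror("madvise");
    }

    /* Later accesses to these pages should hit memory instead of triggering a
     * major page fault against the backing store. */
    munmap(base, (size_t)file_size);
    close(fd);
    return 0;
}
```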
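The cold-page rule combines the access time and the access count observed in the first time period. The sketch below treats a page as cold when its last access is older than the period or its access count falls below a threshold; the thresholds (FIRST_PERIOD_SECS, MIN_HOT_ACCESSES) and the move_to_designated_storage placeholder are hypothetical, introduced only for illustration.

```c
/* Sketch of cold-page selection from per-page access statistics collected
 * over the first time period; thresholds are illustrative assumptions. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <time.h>

#define FIRST_PERIOD_SECS  60   /* length of the first time period  */
#define MIN_HOT_ACCESSES    4   /* fewer accesses than this => cold */

struct page_stat {
    unsigned long page_no;      /* memory page identifier           */
    time_t        last_access;  /* time of the most recent access   */
    unsigned      accesses;     /* number of accesses in the period */
};

static bool is_cold(const struct page_stat *p, time_t now)
{
    bool stale = (now - p->last_access) > FIRST_PERIOD_SECS;
    return stale || p->accesses < MIN_HOT_ACCESSES;
}

/* Placeholder for "move the data on the cold page from the memory to the
 * designated storage space" (swap partition, XL-FLASH or remote storage). */
static void move_to_designated_storage(unsigned long page_no)
{
    printf("evicting cold page %#lx\n", page_no);
}

int main(void)
{
    time_t now = time(NULL);
    struct page_stat pages[] = {
        { 0x10, now - 5,   12 },  /* hot: recent and frequently accessed */
        { 0x11, now - 300,  1 },  /* cold: stale and rarely accessed     */
        { 0x12, now - 10,   1 },  /* cold: recent but rarely accessed    */
    };

    for (size_t i = 0; i < sizeof(pages) / sizeof(pages[0]); i++)
        if (is_cold(&pages[i], now))
            move_to_designated_storage(pages[i].page_no);
    return 0;
}
```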
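The two performance figures shown by the display module can be computed from three counters kept over the second time period. The text only names the quantities each rate is determined by, so the exact ratios below (hits over prefetches for accuracy, prefetches over accesses for coverage) are assumptions made for illustration.

```c
/* Sketch of the prefetch accuracy and coverage figures; the exact ratios are
 * assumptions built from the totals named in the description. */
#include <stdio.h>

struct prefetch_stats {
    unsigned long total_prefetches; /* prefetch info items issued in the second time period */
    unsigned long prefetch_hits;    /* prefetched pages that were actually accessed         */
    unsigned long total_accesses;   /* memory pages accessed in the second time period      */
};

static double accuracy(const struct prefetch_stats *s)
{
    /* share of prefetched pages that were later accessed */
    return s->total_prefetches ? (double)s->prefetch_hits / s->total_prefetches : 0.0;
}

static double coverage(const struct prefetch_stats *s)
{
    /* assumed ratio of the two totals the text ties coverage to */
    return s->total_accesses ? (double)s->total_prefetches / s->total_accesses : 0.0;
}

int main(void)
{
    struct prefetch_stats s = { .total_prefetches = 200,
                                .prefetch_hits    = 150,
                                .total_accesses   = 1000 };

    printf("prefetch accuracy: %.1f%%\n", 100.0 * accuracy(&s));
    printf("prefetch coverage: %.1f%%\n", 100.0 * coverage(&s));
    return 0;
}
```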

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Data Mining & Analysis (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

Method and apparatus for handling a memory page fault exception, device and storage medium, relating to the field of computer technology. By means of the method, apparatus, device and storage medium, prefetch information is predicted according to historical memory access information, and data corresponding to the prefetch information is read into a memory, instead of blindly fetching consecutive data at a plurality of memory page addresses into the memory. The prefetch hit rate is therefore higher, the number of subsequent page fault exceptions and the memory access latency can be effectively reduced, the prefetched data is more useful, and the consumption of memory resources is lower, so that there is less pressure on memory resources.
PCT/CN2021/117898 2020-09-21 2021-09-13 Procédé et appareil permettant de gérer une anomalie de page de mémoire manquante, dispositif et support de stockage WO2022057749A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010998076.3A CN114253458B (zh) 2020-09-21 2020-09-21 内存缺页异常的处理方法、装置、设备及存储介质
CN202010998076.3 2020-09-21

Publications (1)

Publication Number Publication Date
WO2022057749A1 true WO2022057749A1 (fr) 2022-03-24

Family

ID=80776448

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/117898 WO2022057749A1 (fr) 2020-09-21 2021-09-13 Procédé et appareil permettant de gérer une anomalie de page de mémoire manquante, dispositif et support de stockage

Country Status (2)

Country Link
CN (1) CN114253458B (fr)
WO (1) WO2022057749A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117130565A (zh) * 2023-10-25 2023-11-28 苏州元脑智能科技有限公司 数据处理方法、装置、磁盘阵列卡及介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117931693A (zh) * 2024-03-22 2024-04-26 摩尔线程智能科技(北京)有限责任公司 一种内存管理方法及内存管理单元

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200026663A1 (en) * 2018-07-20 2020-01-23 EMC IP Holding Company LLC Method, device and computer program product for managing storage system
CN110795363A (zh) * 2019-08-26 2020-02-14 北京大学深圳研究生院 一种存储介质的热页预测方法和页面调度方法
CN110955495A (zh) * 2019-11-26 2020-04-03 网易(杭州)网络有限公司 虚拟化内存的管理方法、装置和存储介质
CN111427804A (zh) * 2020-03-12 2020-07-17 深圳震有科技股份有限公司 一种减少缺页中断次数的方法、存储介质及智能终端

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104133780B (zh) * 2013-05-02 2017-04-05 华为技术有限公司 一种跨页预取方法、装置及系统
CN103488523A (zh) * 2013-09-26 2014-01-01 华为技术有限公司 一种页的访问方法和页的访问装置、服务器
CN105095094B (zh) * 2014-05-06 2018-11-30 华为技术有限公司 内存管理方法和设备
KR101940382B1 (ko) * 2016-12-21 2019-04-11 연세대학교 산학협력단 페이지의 프리페칭 방법 및 장치
CN111143243B (zh) * 2019-12-19 2023-06-27 上海交通大学 一种基于nvm混合内存的缓存预取方法及系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200026663A1 (en) * 2018-07-20 2020-01-23 EMC IP Holding Company LLC Method, device and computer program product for managing storage system
CN110795363A (zh) * 2019-08-26 2020-02-14 北京大学深圳研究生院 一种存储介质的热页预测方法和页面调度方法
CN110955495A (zh) * 2019-11-26 2020-04-03 网易(杭州)网络有限公司 虚拟化内存的管理方法、装置和存储介质
CN111427804A (zh) * 2020-03-12 2020-07-17 深圳震有科技股份有限公司 一种减少缺页中断次数的方法、存储介质及智能终端

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117130565A (zh) * 2023-10-25 2023-11-28 苏州元脑智能科技有限公司 数据处理方法、装置、磁盘阵列卡及介质
CN117130565B (zh) * 2023-10-25 2024-02-06 苏州元脑智能科技有限公司 数据处理方法、装置、磁盘阵列卡及介质

Also Published As

Publication number Publication date
CN114253458B (zh) 2024-04-26
CN114253458A (zh) 2022-03-29

Similar Documents

Publication Publication Date Title
US10198363B2 (en) Reducing data I/O using in-memory data structures
WO2022057749A1 (fr) Procédé et appareil permettant de gérer une anomalie de page de mémoire manquante, dispositif et support de stockage
US8874823B2 (en) Systems and methods for managing data input/output operations
JP2021511588A (ja) データクエリ方法、装置およびデバイス
US8086804B2 (en) Method and system for optimizing processor performance by regulating issue of pre-fetches to hot cache sets
US10061517B2 (en) Apparatus and method for data arrangement
US11907164B2 (en) File loading method and apparatus, electronic device, and storage medium
US11461239B2 (en) Method and apparatus for buffering data blocks, computer device, and computer-readable storage medium
US10339055B2 (en) Cache system with multiple cache unit states
US9851925B2 (en) Data allocation control apparatus and data allocation control method
US11481318B2 (en) Method and apparatus, and storage system for translating I/O requests before sending
CN116931838A (zh) 一种固态盘缓存管理方法、系统、电子设备及存储介质
US10067678B1 (en) Probabilistic eviction of partial aggregation results from constrained results storage
CN115495394A (zh) 数据预取方法和数据预取装置
US11461101B2 (en) Circuitry and method for selectively controlling prefetching of program instructions
CN114461590A (zh) 一种基于关联规则的数据库文件页预取方法及装置
US20150134919A1 (en) Information processing apparatus and data access method
CN117235088B (zh) 一种存储系统的缓存更新方法、装置、设备、介质及平台
WO2022223047A1 (fr) Procédé et contrôleur de lecture/écriture de données et support de stockage
CN117687936A (zh) 提高缓存命中率的方法、装置、设备及存储介质
CN106407242B (zh) 分组处理器转发数据库缓存
CN117203624A (zh) 可预取数据的智能缓存
CN118035132A (zh) 一种缓存数据预取方法、处理器及电子设备
CN117687931A (zh) 地址映射信息激活方法、电子设备及计算机可读存储装置
CN116880754A (zh) 数据访问热度统计方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21868574

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21868574

Country of ref document: EP

Kind code of ref document: A1