WO2012167533A1 - Method and apparatus for constructing a memory access model - Google Patents

Method and apparatus for constructing a memory access model

Info

Publication number
WO2012167533A1
Authority
WO
WIPO (PCT)
Prior art keywords
memory
page
page table
accessed
target process
Prior art date
Application number
PCT/CN2011/081544
Other languages
English (en)
French (fr)
Inventor
刘仪阳
王伟
裘稀石
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP11867473.8A (EP2772853B1)
Priority to CN201180002377.5A (CN102439577B)
Priority to PCT/CN2011/081544 (WO2012167533A1)
Publication of WO2012167533A1
Priority to US14/263,212 (US9471495B2)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/30: Monitoring
    • G06F 11/34: Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3409: Recording or statistical evaluation of computer activity for performance assessment
    • G06F 11/3419: Recording or statistical evaluation of computer activity for performance assessment by assessing time
    • G06F 11/3447: Performance evaluation by modeling
    • G06F 11/3466: Performance evaluation by tracing or monitoring
    • G06F 11/3471: Address tracing
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0806: Multiuser, multiprocessor or multiprocessing cache systems
    • G06F 12/0815: Cache consistency protocols
    • G06F 12/0831: Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means
    • G06F 12/0833: Cache consistency protocols using a bus scheme in combination with broadcast means (e.g. for invalidation or updating)
    • G06F 12/10: Address translation
    • G06F 12/1009: Address translation using page tables, e.g. page table structures
    • G06F 12/12: Replacement control
    • G06F 12/121: Replacement control using replacement algorithms
    • G06F 2201/00: Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F 2201/86: Event-based monitoring
    • G06F 2201/865: Monitoring of software
    • G06F 2201/88: Monitoring involving counting
    • G06F 2212/00: Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/62: Details of cache specific to multiprocessor cache arrangements

Definitions

  • The present invention relates to the field of computers, and in particular to a method and an apparatus for constructing a memory access model. Background Art
  • In the prior art, a memory access model is constructed as follows: an application running on a computer system is monitored in real time; each time the application accesses memory, the memory access address of that access is acquired and stored in a designated area of memory; the memory access model of the application is then constructed from the memory access addresses stored in that designated area.
  • An appropriate optimization method for the memory system can then be proposed according to the constructed memory access model.
  • Embodiments of the present invention provide a method and an apparatus for constructing a memory access model.
  • The technical solution is as follows:
  • A method of constructing a memory access model, comprising:
  • obtaining a page table corresponding to a process that references a memory block, and clearing the Present bit included in each page table entry stored in the page table, where the page table is used to store the page table entries of the pages to be accessed by the process that references the memory block; monitoring the process that references the memory block in real time and starting timing;
  • if a page fault interrupt event is generated when a process that references the memory block accesses a page in the memory block, increasing the access count of the accessed page, where the page fault interrupt event is generated when the process that references the memory block determines that the Present bit included in the page table entry of the accessed page is cleared, the page table entry of the accessed page being obtained by the process that references the memory block from its corresponding page table;
  • constructing the memory access model of the memory block according to the access count of each page in the memory block and the timed duration, the memory access model including at least the access count and the access frequency of each page in the memory block.
  • A method of constructing a memory access model, comprising:
  • obtaining a page table corresponding to a target process, and clearing the Present bit included in each page table entry stored in the page table, where the page table is used to store the page table entries of the pages to be accessed by the target process; monitoring the target process in real time and starting timing;
  • if a page fault interrupt event is generated when the target process accesses a page to be accessed, increasing the access count of the page to be accessed, where the page fault interrupt event is generated when the target process determines that the Present bit included in the page table entry of the page to be accessed is cleared, the page table entry of the page to be accessed being obtained by the target process from its corresponding page table;
  • constructing a memory access model of the target process according to the access count of each page accessed by the target process and the timed duration, the memory access model including at least the access count and the access frequency of each page accessed by the target process.
  • An apparatus for constructing a memory access model, comprising:
  • a first obtaining module, configured to obtain a page table corresponding to a process that references the memory block, and to clear the Present bit included in each page table entry stored in the page table, where the page table is used to store the page table entries of the pages to be accessed by the process that references the memory block;
  • a first monitoring module, configured to monitor the process that references the memory block in real time and to start timing;
  • a first adding module, configured to increase the access count of an accessed page when a page fault interrupt event is generated because a process that references the memory block accesses a page in the memory block, where the page fault interrupt event is generated when the process that references the memory block determines that the Present bit included in the page table entry of the accessed page is cleared, the page table entry of the accessed page being obtained by the process that references the memory block from its corresponding page table;
  • a second obtaining module, configured to construct the memory access model of the memory block according to the access count of each page in the memory block and the timed duration, where the memory access model includes at least the access count and the access frequency of each page in the memory block.
  • An apparatus for constructing a memory access model, comprising:
  • a fifth obtaining module, configured to obtain a page table corresponding to the target process, and to clear the Present bit included in each page table entry stored in the page table, where the page table is used to store the page table entries of the pages to be accessed by the target process;
  • a second monitoring module, configured to monitor the target process in real time and to start timing;
  • a second adding module, configured to increase the access count of the page to be accessed when a page fault interrupt event is generated because the target process accesses the page to be accessed, where the page fault interrupt event is generated when the target process determines that the Present bit included in the page table entry of the page to be accessed is cleared, the page table entry of the page to be accessed being obtained by the target process from its corresponding page table;
  • a sixth obtaining module, configured to construct a memory access model of the target process according to the access count of each page accessed by the target process and the timed duration, where the memory access model includes at least the access count and the access frequency of each page accessed by the target process.
  • With the above solutions, the page table corresponding to the process that references the memory block is obtained, the Present bit included in each page table entry of the obtained page table is cleared, and the process that references the memory block is monitored in real time while timing is started; if a page fault interrupt event is generated when a process that references the memory block accesses a page of the memory block, the access count of the accessed page is increased, and the memory access model of the memory block is constructed from the access count of each page in the memory block and the timed duration.
  • Likewise, the page table corresponding to the target process is obtained, the Present bit included in each page table entry of the obtained page table is cleared, the target process is monitored in real time and timing is started; if a page fault interrupt event occurs when the target process accesses a memory page, the access count of the accessed page is increased, and the memory access model of the target process is constructed from the access count of each page accessed by the target process and the timed duration.
  • Consequently, constructing the memory access model of the memory block does not require recording the memory access addresses of every process that references the memory block, and constructing the memory access model of the target process does not require recording the memory access addresses of the target process, which reduces memory consumption and the impact on system performance, thereby avoiding system crashes.
  • FIG. 1 is a flowchart of a method for constructing a memory access model according to Embodiment 1 of the present invention;
  • FIG. 2 is a flowchart of a method for constructing a memory access model according to Embodiment 2 of the present invention;
  • FIG. 3 is a flowchart of a method for constructing a memory access model according to Embodiment 3 of the present invention.
  • FIG. 4 is a flowchart of a method for constructing a memory access model according to Embodiment 4 of the present invention.
  • FIG. 5 is a flowchart of a method for constructing a memory access model according to Embodiment 5 of the present invention.
  • FIG. 6 is a schematic diagram of an apparatus for constructing a memory access model according to Embodiment 6 of the present invention.
  • FIG. 7 is a schematic diagram of an apparatus for constructing a memory access model according to Embodiment 7 of the present invention. Detailed Description
  • Embodiment 1: An embodiment of the present invention provides a method for constructing a memory access model, including:
  • Step 101: Obtain a page table corresponding to the process that references the memory block, and clear the Present bit included in each page table entry stored in the obtained page table.
  • The page table is used to store the page table entries of the pages to be accessed by the process that references the memory block.
  • Step 102: Monitor the process that references the memory block in real time and start timing.
  • Step 103: If a page fault interrupt event is generated when a process that references the memory block accesses a page in the memory block, increase the access count of the accessed page.
  • The page fault interrupt event is generated when the process that references the memory block determines that the Present bit included in the page table entry of the accessed page is cleared; the page table entry of the accessed page is obtained by that process from its corresponding page table.
  • Step 104: Construct the memory access model of the memory block according to the access count of each page in the memory block and the timed duration, where the memory access model includes at least the access count and the access frequency of each page in the memory block.
  • In this embodiment, the page table corresponding to the process that references the memory block is obtained, the Present bit included in each page table entry of the obtained page table is cleared, and the process that references the memory block is monitored in real time while timing is started. If a page fault interrupt event is generated when a process that references the memory block accesses a page, the access count of the accessed page is increased, and the memory access model of the memory block is constructed from the access count of each page in the memory block and the timed duration. Constructing the memory access model of the memory block therefore does not require recording the memory access address of each access made by processes that reference the memory block, which reduces memory consumption and the impact on system performance, thereby avoiding system crashes.
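  • For illustration only (this sketch is not part of the patent text), the following C program simulates the counting scheme of Steps 101 to 104 on a purely software model of a page table; the names (page_table, on_page_fault, and so on) and the explicit check that stands in for the hardware page-fault path are assumptions.

```c
#include <stdio.h>

#define NPAGES 4
#define PTE_PRESENT 0x1u

/* Software model of a page table entry: frame info plus a Present bit.     */
typedef struct { unsigned long frame; unsigned flags; } pte_t;

static pte_t page_table[NPAGES];
static unsigned long access_count[NPAGES];   /* per-page access counters    */

/* Step 101: clear the Present bit of every entry in the page table.        */
static void clear_present_bits(void) {
    for (int i = 0; i < NPAGES; i++)
        page_table[i].flags &= ~PTE_PRESENT;
}

/* Stand-in for the page fault interrupt: called when a cleared Present
 * bit is seen.  Step 103 increases the access count of the faulting page.  */
static void on_page_fault(int page) {
    access_count[page]++;
    page_table[page].flags |= PTE_PRESENT;   /* let the access complete     */
}

/* A simulated memory access by a process that references the block.        */
static void access_page(int page) {
    if (!(page_table[page].flags & PTE_PRESENT))
        on_page_fault(page);
    /* ... the page is then read or written as usual ...                    */
    page_table[page].flags &= ~PTE_PRESENT;  /* keep later accesses visible */
}

int main(void) {
    clear_present_bits();                    /* Step 101                     */
    double elapsed_s = 2.0;                  /* Step 102: timed duration (assumed) */

    access_page(1); access_page(1); access_page(3);   /* monitored accesses  */

    /* Step 104: model = access count and frequency of every page.          */
    for (int i = 0; i < NPAGES; i++)
        printf("page %d: count=%lu freq=%.2f/s\n",
               i, access_count[i], access_count[i] / elapsed_s);
    return 0;
}
```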
  • Embodiment 2: An embodiment of the present invention provides a method for constructing a memory access model, including:
  • Step 201: Divide the memory of the node into multiple memory segments.
  • The memory of the node may be divided into multiple memory segments according to a preset partitioning strategy.
  • The preset partitioning strategy may be, for example, to divide the memory of the node into multiple memory segments of equal size; see the sketch below.
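  • As a hedged illustration of one such equal-size partitioning strategy, the C sketch below splits an assumed node memory range into four segments; the base address, total size, and segment count are arbitrary example values.

```c
#include <stdio.h>

/* One possible equal-size partitioning of a node's memory range.           */
typedef struct { unsigned long long start, size; } mem_segment_t;

static void split_equally(unsigned long long base, unsigned long long total,
                          mem_segment_t *seg, int nseg) {
    unsigned long long part = total / nseg;          /* equal-size segments */
    for (int i = 0; i < nseg; i++) {
        seg[i].start = base + (unsigned long long)i * part;
        seg[i].size  = (i == nseg - 1) ? total - (unsigned long long)i * part : part;
    }
}

int main(void) {
    mem_segment_t seg[4];
    split_equally(0x100000000ULL, 64ULL << 30, seg, 4);  /* 64 GiB node memory */
    for (int i = 0; i < 4; i++)
        printf("segment %d: start=%#llx size=%llu GiB\n",
               i, seg[i].start, seg[i].size >> 30);
    return 0;
}
```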
  • The computer system includes a plurality of nodes, and each node includes at least a CPU (Central Processing Unit) and a memory.
  • When a device such as the CPU of a node accesses the memory of that same node, the access is called accessing local memory; when it accesses the memory of a node other than its own node, the access is called a remote node accessing the memory.
  • Step 202: For any memory segment, the interconnect chip corresponding to the node monitors the memory segment in real time and starts timing; if a remote node (a node other than this node) is detected accessing the memory segment, the count of accesses to the memory segment by remote nodes is increased.
  • Each node has a corresponding interconnect chip. For any node, a remote node that needs to access the memory of that node must do so through the node's interconnect chip; therefore, in this embodiment, the interconnect chip corresponding to the node can be used to monitor whether a remote node accesses a memory segment of the node.
  • The interconnect chip includes a plurality of timers and a plurality of counters.
  • A corresponding timer and counter may be selected for the memory segment from the timers and counters included in the interconnect chip.
  • The timer is used for timing.
  • The counter is used to count the number of times the memory segment is accessed by remote nodes; that is, if the interconnect chip detects that a remote node accesses the memory segment, the count of remote accesses to the memory segment is increased by incrementing the counter.
  • The initial value of the counter may be 0, and each detected remote access may increase the count by 1, or the like; the specific values are not limited in this embodiment.
  • Step 203: Calculate the frequency at which the memory segment is accessed by remote nodes according to the count of remote accesses to the memory segment and the timed duration.
  • The access frequency may be calculated periodically. Specifically, the timed duration is read from the timer corresponding to the memory segment, the count of remote accesses is read from the counter corresponding to the memory segment, and the count is divided by the timed duration to obtain the frequency at which the memory segment is accessed by remote nodes.
  • A corresponding timer and counter may be selected from the interconnect chip corresponding to the node for each memory segment included in the node, so that every memory segment included in the node can be monitored in real time by the interconnect chip.
  • In this way the count of remote accesses to each memory segment is obtained, and the frequency at which each memory segment is accessed by remote nodes can then be calculated periodically, as sketched below.
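  • The following C sketch models, in software and purely for illustration, the counter/timer pair that the interconnect chip would dedicate to one memory segment in Steps 202 and 203; the structure and function names are assumptions, not the chip's actual interface.

```c
#include <stdio.h>

/* Software stand-in for one segment's counter/timer pair (field names assumed). */
typedef struct {
    unsigned long remote_accesses;  /* counter: accesses by remote nodes    */
    double        elapsed_s;        /* timer: time since monitoring began   */
} segment_monitor_t;

/* Called whenever a remote node is seen touching the segment (Step 202).   */
static void on_remote_access(segment_monitor_t *m) { m->remote_accesses++; }

/* Step 203: frequency = remote access count divided by the timed duration. */
static double remote_access_frequency(const segment_monitor_t *m) {
    return m->elapsed_s > 0.0 ? m->remote_accesses / m->elapsed_s : 0.0;
}

int main(void) {
    segment_monitor_t m = { 0, 0.0 };
    for (int i = 0; i < 120; i++) on_remote_access(&m);  /* observed accesses */
    m.elapsed_s = 3.0;                                    /* timed duration    */
    printf("remote accesses: %lu, frequency: %.1f/s\n",
           m.remote_accesses, remote_access_frequency(&m));
    return 0;
}
```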
  • Step 204: Determine whether the count of remote accesses to the memory segment exceeds a first threshold and whether the frequency of remote accesses to the memory segment exceeds a second threshold; if the count exceeds the first threshold and the frequency exceeds the second threshold, perform Step 205.
  • Otherwise, continue to monitor in real time whether remote nodes access the memory segment.
  • Step 205: Divide the memory segment into multiple memory blocks.
  • The memory segment may be divided into multiple memory blocks according to a preset partitioning strategy.
  • Step 206: For any one of the memory blocks, monitor the memory block in real time through the interconnect chip and start timing; if a remote node is detected accessing the memory block, increase the count of remote accesses to the memory block.
  • A corresponding counter and timer may be selected for each memory block from the remaining counters and timers of the interconnect chip; each counter counts the remote accesses to its corresponding memory block. That is, for any memory block, if a remote node is detected accessing the memory block, the count of remote accesses to the memory block is increased by incrementing the counter corresponding to the memory block.
  • Step 207: Calculate the frequency at which the memory block is accessed by remote nodes according to the count of remote accesses to the memory block and the timed duration.
  • The access frequency may be calculated periodically; specifically, the count of remote accesses to the memory block is divided by the timed duration to obtain the frequency at which the memory block is accessed by remote nodes.
  • The frequency at which each memory block is accessed by remote nodes can be calculated according to Steps 206 and 207 above.
  • Step 208: Determine whether the count of remote accesses to the memory block exceeds a third threshold and whether the frequency of remote accesses to the memory block exceeds a fourth threshold; if the count exceeds the third threshold and the frequency exceeds the fourth threshold, perform Step 209.
  • If the count of remote accesses to the memory block does not exceed the third threshold, or the frequency of remote accesses to the memory block does not exceed the fourth threshold, continue to monitor in real time whether remote nodes access the memory block.
  • The third threshold may be equal to, greater than, or less than the first threshold.
  • The fourth threshold may be equal to, greater than, or less than the second threshold.
  • A corresponding memory access model may then be established for such a memory block.
  • Step 209: Obtain, by reverse mapping, all processes that reference the memory block, and obtain the page table corresponding to each process; the page table corresponding to each process is used to store the page table entries of the pages to be accessed by that process, and the page table entry of a page includes at least the page table entry information of the page and the Present bit of the page.
  • The minimum unit of memory is a page, and each memory block includes one or more pages. Each process that accesses memory corresponds to a page table, and the page table stores the page table entries of the pages to be accessed by the process. Each page table entry includes a Present bit and page table entry information: if a page is valid, the Present bit included in the page table entry of the page is set and the process can access the page; if the page is invalid, the Present bit included in the page table entry of the page is cleared and the process cannot directly access the page.
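  • As an illustrative aside, the sketch below shows one assumed page table entry layout with a Present bit in bit 0 and the remaining bits treated as page table entry information (loosely following x86-style entries); real entry formats are architecture specific.

```c
#include <stdbool.h>
#include <stdio.h>

/* Assumed page table entry layout: bit 0 is the Present bit, the rest is
 * taken as the page table entry information (frame address, 4 KiB pages).  */
#define PTE_PRESENT     0x1ull
#define PTE_FRAME_MASK  (~0xFFFull)

typedef unsigned long long pte_t;

static bool pte_present(pte_t e)             { return (e & PTE_PRESENT) != 0; }
static unsigned long long pte_frame(pte_t e) { return e & PTE_FRAME_MASK; }

int main(void) {
    pte_t valid   = 0x7f3000ull | PTE_PRESENT;   /* valid page: Present set    */
    pte_t cleared = valid & ~PTE_PRESENT;        /* cleared: access will fault */

    printf("valid entry  -> present=%d frame=%#llx\n",
           pte_present(valid), pte_frame(valid));
    printf("cleared entry-> present=%d (access raises a page fault)\n",
           pte_present(cleared));
    return 0;
}
```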
  • Step 210: Set aside a memory area in the memory of the node, and store the Present bit included in each page table entry of the obtained page tables in the memory area.
  • Specifically, a memory area is set aside in the memory of the node. For any obtained page table, the process number of the process corresponding to the page table is obtained; from the sequence number of each page table entry within the page table, the obtained process number, and the start address of the memory area, the memory address at which the Present bit included in each page table entry is to be stored in the memory area is calculated by a preset calculation model, and the Present bit included in each page table entry of the page table is then stored into the memory area at the calculated address.
  • The Present bits included in the page table entries of every other obtained page table are stored in the memory area in the same manner.
  • The preset calculation model may be as shown in formula (1), where Memoryaddress is the memory address, Startaddress is the start address of the memory area, ProcessID is the process number, element is a coefficient, and number is the sequence number of the page table entry within the page table.
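  • Because formula (1) is not reproduced in this text, the sketch below only assumes a simple linear mapping built from the variables the description names (Startaddress, ProcessID, a coefficient, and the entry's sequence number) to show how each saved Present bit could be given a unique slot in the memory area; it is not the patent's formula.

```c
#include <stdio.h>

/* Assumed shape of the calculation model: a linear mapping from the process
 * number and the entry's sequence number to a slot in the memory area.     */
static unsigned char *present_bit_slot(unsigned char *start_address,
                                       unsigned long  process_id,
                                       unsigned long  coefficient,
                                       unsigned long  entry_number) {
    return start_address + process_id * coefficient + entry_number;
}

int main(void) {
    static unsigned char area[1 << 16];      /* the set-aside memory area    */
    /* Save the Present bit (here: 1) of entry #42 of process 7.             */
    *present_bit_slot(area, 7, 4096, 42) = 1;
    printf("saved Present bit: %d\n", *present_bit_slot(area, 7, 4096, 42));
    return 0;
}
```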
  • Step 211: Clear the Present bit included in each page table entry of the obtained page tables.
  • When a process needs to access a page of memory, it first obtains the page table entry of the page to be accessed from its corresponding page table and checks the Present bit included in the obtained page table entry. If the Present bit included in the obtained page table entry is set, the process obtains the start address of the page to be accessed from the page table entry information included in the page table entry and locates the page in memory according to that start address; such an access to a page in memory cannot be monitored directly, but a page fault interrupt event generated by the process can be detected. If the Present bit included in the page table entry that the process obtains from its corresponding page table is cleared, the process generates a page fault interrupt event; this event can be detected, and it can then be determined that the process is accessing the page in memory. Therefore, in this embodiment, the Present bits included in the page table entries of every page in the page table corresponding to each process that references the memory block are cleared, so that whenever a process that references the memory block accesses a page of the memory block a page fault interrupt event is generated; by detecting this event, the access to the page of the memory block is determined.
  • Step 212: Monitor the processes that reference the memory block in real time and start timing; if a page fault interrupt event is generated when a process that references the memory block accesses a page in the memory block, increase the access count of the accessed page.
  • Specifically, the processes that reference the memory block are monitored in real time and timing is started. If a page fault interrupt event generated when a process that references the memory block accesses a page in the memory block is detected, the event having been generated when the process determined that the Present bit included in the page table entry of the accessed page is cleared, then the page table entry of the accessed page is obtained by the process from its corresponding page table, the start address of the accessed page is obtained from the page table entry information included in that page table entry, the accessed page is determined from the start address, and the access count of the accessed page is increased.
  • A corresponding timer may be set for the memory block, and a corresponding counter may be set for each page included in the memory block; the counter corresponding to a page counts the accesses to that page, that is, the access count of a page is increased by incrementing the counter corresponding to the page. Since the number of pages included in the memory block may be large, the counters corresponding to the pages of the memory block may be implemented in software.
  • In addition, the following steps (1) to (3) may be performed (see the sketch after step (3)):
  • (1) The memory address is calculated by the preset calculation model from the process number of the process, the sequence number within the page table of the page table entry of the accessed page, and the start address of the memory area; the saved Present bit of the accessed page is read from the corresponding location in the memory area according to the calculated memory address and checked. If the saved Present bit is set, the page the process is accessing is determined to be valid; if it is cleared, the page is determined to be invalid.
  • (2) If the page is invalid, the page fault exception handler is triggered, and the triggered page fault exception handler performs the exception processing.
  • (3) If the page is valid, the page table entry of the accessed page is obtained from the corresponding page table, the Present bit included in the obtained page table entry is set, and the start address of the accessed page is then obtained from the page table entry information included in the page table entry; the corresponding page is found in the memory block according to the obtained start address, and data is read from and written to the found page, so that the process completes its access to the page.
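  • For illustration, the C sketch below combines the counting of Step 212 with the decision of steps (1) to (3): the access is counted, the saved Present bit is consulted, and the fault is either passed to the ordinary exception handler or resolved by setting the Present bit. The stub functions are stand-ins under stated assumptions, not real kernel interfaces.

```c
#include <stdbool.h>
#include <stdio.h>

#define PTE_PRESENT 0x1ull
typedef unsigned long long pte_t;

/* Stand-in for the lookup of the saved Present bit in the memory area via
 * the preset calculation model (here: every page was valid before Step 211). */
static bool saved_present_bit(unsigned long pid, unsigned long entry_no) {
    (void)pid; (void)entry_no;
    return true;
}

/* Stand-in for the system's ordinary page fault exception handler.          */
static void real_page_fault_handler(unsigned long pid, unsigned long entry_no) {
    printf("pid %lu: entry %lu is genuinely invalid, normal handling\n",
           pid, entry_no);
}

/* Sketch of Step 212 plus steps (1)-(3): count the access, then either defer
 * to the real handler or set the Present bit so the access can complete.    */
static void monitored_page_fault(unsigned long pid, unsigned long entry_no,
                                 pte_t *pte, unsigned long *access_count) {
    access_count[entry_no]++;                    /* Step 212: count the access    */
    if (!saved_present_bit(pid, entry_no)) {     /* (1) is the page really valid? */
        real_page_fault_handler(pid, entry_no);  /* (2) invalid: exception path   */
        return;
    }
    *pte |= PTE_PRESENT;                         /* (3) valid: set Present so the
                                                    process can finish its access */
}

int main(void) {
    pte_t pte = 0x7f3000ull;                     /* Present cleared by Step 211   */
    unsigned long counts[8] = {0};
    monitored_page_fault(7, 3, &pte, counts);
    printf("entry 3: count=%lu present=%llu\n", counts[3], pte & PTE_PRESENT);
    return 0;
}
```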
  • Step 213: Construct the memory access model of the memory block according to the access count of each page in the memory block and the timed duration.
  • The memory access model includes at least the access frequency and the access count of each page in the memory block.
  • The memory access model may be constructed periodically. Specifically, the access count of each page in the memory block is divided by the timed duration to obtain the access frequency of each page included in the memory block, so that the memory access model of the memory block includes at least the access frequency and the access count of each page in the memory block.
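  • A minimal sketch of Step 213, assuming the per-page counters and the timed duration are already available; the model structure shown is illustrative.

```c
#include <stdio.h>

#define NPAGES 3

/* One record of the memory access model: at least the access count and the
 * access frequency of a page of the memory block (structure is illustrative). */
typedef struct {
    unsigned long access_count;
    double        access_frequency;   /* accesses per second */
} page_model_t;

/* Step 213: divide each page's access count by the timed duration.          */
static void build_block_model(const unsigned long *counts, double elapsed_s,
                              page_model_t *model, int npages) {
    for (int i = 0; i < npages; i++) {
        model[i].access_count     = counts[i];
        model[i].access_frequency = elapsed_s > 0.0 ? counts[i] / elapsed_s : 0.0;
    }
}

int main(void) {
    unsigned long counts[NPAGES] = { 40, 0, 12 };   /* from the page counters   */
    page_model_t model[NPAGES];
    build_block_model(counts, 4.0, model, NPAGES);  /* 4 s of timed monitoring  */
    for (int i = 0; i < NPAGES; i++)
        printf("page %d: count=%lu freq=%.1f/s\n",
               i, model[i].access_count, model[i].access_frequency);
    return 0;
}
```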
  • In this embodiment, a memory block whose count of remote accesses exceeds the third threshold and whose frequency of remote accesses exceeds the fourth threshold is identified, the page tables corresponding to the processes that reference the memory block are obtained, the Present bit of each page table entry in the obtained page tables is cleared, and the processes that reference the memory block are monitored in real time while timing is started, so that the memory access model of the memory block can be constructed as described above without recording memory access addresses.
  • Embodiment 3: An embodiment of the present invention provides a method for constructing a memory access model, including:
  • Step 301: Divide the memory of the node into multiple memory blocks according to a preset partitioning strategy.
  • Step 302: For any memory block, obtain, by reverse mapping, all processes that reference the memory block, and obtain the page table corresponding to each process.
  • The page table corresponding to each process is used to store the page table entries of the pages to be accessed by that process, and the page table entry of a page includes at least the page table entry information of the page and the Present bit of the page.
  • The minimum unit of memory is a page, each memory block includes one or more pages, and each process that accesses memory corresponds to a page table; the page table stores the page table entries of the pages to be accessed by the process. Each page table entry includes a Present bit and page table entry information: if a page is valid, the Present bit included in the page table entry of the page is set and the process can access the page; if a page is invalid, the Present bit included in the page table entry of the page is cleared and the process cannot directly access the page.
  • Step 303: Set aside a memory area in the memory of the node, and store the Present bit included in each page table entry of the obtained page tables in the memory area.
  • Specifically, a memory area is set aside in the memory of the node. For any obtained page table, the process number of the corresponding process is obtained; from the sequence number of each page table entry within the page table, the process number, and the start address of the memory area, the memory address at which the Present bit included in each page table entry is to be stored in the memory area is calculated by the preset calculation model, and the Present bit included in each page table entry of the page table is stored at the corresponding location in the memory area.
  • The Present bits included in the page table entries of every other obtained page table are stored in the memory area in the same manner.
  • Step 304: Clear the Present bit included in the page table entry of each page in the obtained page tables.
  • When a process needs to access a page of memory, it first obtains the page table entry of the page to be accessed from its corresponding page table and checks the Present bit included in the obtained page table entry. If the Present bit included in the obtained page table entry is set, the process obtains the start address of the page to be accessed from the page table entry information included in the page table entry, finds the page in memory according to that start address, and reads and writes data in the found page to complete the access; if the Present bit included in the obtained page table entry is cleared, the process generates a page fault interrupt event.
  • In this embodiment, the Present bits included in the page table entries of every page are cleared, so that when a process that references the memory block accesses a page of the memory block a page fault interrupt event is generated; by detecting the page fault interrupt event generated by the process, the access to the page of the memory block is determined.
  • Step 305: Monitor the processes that reference the memory block in real time and start timing; if a page fault interrupt event generated when a process that references the memory block accesses a page in the memory block is detected, increase the access count of the accessed page.
  • Specifically, the processes that reference the memory block are monitored in real time and timing is started. If such a page fault interrupt event is detected, the event having been generated when the process determined that the Present bit included in the page table entry of the accessed page is cleared, then the page table entry of the accessed page is obtained by the process from its corresponding page table, the start address of the accessed page is obtained from the page table entry information included in that page table entry, the corresponding page is determined from the obtained start address, and the access count of the determined page is increased.
  • A corresponding timer may be set for the memory block, and a corresponding counter may be set for each page included in the memory block; the counter corresponding to a page counts the accesses to that page, that is, the access count of a page is increased by incrementing the counter corresponding to the page. Since the number of pages included in the memory block may be large, the counters corresponding to the pages of the memory block may be implemented in software.
  • In addition, the following may be performed: the memory address is calculated by the preset calculation model from the process number of the process, the sequence number within the page table of the page table entry of the accessed page, and the start address of the memory area; the saved Present bit of the accessed page is read from the corresponding location in the memory area according to the calculated memory address and checked. If the saved Present bit is set, the page the process accesses is determined to be valid; if it is cleared, the page is determined to be invalid. If the page is invalid, the page fault exception handler is triggered and performs the exception processing. If the page is valid, the page table entry of the accessed page is obtained from the corresponding page table, the Present bit included in the obtained page table entry is set, the start address of the page is obtained from the page table entry information included in the page table entry, the corresponding page is found in the memory block according to the obtained start address, and data is read from and written to the found page, so that the process completes its access to the page.
  • Step 306: Construct the memory access model of the memory block according to the access count of each page in the memory block and the timed duration.
  • The memory access model includes at least the access frequency and the access count of each page in the memory block.
  • Specifically, the access count of each page in the memory block is divided by the timed duration to obtain the access frequency of each page included in the memory block, so that the memory access model of the memory block includes at least the access frequency and the access count of each page in the memory block.
  • In this embodiment, the page tables corresponding to the processes that reference the memory block are obtained, the Present bit of each page table entry in the obtained page tables is cleared, and the processes that reference the memory block are monitored in real time while timing is started. If a page fault interrupt event generated when a process accesses a page of the memory block is detected, the access count of the accessed page is increased, and the memory access model of the memory block is constructed from the access count of each page in the memory block and the timed duration. Constructing the memory access model therefore does not require recording the memory access addresses of each process that references the memory block, which reduces memory consumption and the impact on system performance, thereby avoiding system crashes.
  • Embodiment 4: An embodiment of the present invention provides a method for constructing a memory access model, including:
  • Step 401: Obtain the page table corresponding to the target process, and clear the Present bit included in each page table entry stored in the page table, where the page table is used to store the page table entries of the pages to be accessed by the target process.
  • Step 402: Monitor the target process in real time and start timing.
  • Step 403: If a page fault interrupt event is generated when the target process accesses a page to be accessed, increase the access count of the page to be accessed.
  • The page fault interrupt event is generated when the target process determines that the Present bit included in the page table entry of the page to be accessed is cleared; the page table entry of the page to be accessed is obtained by the target process from its corresponding page table.
  • Step 404: Construct the memory access model of the target process according to the access count of each page accessed by the target process and the timed duration.
  • The memory access model includes at least the access count and the access frequency of each page accessed by the target process.
  • In this embodiment, the page table corresponding to the target process is obtained, the Present bit included in each page table entry of the page table is cleared, and the target process is monitored in real time while timing is started. If a page fault interrupt event is generated when the target process accesses a page to be accessed, the access count of the page to be accessed is increased, and the memory access model of the target process is constructed from the access count of each page accessed by the target process and the timed duration. Constructing the memory access model of the target process therefore does not require recording the memory access addresses of the target process, which reduces memory consumption and the impact on system performance, thereby avoiding system crashes.
  • Embodiment 5: An embodiment of the present invention provides a method for constructing a memory access model, including:
  • Step 501: When the target process in the node is scheduled onto the processor of the node, load the count of local memory accesses and the count of remote node memory accesses of the target process into the statistics count registers of the processor.
  • The context information of the target process includes the number of times the target process has accessed local memory and the number of times it has accessed remote node memory. Specifically, when the target process in the node is scheduled onto the processor of the node, these two counts are extracted from the context information of the target process and loaded into the statistics count registers of the processor.
  • The processor includes a plurality of counters. Specifically, two counters, a first counter and a second counter, are selected from the counters included in the processor; the initial value of the first counter is set to the number of times the target process has accessed local memory, and the initial value of the second counter is set to the number of times the target process has accessed remote node memory.
  • The computer system includes a plurality of nodes, and each node includes at least a processor and a memory.
  • The processor of the node runs the target process.
  • When the target process accesses the memory of its own node, the target process accesses local memory; when the target process accesses the memory of another node in the computer system, the target process accesses remote node memory.
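  • The sketch below illustrates, under assumed names, the two per-process counters of Steps 501 and 502: each memory access of the target process is classified as local or remote by the node that owns the accessed address (the address-to-node mapping here is only a placeholder).

```c
#include <stdio.h>

/* Minimal sketch of the first/second counter of Steps 501 and 502.         */
typedef struct {
    unsigned long local_accesses;    /* first counter: local memory          */
    unsigned long remote_accesses;   /* second counter: remote node memory   */
} process_access_counters_t;

/* Assumed address-to-node mapping: even GiB regions on node 0, odd on node 1. */
static int node_of_address(unsigned long addr) { return (int)((addr >> 30) & 1); }

static void count_access(process_access_counters_t *c, int home_node,
                         unsigned long addr) {
    if (node_of_address(addr) == home_node)
        c->local_accesses++;         /* target process accessed local memory  */
    else
        c->remote_accesses++;        /* target process accessed a remote node */
}

int main(void) {
    process_access_counters_t c = { 0, 0 };
    unsigned long sample[] = { 0x00001000UL, 0x40002000UL, 0x40003000UL };
    for (int i = 0; i < 3; i++) count_access(&c, 0, sample[i]);
    printf("local=%lu remote=%lu\n", c.local_accesses, c.remote_accesses);
    return 0;
}
```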
  • Step 502: Monitor the target process in real time through the processor; if the target process accesses local memory, increase the count of local memory accesses of the target process; if the target process accesses remote node memory, increase the count of remote node memory accesses of the target process.
  • Specifically, the count of local memory accesses of the target process is increased by incrementing the first counter, and the count of remote node memory accesses of the target process is increased by incrementing the second counter.
  • When the target process runs, accessing local memory triggers a local memory access event and accessing remote node memory triggers a remote node memory access event; the processor can therefore monitor in real time whether the target process accesses local memory or remote node memory.
  • The processor allocates running time to the target process in each time slice, and in each time slice the processor runs the target process within the running time allocated to it.
  • When the running time allocated to the target process in a time slice ends, the counts of local memory accesses and remote node memory accesses included in the context information of the target process may be updated to the increased counts.
  • Step 503: When a time slice ends, obtain the actual running time since the target process was scheduled. Specifically, the time slices in which the target process was scheduled onto the processor are obtained, and the running time of the target process in each obtained time slice is accumulated to obtain the actual running time of the target process.
  • Step 504: According to the increased counts of local memory accesses and remote node memory accesses of the target process, and the counts of local memory accesses and remote node memory accesses stored in the statistics count registers, obtain the number of times the target process accessed local memory and remote node memory during its actual running time.
  • Specifically, the count of local memory accesses of the target process is read from the first counter and the count of remote node memory accesses is read from the second counter; the number of local memory accesses during the actual running time is calculated from the read count and the count of local memory accesses stored in the statistics count register, and the number of remote node memory accesses during the actual running time is calculated from the read count and the count of remote node memory accesses stored in the statistics count register.
  • Step 505: Calculate the remote access ratio and the access frequency of the target process according to the number of local memory accesses and remote node memory accesses during the actual running time and the actual running time.
  • Specifically, the ratio of the number of local memory accesses of the target process during the actual running time to the number of remote node memory accesses is calculated, and the calculated ratio is used as the remote access ratio of the target process.
  • The number of local memory accesses and the number of remote node memory accesses during the actual running time are summed to obtain the total access count of the target process, and the access frequency of the target process is calculated from the total access count and the actual running time.
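  • A small sketch of the arithmetic in Step 505, with the ratio taken in the direction the description states (local accesses over remote accesses) and the access frequency taken as total accesses divided by the actual running time; the input values are illustrative.

```c
#include <stdio.h>

/* Step 505 as sketched here: the ratio between the two access counts and the
 * overall access frequency over the actual running time.                    */
static void access_ratio_and_frequency(unsigned long local, unsigned long remote,
                                       double runtime_s,
                                       double *ratio, double *frequency) {
    *ratio     = remote > 0 ? (double)local / (double)remote : 0.0;
    *frequency = runtime_s > 0.0 ? (local + remote) / runtime_s : 0.0;
}

int main(void) {
    double ratio, freq;
    /* counts accumulated during the actual running time (illustrative)      */
    access_ratio_and_frequency(300, 900, 1.5, &ratio, &freq);
    printf("ratio=%.2f frequency=%.1f accesses/s\n", ratio, freq);
    /* Step 506 would compare these against the fifth and sixth thresholds
     * before moving on to Step 507.                                         */
    return 0;
}
```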
  • Step 506: Check the remote access ratio and the access frequency of the target process. If the remote access ratio of the target process exceeds the fifth threshold and the access frequency of the target process exceeds the preset sixth threshold, perform Step 507; if the remote access ratio of the target process does not exceed the fifth threshold, or the access frequency of the target process does not exceed the sixth threshold, continue to perform Step 503.
  • Step 507: Obtain the page table corresponding to the target process, where the page table corresponding to the target process is used to store the page table entries of the pages to be accessed by the target process, and the page table entry of a page includes at least the page table entry information of the page and the Present bit of the page.
  • The minimum unit of memory is a page, and each process corresponds to a page table.
  • The page table stores the page table entries of the pages to be accessed by the process. Each page in memory corresponds to a Present bit: if a page is valid, the Present bit corresponding to the page is set and the process can access the page; if a page is invalid, the Present bit corresponding to the page is cleared and the process cannot directly access the page.
  • Step 508: Set aside a memory area in the memory of the node, and store the Present bit included in the page table entry of each page stored in the obtained page table in the memory area.
  • Specifically, a memory area is set aside in the memory of the node and the process number of the target process is obtained; from the sequence number of each page table entry within the page table, the obtained process number, and the start address of the memory area, the memory address at which the Present bit included in each page table entry is to be stored in the memory area is calculated by the preset calculation model, and the Present bit included in each page table entry of the page table is stored at the calculated address in the memory area.
  • Step 509: Clear the Present bit included in each page table entry of the obtained page table.
  • When the target process accesses a page of memory, it first obtains the page table entry of the page to be accessed from its corresponding page table and checks the Present bit included in the obtained page table entry. Because the Present bit of every page stored in the page table corresponding to the target process has been cleared, the target process determines that the Present bit included in the obtained page table entry is cleared and then generates a page fault interrupt event.
  • Step 510: Monitor the target process in real time and start timing; if a page fault interrupt event is generated when the target process accesses a page to be accessed, increase the access count of the page to be accessed.
  • Specifically, the target process is monitored in real time and timing is started. If a page fault interrupt event generated when the target process accesses a page to be accessed in memory is detected, the event having been generated when the target process determined that the Present bit included in the page table entry of the page to be accessed is cleared, the page table entry of the page to be accessed being obtained by the target process from its corresponding page table, then the start address of the page to be accessed is obtained from the page table entry information of the page to be accessed, the page to be accessed is determined from the obtained start address, and the access count of the page to be accessed is increased.
  • In addition, the following steps (A) to (C) may be performed:
  • (A) The memory address is calculated by the preset calculation model from the process number of the target process, the sequence number within the page table of the page table entry of the page to be accessed, and the start address of the memory area; the saved Present bit of the page to be accessed is read from the memory area according to the calculated memory address and checked. If the saved Present bit is set, the page to be accessed is determined to be valid; if it is cleared, the page to be accessed is determined to be invalid.
  • (B) If the page is invalid, the page fault exception handler is triggered, and the exception processing is performed by the triggered page fault exception handler.
  • (C) If the page is valid, the page table entry of the page to be accessed is obtained from the page table corresponding to the target process, the Present bit included in the obtained page table entry is set, the start address of the page to be accessed is obtained from the page table entry information of the page to be accessed included in the obtained page table entry, and the corresponding page is found in the memory of the node according to the obtained start address, so that the access is completed.
  • Step 511: Construct the memory access model of the target process according to the access count of each page accessed by the target process and the timed duration, where the memory access model includes at least the access count and the access frequency of each page accessed by the target process.
  • The memory access model may be constructed periodically. Specifically, the access count of each page accessed by the target process is divided by the timed duration to obtain the access frequency of each accessed page, so that the memory access model of the target process includes at least the access count and the access frequency of each page accessed by the target process.
  • The memory access model may also include the access ratio of the target process calculated in Step 505.
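  • The structure below is only one assumed shape for the memory access model of the target process described in Step 511: per-page access count and frequency plus the access ratio from Step 505; all values are illustrative.

```c
#include <stdio.h>

/* Illustrative shape of the target process's memory access model.          */
typedef struct { unsigned long page_no, count; double frequency; } page_stat_t;

typedef struct {
    page_stat_t pages[2];        /* at least count and frequency per page    */
    double      access_ratio;    /* ratio between local and remote accesses  */
} target_process_model_t;

int main(void) {
    double elapsed_s = 2.0;                       /* timed monitoring period */
    target_process_model_t m = {
        { { 11, 8, 0.0 }, { 12, 2, 0.0 } },       /* counted page accesses    */
        3.0                                        /* from Step 505           */
    };
    for (int i = 0; i < 2; i++)
        m.pages[i].frequency = m.pages[i].count / elapsed_s;  /* count / time */
    for (int i = 0; i < 2; i++)
        printf("page %lu: count=%lu freq=%.1f/s\n",
               m.pages[i].page_no, m.pages[i].count, m.pages[i].frequency);
    printf("access ratio: %.1f\n", m.access_ratio);
    return 0;
}
```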
  • In this embodiment, the page table corresponding to the target process is obtained and the Present bit of each page table entry in the page table is cleared; the target process is monitored in real time and timing is started. If a page fault interrupt event generated when the target process accesses a page to be accessed is detected, the access count of the page to be accessed is increased, and the memory access model of the target process is constructed from the access count of each page accessed by the target process and the timed duration. Constructing the memory access model of the target process therefore does not require recording the memory access addresses of the target process, which reduces memory consumption and the impact on system performance, thereby avoiding system crashes.
  • Embodiment 6: An embodiment of the present invention provides an apparatus for constructing a memory access model, including:
  • a first obtaining module 601, configured to obtain the page table corresponding to a process that references the memory block and to clear the Present bit included in each page table entry stored in the obtained page table, where the page table is used to store the page table entries of the pages to be accessed by the process that references the memory block;
  • a first monitoring module 602, configured to monitor the process that references the memory block in real time and to start timing;
  • a first adding module 603, configured to increase the access count of an accessed page if a page fault interrupt event is generated when a process that references the memory block accesses a page in the memory block, where the page fault interrupt event is generated when the process that references the memory block determines that the Present bit included in the page table entry of the accessed page is cleared, the page table entry of the accessed page being obtained by the process that references the memory block from its corresponding page table;
  • a second obtaining module 604, configured to construct the memory access model of the memory block according to the access count of each page in the memory block and the timed duration, where the memory access model includes at least the access count and the access frequency of each page in the memory block.
  • the first obtaining module 601 includes:
  • a first obtaining unit, configured to obtain, by means of reverse mapping, the process that references the memory block, and further obtain the page table corresponding to the process that references the memory block;
  • a first storage unit, configured to allocate a memory area in the memory of the node and store, in the memory area, the Present bit included in each page table entry in the obtained page table;
  • a first clearing unit, configured to clear the Present bit included in each page table entry in the obtained page table.
  • the first storage unit includes:
  • a first calculating subunit, configured to allocate a memory area in the memory of the node, and calculate, by using a preset calculation model, the storage address in the memory area of the Present bit included in each page table entry in the obtained page table, according to the process number of the process that references the memory block, the sequence number of each page table entry in the obtained page table, and the start address of the memory area;
  • a first storage subunit, configured to store, in the memory area, the Present bit included in each page table entry in the obtained page table according to the storage address of the Present bit included in each page table entry in the obtained page table.
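A sketch of the "preset calculation model" used to place each saved Present bit in the reserved memory area. The linear form below follows formula (1) given later in the description (start address + process number × element + entry sequence number); the value of the per-process stride `element` is left open by the text and is an assumption here.

```c
#include <stdint.h>

/* Computes the slot address for one saved Present bit in the reserved area. */
static uint64_t present_slot_address(uint64_t area_start,
                                     uint64_t process_id,
                                     uint64_t element,      /* per-process stride (assumed) */
                                     uint64_t entry_index)  /* sequence number of the PTE   */
{
    return area_start + process_id * element + entry_index;
}
```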
  • the second obtaining module 604 is specifically configured to calculate the access frequency of each page in the memory block according to the number of accesses to each page in the memory block and the timed duration, to obtain the memory access model, which includes at least the number of accesses and the access frequency of each page in the memory block.
  • the device further includes:
  • a third obtaining module, configured to divide the memory of the node into multiple memory segments and obtain the number of accesses and the access frequency with which each memory segment is accessed by a remote node, where a remote node is a node in the computer system other than this node;
  • a first dividing module, configured to divide a memory segment into multiple memory blocks if the number of accesses to the memory segment by the remote node exceeds a first threshold and the access frequency with which the memory segment is accessed by the remote node exceeds a second threshold;
  • a fourth obtaining module, configured to obtain the number of accesses and the access frequency with which each memory block is accessed by the remote node; if there is a memory block whose number of accesses by the remote node exceeds a third threshold and whose access frequency of access by the remote node exceeds a fourth threshold, the operation of obtaining the page table corresponding to the process that references the memory block is performed.
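The two-level filtering described by these modules can be sketched as two threshold tests: a segment that is hot by both remote access count and frequency is split into blocks, and only a block that is again hot by its own thresholds triggers the page-table instrumentation. The structure fields and threshold parameters are illustrative assumptions, not values from the patent.

```c
#include <stdbool.h>

struct region_stats {
    unsigned long remote_accesses;    /* accesses by remote nodes          */
    double        remote_frequency;   /* remote accesses per second        */
};

/* Segment-level test against the first and second thresholds. */
static bool hot_segment(const struct region_stats *s,
                        unsigned long t1, double t2)
{
    return s->remote_accesses > t1 && s->remote_frequency > t2;
}

/* Block-level test against the third and fourth thresholds. */
static bool hot_block(const struct region_stats *b,
                      unsigned long t3, double t4)
{
    return b->remote_accesses > t3 && b->remote_frequency > t4;
}
```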
  • the third obtaining module includes:
  • the first monitoring unit is configured to divide the memory of the node into multiple memory segments, monitor each memory segment in real time through the interconnect chip corresponding to the node, and start timing; if it is detected that a remote node accesses a memory segment, the number of accesses to that memory segment by the remote node is increased;
  • the first calculating unit is configured to calculate the access frequency with which the memory segment is accessed by the remote node according to the number of accesses to the memory segment by the remote node and the timed duration.
  • the fourth obtaining module includes:
  • the second monitoring unit is configured to monitor each memory block in real time through the interconnect chip corresponding to the node and start timing; if it is detected that the remote node accesses a memory block, the number of accesses to that memory block by the remote node is increased;
  • a second calculating unit, configured to calculate the access frequency with which the memory block is accessed by the remote node according to the number of accesses to the memory block by the remote node and the timed duration.
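A minimal sketch of the per-region counting done through the interconnect chip's counter/timer pair, assuming the chip's registers can be mirrored into ordinary variables; the helpers below are illustrative and are not a real hardware API.

```c
struct hw_counter_pair {
    unsigned long remote_accesses;   /* mirrors the interconnect counter */
    double        elapsed_seconds;   /* mirrors the paired timer         */
};

/* In hardware, this would be the counter register incrementing on each
 * remote access detected by the interconnect chip. */
static void on_remote_access_detected(struct hw_counter_pair *c)
{
    c->remote_accesses++;
}

/* Frequency is the counter value divided by the timer value. */
static double remote_access_frequency(const struct hw_counter_pair *c)
{
    return (c->elapsed_seconds > 0.0)
         ? (double)c->remote_accesses / c->elapsed_seconds
         : 0.0;
}
```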
  • the device further includes:
  • a first determining module, configured to obtain the Present bit of the accessed page from the memory area and determine, according to the obtained Present bit, whether the accessed page is valid in the memory of the node;
  • the first setting module is configured to, if the page is valid, set the Present bit included in the page table entry of the accessed page in the page table corresponding to the process, and trigger the process that references the memory block to continue accessing the accessed page.
  • the first determining module includes:
  • a third calculating unit, configured to calculate, by using a preset calculation model, the memory address at which the Present bit of the accessed page is stored in the memory area, according to the process number of the process that references the memory block, the sequence number of the page table entry of the accessed page in the page table, and the start address of the memory area;
  • a first determining unit, configured to read the Present bit of the accessed page from the memory area according to the calculated memory address; if the read Present bit is set, it is determined that the accessed page is valid, and if the read Present bit is cleared, it is determined that the accessed page is invalid.
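A sketch of the validity check and resume path: the saved Present bit is located in the reserved memory area with the same linear formula, and only if it was originally set is the Present bit restored in the live page table entry so that the faulting process can retry the access. Storing one byte per saved bit, and bit 0 as the Present flag, are layout assumptions made for readability.

```c
#include <stdint.h>
#include <stdbool.h>

/* Reads the saved Present bit at start + pid * element + index. */
static bool saved_present_bit(const uint8_t *area_start,
                              uint64_t process_id, uint64_t element,
                              uint64_t entry_index)
{
    return area_start[process_id * element + entry_index] != 0;
}

/* Restores the Present bit in the live PTE only when the page was valid. */
static bool maybe_restore_and_retry(uint8_t *area_start, uint64_t *live_pte,
                                    uint64_t process_id, uint64_t element,
                                    uint64_t entry_index)
{
    if (!saved_present_bit(area_start, process_id, element, entry_index))
        return false;          /* page genuinely invalid: normal fault handling */
    *live_pte |= 0x1u;         /* set the (assumed) Present flag, then retry    */
    return true;
}
```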
  • the device further includes:
  • the first clearing module is configured to clear, after the process that references the memory block finishes accessing the accessed page, the Present bit included in the page table entry of the accessed page in the page table corresponding to the process that references the memory block.
  • in this embodiment of the present invention, the page table corresponding to the process that references the memory block is obtained, and the Present bit of each page table entry in the obtained page table is cleared; the process that references the memory block is monitored in real time and timing is started; if a page fault interrupt event generated when a process that references the memory block accesses a page in the memory block is detected, the number of accesses to the accessed page is increased; and the memory access model of the memory block is built according to the number of accesses to each page in the memory block and the timed duration. Therefore, when building the memory access model of the memory block, there is no need to record the memory access address of each access made to the memory block by the processes that reference it, which reduces memory consumption and the impact on system performance, thereby avoiding a system crash.
  • as shown in FIG. 7, an embodiment of the present invention provides an apparatus for constructing a memory access model, including:
  • the fifth obtaining module 701 is configured to obtain the page table corresponding to the target process and clear the Present bit included in each page table entry stored in the page table, where the page table is used to store the page table entries of the pages to be accessed by the target process;
  • a second monitoring module 702, configured to monitor the target process in real time and start timing;
  • a second adding module 703, configured to increase the number of accesses to the page to be accessed if a page fault interrupt event is generated when the target process accesses the page to be accessed; where the page fault interrupt event is generated when the target process determines that the Present bit included in the page table entry of the page to be accessed has been cleared, and the page table entry of the page to be accessed is obtained by the target process from its corresponding page table;
  • the sixth obtaining module 704 is configured to build the memory access model of the target process according to the number of accesses to each page accessed by the target process and the timed duration, where the memory access model includes at least the number of accesses and the access frequency of each page accessed by the target process.
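An illustrative data layout for the model produced by the sixth obtaining module: per-page counts and frequencies for the target process, plus the optional local-to-remote access ratio mentioned elsewhere in the description. Field names are assumptions, not definitions from the patent.

```c
#include <stddef.h>

struct page_access_stat {
    unsigned long page_index;     /* which page of the process was accessed */
    unsigned long accesses;       /* page-fault-driven access count         */
    double        frequency;      /* accesses per unit of timed duration    */
};

struct target_process_model {
    unsigned long            pid;
    struct page_access_stat *pages;               /* one entry per accessed page */
    size_t                   npages;
    double                   local_to_remote_ratio;  /* optional component       */
};
```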
  • the fifth obtaining module 701 includes:
  • a second obtaining unit, configured to obtain the page table corresponding to the target process;
  • a second storage unit, configured to allocate a memory area in the memory of the node and store, in the memory area, the Present bit included in each page table entry in the page table;
  • a second clearing unit, configured to clear the Present bit included in each page table entry in the page table.
  • the second storage unit includes:
  • a second calculating subunit, configured to allocate a memory area in the memory of the node, and calculate, by using a preset calculation model, the storage address in the memory area of the Present bit included in each page table entry in the page table, according to the process number of the target process, the sequence number of each page table entry in the page table, and the start address of the memory area;
  • a second storage subunit, configured to store, in the memory area, the Present bit included in each page table entry in the page table according to the storage address of the Present bit included in each page table entry in the page table.
  • the sixth obtaining module 704 is specifically configured to calculate the access frequency of each page accessed by the target process according to the number of accesses to each page accessed by the target process and the timed duration, to obtain the memory access model, which includes at least the number of accesses and the access frequency of each page accessed by the target process.
  • the device further includes:
  • a seventh obtaining module, configured to obtain the local-to-remote access ratio and the access frequency of the target process when a time slice ends; if the local-to-remote access ratio of the target process exceeds a fifth threshold and the access frequency of the target process exceeds a sixth threshold, the operation of obtaining the page table corresponding to the target process is performed.
  • the seventh obtaining module includes:
  • the third obtaining unit is configured to obtain the actual running time of the target process after it is scheduled by the processor of the node, and the number of times the target process accesses local memory and the number of times it accesses remote node memory within the actual running time, where remote node memory is the memory of nodes in the computer system other than this node;
  • the fourth obtaining unit is configured to obtain the local-to-remote access ratio and the access frequency of the target process according to the actual running time and the number of times the target process accesses local memory and accesses remote node memory within the actual running time.
  • the third obtaining unit includes:
  • a loading subunit, configured to load, when the target process is scheduled by the processor of the node, the number of local memory accesses and the number of remote node memory accesses included in the context information of the target process into the statistics counting register of the processor;
  • a monitoring subunit, configured to monitor the target process in real time through the processor; if it is detected that the target process accesses local memory, the number of times the target process accesses local memory is increased, and if it is detected that the target process accesses remote node memory, the number of times the target process accesses remote node memory is increased;
  • a first obtaining subunit, configured to obtain, when the time slice ends, the actual running time of the target process after it was scheduled;
  • a second obtaining subunit, configured to obtain the number of times the target process accesses local memory and remote node memory within the actual running time, according to the increased numbers of local memory accesses and remote node memory accesses and the numbers of local memory accesses and remote node memory accesses stored in the statistics counting register;
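A rough sketch of the context-switch bookkeeping handled by the loading and monitoring subunits: on scheduling, the saved local/remote counts are loaded into (here simulated) statistics counting registers, incremented while the process runs, and written back when it is descheduled. All structures stand in for processor performance-counter facilities and are not a real API.

```c
struct numa_counters {
    unsigned long local_accesses;    /* accesses to this node's memory  */
    unsigned long remote_accesses;   /* accesses to other nodes' memory */
};

struct process_context {
    struct numa_counters saved;      /* persisted in the process context */
};

static struct numa_counters stat_registers;   /* stands in for the counting registers */

static void on_schedule(const struct process_context *ctx)
{
    stat_registers = ctx->saved;     /* load counts into the registers */
}

static void on_local_access(void)  { stat_registers.local_accesses++;  }
static void on_remote_access(void) { stat_registers.remote_accesses++; }

static void on_deschedule(struct process_context *ctx)
{
    ctx->saved = stat_registers;     /* write the updated counts back */
}
```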
  • the fourth obtaining unit includes:
  • a third calculating subunit, configured to calculate the ratio of the number of times the target process accesses local memory to the number of times it accesses remote node memory within the actual running time, and use the ratio as the local-to-remote access ratio of the target process;
  • a fourth calculating subunit, configured to calculate the number of accesses of the target process according to the number of times the target process accesses local memory and the number of times it accesses remote node memory within the actual running time;
  • a fifth calculating subunit, configured to calculate the access frequency of the target process according to the number of accesses of the target process and the actual running time.
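A sketch of the ratio and frequency computation performed by these calculating subunits: the local-to-remote access ratio is the local count divided by the remote count, and the access frequency is the total count divided by the actual running time. The zero-denominator guards are an added safety assumption.

```c
struct access_profile {
    double local_to_remote_ratio;
    double access_frequency;         /* accesses per second */
};

static struct access_profile profile_process(unsigned long local_accesses,
                                             unsigned long remote_accesses,
                                             double actual_runtime_seconds)
{
    struct access_profile p = {0};
    if (remote_accesses > 0)
        p.local_to_remote_ratio =
            (double)local_accesses / (double)remote_accesses;
    if (actual_runtime_seconds > 0.0)
        p.access_frequency =
            (double)(local_accesses + remote_accesses) / actual_runtime_seconds;
    return p;
}
```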
  • the device further includes:
  • a second determining module, configured to obtain the Present bit of the page to be accessed from the memory area and determine, according to the Present bit of the page to be accessed, whether the page to be accessed is valid in the memory of the node;
  • the second setting module is configured to, if the page is valid, set the Present bit included in the page table entry of the page to be accessed in the page table corresponding to the target process, and trigger the target process to continue accessing the page to be accessed.
  • the second determining module includes:
  • a fourth calculating unit, configured to calculate, by using a preset calculation model, the memory address at which the Present bit of the page to be accessed is stored in the memory area, according to the process number of the target process, the sequence number of the page table entry of the page to be accessed in the page table, and the start address of the memory area;
  • a second determining unit, configured to read the Present bit of the page to be accessed from the memory area according to the calculated memory address; if the read Present bit is set, it is determined that the page to be accessed is valid, and if the read Present bit is cleared, it is determined that the page to be accessed is invalid.
  • the device further includes:
  • the second clearing module is configured to clear, after the target process finishes accessing the page to be accessed, the Present bit included in the page table entry of the page to be accessed in the page table corresponding to the target process.
  • in this embodiment of the present invention, the page table corresponding to the target process is obtained, and the Present bit of each page table entry in the page table is cleared; the target process is monitored in real time and timing is started; if a page fault interrupt event generated when the target process accesses the page to be accessed is detected, the number of accesses to the page to be accessed is increased; and the memory access model of the target process is built according to the number of accesses to each page accessed by the target process and the timed duration. In this way, when building the memory access model of the target process, there is no need to record the memory access address of each memory access made by the target process, which reduces memory consumption and the impact on system performance, thereby avoiding a system crash.
  • a person of ordinary skill in the art may understand that all or some of the steps of the foregoing embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Computer Hardware Design (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Debugging And Monitoring (AREA)

Abstract

Provided are a method and an apparatus for constructing a memory access model, relating to the field of computers. The method includes: obtaining a page table corresponding to a process that references a memory block, and clearing the Present bit included in each page table entry stored in the page table; monitoring, in real time, the process that references the memory block and starting timing; if a page fault interrupt event is generated when the process that references the memory block accesses a page in the memory block, increasing the number of accesses to the accessed page; and building a memory access model of the memory block according to the number of accesses to each page in the memory block and the timed duration, where the memory access model includes at least the number of accesses and the access frequency of each page in the memory block. The apparatus includes: a first obtaining module, a first monitoring module, a first adding module, and a second obtaining module. The present invention can reduce memory consumption and the impact on system performance, and avoid causing a system crash.

Description

一种构建内存访问模型的方法及装置 技术领域
本发明涉及计算机领域, 特别涉及一种构建内存访问模型的方法及装置。 背景技术
随着计算机技术的日新月异的发展, 处理器的速度和内存访问速度的差距越来越大, 内存系统成为性能的瓶颈, 因此针对内存系统亟需优化。 其中, 如果能够构建出应用程序 的内存访问模型, 就可以根据构建的内存访问模型提出合适的内存系统的优化方法, 从而 可以对内存系统进行优化。
目前, 现有技术存在一种构建内存访问模型的方法, 具体为: 在计算机系统上, 实时 监听应用程序, 如果监听出应用程序访问内存, 则获取应用程序每次访问内存时的内存访 问地址, 将获取的内存访问地址存储在内存中指定的区域内, 然后再根据指定的区域内存 储的内存访问地址构建应用程序的内存访问模型。 其中, 当构建出应用程序的内存访问模 型后就可以根据构建的内存访问模型对内存系统提出合适的优化方法。
其中, 现有技术在构建内存访问模型时, 需将应用程序每次访问内存时的内存访问地 址存储在内存中指定的区域内, 如此会消耗大量的内存, 将会产生内存资源紧张, 从而影 响系统性能, 甚至造成系统崩溃。 发明内容
为了减少内存消耗和系统性能的影响以及避免造成系统崩溃, 本发明实施例提供了一 种构建内存访问模型的方法及装置。 所述技术方案如下:
一种构建内存访问模型的方法, 所述方法包括:
获取引用内存块的进程对应的页表, 并将所述页表中存储的每个页表项所包括的
Present (当前) 位清零, 所述页表用于存储引用所述内存块的进程所要访问的页的页表项; 对引用所述内存块的进程进行实时监听并开始计时;
如果引用所述内存块的进程访问所述内存块中的页时产生缺页中断事件, 则增加所述 访问的页的访问次数; 其中, 所述缺页中断事件为引用所述内存块的进程判断出所述访问 的页的页表项包括的 Present位被清零时产生的, 所述访问的页的页表项为引用所述内存块 的进程从其对应的页表中获取得到的;
根据所述内存块中的每个页的访问次数和计时的时间构建所述内存块的内存访问模 型, 所述内存访问模型至少包括所述内存块中的每个页的访问次数和访问频率。。
一种构建内存访问模型的方法, 所述方法包括:
获取目标进程对应的页表,并将所述页表中存储的每个页表项所包括的 Present位清零, 所述页表用于存储所述目标进程所要访问的页的页表项;
对所述目标进程进行实时监听并开始计时;
如果所述目标进程访问待访问的页时产生缺页中断事件, 则增加所述待访问的页的访 问次数; 其中, 所述缺页中断事件为所述目标进程判断所述待访问的页的页表项包括的 Present位被清零时产生的, 所述待访问的页的页表项为所述目标进程从其对应的页表中获 取得到的;
根据所述目标进程所访问的每个页的访问次数和计时的时间构建所述目标进程的内存 访问模型, 所述内存访问模型至少包括所述目标进程所访问的每个页的访问次数和访问频 - 一种构建内存访问模型的装置, 所述装置包括:
第一获取模块, 用于获取引用内存块的进程对应的页表, 并将所述页表中存储的每个 页表项所包括的 Present位清零, 所述页表用于存储引用所述内存块的进程所要访问的页的 页表项;
第一监听模块, 用于对引用所述内存块的进程进行实时监听并开始计时;
第一增加模块, 用于如果引用所述内存块的进程访问所述内存块中的页时产生缺页中 断事件, 则增加所述访问的页的访问次数; 其中, 所述缺页中断事件为引用所述内存块的 进程判断出所述访问的页的页表项包括的 Present位被清零时产生的, 所述访问的页的页表 项为引用所述内存块的进程从其对应的页表中获取得到的;
第二获取模块, 用于根据所述内存块中的每个页的访问次数和计时的时间构建所述内 存块的内存访问模型, 所述内存访问模型至少包括所述内存块中的每个页的访问次数和访 问频率。
一种构建内存访问模型的装置, 所述装置包括:
第五获取模块, 用于获取目标进程对应的页表, 并将所述页表中存储的每个页表项所 包括的 Present位清零, 所述页表用于存储所述目标进程所要访问的页的页表项;
第二监听模块, 用于对所述目标进程进行实时监听并开始计时; 第二增加模块, 用于所述目标进程访问待访问的页时产生缺页中断事件, 则增加所述 待访问的页的访问次数; 其中, 所述缺页中断事件为所述目标进程判断所述待访问的页的 页表项包括的 Present位被清零时产生的, 所述待访问的页的页表项为所述目标进程从其对 应的页表中获取得到的;
第六获取模块, 用于根据所述目标进程所访问的每个页的访问次数和计时的时间构建 所述目标进程的内存访问模型, 所述内存访问模型至少包括所述目标进程所访问的每个页 的访问次数和访问频率。
在本发明中, 获取引用内存块的进程对应的页表并将获取的页表中的每个页表项包括 的 Present位清零, 对引用该内存块的进程进行实时监听并开始计时, 如果引用该内存地块 的进程访问页时产生缺页中断事件, 则增加访问的页的访问次数, 根据该内存块中的每个 页的访问次数和计时的时间构建该内存块的内存访问模型; 或者, 获取目标进程对应的页 表, 将获取的页表中的每个页表项包括的 Present位清零, 对该目标进程进行实时监听并开 始计时, 如果目标进程访问内存的页时产生缺页中断事件, 则增加访问的页的访问次数, 根据目标进程所访问的页的访问次数和计时的时间构建目标进程的内存访问模型, 如此在 构建该内存块的内存访问模型时不需要记录引用该内存块的每个进程访问该内存块的内存 访问地址; 或者在构建目标进程的内存访问模型时不需要记录目标进程访问内存的内存访 问地址, 减少内存消耗和对系统性能的影响, 从而避免造成系统崩溃。 附图说明
图 1是本发明实施例 1提供的 种构建内存访问模型的方法流程图;
图 2是本发明实施例 2提供的 种构建内存访问模型的方法流程图;
图 3是本发明实施例 3提供的 种构建内存访问模型的方法流程图;
图 4是本发明实施例 4提供的 种构建内存访问模型的方法流程图;
图 5是本发明实施例 5提供的 种构建内存访问模型的方法流程图;
图 6是本发明实施例 6提供的 种构建内存访问模型的装置示意图;
图 7是本发明实施例 7提供的 种构建内存访问模型的装置示意图。 具体实施方式
为使本发明的目的、 技术方案和优点更加清楚, 下面将结合附图对本发明实施方式作 进一步地详细描述。 实施例 1
如图 1所示, 本发明实施例提供了一种构建内存访问模型的方法, 包括:
步骤 101 : 获取引用内存块的进程对应的页表, 并将获取的页表中存储的每个页表项所 包括的 Present位清零;
其中, 页表用于存储引用该内存块的进程所要访问的页的页表项。
步骤 102: 对引用该内存块的进程进行实时监听并开始计时;
步骤 103: 如果引用该内存块的进程访问该内存块中的页时产生缺页中断事件, 则增加 访问的页的访问次数;
其中, 该缺页中断事件为引用该内存块的进程判断出访问的页的页表项包括的 Present 位被清零时产生的, 访问的页的页表项为引用该内存块的进程从其对应的页表中获取得到 的。
步骤 104:根据该内存块中的每个页的访问次数和计时的时间构建该内存块的内存访问 模型, 内存访问模型至少包括该内存块中的每个页的访问次数和访问频率。
在本发明实施例中, 获取引用内存块的进程对应的页表并将获取的页表中的每个页表 项包括的 Present位清零, 对引用该内存块的进程进行实时监听并开始计时, 如果引用该内 存块的进程访问页时产生缺页中断事件, 则增加访问的页的访问次数, 根据该内存块中的 每个页的访问次数和计时的时间构建该内存块的内存访问模型, 如此在构建该内存块的内 存访问模型时不需要记录引用该内存块的每个进程访问内存的内存访问地址, 减少内存消 耗和对系统性能的影响, 从而避免造成系统崩溃。 实施例 2
如图 2所示, 本发明实施例提供了一种构建内存访问模型的方法, 包括:
步骤 201 : 对节点的内存进行划分, 将节点的内存划分成多个内存段;
其中, 可以通过预设的划分策略将节点的内存划分成多个内存段, 预设的划分策略可 以包括: 将节点的内存划分成等大小的多个内存段。
其中,计算机系统中包括多个节点,且每个节点至少包括 CPU (Central Processing Unit, 中央处理器) 和内存, 对于任一个节点, 该节点内的 CPU等设备访问该节点内的内存称为 访问本地内存, 在计算机系统中, 该节点以外的其他节点访问该节点内的内存称为远端节 点访问该内存。 步骤 202: 对于任一个内存段, 通过该节点对应的互联芯片实时监听该内存段并开始计 时, 如果监听出存在远端节点 (除该节点以外的其他节点) 访问该内存段, 则增加该内存 段被远端节点访问的次数;
其中, 每个节点都有对应的互联芯片, 对于任一个节点, 如果存在远端节点需要访问 该节点的内存, 则远端节点需要通过该节点的互联芯片访问该节点的内存; 因此, 在本实 施例中, 可以通过该节点对应的互联芯片监听是否存在远端节点访问该节点中的内存段。
其中, 互联芯片中包括多个计时器和多个计数器, 在本实施例中, 可以在互联芯片包 括的多个计时器和多个计数器中为该内存段选择对应的一个计时器和计数器, 该计时器用 于计时, 该计数器用于统计该内存段被远端节点访问的次数, 即如果互联芯片监听出存在 远端节点访问该内存段, 则通过增加该计数器的值来实现增加该内存段被远端节点访问的 次数。 另外, 该计数器的初值可以为 0 等, 增加该内存段被远端节点访问的次数可以为 1 等, 在本实施例中, 对二者的具体取值不做限定。
步骤 203 : 根据该内存段被远端节点访问的次数和计时的时间, 计算出该内存段被远端 节点访问的访问频率;
其中, 可以周期性地计算出该内存段被远端节点访问的访问频率。 具体地, 从该内存 段对应的计时器中读取计时的时间, 从该内存段对应的计数器中读取该内存段被远端节点 访问的次数, 将该内存段被远端节点访问的次数与计时的时间做比值运算, 得到该内存段 被远端节点访问的访问频率。
其中, 在本实施例中, 可以从该节点对应的互联芯片中为该节点包括的每个内存段选 择对应的一个计时器和计数器, 如此可以通过互联芯片实时监听该节点包括的每个内存段, 从而得到每个内存段被远端节点访问的次数, 然后再周期性地计算出每个内存段被远端节 点访问的访问频率。
步骤 204:判断该内存段被远端节点访问的次数是否超过第一阈值以及该内存段被远端 节点访问的访问频率是否超过第二阈值, 如果该内存段被远端节点访问的次数超过第一阈 值且该内存段被远端节点访问的访问频率超过第二阈值, 则执行步骤 205 ;
其中, 如果该内存段被远端节点访问的次数没有超过第一阈值或该内存段被远端节点 访问的访问频率没有超过第二阈值, 则继续实时监听是否有远端节点访问该内存段。
步骤 205 : 对该内存段进行划分, 将该内存段划分成多个内存块;
其中, 可以通过预设的划分策略对该内存段进行划分, 将该内存段划分成多个内存块。 步骤 206: 对于任一个内存块, 通过互联芯片实时监听该内存块并开始计时, 如果监听 出存在远端节点访问该内存块, 则增加该内存块被远端节点访问的次数;
其中, 在本实施例中, 可以在互联芯片剩余的计数器和计时器中为每个内存块选择对 应的计数器和计时器, 每个计数器用于统计其自身对应的内存块被远端节点访问的次数, 即对于任一个内存块, 如果监听出存在远端节点访问该内存块, 则通过增加该内存块对应 的计数器的值来实现增加该内存块被远端节点访问的次数。
步骤 207: 根据该内存块被远端节点访问的次数和计时的时间, 计算出该内存块被远端 节点访问的访问频率;
其中, 可以周期性地计算出该内存块被远端节点访问的访问频率。 具体地, 将该内存 块被远端节点访问的次数与计时的时间做比值运算, 得到该内存块被远端节点访问的访问 频率。
其中, 在本实施例中, 可以按上述 206和 207的步骤计算出每个内存块的被远端节点 访问的访问频率。
步骤 208:判断该内存块被远端节点访问的次数是否超过第三阈值以及该内存块被远端 节点访问的访问频率是否超过第四阈值, 如果该内存段被远端节点访问的次数超过第三阈 值且该内存段被远端节点访问的访问频率超过第四阈值, 则执行步骤 209;
其中, 如果该内存块被远端节点访问的访问次数没有超过第三阈值或该内存块被远端 节点访问的访问频率没有超过第四阈值, 则继续实时监听是否有远端节点访问该内存块。
其中, 第三阈值可以等于、 大于或小于第一阈值, 第四阈值可以等于、 大于或小于第二 阈值。
其中,在本实施例中,如果该内存块被远端节点访问的次数超过第三阈值且该内存块被 远端节点访问的访问频率超过第四阈值, 则可以为该内存块建立对应的一个统计线程, 并 通过该统计线程来执行如下的步骤。
步骤 209: 通过反向映射的方法, 获取到引用该内存块的所有进程, 并获取每个进程对 应的页表, 每个进程对应的页表用于存储每个进程所要访问的页的页表项, 页的页表项至 少包括页的页表项信息和页的 Present位;
其中, 内存的最小单位为页, 每个内存块包括一个或多个页, 访问内存的每个进程对 应一个页表, 页表中存储进程所要访问的页的页表项; 其中, 页表中的每个页表项包括一 个 Present位和页表项信息, 如果某个页有效, 则该页的页表项中包括的 Present位被置位, 且进程可以访问该页, 如果某个页无效, 则该页的页表项中包括的 Present位被清零, 且进 程无法直接访问该页。 步骤 210: 在节点的内存中申请一块内存区域, 将获取的页表中的每个页表项包括的 Present位存储在该内存区域中;
具体地, 在节点的内存中划分一块内存区域, 对于获取的任一个页表, 获取该页表对 应的进程的进程号, 根据该页表中的每个页表项在页表中的序号、 获取的进程号和该内存 区域的起始地址通过预设的计算模型计算出在该内存区域中存储每个页表项中包括的 Present位的内存地址,根据存储每个页表项中包括的 Present位的内存地址将该页表中的每 个页表项包括的 Present位存储到该内存区域中。 其中, 对于获取的其他每个页表, 按上述 相同的方法将其他每个页表中的每个页表项包括的 Present位存储在该内存区域中。
其中, 预设的计算模型可以如公式(1 )所示, Memoryaddress为内存地址, Startaddress 为内存区域的起始地址, ProcessID为进程号, element为系数以及 number为页表项在页表 中的序号。
Memoryaddress = Startaddress + ProcessID × element + number (1)。
步骤 211: 在获取的页表中将每个页表项包括的 Present位清零;
其中, 当进程需要访问内存的某个页时, 该进程先从其自身对应的页表中获取所要访 问页对应的页表项, 对获取的页表项包括的 Present位进行判断, 如果获取的页表项包括的 Present位被置位, 则该进程根据获取的页表项包括所需要访问页的页表项信息获取所需要 访问页的起始地址, 根据获取的起始地址从内存中寻找出所需要访问的页, 然后在寻找的 页中进行数据的读写操作以实现访问所需要访问的页; 如果获取的页表项包括的 Present位 被清零, 则该进程会产生缺页中断事件。
其中, 对于任一个进程, 无法监测到该进程访问内存中的页的过程, 但可以检测出该 进程产生的缺页中断事件; 如果该进程从其自身对应的页表中获取所需要访问页的页表项 中包括的 Present位被清零, 则该进程会产生缺页中断事件, 可以检测到该进程产生的缺页 中断事件, 并据此确定出该进程访问内存中的页。 因此, 在本实施例中, 在引用该内存块 的每个进程对应的页表中, 将每个页的页表项包括的 Present位全部清零, 如此当引用该内 存块的进程访问该内存块中的页时就会产生缺页中断事件, 并检测到该进程产生的缺页中 断事件, 然后据此确定出该进程访问该内存块中的页。
步骤 212: 实时监听引用该内存块的进程并开始计时, 如果存在引用该内存块的进程访 问该内存块中的页时产生缺页中断事件, 增加该进程所访问的页的次数;
具体地, 实时监听引用该内存块的进程并开始计时, 如果监听出存在引用该内存块的 进程访问该内存块中的页时产生的缺页中断事件, 其中, 缺页中断事件是该进程判断出访 问的页的页表项包括的 Present位被清零时产生的, 访问的页的页表项为该进程从其自身对 应的页表中获取得到的, 获取访问的页的页表项包括的页表项信息, 根据该页表项信息获 取访问的页的起始地址, 并根据获取的起始地址确定出访问的页, 并增加访问的页的访问 次数。
其中, 在本实施例中, 可以为该内存块设置对应的计时器以及为该内存块包括的每个 页设置对应的一个计数器, 每个页对应的计数器用于统计其自身对应的页的访问次数, 即 通过增加某个页对应的计数器的值来实现增加该页的访问次数; 由于该内存块中包括的页 的数目可能较多, 因此该内存块中包括的每个页对应的计数器都是采用软件的形式来实现 统计每个页的访问次数。
其中, 在本实施例中, 由于对引用该内存块中的所有进程进行监控, 从而可以统计出 该内存块中的每个页的访问次数。
进一步地, 为了保证该进程能够正常访问所访问的页, 在本实施例中, 当增加该进程 所访问的页的次数之后, 还可以执行如下 (1 ) - (3 ) 的步骤, 分别为:
( 1 ): 从该内存区域中获取该进程所访问的页的 Present位,根据获取的 Present位判断 该进程所访问的页在节点的内存中是否有效, 如果有效, 则执行 (2);
具体地, 根据该进程的进程号、 该进程所访问的页的页表项在页表中的序号和该内存 区域的起始地址并通过预设的计算模型计算出内存地址, 根据计算的内存地址从该内存区 域中对应的空间中读取该进程所访问的页的 Present位,对获取的 Present位进行判断,如果 获取的 Present位被置位,则判断出该进程所要访问的页有效,如果获取的 Present位被清零, 则判断出该进程所访问的页无效。
其中, 如果判断出该进程所访问的页无效, 则需要触发缺页异常处理程序, 并由触发 的缺页异常处理程序进行异常处理。
(2): 在该进程对应的页表中将该进程所访问的页的页表项包括的 Present位置位, 并 触发该进程重新访问所访问的页;
其中, 该进程被触发后, 从自身对应的页表中获取其自身所访问的页的页表项, 并对 获取的页表项中包括的 Present位进行判断,且判断出获取的页表项包括的 Present位被置位, 然后再根据获取的页表项中包括的页表项信息获取所要访问页的起始地址, 根据获取的起 始地址从该内存块中寻找出对应的页, 并在寻找的页中进行数据的读写操作, 如此该进程 实现访问所访问的页。
( 3 ): 当该进程访问完所访问的页, 在该进程对应的页表中将该进程所访问的页的页 表项中包括的 Present位清零。
步骤 213 : 根据该内存块中的每个页的访问次数和计时的时间, 构建该内存块的内存访 问模型, 内存访问模型至少包括该内存块中的每个页的访问频率和访问次数。
其中, 可以周期性地构建内存访问模型。 具体地, 分别将该内存块中的每个页的访问 次数与计时的时间做比值运算, 得到该内存块包括的每个页的访问频率, 如此得到该内存 块的内存访问模型至少包括该内存块中的每个页的访问频率和访问次数。
在本发明实施例中, 获取到远端节点访问的次数超过第三阈值以及远端节点访问的访 问频率超过第四阈值的内存块, 获取引用该内存块的进程对应的页表并将获取的页表中的 每个页表项的 Present位清零, 对引用该内存块的进程进行实时监听并开始计时, 如果监听 出存在引用该内存块的进程访问页时产生的缺页中断事件, 则增加访问的页的访问次数, 根据该内存块中的每个页的访问次数和计时的时间构建内存访问模型, 如此在构建内存访 问模型时不需要记录引用该内存块的每个进程访问该内存块的内存访问地址, 减少内存消 耗和对系统性能的影响, 从而避免造成系统崩溃。 实施例 3
如图 3所示, 本发明实施例提供了一种构建内存访问模型的方法, 包括:
步骤 301 : 通过预设的划分策略对节点的内存进行划分, 将节点的内存划分成多个内存 块;
步骤 302: 针对任一个内存块,通过反向映射的方法, 获取到引用该内存块的所有进程, 并获取每个进程对应的页表, 每个进程对应的页表用于存储每个进程所要访问的页的页表 项, 页的页表项至少包括页的页表项信息和页的 Present位;
其中, 内存的最小单位为页, 每个内存块包括一个或多个页, 访问内存的每个进程对 应一个页表, 页表中存储每个进程所要访问的页的页表项; 其中, 页表中的每个页表项包 括一个 Present位和页表项信息,如果某个应页有效,则该页的页表项中包括的 Present位被 置位, 且进程可以访问该页, 如果某个页无效, 则该页的页表项中包括的 Present位被清零, 且进程无法直接访问该页。
步骤 303 : 在节点的内存中申请一块内存区域, 将获取的页表中的每个页表项包括的 Present位存储在该内存区域中;
具体地, 在节点的内存中划分一块内存区域, 对于获取的任一个页表, 获取该页表对 应进程的进程号, 根据该页表中的每个页表项在页表中的序号、 获取的进程号和该内存区 域的起始地址通过预设的计算模型计算出在该内存区域中存储该页表中的每个页表项包括 的 Present位的内存地址,根据存储每个页表项包括的 Present位的内存地址将该页表中的每 个页表项包括的 Present位存储在对应的内存空间中。 其中, 对于获取的其他每个页表, 按 上述相同的方法将其他每个页表中的每个页表项包括的 Present位存储在该内存区域中。
步骤 304: 在获取的页表中将每个页的页表项包括的 Present位清零;
其中, 当进程需要访问内存的某个页时, 该进程先从其自身对应的页表中获取所要访 问页对应的页表项, 对获取的页表项包括的 Present位进行判断, 如果获取的页表项包括的 Present位被置位, 则该进程根据获取的页表项包括所需要访问页的页表项信息获取所需要 访问页的起始地址, 根据获取的页的起始地址从内存中寻找出所需要访问的页, 然后在寻 找的页中进行数据的读写操作以实现访问所需要访问的页; 如果获取的页表项包括的 Present位被清零, 则该进程会产生缺页中断事件。
其中, 在本实施例中, 在引用该内存块的每个进程对应的页表中, 将每个页的页表项 包括的 Present位全部清零, 如此当引用该内存块的进程访问该内存块中的页时就会产生缺 页中断事件, 并检测到该进程产生的缺页中断事件, 然后据此确定出该进程访问该内存块 中的页。
步骤 305: 实时监听引用该内存块的进程并开始计时, 如果存在引用该内存块的访问该 内存块中的页时产生的缺页中断事件, 增加该进程所访问的页的次数;
具体地, 实时监听引用该内存块的进程并开始计时, 如果监听出存在引用该内存块的 进程访问该内存块中的页时产生的缺页中断事件, 其中, 该进程判断出访问的页的页表项 包括的 Present位被清零时产生的, 访问的页的页表项为该进程从其自身对应的页表中获取 得到的, 获取访问的页的页表项包括的页表项信息, 根据该页表项信息获取该进程所访问 页的起始地址, 并根据获取的起始地址确定出对应的页, 并增加确定的页的访问次数。
其中, 在本实施例中, 可以为该内存块设置对应的计时器以及为该内存块包括的每个 页设置对应的一个计数器, 每个页对应的计数器用于统计其自身对应的页的访问次数, 即 通过增加某个页对应的计数器的值来实现增加该页的访问次数; 由于该内存块中包括的页 的数目可能较多, 因此该内存块中包括的每个页对应的计数器都是采用软件的形式来实现 统计每个页的访问次数。
其中, 在本实施例中, 由于对引用该内存块中的所有进程进行监控, 从而可以统计出 该内存块中的每个页的访问次数。
进一步地, 为了保证该进程能够正常访问所访问的页, 本实施例中, 当增加该进程所 访问的页的次数之后, 还可以执行如下 (a) - (c) 的步骤, 分别为:
(a) : 从该内存区域中获取该进程所访问的页的 Present位, 根据获取的 Present位判断 该进程所访问的页在内存中是否有效, 如果有效, 则执行步骤 (b);
具体地, 根据该进程的进程号、 该进程所访问的页的页表项在页表中的序号和该内存 区域的起始地址并通过预设的计算模型计算出内存地址, 根据计算的内存地址从该内存区 域中对应的空间中读取该进程所访问的页的 Present位,对获取的 Present位进行判断,如果 获取的 Present位被置位,则判断出该进程所要访问的页有效,如果获取的 Present位被清零, 则判断出该进程所要访问的页无效。
其中, 如果判断出该进程所要访问的页无效, 则需要触发缺页异常处理程序, 并由触 发的缺页异常处理程序进行异常处理。
(b) : 在该进程对应的页表中将该进程所要访问的页的页表项包括的 Present位置位, 并触发该进程重新访问所要访问的页;
其中, 该进程被触发后, 从自身对应的页表中获取其自身所需要访问的页的页表项, 并对获取的页表项中包括的 Present位进行判断,且判断出获取的页表项包括的 Present位被 置位, 然后再根据获取的页表项中包括的页表项信息获取所要访问页的起始地址, 根据获 取的起始地址从该内存块中寻找出对应的页, 并在寻找的页中进行数据的读写操作, 如此 该进程实现访问所要访问的页。
(c) : 当该进程访问完所要访问的页,在该进程对应的页表中将该进程所要访问的页的 页表项中包括的 Present位清零。
步骤 306: 根据该内存块中的每个页的访问次数和计时的时间, 构建该内存块的内存访 问模型, 内存访问模型至少包括该内存块中的每个页的访问频率和访问次数。
具体地, 分别将该内存块中的每个页的访问次数与计时的时间做比值运算, 得到该内 存块包括的每个页的访问频率, 如此得到该内存块的内存访问模型至少包括该内存块中的 每个页的访问频率和访问次数。
在本发明实施例中, 获取引用内存块的进程对应的页表并将获取的页表中的每个页表 项的 Present位清零, 对引用该内存块的进程进行实时监听并开始计时, 如果监听出存在引 用该内存块的进程访问页时产生的缺页中断事件, 则增加访问的页的访问次数, 根据该内 存块中的每个页的访问次数和计时的时间构建该内存块的内存访问模型, 如此在构建内存 访问模型时不需要记录引用该内存块的每个进程访问该内存块的内存访问地址, 减少内存 消耗和对系统性能的影响, 从而避免造成系统崩溃。 实施例 4
如图 4所示, 本发明实施例提供了一种构建内存访问模型的方法, 包括:
步骤 401 :获取目标进程对应的页表,并将该页表中存储的每个页表项所包括的 Present 位清零, 页表用于存储目标进程所要访问的页的页表项;
步骤 402: 对目标进程进行实时监听并开始计时;
步骤 403 : 如果述目标进程访问待访问的页时产生缺页中断事件, 则增加待访问的页的 访问次数;
其中, 该缺页中断事件为目标进程判断待访问的页的页表项包括的 Present位被清零时 产生的, 待访问的页的页表项为目标进程从其对应的页表中获取得到的。
步骤 404:根据目标进程所访问的每个页的访问次数和计时的时间构建目标进程的内存 访问模型, 内存访问模型至少包括目标进程所访问的每个页的访问次数和访问频率。
在本发明实施例中, 获取目标进程对应的页表并将该页表中的每个页表项包括的 Present位清零, 对目标进程进行实时监听并开始计时, 如果目标进程访问待访问的页时产 生缺页中断事件, 则增加待访问的页的访问次数, 根据目标进程所访问的每个页的访问次 数和计时的时间构建目标进程的内存访问模型, 如此在构建目标进程的内存访问模型时不 需要记录目标进程访问内存的内存访问地址, 减少内存消耗和对系统性能的影响, 从而避 免造成系统崩溃。 实施例 5
如图 5所示, 本发明实施例提供了一种构建内存访问模型的方法, 包括:
步骤 501 : 当节点中的目标进程被调度到该节点中的处理器时, 将目标进程访问本地内 存的次数以及访问远端节点内存的次数加载到该处理器的统计计数寄存器中;
其中, 目标进程的上下文信息中包括目标进程访问本地内存的次数和目标进程访问远 端节点内存的次数。 具体地, 当节点中的进程被调度到该节点的处理器中时, 从目标进程 的上下文信息中提取目标进程访问本地内存的次数和访问远端节点内存的次数, 将提取目 标进程访问本地内存的次数和访问远端节点内存的次数加载到该处理器的统计计数寄存器 中。
其中, 该处理器中包括多个计数器, 进一步地, 从该处理器包括的多个计数器中选择 两个计数器, 分别为第一计数器和第二计数器, 将第一计数器的初值设置为目标进程访问 本地内存的次数, 将第二计数器的初值设置为目标进程访问远端节点内存的次数。
其中, 计算机系统中包括多个节点, 节点至少包括处理器和内存, 该节点中的处理器 在运行目标进程时, 目标进程需要访问该节点的内存即为目标进程访问本地内存, 目标进 程需要访问计算机系统中的其他节点的内存即为目标进程访问远端节点内存。
步骤 502:通过该处理器对目标进程进行实时监听,如果监听出目标进程访问本地内存, 则增加目标进程访问本地内存的次数, 如果监听出目标进程访问远端节点内存, 则增加目 标进程访问远端节点内存的次数;
其中, 可以通过增加第一计数器的值来实现增加目标进程访问本地内存的次数, 以及 通过增加第二计数器的值来实现增加目标进程访问远端节点内存的次数。
其中, 当该处理器在运行目标进程时, 如果目标进程需要访问本地内存, 则目标进程 会调用访问本地内存事件, 如果目标进程需要访问远端节点内存, 则目标进程会调用访问 远端节点内存事件; 因此可以通过该处理器实时监听到目标进程是否访问本地内存和远端 节点内存。
其中, 当目标进程被处理器调度后, 处理器在每个时间片中为目标进程分配运行时间, 然后在每个时间片中处理器在为目标进程分配的运行时间内运行目标进程。
其中, 当目标进程被调离处理器时, 可以将目标进程的上下文信息中包括目标进程访 问本地内存的次数和目标进程访问远端节点内存的次数分别更新为增加后的目标进程访问 本地内存的次数和目标进程访问远端节点内存的次数。
步骤 503 : 当一个时间片结束时, 获取目标进程被调度后运行的实际运行时间; 具体地, 获取目标进程被调度到处理器后经过的时间片, 将目标进程在获取的每个时 间片内的运行时间进行累加得到目标进程的实际运行时间。
步骤 504:根据增加后的目标进程访问本地内存的次数和访问远端节点内存的次数以及 统计计数寄存器中存储的目标进程访问本地内存的次数和访问远端节点的内存的次数获取 目标进程在实际运行时间内访问本地内存的次数和远端节点内存的次数;
其中, 可以从第一计数器中读取目标进程访问本地内存的次数, 从第二计数器中读取 目标进程访问远端节点内存的次数。 具体地, 根据目标进程访问本地内存的次数和统计计 数寄存器中存储目标进程访问本地内存的次数计算出目标进程在实际运行时间内访问本地 内存的次数, 根据目标进程访问远端节点内存的次数和统计计数寄存器存储目标进程访问 远端节点内存的次数计数出目标进程在实际运行时间内访问远端节点内存的次数。
步骤 505 :根据目标进程在实际运行时间内访问本地内存的次数和访问远端节点内存的 次数以及实际运行时间计算出目标进程的远近端访问比率和访问频率;
具体地, 计算出目标进程在实际运行时间内访问本地内存的次数与访问远端节点内存 的次数的比值, 将计算出的比值作为目标进程的远近端访问比率, 根据目标进程在实际运 行时间内访问本地内存的次数和访问远端节点内存的次数计数出目标进程的访问次数, 根 据目标进程的访问次数和实际运行时间计算出目标进程的访问频率。
步骤 506: 对目标进程的远近端访问比率和访问频率进行判断, 如果目标进程的远近端 访问比率超过第一阈值且目标进程的访问频率超过预设的第六阈值, 则执行步骤 507; 其中, 如果目标进程的远近端访问比率没有超过第五阈值或目标进程的访问频率没有 超过第六阈值, 则继续执行 503。
步骤 507: 获取目标进程对应的页表, 目标进程对应的页表用于存储目标进程所要访问 的页的页表项, 页的页表项至少包括页的页表项信息和 Present位;
其中, 内存的最小单位为页, 每个进程对应一个页表, 页表中存储进程所要访问页的 页表项;其中, 内存中的每个页对应一个 Present位,如果某个页有效,则该页对应的 Present 位被置位, 且进程可以访问该页, 如果某个页无效, 则该页对应的 Present位被清零, 且进 程无法直接访问该页。
步骤 508: 在节点的内存中申请一块内存区域, 将获取的页表存储的每个页的页表项中 包括的 Present位存储在该内存区域中;
具体地, 在节点的内存中划分一块内存区域, 获取目标进程的进程号, 根据该页表中 的每个页表项在页表中的序号、 获取的进程号和该内存区域的起始地址通过预设的计算模 型计算出在该内存区域中存储该页表中的每个页表项包括的 Present位的内存地址, 根据存 储该页表中的每个页表项包括的 Present 位的内存地址将该页表中的每个页表项包括的 Present位存储在该内存区域中。
步骤 509: 将获取的页表中的每个页表项中包括的 Present位清零;
其中, 当目标进程访问内存的某个页时, 目标进程首先在其对应的页表中获取其自身 所要访问页的页表项, 并对获取的页表项中包括的 Present位进行判断, 由于目标进程对应 的页表中存储的每个页的 Present位被清零, 所以目标进程判断出获取的页表项中包括的 Present位被清零, 然后目标进程产生缺页中断事件。
步骤 510: 实时监听目标进程并开始计时, 如果目标进程访问待访问的页时产生缺页中 断事件, 则增加目标进程访问待访问的页的访问次数;
具体地, 实时监听目标进程并开始计时, 如果监听出目标进程访问内存中待访问的页 时产生的缺页中断事件, 其中, 所述缺页中断事件为目标进程判断待访问的页的页表项包 括的 Present位被清零时产生的, 待访问的页的页表项为目标进程从其对应的页表中获取得 到的, 获取待访问的页的页表项, 根据该页表项包括待访问的页的页表项信息获取待访问 的页的起始地址, 根据获取的起始地址确定出待访问的页, 并增加待访问的页的访问次数。
进一步地, 为了保证目标进程能够正常访问待访问的页, 本实施例中, 当增加目标进 程访问待访问的页的次数之后, 还可以执行如下 (A) - (C) 的步骤, 分别为:
(A): 从该内存区域中获取待访问的页的 Present位, 根据待访问的页的 Present位判 断待访问的页在内存中是否有效, 如果有效, 则执行步骤 513;
具体地, 根据目标进程的进程号、 待访问的页的页表项在页表中的序号和该内存区域 的起始地址并通过预设的计算模型计算出内存地址, 根据计算的内存地址从该内存区域中 读取待访问的页的 Present位,对获取的 Present位进行判断, 如果获取的 Present位被置位, 则判断出待访问的页有效, 如果获取的 Present位被清零, 则判断出待访问的页无效。
其中, 如果判断出待访问的页无效, 则需要触发缺页异常处理程序, 并由触发的缺页 异常处理程序进行异常处理。
(B): 在目标进程对应的页表中将待访问的页的页表项包括的 Present位置位, 并触发 目标进程重新访问待访问的页;
其中, 目标进程被触发后, 从自身对应的页表中获取待访问的页的页表项, 并对获取 的页表项中包括的 Present位进行判断,且判断出获取的页表项包括的 Present位被置位,然 后再根据获取的页表项中包括待访问的页的页表项信息获取待访问的页的起始地址, 根据 获取的起始地址从节点的内存中寻找出对应的页, 并在寻找的页中进行数据的读写操作, 如此目标进程实现访问待访问的页。
( C): 当目标进程访问完待访问的页, 在目标进程对应的页表中将待访问的页的页表 项中包括的 Present位清零。
步骤 511 : 根据目标进程访问的每个页的访问次数和计时的时间, 构建目标进程的内存 访问模型, 至少包括目标进程访问的每个页的访问次数和访问频率。
其中, 可以周期性地构建内存访问模型, 具体地, 分别将目标进程访问的每个页的访 问次数与计时的时间做比值运算, 得到目标进程访问的每个的访问频率, 如此得到目标进 程的内存访问模型, 至少包括目标进程访问的每个页的访问次数和访问频率。
进一步地, 内存访问模型还可以包括目标进程的远近端访问比率。
在本发明实施例中, 获取目标进程对应的页表并将该页表中的每个页表项的 Present位 清零, 对目标进程进行实时监听并开始计时, 如果监听出目标进程访问待访问的页时产生 的缺页中断事件, 则增加待访问的页的访问次数, 根据目标进程所访问的每个页的访问次 数和计时的时间构建目标进程的内存访问模型, 如此在构建目标进程的内存访问模型时不 需要记录目标进程访问内存的内存访问地址, 减少内存消耗和对系统性能的影响, 从而避 免造成系统崩溃。 实施例 6
如图 6所示, 本发明实施例提供了一种构建内存访问模型的装置, 包括:
第一获取模块 601, 用于获取引用内存块的进程对应的页表, 并将获取的页表中存储的 每个页表项所包括的 Present位清零, 页表用于存储引用该内存块的进程所要访问的页的页 表项;
第一监听模块 602, 用于对引用该内存块的进程进行实时监听并开始计时; 第一增加模块 603,用于如果监听出引用该内存块的进程访问该内存块中的页时产生缺 页中断事件, 则增加访问的页的访问次数; 其中, 缺页中断事件为引用该内存块的进程判 断出访问的页的页表项包括的 Present位被清零时产生的, 访问的页的页表项为引用该内存 块的进程从其对应的页表中获取得到的;
第二获取模块 604,用于根据该内存块中的每个页的访问次数和计时的时间构建该内存 块的内存访问模型, 内存访问模型至少包括该内存块中的每个页的访问次数和访问频率。
其中, 第一获取模块 601包括:
第一获取单元, 用于通过反向映射的方法, 获取引用该内存块的进程并进一步获取引 用该内存块的进程对应的页表;
第一存储单元, 用于在节点的内存中申请一块内存区域, 将获取的页表中的每个页表 项包括的 Present位存储在内存区域中;
第一清零单元, 用于将获取的页表中的每个页表项包括的 Present位清零。
其中, 第一存储单元包括:
第一计算子单元, 用于在节点的内存中申请一块内存区域, 根据引用该内存块的进程 的进程号、 获取的页表中的每个页表项在页表中的序号和该内存区域的起始地址, 并通过 预设的计算模型计算出在该内存区域中存储获取的页表中的每个页表项包括的 Present位的 存储地址;
第一存储子单元, 用于根据获取的页表中的每个页表项包括的 Present位的存储地址, 将获取的页表中的每个页表项包括的 Present位存储在该内存区域中。
其中, 第二获取模块 604, 具体用于根据该内存块中的每个页的访问次数和计时的时间 计算出该内存块中的每个页的访问频率, 得到内存访问模型, 至少包括该内存块中的每个 页的访问次数和访问频率。
进一步地, 该装置还包括:
第三获取模块, 用于将节点的内存划分成多个内存段, 获取内存段被远端节点访问的 访问次数和访问频率, 远端节点为计算机系统中除该节点以外的其他节点;
第一划分模块, 用于如果存在被远端节点访问的访问次数超过第一阈值且被远端节点 访问的访问频率超过第二阈值的内存段, 则将该内存段划分成多个内存块;
第四获取模块, 用于获取内存块被远端节点访问的访问次数和访问频率, 如果存在被 远端节点的访问次数超过第三阈值且被远端节点访问的访问频率超过第四阈值的内存块, 则执行获取引用该内存块的进程对应的页表的操作。
其中, 第三获取模块包括:
第一监听单元, 用于将节点的内存划分成多个内存段, 通过节点对应的互联芯片实时 监听内存段并开始计时, 如果监听出存在远端节点访问内存段, 则增加该内存段被远端节 点访问的访问次数;
第一计算单元, 用于根据该内存段被远端节点访问的访问次数和计时的时间计算出该 内存段被远端节点访问的访问频率。
其中, 第四获取模块包括:
第二监听单元, 用于通过节点对应的互联芯片实时监听内存块并开始计时, 如果监听 出存在远端节点访问内存块, 则增加该内存块被远端节点访问的访问次数;
第二计算单元, 用于根据该内存块被远端节点访问的访问次数和计时的时间计算出该 内存块被远端节点访问的访问频率。
进一步地, 该装置还包括:
第一判断模块,用于从内存区域中获取访问的页的 Present位,根据获取的 Present位判 断访问的页在节点的内存中是否有效;
第一置位模块, 用于如果有效, 则在该进程对应的页表中将访问的页的页表项包括的 Present位置位, 并触发引用该内存块的进程继续访问访问的页。
其中, 第一判断模块包括:
第三计算单元, 用于根据引用该内存块的进程的进程号, 访问的页的页表项在页表中 的序号和内存区域的起始地址, 并通过预设的计算模型计算出在内存区域中存储访问的页 的 Present位的内存地址;
第一判断单元, 用于根据计算出的内存地址从内存区域中读取访问的页的 Present位, 如果读取的 Present位被置位, 则判断出访问的页有效, 如果读取的 Present位被清零, 则判 断出访问的页无效。
进一步地, 该装置还包括:
第一清零模块, 用于如果引用该内存块的进程访问完访问的页时, 则在引用该内存块 的进程对应的页表中将访问的页的页表项包括的 Present位清零。
在本发明实施例中, 获取引用内存块的进程对应的页表并将获取的页表中的每个页表 项的 Present位清零, 对引用该内存块的进程进行实时监听并开始计时, 如果监听出存在引 用该内存块的进程访问页时产生的缺页中断事件, 则增加访问的页的访问次数, 根据该内 存块中的每个页的访问次数和计时的时间构建该内存块的内存访问模型, 如此在构建该内 存块的内存访问模型时不需要记录引用该内存块的每个进程访问该内存块的内存访问地 址, 减少内存消耗和对系统性能的影响, 从而避免造成系统崩溃。 实施例 7
如图 7所示, 本发明实施例提供了一种构建内存访问模型的装置, 包括:
第五获取模块 701, 用于获取目标进程对应的页表, 并将该页表中存储的每个页表项所 包括的 Present位清零, 页表用于存储目标进程所要访问的页的页表项;
第二监听模块 702, 用于对目标进程进行实时监听并开始计时;
第二增加模块 703, 用于如果目标进程访问待访问的页时产生缺页中断事件, 则增加待 访问的页的访问次数; 其中, 缺页中断事件为目标进程判断待访问的页的页表项包括的 Present位被清零时产生的,待访问的页的页表项为目标进程从其对应的页表中获取得到的; 第六获取模块 704,用于根据目标进程所访问的每个页的访问次数和计时的时间构建目 标进程的内存访问模型, 内存访问模型至少包括目标进程所访问的每个页的访问次数和访 问频率。
其中, 第五获取模块 701包括:
第二获取单元, 用于获取目标进程对应的页表;
第二存储单元, 用于在节点的内存中申请一块内存区域, 将该页表中的每个页表项包 括的 Present位存储在内存区域中; 第二清零单元, 用于将该页表中的每个页表项包括的 Present位清零。
其中, 第二存储单元包括:
第二计算子单元, 用于在节点的内存中申请一块内存区域, 根据目标进程的进程号、 该页表中的每个页表项在该页表中的序号和内存区域的起始地址, 并通过预设的计算模型 计算出在内存区域中存储该页表中的每个页表项包括的 Present位的存储地址;
第二存储子单元, 用于根据该页表中的每个页表项包括的 Present位的存储地址, 将该 页表中的每个页表项包括的 Present位存储在内存区域中。
其中, 第六获取模块 704, 具体用于根据目标进程所访问的每个页的访问次数和计时的 时间, 计算出目标进程所访问的每个页的访问频率, 得到内存访问模型, 至少包括目标进 程所访问的每个页的访问次数和访问频率。
进一步地, 该装置还包括:
第七获取模块, 用于当一个时间片结束时, 获取目标进程的远近端访问比率和访问频 率, 如果目标进程的远近端访问比率超过第五阈值且目标进程的访问频率超过第六阈值, 则执行获取目标进程对应的页表的操作。
其中, 第七获取模块包括:
第三获取单元, 用于获取目标进程被节点的处理器调度后的实际运行时间以及在实际 运行时间内目标进程访问本地内存的次数和访问远端节点内存的次数, 远端节点内存为计 算机系统中除该节点以外的其他节点的内存;
第四获取单元, 用于根据实际运行时间和在实际运行时间内目标进程访问本地内存的 次数和访问远端节点内存的次数, 获取目标进程的远近端访问比率和访问频率。
其中, 第三获取单元包括:
加载子单元, 用于当目标进程被节点的处理器调度后, 将目标进程的上下文信息中包 括的访问本地内存的次数和访问远端节点内存的次数加载到处理器的统计计数寄存器中; 监听子单元, 用于通过处理器对目标进程进行实时监听, 如果监听出目标进程访问本 地内存, 则增加目标进程访问本地内存的次数, 如果监听出目标进程访问远端节点内存, 则增加目标进程访问远端节点内存的次数;
第一获取子单元, 用于当时间片结束时获取目标进程被调度后的实际运行时间; 第二获取子单元, 用于根据增加后的目标进程访问本地内存的次数和访问远端节点内 存的次数以及统计计数寄存器中存储的目标进程访问本地内存的次数和访问远端节点的内 存的次数获取目标进程在实际运行时间内访问本地内存的次数和远端节点内存的次数。 其中, 第四获取单元包括:
第三计算子单元, 用于计算出在实际运行时间内目标进程访问本地内存的次数与访问 远端节点内存的次数的比值, 将该比值作为目标进程的远近端访问比率;
第四计算子单元, 用于根据在实际运行时间内目标进程访问本地内存的次数和访问远 端节点内存的次数计算出目标进程的访问次数;
第五计算子单元, 用于根据目标进程的访问次数和实际运行时间计算出目标进程的访 问频率。
进一步地, 该装置还包括:
第二判断模块, 用于从内存区域中获取待访问的页的 Present位, 根据待访问的页的 Present位判断待访问的页在节点的内存中是否有效;
第二置位模块, 用于如果有效, 则在目标进程对应的页表中将待访问的页的页表项包 括的 Present位置位, 并触发目标进程继续访问待访问的页。
其中, 第二判断模块包括:
第四计算单元, 用于根据目标进程的进程号, 待访问的页的页表项在页表中的序号和 内存区域的起始地址, 并通过预设的计算模型计算出在内存区域中存储待访问的页的 Present位的内存地址;
第二判断单元,用于根据计算出的内存地址从内存区域中读取待访问的页的 Present位, 如果读取的 Present位被置位, 则判断出待访问的页有效, 如果读取的 Present位被清零, 则 判断出待访问的页无效。
进一步地, 该装置还包括:
第二清零模块, 用于如果目标进程访问完待访问的页时, 则在目标进程对应的页表中 将待访问的页的页表项包括的 Present位清零。
在本发明实施例中, 获取目标进程对应的页表并将该页表中的每个页表项的 Present位 清零, 对目标进程进行实时监听并开始计时, 如果监听出目标进程访问待访问的页时产生 的缺页中断事件, 则增加待访问的页的访问次数, 根据目标进程所访问的每个页的访问次 数和计时的时间构建目标进程的内存访问模型, 如此在构建目标进程的内存访问模型时不 需要记录目标进程访问内存的内存访问地址, 减少内存消耗和对系统性能的影响, 从而避 免造成系统崩溃。 本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完 成, 也可以通过程序来指令相关的硬件完成, 所述的程序可以存储于一种计算机可读存储 介质中, 上述提到的存储介质可以是只读存储器, 磁盘或光盘等。 以上所述仅为本发明的较佳实施例, 并不用以限制本发明, 凡在本发明的精神和原则 之内, 所作的任何修改、 等同替换、 改进等, 均应包含在本发明的保护范围之内。

Claims

权 利 要 求 书
1、 一种构建内存访问模型的方法, 其特征在于, 所述方法包括:
获取引用内存块的进程对应的页表, 并将所述页表中存储的每个页表项所包括的当前 Present位清零, 所述页表用于存储引用所述内存块的进程所要访问的页的页表项;
对引用所述内存块的进程进行实时监听并开始计时;
如果引用所述内存块的进程访问所述内存块中的页时产生缺页中断事件, 则增加所述访 问的页的访问次数; 其中, 所述缺页中断事件为引用所述内存块的进程判断出所述访问的页 的页表项包括的 Present位被清零时产生的,所述访问的页的页表项为引用所述内存块的进程 从其对应的页表中获取得到的;
根据所述内存块中的每个页的访问次数和计时的时间构建所述内存块的内存访问模型, 所述内存访问模型至少包括所述内存块中的每个页的访问次数和访问频率。
2、 如权利要求 1所述的方法, 其特征在于, 所述获取引用内存块的进程对应的页表, 并 将所述页表中存储的每个页表项所包括的当前 Present位清零, 包括:
通过反向映射的方法, 获取引用所述内存块的进程并进一步获取引用所述内存块的进程 对应的页表;
在节点的内存中申请一块内存区域,将所述页表中的每个页表项包括的 Present位存储在 所述内存区域中;
将所述页表中的每个页表项包括的 Present位清零。
3、 如权利要求 2所述的方法, 其特征在于, 将所述页表中的每个页表项包括的 Present 位存储在所述内存区域中, 包括:
根据引用所述内存块的进程的进程号、 所述页表中的每个页表项在所述页表中的序号和 所述内存区域的起始地址, 并通过预设的计算模型计算出在所述内存区域中存储所述页表中 的每个页表项包括的 Present位的存储地址;
根据所述页表中的每个页表项包括的 Present位的存储地址,将所述页表中的每个页表项 包括的 Present位存储在所述内存区域中。
4、 如权利要求 1-3任一项权利要求所述的方法, 其特征在于, 所述获取引用内存块的进 程对应的页表之前, 还包括:
将节点的内存划分成多个内存段, 获取所述内存段被远端节点访问的访问次数和访问频 率, 所述远端节点为计算机系统中除所述节点以外的其他节点;
如果存在被远端节点访问的访问次数超过第一阈值且被远端节点访问的访问频率超过第 二阈值的内存段, 则将所述内存段划分成多个内存块;
获取所述内存块被远端节点访问的访问次数和访问频率, 如果存在被远端节点的访问次 数超过第三阈值且被远端节点访问的访问频率超过第四阈值的内存块, 则执行获取引用所述 内存块的进程对应的页表的操作。
5、 如权利要求 4所述的方法, 其特征在于, 获取所述内存段被远端节点访问的访问次数 和访问频率, 包括:
通过所述节点对应的互联芯片实时监听所述内存段并开始计时, 如果监听出存在远端节 点访问所述内存段, 则增加所述内存段被远端节点访问的访问次数;
根据所述内存段被远端节点访问的访问次数和计时的时间计算出所述内存段被远端节点 访问的访问频率。
6、 如权利要求 4所述的方法, 其特征在于, 获取所述内存块被远端节点访问的访问次数 和访问频率, 包括:
通过所述节点对应的互联芯片实时监听所述内存块并开始计时, 如果监听出存在远端节 点访问所述内存块, 则增加所述内存块被远端节点访问的访问次数;
根据所述内存块被远端节点访问的访问次数和计时的时间计算出所述内存块被远端节点 访问的访问频率。
7、 如权利要求 1-3任一项权利要求所述的方法, 其特征在于, 所述增加所述访问的页的 访问次数之后, 还包括:
从内存区域中获取所述访问的页的 Present位, 根据所述获取的 Present位判断所述访问 的页在节点的内存中是否有效;
如果有效, 则在所述进程对应的页表中将所述访问的页的页表项包括的 Present位置位, 并触发引用所述内存块的进程继续访问所述访问的页。
8、如权利要求 7所述的方法,其特征在于,所述从内存区域中获取所述访问的页的 Present 位, 根据所述获取的 Present位判断所述访问的页在节点的内存中是否有效, 包括:
根据引用所述内存块的进程的进程号, 所述访问的页的页表项在页表中的序号和内存区 域的起始地址, 并通过预设的计算模型计算出在所述内存区域中存储所述访问的页的 Present 位的内存地址;
根据所述计算出的内存地址从所述内存区域中读取所述访问的页的 Present位,如果所述 读取的 Present位被置位, 则判断出所述访问的页有效, 如果所述读取的 Present位被清零, 则判断出所述访问的页无效。
9、 如权利要求 7所述的方法, 其特征在于, 所述方法还包括:
如果引用所述内存块的进程访问完所述访问的页时, 则在引用所述内存块的进程对应的 页表中将所述访问的页的页表项包括的 Present位清零。
10、 一种构建内存访问模型的方法, 其特征在于, 所述方法包括:
获取目标进程对应的页表,并将所述页表中存储的每个页表项所包括的当前 Present位清 零, 所述页表用于存储所述目标进程所要访问的页的页表项;
对所述目标进程进行实时监听并开始计时;
如果所述目标进程访问待访问的页时产生缺页中断事件, 则增加所述待访问的页的访问 次数; 其中, 所述缺页中断事件为所述目标进程判断所述待访问的页的页表项包括的 Present 位被清零时产生的,所述待访问的页的页表项为所述目标进程从其对应的页表中获取得到的; 根据所述目标进程所访问的每个页的访问次数和计时的时间构建所述目标进程的内存访 问模型, 所述内存访问模型至少包括所述目标进程所访问的每个页的访问次数和访问频率。
11、 如权利要求 10所述的方法, 其特征在于, 所述获取目标进程对应的页表, 并将所述 页表中存储的每个页表项包括的 Present位清零, 包括:
获取所述目标进程对应的页表;
在节点的内存中申请一块内存区域,将所述页表中的每个页表项包括的 Present位存储在 所述内存区域中;
将所述页表中的每个页表项包括的 Present位清零。
12、如权利要求 11所述的方法, 其特征在于, 将所述页表中的每个页表项包括的 Present 位存储在所述内存区域中, 包括:
根据所述目标进程的进程号、 所述页表中的每个页表项在所述页表中的序号和所述内存 区域的起始地址, 并通过预设的计算模型计算出在所述内存区域中存储所述页表中的每个页 表项包括的 Present位的存储地址;
根据所述页表中的每个页表项包括的 Present位的存储地址,将所述页表中的每个页表项 包括的 Present位存储在所述内存区域中。
13、 如权利要求 10-12任一项权利要求所述的方法, 其特征在于, 所述获取目标进程对 应的页表之前, 还包括:
当一个时间片结束时, 获取所述目标进程的远近端访问比率和访问频率, 如果所述目标 进程的远近端访问比率超过第五阈值且所述目标进程的访问频率超过第六阈值, 则执行获取 所述目标进程对应的页表的操作。
14、 如权利要求 13所述的方法, 其特征在于, 获取所述目标进程的远近端访问比率和访 问频率, 包括:
获取所述目标进程被节点的处理器调度后的实际运行时间以及在所述实际运行时间内所 述目标进程访问本地内存的次数和访问远端节点内存的次数, 所述远端节点内存为计算机系 统中除所述节点以外的其他节点的内存;
根据所述实际运行时间和在所述实际运行时间内所述目标进程访问本地内存的次数和访 问远端节点内存的次数, 获取所述目标进程的远近端访问比率和访问频率。
15、 如权利要求 14所述的方法, 其特征在于, 获取所述目标进程被节点的处理器调度后 的实际运行时间以及在所述实际运行时间内所述目标进程访问本地内存的次数和访问远端节 点内存的次数, 包括:
当所述目标进程被节点的处理器调度后, 将所述目标进程的上下文信息中包括的访问本 地内存的次数和访问远端节点内存的次数加载到所述处理器的统计计数寄存器中;
通过所述处理器对所述目标进程进行实时监听,如果监听出所述目标进程访问本地内存, 则增加所述目标进程访问本地内存的次数, 如果监听出所述目标进程访问远端节点内存, 则 增加所述目标进程访问远端节点内存的次数; 当所述时间片结束时获取所述目标进程被调度后的实际运行时间;
根据增加后的所述目标进程访问本地内存的次数和访问远端节点内存的次数以及所述统 计计数寄存器中存储的所述目标进程访问本地内存的次数和访问远端节点内存的次数获取所 述目标进程在所述实际运行时间内访问本地内存的次数和远端节点内存的次数。
16、 如权利要求 14所述的方法, 其特征在于, 根据所述实际运行时间和在所述实际运行 时间内所述目标进程访问本地内存的次数和访问远端节点内存的次数, 获取所述目标进程的 远近端访问比率和访问频率, 包括:
计算出在所述实际运行时间内所述目标进程访问本地内存的次数与访问远端节点内存的 次数的比值, 将所述比值作为所述目标进程的远近端访问比率;
根据在所述实际运行时间内所述目标进程访问本地内存的次数和访问远端节点内存的次 数计算出所述目标进程的访问次数;
根据所述目标进程的访问次数和所述实际运行时间计算出所述目标进程的访问频率。
17、 如权利要求 10-12任一项权利要求所述的方法, 其特征在于, 增加所述待访问的页 的访问次数之后, 还包括:
从内存区域中获取所述待访问的页的 Present位, 根据所述待访问的页的 Present位判断 所述待访问的页在节点的内存中是否有效;
如果有效, 则在所述目标进程对应的页表中将所述待访问的页的页表项包括的 Present 位置位, 并触发所述目标进程继续访问所述待访问的页。
18、 如权利要求 17所述的方法, 其特征在于, 所述从内存区域中获取所述待访问的页的 Present位, 根据所述待访问的页的 Present位判断所述待访问的页在节点的内存中是否有效, 包括:
根据所述目标进程的进程号, 所述待访问的页的页表项在页表中的序号和内存区域的起 始地址,并通过预设的计算模型计算出在所述内存区域中存储所述待访问的页的 Present位的 内存地址;
根据所述计算出的内存地址从所述内存区域中读取所述待访问的页的 Present位,如果所 述读取的 Present位被置位, 则判断出所述待访问的页有效, 如果所述读取的 Present位被清 零, 则判断出所述待访问的页无效。
19、 如权利要求 17所述的方法, 其特征在于, 所述方法还包括:
如果所述目标进程访问完待访问的页时, 则在所述目标进程对应的页表中将所述待访问 的页的页表项包括的 Present位清零。
20、 一种构建内存访问模型的装置, 其特征在于, 所述装置包括:
第一获取模块, 用于获取引用内存块的进程对应的页表, 并将所述页表中存储的每个页 表项所包括的当前 Present位清零,所述页表用于存储引用所述内存块的进程所要访问的页的 页表项;
第一监听模块, 用于对引用所述内存块的进程进行实时监听并开始计时;
第一增加模块, 用于如果引用所述内存块的进程访问所述内存块中的页时产生缺页中断 事件, 则增加所述访问的页的访问次数; 其中, 所述缺页中断事件为引用所述内存块的进程 判断出所述访问的页的页表项包括的 Present位被清零时产生的,所述访问的页的页表项为引 用所述内存块的进程从其对应的页表中获取得到的;
第二获取模块, 用于根据所述内存块中的每个页的访问次数和计时的时间构建所述内存 块的内存访问模型, 所述内存访问模型至少包括所述内存块中的每个页的访问次数和访问频 -
21、 如权利要求 20所述的装置, 其特征在于, 所述第一获取模块包括:
第一获取单元, 用于通过反向映射的方法, 获取引用所述内存块的进程并进一步获取引 用所述内存块的进程对应的页表;
第一存储单元, 用于在节点的内存中申请一块内存区域, 将所述页表中的每个页表项包 括的 Present位存储在所述内存区域中;
第一清零单元, 用于将所述页表中的每个页表项包括的 Present位清零。
22、 如权利要求 21所述的装置, 其特征在于, 所述第一存储单元包括:
第一计算子单元, 用于在节点的内存中申请一块内存区域, 根据引用所述内存块的进程 的进程号、 所述页表中的每个页表项在所述页表中的序号和所述内存区域的起始地址, 并通 过预设的计算模型计算出在所述内存区域中存储所述页表中的每个页表项包括的 Present位 的存储地址; 第一存储子单元, 用于根据所述页表中的每个页表项包括的 Present位的存储地址, 将所 述页表中的每个页表项包括的 Present位存储在所述内存区域中。
23、 如权利要求 20-22任一项权利要求所述的装置, 其特征在于, 所述装置还包括: 第三获取模块, 用于将节点的内存划分成多个内存段, 获取所述内存段被远端节点访问 的访问次数和访问频率, 所述远端节点为计算机系统中除所述节点以外的其他节点;
第一划分模块, 用于如果存在被远端节点访问的访问次数超过第一阈值且被远端节点访 问的访问频率超过第二阈值的内存段, 则将所述内存段划分成多个内存块;
第四获取模块, 用于获取所述内存块被远端节点访问的访问次数和访问频率, 如果存在 被远端节点的访问次数超过第三阈值且被远端节点访问的访问频率超过第四阈值的内存块, 则执行获取引用所述内存块的进程对应的页表的操作。
24、 如权利要求 23所述的装置, 其特征在于, 所述第三获取模块包括:
第一监听单元, 用于将节点的内存划分成多个内存段, 通过所述节点对应的互联芯片实 时监听所述内存段并开始计时, 如果监听出存在远端节点访问所述内存段, 则增加所述内存 段被远端节点访问的访问次数;
第一计算单元, 用于根据所述内存段被远端节点访问的访问次数和计时的时间计算出所 述内存段被远端节点访问的访问频率。
25、 如权利要求 23所述的装置, 其特征在于, 所述第四获取模块包括:
第二监听单元, 用于通过所述节点对应的互联芯片实时监听所述内存块并开始计时, 如 果监听出存在远端节点访问所述内存块, 则增加所述内存块被远端节点访问的访问次数; 第二计算单元, 用于根据所述内存块被远端节点访问的访问次数和计时的时间计算出所 述内存块被远端节点访问的访问频率。
26、 如权利要求 20-22任一项权利要求所述的装置, 其特征在于, 所述装置还包括: 第一判断模块, 用于从内存区域中获取所述访问的页的 Present 位, 根据所述获取的 Present位判断所述访问的页在节点的内存中是否有效;
第一置位模块, 用于如果有效, 则在所述进程对应的页表中将所述访问的页的页表项包 括的 Present位置位, 并触发引用所述内存块的进程继续访问所述访问的页。
27、 如权利要求 26所述的装置, 其特征在于, 所述第一判断模块包括: 第三计算单元, 用于根据引用所述内存块的进程的进程号, 所述访问的页的页表项在页 表中的序号和内存区域的起始地址, 并通过预设的计算模型计算出在所述内存区域中存储所 述访问的页的 Present位的内存地址;
第一判断单元, 用于根据所述计算出的内存地址从所述内存区域中读取所述访问的页的
Present位, 如果所述读取的 Present位被置位, 则判断出所述访问的页有效, 如果所述读取的
Present位被清零, 则判断出所述访问的页无效。
28、 如权利要求 26所述的方法, 其特征在于, 所述装置还包括:
第一清零模块, 用于如果引用所述内存块的进程访问完所述访问的页时, 则在引用所述 内存块的进程对应的页表中将所述访问的页的页表项包括的 Present位清零。
29、 一种构建内存访问模型的装置, 其特征在于, 所述装置包括:
第五获取模块, 用于获取目标进程对应的页表, 并将所述页表中存储的每个页表项所包 括的当前 Present位清零, 所述页表用于存储所述目标进程所要访问的页的页表项;
第二监听模块, 用于对所述目标进程进行实时监听并开始计时;
第二增加模块, 用于如果所述目标进程访问待访问的页时产生缺页中断事件, 则增加所 述待访问的页的访问次数; 其中, 所述缺页中断事件为所述目标进程判断所述待访问的页的 页表项包括的 Present位被清零时产生的,所述待访问的页的页表项为所述目标进程从其对应 的页表中获取得到的;
第六获取模块, 用于根据所述目标进程所访问的每个页的访问次数和计时的时间构建所 述目标进程的内存访问模型, 所述内存访问模型至少包括所述目标进程所访问的每个页的访 问次数和访问频率。
30、 如权利要求 29所述的装置, 其特征在于, 所述第五获取模块包括:
第二获取单元, 用于获取所述目标进程对应的页表;
第二存储单元, 用于在节点的内存中申请一块内存区域, 将所述页表中的每个页表项包 括的 Present位存储在所述内存区域中;
第二清零单元, 用于将所述页表中的每个页表项包括的 Present位清零。
31、 如权利要求 30所述的装置, 其特征在于, 所述第二存储单元包括:
第二计算子单元, 用于在节点的内存中申请一块内存区域, 根据所述目标进程的进程号、 所述页表中的每个页表项在所述页表中的序号和所述内存区域的起始地址, 并通过预设的计 算模型计算出在所述内存区域中存储所述页表中的每个页表项包括的 Present位的存储地址; 第二存储子单元, 用于根据所述页表中的每个页表项包括的 Present位的存储地址, 将所 述页表中的每个页表项包括的 Present位存储在所述内存区域中。
32、 如权利要求 29-31任一项权利要求所述的装置, 其特征在于, 所述装置还包括: 第七获取模块, 用于当一个时间片结束时, 获取所述目标进程的远近端访问比率和访问 频率, 如果所述目标进程的远近端访问比率超过第五阈值且所述目标进程的访问频率超过第 六阈值, 则执行获取所述目标进程对应的页表的操作。
33、 如权利要求 32所述的装置, 其特征在于, 所述第七获取模块包括:
第三获取单元, 用于获取所述目标进程被节点的处理器调度后的实际运行时间以及在所 述实际运行时间内所述目标进程访问本地内存的次数和访问远端节点内存的次数, 所述远端 节点内存为计算机系统中除所述节点以外的其他节点的内存;
第四获取单元, 用于根据所述实际运行时间和在所述实际运行时间内所述目标进程访问 本地内存的次数和访问远端节点内存的次数, 获取所述目标进程的远近端访问比率和访问频 率。
34、 如权利要求 33所述的装置, 其特征在于, 所述第三获取单元包括:
加载子单元, 用于当所述目标进程被节点的处理器调度后, 将所述目标进程的上下文信 息中包括的访问本地内存的次数和访问远端节点内存的次数加载到所述处理器的统计计数寄 存器中;
监听子单元, 用于通过所述处理器对所述目标进程进行实时监听, 如果监听出所述目标 进程访问本地内存, 则增加所述目标进程访问本地内存的次数, 如果监听出所述目标进程访 问远端节点内存, 则增加所述目标进程访问远端节点内存的次数;
第一获取子单元,用于当所述时间片结束时获取所述目标进程被调度后的实际运行时间; 第二获取子单元, 用于根据增加后的所述目标进程访问本地内存的次数和访问远端节点 内存的次数以及所述统计计数寄存器中存储的所述目标进程访问本地内存的次数和访问远端 节点内存的次数获取所述目标进程在所述实际运行时间内访问本地内存的次数和远端节点内 存的次数。
35、 如权利要求 33所述的装置, 其特征在于, 所述第四获取单元包括:
第三计算子单元, 用于计算出在所述实际运行时间内所述目标进程访问本地内存的次数 与访问远端节点内存的次数的比值, 将所述比值作为所述目标进程的远近端访问比率; 第四计算子单元, 用于根据在所述实际运行时间内所述目标进程访问本地内存的次数和 访问远端节点内存的次数计算出所述目标进程的访问次数;
第五计算子单元, 用于根据所述目标进程的访问次数和所述实际运行时间计算出所述目 标进程的访问频率。
36、 如权利要求 29-31任一项权利要求所述的装置, 其特征在于, 所述装置还包括: 第二判断模块, 用于从内存区域中获取所述待访问的页的 Present位, 根据所述待访问的 页的 Present位判断所述待访问的页在节点的内存中是否有效;
第二置位模块, 用于如果有效, 则在所述目标进程对应的页表中将所述待访问的页的页 表项包括的 Present位置位, 并触发所述目标进程继续访问所述待访问的页。
37、 如权利要求 36所述的装置, 其特征在于, 所述所述第二判断模块包括: 第四计算单元, 用于根据所述目标进程的进程号, 所述待访问的页的页表项在页表中的 序号和内存区域的起始地址, 并通过预设的计算模型计算出在所述内存区域中存储所述待访 问的页的 Present位的内存地址;
第二判断单元, 用于根据所述计算出的内存地址从所述内存区域中读取所述待访问的页 的 Present位, 如果所述读取的 Present位被置位, 则判断出所述待访问的页有效, 如果所述 读取的 Present位被清零, 则判断出所述待访问的页无效。
38、 如权利要求 36所述的方法, 其特征在于, 所述装置还包括:
第二清零模块, 用于如果所述目标进程访问完待访问的页时, 则在所述目标进程对应的 页表中将所述待访问的页的页表项包括的 Present位清零。
PCT/CN2011/081544 2011-10-31 2011-10-31 一种构建内存访问模型的方法及装置 WO2012167533A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP11867473.8A EP2772853B1 (en) 2011-10-31 2011-10-31 Method and device for building memory access model
CN201180002377.5A CN102439577B (zh) 2011-10-31 2011-10-31 一种构建内存访问模型的方法及装置
PCT/CN2011/081544 WO2012167533A1 (zh) 2011-10-31 2011-10-31 一种构建内存访问模型的方法及装置
US14/263,212 US9471495B2 (en) 2011-10-31 2014-04-28 Method and apparatus for constructing memory access model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2011/081544 WO2012167533A1 (zh) 2011-10-31 2011-10-31 一种构建内存访问模型的方法及装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/263,212 Continuation US9471495B2 (en) 2011-10-31 2014-04-28 Method and apparatus for constructing memory access model

Publications (1)

Publication Number Publication Date
WO2012167533A1 true WO2012167533A1 (zh) 2012-12-13

Family

ID=45986242

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/081544 WO2012167533A1 (zh) 2011-10-31 2011-10-31 一种构建内存访问模型的方法及装置

Country Status (4)

Country Link
US (1) US9471495B2 (zh)
EP (1) EP2772853B1 (zh)
CN (1) CN102439577B (zh)
WO (1) WO2012167533A1 (zh)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140042546A (ko) * 2012-09-28 2014-04-07 에스케이하이닉스 주식회사 반도체 장치 및 그 동작 방법
CN103914363B (zh) 2012-12-31 2016-10-26 华为技术有限公司 一种内存监控方法及相关装置
CN104346293B (zh) * 2013-07-25 2017-10-24 华为技术有限公司 混合内存的数据访问方法、模块、处理器及终端设备
CN105701020B (zh) * 2014-11-28 2018-11-30 华为技术有限公司 一种内存访问的方法、相关装置和系统
US9727241B2 (en) * 2015-02-06 2017-08-08 Advanced Micro Devices, Inc. Memory page access detection
CN104899111B (zh) * 2015-06-09 2018-03-20 烽火通信科技股份有限公司 一种处理家庭网关系统内核崩溃的方法及系统
CN105159838B (zh) * 2015-08-27 2018-06-26 华为技术有限公司 访问内存的方法及计算机系统
WO2017070861A1 (zh) * 2015-10-28 2017-05-04 华为技术有限公司 一种中断响应方法、装置及基站
US20180150256A1 (en) * 2016-11-29 2018-05-31 Intel Corporation Technologies for data deduplication in disaggregated architectures
US11023135B2 (en) 2017-06-27 2021-06-01 TidalScale, Inc. Handling frequently accessed pages
US10817347B2 (en) * 2017-08-31 2020-10-27 TidalScale, Inc. Entanglement of pages and guest threads

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101136765A (zh) * 2006-09-01 2008-03-05 中兴通讯股份有限公司 一种快速访问信息模型的方法
CN101315602A (zh) * 2008-05-09 2008-12-03 浙江大学 硬件化的进程内存管理核的方法
CN101604283A (zh) * 2009-06-11 2009-12-16 北京航空航天大学 一种基于Linux内核页表替换的内存访问模型追踪方法

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4761733A (en) * 1985-03-11 1988-08-02 Celerity Computing Direct-execution microprogrammable microprocessor system
US4890223A (en) * 1986-01-15 1989-12-26 Motorola, Inc. Paged memory management unit which evaluates access permissions when creating translator
US5282274A (en) * 1990-05-24 1994-01-25 International Business Machines Corporation Translation of multiple virtual pages upon a TLB miss
TW212840B (en) * 1992-04-22 1993-09-11 Ibm Multi-bit vector for page aging
US6112286A (en) * 1997-09-19 2000-08-29 Silicon Graphics, Inc. Reverse mapping page frame data structures to page table entries
US6157398A (en) * 1997-12-30 2000-12-05 Micron Technology, Inc. Method of implementing an accelerated graphics port for a multiple memory controller computer system
US7487508B2 (en) * 2002-05-16 2009-02-03 Hewlett-Packard Development Company, L.P. System and method for reconstructing client web page accesses from captured network packets
US7246101B2 (en) * 2002-05-16 2007-07-17 Hewlett-Packard Development Company, L.P. Knowledge-based system and method for reconstructing client web page accesses from captured network packets
US7155548B2 (en) * 2003-11-04 2006-12-26 Texas Instruments Incorporated Sequential device control with time-out function
CN100383763C (zh) * 2004-02-27 2008-04-23 中国人民解放军国防科学技术大学 基于操作系统反向页表的页迁移和复制方法
US7403945B2 (en) * 2004-11-01 2008-07-22 Sybase, Inc. Distributed database system providing data and space management methodology
US7395385B2 (en) * 2005-02-12 2008-07-01 Broadcom Corporation Memory management for a mobile multimedia processor
US7330958B2 (en) * 2005-09-22 2008-02-12 International Business Machines Corporation Method and apparatus for translating a virtual address to a real address using blocks of contiguous page table entries
US7493439B2 (en) * 2006-08-01 2009-02-17 International Business Machines Corporation Systems and methods for providing performance monitoring in a memory system
GB0623276D0 (en) * 2006-11-22 2007-01-03 Transitive Ltd Memory consistency protection in a multiprocessor computing system
US8344475B2 (en) * 2006-11-29 2013-01-01 Rambus Inc. Integrated circuit heating to effect in-situ annealing
TWI417722B (zh) * 2007-01-26 2013-12-01 Hicamp Systems Inc 階層式不可改變的內容可定址的記憶體處理器
US7991942B2 (en) * 2007-05-09 2011-08-02 Stmicroelectronics S.R.L. Memory block compaction method, circuit, and system in storage devices based on flash memories
GB2457341B (en) * 2008-02-14 2010-07-21 Transitive Ltd Multiprocessor computing system with multi-mode memory consistency protection
US8868847B2 (en) * 2009-03-11 2014-10-21 Apple Inc. Multi-core processor snoop filtering
US8397219B2 (en) * 2009-03-31 2013-03-12 Oracle America, Inc. Method and apparatus for tracking enregistered memory locations
CA2758235A1 (en) * 2009-04-27 2010-11-04 Kamlesh Gandhi Device and method for storage, retrieval, relocation, insertion or removal of data in storage units
US9086973B2 (en) * 2009-06-09 2015-07-21 Hyperion Core, Inc. System and method for a cache in a multi-core processor
US20110016290A1 (en) * 2009-07-14 2011-01-20 Arie Chobotaro Method and Apparatus for Supporting Address Translation in a Multiprocessor Virtual Machine Environment
US8627041B2 (en) * 2009-10-09 2014-01-07 Nvidia Corporation Efficient line and page organization for compression status bit caching
US8954697B2 (en) * 2010-08-05 2015-02-10 Red Hat, Inc. Access to shared memory segments by multiple application processes

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101136765A (zh) * 2006-09-01 2008-03-05 中兴通讯股份有限公司 一种快速访问信息模型的方法
CN101315602A (zh) * 2008-05-09 2008-12-03 浙江大学 硬件化的进程内存管理核的方法
CN101604283A (zh) * 2009-06-11 2009-12-16 北京航空航天大学 一种基于Linux内核页表替换的内存访问模型追踪方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2772853A4 *

Also Published As

Publication number Publication date
EP2772853A4 (en) 2014-11-26
EP2772853B1 (en) 2019-05-22
US20140237192A1 (en) 2014-08-21
CN102439577A (zh) 2012-05-02
EP2772853A1 (en) 2014-09-03
CN102439577B (zh) 2014-01-22
US9471495B2 (en) 2016-10-18

Similar Documents

Publication Publication Date Title
WO2012167533A1 (zh) 一种构建内存访问模型的方法及装置
US6219727B1 (en) Apparatus and method for computer host system and adaptor interrupt reduction including clustered command completion
US8417999B2 (en) Memory management techniques selectively using mitigations to reduce errors
US9189410B2 (en) Hypervisor-based flash cache space management in a multi-VM environment
US10241889B2 (en) Tracking pipelined activity during off-core memory accesses to evaluate the impact of processor core frequency changes
JP2011507128A (ja) プロセッサで実行中のプログラムソフトウェアをプロファイルするためのメカニズム
KR101574451B1 (ko) 트랜잭션 메모리 시스템 내구성 부여
US9009368B2 (en) Interrupt latency performance counters
US20190258561A1 (en) Real-time input/output bandwidth estimation
JP5694170B2 (ja) 選択的に軽減を使用してエラーを低減するメモリー管理技術
US10331537B2 (en) Waterfall counters and an application to architectural vulnerability factor estimation
Koshiba et al. A software-based NVM emulator supporting read/write asymmetric latencies
US20140245082A1 (en) Implementing client based throttled error logging
US6725363B1 (en) Method for filtering instructions to get more precise event counts
KR102329757B1 (ko) 시스템 및 그것의 동작 방법
US9182958B2 (en) Software code profiling
US11561895B2 (en) Oldest operation wait time indication input into set-dueling
Haghdoost et al. hfplayer: Scalable replay for intensive block I/O workloads
JP3099807B2 (ja) Cpu使用率測定方式
JP2009217385A (ja) プロセッサ及びマルチプロセッサ
JP6378289B2 (ja) データベースにおいてホットページを決定するための方法および装置
US20120117330A1 (en) Method and apparatus for selectively bypassing a cache for trace collection in a processor
WO2023125248A1 (zh) 内存带宽的控制方法、装置、电子设备和存储介质
US11914519B2 (en) Affinity-based cache operation for a persistent storage device
US11099966B2 (en) Efficient generation of instrumentation data for direct memory access operations

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201180002377.5

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11867473

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2011867473

Country of ref document: EP