CN111651379B - DAX device address translation caching method and system - Google Patents

DAX device address translation caching method and system

Info

Publication number
CN111651379B
Authority
CN
China
Prior art keywords
address
register
dax
address translation
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010357810.8A
Other languages
Chinese (zh)
Other versions
CN111651379A (en)
Inventor
熊子威 (Xiong Ziwei)
蒋德钧 (Jiang Dejun)
熊劲 (Xiong Jin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CN202010357810.8A
Publication of CN111651379A
Application granted
Publication of CN111651379B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10Address translation
    • G06F12/1081Address translation for peripheral access to main memory, e.g. direct memory access [DMA]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10Address translation
    • G06F12/1009Address translation using page tables, e.g. page table structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1016Performance improvement

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention provides a DAX device address translation caching method and system, comprising the following steps: constructing a DAX address translation cache formed by a mapping-file head address register MFA, an object offset register OFS, a file number register FID, and an address translation table; under control of the address translation function, writing the file number in the persistent address and the object offset in the persistent address into the file number register and the object offset register, respectively; the TLB translates the virtual address issued by the CPU into a physical address, while the DAX address translation cache looks up the address translation table using the value stored in the file number register, adds the head address found by the lookup to the value in the object offset register to obtain a direct access address, and feeds the direct access address back to the CPU as the translation result of the virtual address. The invention can halve the instruction overhead of the address translation function and greatly improves the efficiency of handling multiple mapping files.

Description

DAX device address translation caching method and system
Technical Field
The invention relates to the fields of computer architecture and non-volatile memory research, and in particular to a DAX device address translation caching method and system.
Background
Accessing files through memory mapping is common in current systems. In this scheme, a file is mapped into a large byte array whose head address is determined by the mapping function, and the application is free to access any data within the array at run time. This creates a problem: if a program writes data into the mapped file and wishes to retrieve it after a restart, it must first serialize the data into some fixed format. Because the mapping function does not guarantee that the same file is mapped to the same address, the mapping address from the previous run is invalid in the current run, so after a restart the program cannot locate data stored in the mapped file through its old virtual address.
With the advent of new-generation non-volatile memory (NVM), researchers and enterprises are building NVM development libraries intended to provide friendly interfaces for application developers. These libraries work on persistent devices supporting DAX (Direct Access) mode; libraries that integrate well with existing operating systems all choose to manage the NVM with a file system and to access resources on the NVM by mapping files. These development libraries therefore also need to provide a way for programs to conveniently access data on NVM after a restart.
Existing development libraries are designed as follows: each library maintains a persistent address for each storage object, recording the number of the mapping file and the offset of the object relative to the file's head address, and provides an address translation function that converts the persistent address into a virtual address at run time, avoiding the cost of formatting data. As noted above, such translation is necessary: virtual addresses are volatile, and after a process restarts and remaps a file, the file's head address is not guaranteed to equal the mapping address of the previous run, so each library must maintain persistent addresses. This design lets a program access data in the NVM normally after a restart, but it also becomes a performance bottleneck of the NVM development library.
In existing NVM development libraries, the address translation function is relatively expensive, accounting for about 13% of total overhead. Such a function is hard to improve in software: on conventional systems address translation is done in hardware, whereas these libraries perform it in software, which costs more time. Moreover, if multiple files must be managed, the address translation function has to repeatedly look up the head addresses of different files at run time, which is extremely inefficient. Yet the function's logic is very simple and its code very short, so software-level optimization is very difficult.
Disclosure of Invention
The invention implements a DAX address translation cache in hardware, providing a hardware facility that development libraries can use to accelerate address translation, thereby improving the running efficiency of applications built on these libraries.
Specifically, to overcome the shortcomings of the prior art, the present invention provides a DAX device address translation caching method, which includes:
step 1, constructing a DAX address translation cache formed by a mapping-file head address register MFA, an object offset register OFS, a file number register FID, and an address translation table;
step 2, under control of the address translation function, writing the file number in the persistent address and the object offset in the persistent address into the file number register and the object offset register, respectively;
and step 3, the TLB translates the virtual address issued by the CPU into a physical address; the DAX address translation cache looks up the address translation table using the value stored in the file number register, adds the head address found by the lookup to the value in the object offset register to obtain a direct access address, and feeds the direct access address back to the CPU as the translation result of the virtual address.
In the DAX device address translation caching method, step 3 comprises:
step 31, if the direct access address is 0, the address translation function fills the head address of the mapping file into the mapping-file head address register and writes 0 to the DAX address translation cache; after receiving the write request, the DAX address translation cache fills the values of the file number register and the mapping-file head address register into the address translation table through a replacement algorithm.
The DAX device address translation caching method further comprises:
step 4, sending the physical address to a cache memory, taking the data corresponding to the physical address in the cache memory as the response result, and judging whether the direct access address is valid; if so, feeding the direct access address back to the CPU, otherwise feeding the response result back to the CPU.
In the DAX device address translation caching method, the address translation table consists of 32 register pairs.
The invention also provides a DAX device address translation cache system, which comprises:
module 1, which constructs a DAX address translation cache formed by a mapping-file head address register MFA, an object offset register OFS, a file number register FID, and an address translation table;
module 2, which, under control of the address translation function, writes the file number in the persistent address and the object offset in the persistent address into the file number register and the object offset register, respectively;
and module 3, in which the TLB translates the virtual address issued by the CPU into a physical address; the DAX address translation cache looks up the address translation table using the value stored in the file number register, adds the head address found by the lookup to the value in the object offset register to obtain a direct access address, and feeds the direct access address back to the CPU as the translation result of the virtual address.
In the DAX device address translation cache system, the module 3 comprises:
logic whereby, if the direct access address is 0, the address translation function fills the head address of the mapping file into the mapping-file head address register and writes 0 to the DAX address translation cache; after receiving the write request, the DAX address translation cache fills the values of the file number register and the mapping-file head address register into the address translation table through a replacement algorithm.
The DAX device address translation cache system further comprises:
module 4, which sends the physical address to the cache memory, takes the data corresponding to the physical address in the cache memory as the response result, and judges whether the direct access address is valid; if so, it feeds the direct access address back to the CPU, otherwise it feeds the response result back to the CPU.
In the DAX device address translation cache system, the address translation table consists of 32 register pairs.
The advantages of the invention are as follows:
The invention can halve the instruction overhead of the address translation function and greatly improves the efficiency of handling multiple mapping files.
Drawings
FIG. 1 is a block diagram of an address translation cache;
FIG. 2 is a diagram of the connection between the CPU, TLB and Cache;
FIG. 3 is a block diagram of the present invention;
FIG. 4 is a graph showing the effect of the present invention.
Detailed Description
The inventors studied the efficiency of address translation functions and found that this prior-art shortcoming is caused by excessive redundant instructions arising from conditional branches, redundant address loads, security checks, and the like. The purpose of these redundant instructions is to maintain a simple software cache that temporarily stores the head address of the most recently accessed mapping file.
Clearly, software cannot maintain a larger cache, or lookup becomes extremely slow; validity checks on the cache also introduce still more redundant instructions. Since the purpose of these instructions is address translation, and current computer architectures already include a TLB for accelerating address translation, this process can instead be done by hardware. The design, however, must meet several requirements: (1) changes to the current computer architecture should be as small as possible and should not disturb the data path, preferably avoiding data-path changes entirely; (2) new instructions should be avoided as far as possible, otherwise the practical value of the invention is compromised; (3) ease of use: developers who want the performance gain from this device should not have to rewrite much code.
By borrowing the structure of the TLB, the invention designs the DAX address translation cache. With this cache, the instruction count of the address translation function can be halved, and the device greatly improves the performance of the address translation function when handling multiple mapping files, because parallel lookup is very efficient in hardware.
The key points of the invention are as follows:
Key point 1: balancing hardware performance against power consumption, the DAX address translation cache consists of 32 register pairs and 3 independent registers. The register pairs form the address translation table (Address Translation Table); the three independent registers are called the MFA register (mapping-file head address register), the OFS register (object offset register), and the FID register (file number register). Each register pair stores a mapping-file number and that file's head address, and the three independent registers can transfer data to the register pairs;
Key point 2: the address translation function must explicitly access the DAX address translation cache through ordinary memory-access instructions. Because the invention does not wish to disturb the existing data path of current computer architectures, the address translation function is required to access the DAX address translation cache explicitly to obtain the required head address. On the one hand, this avoids degrading the performance of the existing system when the DAX address translation cache is added; on the other hand, it avoids adding new instructions or modifying existing ones. The only change needed is to register four virtual addresses in the operating system and map them to the DAX address translation cache; since every current architecture reserves an address region of a certain size, this operation is not complex;
Key point 3: the DAX address translation cache is responsible for checking address validity; a file number of 0 or an offset of 0 is an illegal address;
Key point 4: the DAX address translation cache can be written but not read, which prevents a malicious program from illegally obtaining, through read operations on the DAX cache, the addresses of mapping files it has no right to access.
In order to make the above features and effects of the present invention more clearly understood, the following specific examples are given with reference to the accompanying drawings.
DAX address translation cache structure:
The structure of the address translation cache is shown in FIG. 1. In the figure, the three independent registers can transfer data unidirectionally to the address translation table, while the OFS register and the head-address registers in the address translation table feed an adder; the adder's output is the translated virtual address (the direct access address) produced by the DAX address translation cache.
DAX address translation cache location:
In modern computer architectures, the CPU, the TLB, and the Cache are connected as shown in FIG. 2. When a memory-access instruction executes, the CPU sends the virtual address generated by the instruction to the TLB; on a hit, the TLB performs address translation and produces a physical address. The physical address is sent to the Cache; on a hit, the data in the Cache is returned to the CPU, completing the access. On a miss, the physical address is sent to the memory bus, then to the memory controller, and finally to the DRAM to complete the read. The Cache is completely transparent to the programmer and caches the contents of DRAM; the DRAM itself holds instructions and data.
The DAX address translation Cache should be placed between the TLB and the Cache and mapped to reserved addresses of the CPU. After the TLB finishes address translation, the resulting physical address is sent directly to both the DAX address translation Cache and the Cache; if the access targets the DAX address translation Cache, the DAX address translation Cache responds and transmits the data to the CPU, otherwise the Cache transmits the data to the CPU or reports an error. Arbitration logic should therefore be placed between the DAX address translation Cache and the Cache, with the response of the DAX address translation Cache given higher priority, so that data sent by the DAX address translation Cache is transmitted first.
The address translation function is written by the developer and uses the DAX cache to speed up its execution. Ordinarily this function must query a software-maintained cache and then decide how to perform the translation; the invention effectively moves this software-maintained cache into hardware. The address translation function should perform the following procedure:
1. Write the FID register: write the file number from the persistent address into this register. There is no special requirement except that the file number must not be 0, so each development library is free to choose how to generate file numbers. The file number in the address is determined by the upper-layer developer and is simply a 64-bit integer. For example, the PMDK developed by Intel manages objects by file number plus in-file offset; here the FID corresponds to PMDK's file number and the OFS to its in-file offset.
2. Write the OFS register: write the object offset from the persistent address into this register. There is no special requirement except that the offset must not be 0.
3. Read the address translation table in the DAX address translation cache. The cache looks up the table using the value stored in the FID register; if a match is found, it adds the corresponding head address to the value in the OFS register and returns the result as the response to the address translation function's read request.
4. Check whether the data read is 0. If not, address translation ends; if it is 0, continue with the next step.
5. Write the MFA register: fill the head address of the mapping file into this register. When programming, the first step in accessing the NVM is to map the file, at which point the head address of the mapped file is obtained.
6. Write 0 to the DAX address translation cache. After receiving the write request, the cache fills the values of the FID and MFA registers into the address translation table through a replacement algorithm.
7. The address translation function ends.
Arbitration between the DTLB (direct-access TLB, i.e. the DAX address translation cache) and the Cache:
As described above, after the TLB completes address translation, the resulting physical address should be sent simultaneously to the DTLB and the Cache; their responses are arbitrated, and the DTLB's response is sent to the CPU with priority. FIG. 3 shows the hardware structure used to accomplish this arbitration.
Evaluation. Since adding this component to a real CPU is currently impractical, the evaluation was performed by simulation. The invention uses the gem5 simulator, which models CPUs of different architectures, including X86 and ARM, and offers two modes: full-system simulation and system-call emulation. Because the invention works in user mode and does not require a running operating system, the system-call emulation mode is used.
In the tests, the invention compares the pmemobj_direct address translation function from the PMDK library, developed and maintained by Intel, with a self-written address translation function that calls the DAX address translation cache, performing address translation on 8 million persistent objects using a single memory pool and multiple memory pools, respectively. The time consumed (unit: seconds) is shown in FIG. 4.
Influence on the existing system:
To evaluate the impact that adding a DAX address translation cache has on an existing system, the performance of the components of an existing computer system must be examined.
Currently, a TLB can respond in 1 clock cycle and a Cache in 5 clock cycles, so in theory, once an instruction is decoded and enters the execute stage, data can reach the CPU after as few as 6 clock cycles. Published data show that the first-level Cache hit rate reaches 95%, and 97% when combined with the second-level Cache, so the average memory latency can be estimated at 9 clock cycles. If the DAX address translation cache were inserted in series between the TLB and the Cache, every memory access would need one extra clock cycle to arbitrate whether to forward the physical address to the Cache; ordinary memory-access instructions would then suffer an extra 1-cycle delay, and performance would drop by roughly 20%. The design therefore stresses that the TLB should send the physical address to the Cache and the DAX address translation Cache simultaneously and select the response through arbitration logic, rather than going to the DAX address translation Cache first and the Cache second.
The following is a system embodiment corresponding to the method embodiment above; the two may be implemented in cooperation. Technical details described in the embodiments above remain valid in this embodiment and, to reduce repetition, are not repeated here; conversely, technical details described in this embodiment also apply to the embodiments above.
The invention also provides a DAX device address translation cache system, which comprises:
module 1, which constructs a DAX address translation cache formed by a mapping-file head address register MFA, an object offset register OFS, a file number register FID, and an address translation table;
module 2, which, under control of the address translation function, writes the file number in the persistent address and the object offset in the persistent address into the file number register and the object offset register, respectively;
and module 3, in which the TLB translates the virtual address issued by the CPU into a physical address; the DAX address translation cache looks up the address translation table using the value stored in the file number register, adds the head address found by the lookup to the value in the object offset register to obtain a direct access address, and feeds the direct access address back to the CPU as the translation result of the virtual address.
In the DAX device address translation cache system, the module 3 comprises:
logic whereby, if the direct access address is 0, the address translation function fills the head address of the mapping file into the mapping-file head address register and writes 0 to the DAX address translation cache; after receiving the write request, the DAX address translation cache fills the values of the file number register and the mapping-file head address register into the address translation table through a replacement algorithm.
The DAX device address translation cache system further comprises:
module 4, which sends the physical address to the cache memory, takes the data corresponding to the physical address in the cache memory as the response result, and judges whether the direct access address is valid; if so, it feeds the direct access address back to the CPU, otherwise it feeds the response result back to the CPU.
In the DAX device address translation cache system, the address translation table consists of 32 register pairs.

Claims (6)

1. A DAX device address translation caching method, comprising:
step 1, constructing a DAX address translation cache formed by a mapping-file head address register MFA, an object offset register OFS, a file number register FID, and an address translation table;
step 2, under control of the address translation function, writing the file number in the persistent address and the object offset in the persistent address into the file number register and the object offset register, respectively;
step 3, the TLB translates the virtual address issued by the CPU into a physical address; the DAX address translation cache looks up the address translation table using the value stored in the file number register, adds the head address found by the lookup to the value in the object offset register to obtain a direct access address, and feeds the direct access address back to the CPU as the translation result of the virtual address;
wherein the step 3 comprises:
step 31, if the direct access address is 0, the address translation function fills the head address of the mapping file into the mapping-file head address register and writes 0 to the DAX address translation cache; after receiving the write request, the DAX address translation cache fills the values of the file number register and the mapping-file head address register into the address translation table through a replacement algorithm.
2. The DAX device address translation caching method of claim 1, further comprising:
step 4, sending the physical address to a cache memory, taking the data corresponding to the physical address in the cache memory as the response result, and judging whether the direct access address is valid; if so, feeding the direct access address back to the CPU, otherwise feeding the response result back to the CPU.
3. The DAX device address translation caching method of claim 1, wherein the address translation table consists of 32 register pairs.
4. A DAX device address translation cache system, comprising:
module 1, which constructs a DAX address translation cache formed by a mapping-file head address register MFA, an object offset register OFS, a file number register FID, and an address translation table;
module 2, which, under control of the address translation function, writes the file number in the persistent address and the object offset in the persistent address into the file number register and the object offset register, respectively;
module 3, in which the TLB translates the virtual address issued by the CPU into a physical address; the DAX address translation cache looks up the address translation table using the value stored in the file number register, adds the head address found by the lookup to the value in the object offset register to obtain a direct access address, and feeds the direct access address back to the CPU as the translation result of the virtual address;
wherein the module 3 comprises:
logic whereby, if the direct access address is 0, the address translation function fills the head address of the mapping file into the mapping-file head address register and writes 0 to the DAX address translation cache; after receiving the write request, the DAX address translation cache fills the values of the file number register and the mapping-file head address register into the address translation table through a replacement algorithm.
5. The DAX device address translation cache system of claim 4, further comprising:
module 4, which sends the physical address to the cache memory, takes the data corresponding to the physical address in the cache memory as the response result, and judges whether the direct access address is valid; if so, it feeds the direct access address back to the CPU, otherwise it feeds the response result back to the CPU.
6. The DAX device address translation cache system of claim 4, wherein the address translation table consists of 32 register pairs.
CN202010357810.8A 2020-04-29 2020-04-29 DAX device address translation caching method and system Active CN111651379B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010357810.8A CN111651379B (en) 2020-04-29 2020-04-29 DAX device address translation caching method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010357810.8A CN111651379B (en) 2020-04-29 2020-04-29 DAX device address translation caching method and system

Publications (2)

Publication Number Publication Date
CN111651379A CN111651379A (en) 2020-09-11
CN111651379B true CN111651379B (en) 2023-09-12

Family

ID=72346609

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010357810.8A Active CN111651379B (en) 2020-04-29 2020-04-29 DAX device address translation caching method and system

Country Status (1)

Country Link
CN (1) CN111651379B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101609429A (en) * 2009-07-22 2009-12-23 大唐微电子技术有限公司 A kind of method and apparatus of debugging embedded operating system
CN102495132A (en) * 2011-12-13 2012-06-13 东北大学 Multi-channel data acquisition device for submarine pipeline magnetic flux leakage internal detector
CN102929796A (en) * 2012-06-01 2013-02-13 杭州中天微系统有限公司 Memory management module simultaneously supporting software backfilling and hardware backfilling
US9058284B1 (en) * 2012-03-16 2015-06-16 Applied Micro Circuits Corporation Method and apparatus for performing table lookup
CN105740168A (en) * 2016-01-23 2016-07-06 中国人民解放军国防科学技术大学 Fault-tolerant directory cache controller
CN106940815A (en) * 2017-02-13 2017-07-11 西安交通大学 A kind of programmable convolutional neural networks Crypto Coprocessor IP Core
CN108959125A (en) * 2018-07-03 2018-12-07 中国人民解放军国防科技大学 Storage access method and device supporting rapid data acquisition

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US7100028B2 (en) * 2000-08-09 2006-08-29 Advanced Micro Devices, Inc. Multiple entry points for system call instructions

Non-Patent Citations (1)

Title
Sha Xingmian et al. "An Efficient Shared-Memory File System for Co-Resident Virtual Machines". Chinese Journal of Computers (《计算机学报》). 2019, Vol. 42, No. 4, pp. 800-819. *

Also Published As

Publication number Publication date
CN111651379A (en) 2020-09-11

Similar Documents

Publication Publication Date Title
US10318322B2 (en) Binary translator with precise exception synchronization mechanism
Seshadri et al. RowClone: Fast and energy-efficient in-DRAM bulk data copy and initialization
US10705972B2 (en) Dynamic adaptation of memory page management policy
JP7483950B2 Systems and methods for performing binary conversion
US7840845B2 (en) Method and system for setting a breakpoint
KR100421749B1 (en) Method and apparatus for implementing non-faulting load instruction
US8099559B2 (en) System and method for generating fast instruction and data interrupts for processor design verification and validation
US20090070532A1 (en) System and Method for Efficiently Testing Cache Congruence Classes During Processor Design Verification and Validation
Chang et al. Efficient memory virtualization for cross-isa system mode emulation
US7269825B1 (en) Method and system for relative address translation
US20060277371A1 (en) System and method to instrument references to shared memory
Kumar et al. Survey on various advanced technique for cache optimization methods for RISC based system architecture
CN112148641A (en) System and method for tracking physical address accesses by a CPU or device
Garcia et al. A reconfigurable hardware interface for a modern computing system
CN114662426A (en) Detecting simulation states of transient execution attacks
CN111651379B (en) DAX equipment address conversion caching method and system
US6862675B1 (en) Microprocessor and device including memory units with different physical addresses
Delshadtehrani et al. In-scratchpad memory replication: Protecting scratchpad memories in multicore embedded systems against soft errors
US11544201B2 (en) Memory tracing in an emulated computing system
Whitham et al. The scratchpad memory management unit for microblaze: Implementation, testing, and case study
CN113885943A (en) Processing unit, system on chip, computing device and method
US11693725B2 (en) Detecting execution hazards in offloaded operations
KR100802686B1 (en) System and method for low overhead boundary checking of java arrays
Wang et al. Mei: A light weight memory error injection tool for validating online memory testers
US20100077145A1 (en) Method and system for parallel execution of memory instructions in an in-order processor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant