CN110442469B - Cache side channel attack defense method based on local random mapping - Google Patents


Info

Publication number
CN110442469B
CN110442469B
Authority
CN
China
Prior art keywords
cache
bit
salt
candidate
physical address
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910666998.1A
Other languages
Chinese (zh)
Other versions
CN110442469A (en)
Inventor
卜凯
谭钦翰
曾治华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN201910666998.1A
Publication of CN110442469A
Application granted
Publication of CN110442469B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/0703 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F 11/0706 Error or fault processing not based on redundancy, the processing taking place on a specific hardware platform or in a specific software environment
    • G06F 11/073 Error or fault processing not based on redundancy, the processing taking place in a memory management context, e.g. virtual memory or cache management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2455 Query execution
    • G06F 16/24552 Database cache management

Abstract

The invention discloses a cache side channel attack defense method based on local random mapping, which comprises the following steps: (1) constructing a system consisting of a CPU, a first-level cache, a second-level cache, a last-level cache, a memory controller and a memory; (2) computing n candidate cache sets for each physical address; (3) when a cache access occurs, first computing, from the physical address, the n candidate cache set indexes corresponding to the n candidate cache sets, storing them in n index registers, and searching the n cache sets in parallel to determine whether the access is a cache hit or a cache miss; (4) on a cache miss, randomly selecting one of the n candidate cache sets as the target cache set to which the block is finally mapped; (5) in the target cache set, if a record with a valid bit of 0 exists, writing the memory block fetched from the memory into that record; otherwise, selecting a record and replacing its existing memory block. The method offers advantages that existing methods lack, including high security, high speed, and friendliness to the last-level cache.

Description

Cache side channel attack defense method based on local random mapping
Technical Field
The invention belongs to the field of cache security, and particularly relates to a cache side channel attack defense method based on local random mapping.
Background
By exploiting the latency difference between a cache hit and a cache miss during memory accesses, cache timing attacks can cause serious information leakage. Although the memory spaces of different processes are isolated from each other for security reasons, the cache is still shared among processes. Thus, the cache accesses of one process can affect whether another process's accesses hit or miss. Since the access latencies of a cache hit and a cache miss differ significantly, hit-or-miss behavior can be effectively learned by measuring access latency. An attacker process can therefore infer the cache access behavior of a victim process by measuring its own cache access latency. This technique supports two attack modes: side channel attacks and covert channel attacks.
In a side channel attack, an attacker process obtains private information by monitoring the cache accesses of a victim process. For example, in an AES encryption program, one bit of the key may determine whether a program branch is executed, which leads to different cache access behavior. Thus, by monitoring the cache accesses of the encryption process, an attacker can recover the AES key (Cache-timing attacks on AES, 2005).
In a covert channel attack, two attacker processes use the difference in cache access latency to communicate secretly. The sender process knows some private information and attempts to transmit it to the receiver process. One typical approach is as follows: the sender process transmits information by influencing whether the receiver process's cache accesses hit or miss, while the receiver process receives information by observing its own cache access latency. For example, the sender process causes the receiver process to experience a cache hit to transmit a bit "1", or a cache miss to transmit a bit "0". With a well-designed communication protocol, such covert channels can reach transmission rates of tens or even hundreds of kilobytes (KB) per second (Hello from the Other Side: SSH over Robust Cache Covert Channels in the Cloud, NDSS, 2017).
Cache conflicts are the basis of most cache timing attacks. A cache conflict arises because cache capacity is limited: when a memory block is placed into the cache, it may evict another memory block. When the evicted block is next accessed, the accessing process suffers a cache miss. Modern processor caches are mostly set-associative: each set is selected by a segment of index bits in the physical address and can hold several memory blocks. When a set is full, a newly arriving block replaces an old one, typically following the Least Recently Used (LRU) principle. These properties let an attacker easily collect several physical addresses that map to the same set and use them to create cache conflicts. For example, in a prime-probe attack, the attacker first collects a group of physical addresses mapped to the same cache set, the number of which equals the number of memory blocks the set can hold, i.e., the associativity. The attacker then fills (primes) the target cache set by accessing these addresses, waits for a period of time, then accesses the addresses again and measures the latency. If some accesses take noticeably longer, a cache miss occurred, indicating that the victim process accessed this cache set during the wait and caused a cache conflict. If no cache miss is observed, the victim process did not access this cache set during that interval.
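To make the attack primitive concrete, here is a minimal user-space sketch of one prime-probe round as just described. It is illustrative only and not part of the patent: the eviction-set construction is elided, and the miss threshold and the x86 __rdtscp timing primitive are assumptions.

```cpp
#include <cstdint>
#include <vector>
#include <x86intrin.h>  // __rdtscp; x86-specific, illustrative choice

// Time a single load in cycles (rdtscp serializes enough for a sketch).
static uint64_t timed_load(volatile const uint8_t* p) {
    unsigned aux;
    uint64_t t0 = __rdtscp(&aux);
    (void)*p;                                   // the probed access
    uint64_t t1 = __rdtscp(&aux);
    return t1 - t0;
}

// One prime-probe round. eviction_set holds as many addresses mapping to the
// target cache set as the set's associativity. Returns true if the victim
// touched that set during the wait interval (i.e., a probe access missed).
bool prime_probe_round(const std::vector<volatile const uint8_t*>& eviction_set,
                       uint64_t miss_threshold_cycles) {
    for (auto p : eviction_set) (void)*p;       // prime: fill the target set
    // ... wait here while the victim runs ...
    for (auto p : eviction_set)                 // probe: re-access and time
        if (timed_load(p) > miss_threshold_cycles)
            return true;                        // slow access => cache conflict
    return false;
}
```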
However, the existing literature has yet to propose a practical defense against cache timing attacks. Most existing defense solutions have significant drawbacks: they either sacrifice normal system functionality or cannot fully defend against cache timing attacks.
Disclosure of Invention
The invention provides a cache side channel attack defense method based on local random mapping, which defends against cache side channel attacks without increasing the system's cache miss rate.
A cache side channel attack defense method based on local random mapping comprises the following steps:
(1) constructing a system consisting of a CPU, a first-level cache, a second-level cache, a last-level cache, a memory controller and a memory;
(2) computing n candidate cache sets for each physical address;
(3) when a cache access occurs, first computing, according to the physical address, the n candidate cache set indexes corresponding to the n candidate cache sets, storing them in n index registers, and searching the n cache sets in parallel to determine whether the access is a cache hit or a cache miss;
(4) on a cache hit, sending the target cache block to the CPU; on a cache miss, generating a random number from 0 to n-1 and randomly selecting one of the n candidate cache sets as the target cache set to which the block is finally mapped;
(5) in the target cache set, if a record with a valid bit of 0 still exists, writing the memory block fetched from the memory into that record; otherwise, selecting a record according to the least-recently-used principle and replacing its existing memory block (a toy end-to-end sketch of these steps follows below).
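Before the detailed elaboration, the following self-contained toy model illustrates the flow of steps (3) to (5). All parameters are illustrative assumptions rather than the patented design: the tiny geometry (n = 2 salts, 8 sets, 2 ways), the use of the whole block address as the tag, and the simplified index function that stands in for the mapping of FIG. 2; LRU bookkeeping is elided.

```cpp
#include <cstdint>
#include <cstdio>
#include <random>

constexpr int N = 2, SETS = 8, WAYS = 2;        // toy geometry, not the patent's

struct Record { bool valid = false; uint64_t tag = 0; int r = 0; };
Record cache[SETS][WAYS];
const uint64_t salt[N] = {0x5a5a, 0xc3c3};      // stand-ins for HRNG salts

int candidate_index(uint64_t block, int i) {    // toy stand-in for FIG. 2's mapping
    return int((block ^ salt[i]) % SETS);
}

// One access: parallel search of all candidates, random placement on a miss.
bool access(uint64_t block, std::mt19937& rng) {
    for (int i = 0; i < N; ++i) {               // step (3): search the n sets
        int s = candidate_index(block, i);
        for (auto& rec : cache[s])
            if (rec.valid && rec.tag == block && rec.r == i)
                return true;                    // step (4): cache hit
    }
    int r = std::uniform_int_distribution<int>(0, N - 1)(rng);  // step (4): miss
    int s = candidate_index(block, r);
    Record* victim = &cache[s][0];              // step (5): prefer an invalid way
    for (auto& rec : cache[s]) if (!rec.valid) { victim = &rec; break; }
    *victim = {true, block, r};                 // (LRU choice elided in this toy)
    return false;
}

int main() {
    std::mt19937 rng(42);
    printf("first access hit=%d\n", access(0x1234, rng));   // 0: miss, block placed
    printf("second access hit=%d\n", access(0x1234, rng));  // 1: found by parallel search
}
```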
In the invention, each physical address has n candidate cache sets. On each cache miss, the target memory block is fetched from memory and placed into one of the n candidate cache sets chosen at random; within the chosen set, replacement follows the Least Recently Used (LRU) principle. In step (2), the specific process of computing the n candidate cache sets for each physical address is as follows:
(2-1) in an initialization stage, generating n random numbers (salts) with a hardware random number generator, the length of each salt being equal to the sum of the lengths of the tag bits and the index bits in a physical address;
(2-2) splitting each salt into two parts, salt_left and salt_right, where salt_left is equal in length to the tag bits of the physical address and salt_right to its index bits;
(2-3) XORing salt_left with the tag bits of the physical address, and hashing the result with a hash function to produce an output equal in length to the index bits;
(2-4) XORing the hash output with the index bits and salt_right to produce one candidate cache set index; computing with the n salts in parallel yields the n candidate cache set indexes (a code sketch of this computation follows).
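A minimal sketch of steps (2-1) to (2-4) for a single salt. The address geometry (6 offset bits, 11 index bits, tag bits for the rest) is an illustrative assumption, and hash_to_index below is only a placeholder mixer occupying the slot of the LFSR-based hash described next.

```cpp
#include <cstdint>

constexpr int OFFSET_BITS = 6, INDEX_BITS = 11;             // assumed geometry
constexpr uint64_t INDEX_MASK = (1ull << INDEX_BITS) - 1;

// Placeholder for the LFSR-based hash of step (2-3): any mixer that maps the
// tag-length input pseudo-randomly down to INDEX_BITS bits fits this slot.
uint64_t hash_to_index(uint64_t msg) {
    msg ^= msg >> 17; msg *= 0xff51afd7ed558ccdull; msg ^= msg >> 29;
    return msg & INDEX_MASK;
}

uint64_t candidate_set_index(uint64_t paddr, uint64_t salt) {
    uint64_t index = (paddr >> OFFSET_BITS) & INDEX_MASK;   // index bits
    uint64_t tag   = paddr >> (OFFSET_BITS + INDEX_BITS);   // tag bits
    // (2-2) split the salt: the low INDEX_BITS form salt_right, the rest salt_left
    uint64_t salt_right = salt & INDEX_MASK;
    uint64_t salt_left  = salt >> INDEX_BITS;
    // (2-3) XOR salt_left with the tag, hash down to index length
    uint64_t h = hash_to_index(salt_left ^ tag);
    // (2-4) XOR the hash output with the index bits and salt_right
    return h ^ index ^ salt_right;
}
```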
In step (2-3), the hash function is based on a linear feedback shift register (LFSR) and is implemented as a combinational logic circuit.
The algorithm of the hash function is as follows:
Input: message
Output: result
Temporary variables: state, bit position b
(2-3-1) initialize result to 0;
(2-3-2) initialize the temporary variable state to the initial state of the LFSR;
(2-3-3) initialize the temporary variable b to the lowest bit of the input message;
(2-3-4) if b equals 1, assign result = result XOR state;
(2-3-5) assign state the next state of the LFSR;
(2-3-6) if b is the highest bit of the message, jump to (2-3-7); otherwise, move b to the next higher bit of the message and jump to (2-3-4);
(2-3-7) output result.
The essence of the hash algorithm is to use the 1-bits of the message to select the corresponding states of the linear feedback shift register for the XOR operation. Since the state sequence of a linear feedback shift register is pseudo-random, the hash result also exhibits strong randomness.
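A direct C++ transcription of steps (2-3-1) through (2-3-7). The 11-bit register width, the seed, and the feedback taps are illustrative assumptions; the patent requires only that the LFSR's state sequence be pseudo-random.

```cpp
#include <cstdint>

constexpr int INDEX_BITS = 11;                 // output width (assumed)

static uint16_t lfsr_next(uint16_t s) {        // one step of a toy Fibonacci LFSR
    uint16_t fb = ((s >> 0) ^ (s >> 2)) & 1;   // tap choice is illustrative
    return uint16_t((s >> 1) | (fb << (INDEX_BITS - 1)));
}

uint16_t lfsr_hash(uint64_t message, int msg_bits) {
    uint16_t result = 0;                       // (2-3-1)
    uint16_t state  = 0x1;                     // (2-3-2) initial LFSR state
    for (int b = 0; b < msg_bits; ++b) {       // (2-3-3)/(2-3-6) low bit to high bit
        if ((message >> b) & 1)                // (2-3-4) XOR in state for 1-bits
            result ^= state;
        state = lfsr_next(state);              // (2-3-5)
    }
    return result;                             // (2-3-7)
}
```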
In step (3), when the n cache sets are searched in parallel, a field is additionally added to each cache record to store the random number r (0 to n-1) generated when the record's memory block was first placed in the cache; the tag bits and r are compared together to match the target memory block. If both the tag bits and r match and the record's valid bit is 1, a cache hit is triggered; otherwise a cache miss is triggered.
In step (5), for a write-back cache, if the replaced memory block has been modified, its original index bits must be recovered so as to determine the write-back address together with the tag bits, and the block is then written back to memory.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention adopts a localized random mapping technique that avoids dynamically changing the mapping rule: each memory block has a group of candidate cache sets, and each time the block enters the cache one of them is selected at random. Restricting the random mapping to a small range avoids a global search on every cache access.
2. Combined with the LFSR-based single-cycle hash function, the invention brings only a 0.30% performance reduction, can effectively protect a 2-16 MB last-level cache, and degrades performance even less for larger last-level caches.
Drawings
FIG. 1 is a schematic flow chart of a cache side channel attack defense method based on local random mapping according to the present invention;
FIG. 2 is a schematic flow chart of computing the candidate cache sets according to the present invention;
FIG. 3 is a schematic diagram of a hash function based on a linear feedback shift register according to the present invention.
Detailed Description
The invention will be described in further detail below with reference to the drawings and examples, which are intended to facilitate the understanding of the invention without limiting it in any way.
As shown in FIG. 1, the cache side channel attack defense method based on local random mapping comprises the following steps:
S01, construct a system consisting of a CPU, a first-level cache, a second-level cache, a last-level cache, a memory controller and a memory.
S02, compute n candidate cache sets for each physical address.
During the system initialization phase, n random numbers, called salts, are generated with a hardware random number generator (HRNG). The length of each salt equals the sum of the lengths of the tag bits and the index bits in a physical address. Each salt is first split into two parts, salt_left and salt_right, where salt_left is equal in length to the tag bits of the physical address and salt_right to its index bits.
As shown in FIG. 2, salt_left is XORed with the tag bits of the physical address, and the result is hashed to produce an output equal in length to the index bits. The hash output, the index bits, and salt_right are then XORed, and the result is one candidate cache set index. By performing this computation in parallel with the n salts, we obtain the n candidate cache set indexes.
S03, when a cache access occurs, first compute, according to the physical address, the n candidate cache set indexes corresponding to the n candidate cache sets, store them in n index registers, and search the n cache sets in parallel to determine whether the access hits or misses; on a cache miss, generate a random number from 0 to n-1 to randomly select one of the n candidate cache sets as the target cache set to which the block is finally mapped.
When searching the n cache sets in parallel, the target memory block cannot be matched by the tag bits alone, because under random mapping, memory blocks with the same tag bits may be mapped to the same cache set. Therefore, a field must be added to each cache record to store the random number r (0 to n-1) generated when the record's memory block was first placed in the cache. When searching for a memory block, r must be compared in addition to the tag bits: the r stored in records of the first candidate cache set is compared with 0, that in the second candidate cache set with 1, and so on. (Note that for the same r, two distinct memory blocks with the same tag bits can never be mapped to the same cache set: the same r means the same salt was used to compute the set index, and with identical salt and tag bits, the same index is produced only if the two blocks' index bits are also identical, which would make them the same block.) If both the tag bits and r match and the record's valid bit is 1, a cache hit is triggered and the target cache block is sent to the CPU; otherwise a cache miss is triggered, a new random number is generated, and it selects one of the n previously stored cache set indexes as the final target cache set.
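The extra per-record field and the widened hit condition can be sketched as follows; the field widths are assumptions, and r denotes the stored random number described above.

```cpp
#include <cstdint>

struct WayRecord {
    uint64_t tag   : 31;  // tag bits of the resident memory block (width assumed)
    uint64_t r     : 3;   // extra field: which salt placed the block (0..n-1)
    uint64_t valid : 1;   // valid bit
};

// Probing candidate set i for tag `tag`: all three comparisons must pass.
// In hardware, the n candidate sets run this comparison in parallel.
inline bool way_hit(const WayRecord& w, uint64_t tag, unsigned i) {
    return w.valid && w.tag == tag && w.r == i;
}
```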
S04, if the target cache set still contains a record with a valid bit of 0 (i.e., a record that has never been written), the memory block fetched from memory is written into that record; otherwise, a record is selected according to the least-recently-used principle and its memory block is replaced.
For write-back caches, a replaced memory block must be written back to memory if it has been modified, which requires recovering its original index bits to determine, together with the tag bits, the write-back address. As shown in FIG. 2, the candidate set index is obtained by XORing the original index bits with the hash output and salt_right. Since each record keeps the random number r that was generated to select the candidate cache set, r can be used here to select the corresponding salt. The hash output can be computed from salt_left and the tag bits; XORing it with the current set index and salt_right restores the original index bits, enabling the write-back operation.
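The index recovery reduces to re-applying the same XORs, since XOR is its own inverse. The sketch below reuses the constants and the hash_to_index placeholder from the candidate-index sketch above; salt_r is the salt selected by the record's stored random number r.

```cpp
// Assumes OFFSET_BITS, INDEX_BITS, INDEX_MASK, and hash_to_index() as defined
// in the earlier candidate-index sketch.
uint64_t writeback_address(uint64_t tag, uint64_t cur_set_index, uint64_t salt_r) {
    uint64_t salt_right = salt_r & INDEX_MASK;
    uint64_t salt_left  = salt_r >> INDEX_BITS;
    uint64_t h = hash_to_index(salt_left ^ tag);
    uint64_t orig_index = cur_set_index ^ h ^ salt_right;   // undo the XORs
    return (tag << (OFFSET_BITS + INDEX_BITS)) | (orig_index << OFFSET_BITS);
}
```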
For the hash function in FIG. 2, in principle any hash function with low latency and low hardware complexity can be used. The invention uses a hash function based on a linear feedback shift register (LFSR), implemented with combinational logic, which keeps its circuit delay within one clock cycle. The algorithm of the hash function is as follows:
Input: message
Output: result
Temporary variables: state, bit position b
1) Initialize result to 0
2) Initialize state to the initial state of the LFSR
3) Initialize b to the lowest bit of the message
4) If b equals 1, assign result = result XOR state
5) Assign state the next state of the LFSR
6) If b is the highest bit of the message, jump to 7); otherwise move b to the next higher bit and jump to 4)
7) Output result
The essence of the hash algorithm is to use the 1-bits of the message to select the corresponding states of the linear feedback shift register for the XOR operation. Since the state sequence of a linear feedback shift register is pseudo-random, the hash result also exhibits strong randomness. To make the algorithm better suit the application scenario of the invention, we implement it with combinational logic, as shown in FIG. 3.
In the application scenario of the invention, the input width of the hash function (equal to the number of tag bits in the physical address) and its output width (equal to the length of the index bits) are fixed, so the required LFSR states can be computed in advance; each input bit is ANDed with its corresponding state, controlling whether that state enters the XOR operation. This greatly reduces the time required for the hash operation and keeps the delay of the hash circuit within one clock cycle.
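That optimization can be sketched as follows: with the message width fixed, the LFSR states are precomputed once, and each hash becomes a fixed AND/XOR network with no sequential stepping. The widths and the lfsr_next function are the same illustrative choices as in the earlier LFSR sketch.

```cpp
// Reuses lfsr_next() and INDEX_BITS from the earlier LFSR-hash sketch.
#include <cstdint>

constexpr int MSG_BITS = 31;                   // fixed tag width (assumed)

uint16_t STATES[MSG_BITS];                     // STATES[i] = LFSR state at step i
void precompute_states(uint16_t seed) {
    uint16_t s = seed;
    for (int i = 0; i < MSG_BITS; ++i) { STATES[i] = s; s = lfsr_next(s); }
}

// In hardware: MSG_BITS AND gates feeding one XOR reduction tree, so the
// whole hash fits comfortably within a single clock cycle.
uint16_t lfsr_hash_combinational(uint64_t message) {
    uint16_t result = 0;
    for (int i = 0; i < MSG_BITS; ++i) {
        uint16_t mask = uint16_t(-int16_t((message >> i) & 1)); // 0x0000 or 0xFFFF
        result ^= STATES[i] & mask;            // AND each bit with its state
    }
    return result;
}
```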
Overall, the overhead the invention adds to a cache access is one extra clock cycle of latency. Considering that accesses to the last-level cache (LLC) typically take tens of clock cycles, this performance overhead is negligible. As for hardware overhead, the required circuit scale is comparable to that of an AES encryption circuit, which is acceptable for modern CPUs. With the invention, a 2-16 MB last-level cache can be effectively protected while overall system performance drops by only about 0.3%.
The invention was run in the ChampSim simulator environment. The simulator models a realistic multi-core CPU with a three-level on-chip cache. ChampSim is widely used in cache performance competitions, such as the Cache Replacement Championship held at ISCA 2017. On top of ChampSim, the invention implements the localized random mapping mechanism for the LLC. The core modules comprise a candidate cache set computation module, a parallel search module, and a cache replacement module. Like other hardware random mapping schemes, the invention only involves modifications to the last-level cache (LLC) module. We used various benchmarks to test the performance of the invention.
The simulator implementation specifically comprises the following steps:
1) First, build a system consisting of a CPU, a first-level cache, a second-level cache, a last-level cache, a memory controller and a memory.
2) Add a back_invalid function, and modify the handle_fill and handle_writeback functions, so as to change the default non-inclusive cache into an inclusive cache.
3) Add a single_clock_hash function and call it in the get_set function to compute the n candidate cache sets.
4) Implement the parallel search and hit/miss determination functionality in the check_hit and invalid_entry functions, and implement the cache replacement functionality.
With the environment in place, we measured the performance of the invention using the SPEC CPU 2017 benchmark suite. When the CPU has only a single core, we ran each benchmark separately; for a multi-core CPU (core count denoted c), we randomly picked c of the 20 benchmarks, assigning each core a different benchmark to run. For each benchmark, at least 2 billion instructions were executed: the first 1 billion to warm up the cache, and the last 1 billion to collect performance statistics.
We use three performance metrics to measure the performance of the invention: instructions per cycle (IPC), misses per thousand instructions (MPKI) at the LLC, and the miss rate (MR) of the LLC. Higher IPC, or lower MPKI and MR, indicates higher performance. Comparing the measured metrics of the invention with the baseline (i.e., an unmodified system), we found that the invention causes only a 0.30% performance reduction (averaged over the three metrics). Through further testing, we found that as the number of cores increases (from 1 to 4), there is an additional 0.05% performance drop. IPC does decrease somewhat as the added access latency grows; for example, IPC drops by 2% relative to the baseline as the added latency increases from 0 cycles to 4 cycles. Since the default added latency of the invention is 1 cycle, it causes only a 0.45% IPC decrease relative to the baseline. The results show no significant change as the number of candidate cache sets (the value of n) varies, indicating that the performance of the invention is insensitive to this parameter. Finally, tests show that the performance degradation caused by the invention is smaller for larger LLCs; for example, the IPC degradation relative to the baseline drops from 0.51% to 0.19% as the LLC size increases from 8 MB to 64 MB.
The above tests show that the invention is not only feasible but also secure, reliable, and practical, solving a real-world problem.
The embodiments described above are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions and equivalents made within the scope of the principles of the present invention should be included in the scope of the present invention.

Claims (6)

1. A cache side channel attack defense method based on local random mapping is characterized by comprising the following steps:
(1) constructing a system consisting of a CPU, a first-level cache, a second-level cache, a last-level cache, a memory controller and a memory;
(2) computing n candidate cache sets for each physical address;
(3) when a cache access occurs, first computing, according to the physical address, the n candidate cache set indexes corresponding to the n candidate cache sets, storing them in n index registers, and searching the n cache sets in parallel to determine whether the access is a cache hit or a cache miss;
(4) on a cache hit, sending the target cache block to the CPU; on a cache miss, generating a random number from 0 to n-1 and randomly selecting one of the n candidate cache sets as the target cache set to which the block is finally mapped;
(5) in the target cache set, if a record with a valid bit of 0 still exists, writing the memory block fetched from the memory into that record; otherwise, selecting a record according to the least-recently-used principle and replacing its existing memory block.
2. The cache side channel attack defense method based on local random mapping according to claim 1, wherein the specific process of step (2) is as follows:
(2-1) in an initialization stage, generating n random numbers (salts) with a hardware random number generator, the length of each salt being equal to the sum of the lengths of the tag bits and the index bits in a physical address;
(2-2) splitting each salt into two parts, salt_left and salt_right, where salt_left is equal in length to the tag bits of the physical address and salt_right to its index bits;
(2-3) XORing salt_left with the tag bits of the physical address, and hashing the result with a hash function to produce an output equal in length to the index bits;
(2-4) XORing the hash output with the index bits and salt_right to produce one candidate cache set index; computing with the n salts in parallel yields the n candidate cache set indexes.
3. The cache side channel attack defense method based on local random mapping according to claim 2, wherein in step (2-3), the hash function is a hash function based on a linear feedback shift register (LFSR) and is implemented as a combinational logic circuit.
4. The method for defending against cache side channel attack based on local random mapping according to claim 3, wherein the algorithm of the hash function is as follows:
(2-3-1) initializing the result to 0;
(2-3-2) initializing a temporary variable state to the initial state of the LFSR;
(2-3-3) initializing a temporary variable b to the lowest bit of the input message;
(2-3-4) if b equals 1, assigning result = result XOR state;
(2-3-5) assigning state the next state of the LFSR;
(2-3-6) if b is the highest bit of the message, jumping to (2-3-7); otherwise, moving b to the next higher bit of the message and jumping to (2-3-4);
(2-3-7) outputting the result.
5. The method according to claim 1, wherein in step (3), when the n cache sets are searched in parallel, a field is additionally added to each cache record to store a random number m generated when the memory block in the record was first placed in the cache, and the tag bits and the random number m are compared together to match the target memory block; if both the tag bits and the random number m match and the record's valid bit is 1, a cache hit is triggered; otherwise a cache miss is triggered.
6. The method according to claim 3, wherein in step (5), for a write-back cache, if the replaced memory block has been modified, its original index bits are recovered so as to determine the write-back address together with the tag bits, and the block is written back to the memory.
CN201910666998.1A 2019-07-23 2019-07-23 Cache side channel attack defense method based on local random mapping Active CN110442469B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910666998.1A CN110442469B (en) 2019-07-23 2019-07-23 Cache side channel attack defense method based on local random mapping

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910666998.1A CN110442469B (en) 2019-07-23 2019-07-23 Cache side channel attack defense method based on local random mapping

Publications (2)

Publication Number Publication Date
CN110442469A CN110442469A (en) 2019-11-12
CN110442469B true CN110442469B (en) 2020-06-30

Family

ID=68431201

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910666998.1A Active CN110442469B (en) 2019-07-23 2019-07-23 Cache side channel attack defense method based on local random mapping

Country Status (1)

Country Link
CN (1) CN110442469B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111274584B (en) * 2020-01-17 2022-07-15 中国科学院计算技术研究所 Device for defending processor transient attack based on cache rollback
CN113779649B (en) * 2021-09-08 2023-07-14 中国科学院上海高等研究院 Defense method for executing attack against speculation

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10310973B2 (en) * 2012-10-25 2019-06-04 Nvidia Corporation Efficient memory virtualization in multi-threaded processing units
US10102375B2 (en) * 2016-08-11 2018-10-16 Qualcomm Incorporated Multi-modal memory hierarchical management for mitigating side-channel attacks in the cloud

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7310706B1 (en) * 2001-06-01 2007-12-18 Mips Technologies, Inc. Random cache line refill
CN101621498A (en) * 2008-06-30 2010-01-06 成都市华为赛门铁克科技有限公司 Method, device and equipment for defending against network attacks
CN107622199A (en) * 2017-09-21 2018-01-23 中国科学院信息工程研究所 Flush Reload cache side-channel attack defence method and device in a kind of cloud environment
CN108491694A (en) * 2018-03-26 2018-09-04 湖南大学 A kind of method of dynamic randomization defence Cache attacks
CN108650075A (en) * 2018-05-11 2018-10-12 中国科学院信息工程研究所 A kind of quick encryption implementation methods of soft or hard combination AES and system of preventing side-channel attack
CN110032867A (en) * 2019-03-26 2019-07-19 中国人民解放军国防科技大学 Method and system for actively cutting off hidden channel to deal with channel attack at cache side

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Architecting against Software Cache-Based Side-Channel Attacks; Jingfei Kong et al.; IEEE Transactions on Computers; 2013-07-31; full text *
Hardware-Software Integrated Approaches to Defend Against Software Cache-based Side Channel Attacks; Jingfei Kong et al.; 2009 IEEE 15th International Symposium on High Performance Computer Architecture; 2009-03-06; full text *

Also Published As

Publication number Publication date
CN110442469A (en) 2019-11-12

Similar Documents

Publication Publication Date Title
Saileshwar et al. MIRAGE: Mitigating Conflict-Based Cache Attacks with a Practical Fully-Associative Design
Vila et al. Theory and practice of finding eviction sets
Tan et al. PhantomCache: Obfuscating Cache Conflicts with Localized Randomization.
Ren et al. Constants count: Practical improvements to oblivious RAM
US9396119B2 (en) Device for controlling the access to a cache structure
US10950292B1 (en) Method and apparatus for mitigating row hammer attacks
CN110442469B (en) Cache side channel attack defense method based on local random mapping
CN105446897B (en) Cache logic, memory system and method for generating cache address
CN110018811B (en) Cache data processing method and Cache
Jiang et al. A novel cache bank timing attack
WO2019180402A1 (en) Random tag setting instruction for a tag-guarded memory system
Guo et al. Leaky way: a conflict-based cache covert channel bypassing set associativity
Zhang et al. Secure cache modeling for measuring side-channel leakage
US9813235B2 (en) Resistance to cache timing attacks on block cipher encryption
Shrivastava et al. Towards an optimal countermeasure for cache side-channel attacks
GR20150100422A (en) Data storage
Zenner A cache timing analysis of HC-256
Bao et al. Reducing timing side-channel information leakage using 3D integration
CN116720191A (en) Processor data prefetching security enhancement method for relieving cache side channel attack
Chakraborty et al. A short note on the paper 'Are Randomized Caches Really Random?'
Esfahani et al. Enhanced cache attack on AES applicable on ARM-based devices with new operating systems
Lee et al. Hardware-based flush+ reload attack on Armv8 system via ACP
Li et al. Cache attack on aes for android smartphone
Ramkrishnan et al. New attacks and defenses for randomized caches
Xue et al. CTPP: A Fast and Stealth Algorithm for Searching Eviction Sets on Intel Processors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant