US20150212942A1 - Electronic device, and method for accessing data in electronic device


Info

Publication number
US20150212942A1
Authority
US
United States
Prior art keywords
cache
data
cache memory
memory
access
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/602,682
Other languages
English (en)
Inventor
Seung-Jin Yang
Gil-Yoon Kim
Jin-Young Park
Jin-Yong JANG
Chun-Mok CHUNG
Jin Choi
Eun-Seok HONG
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Assigned to SAMSUNG ELECTRONICS CO., LTD. (assignment of assignors interest; see document for details). Assignors: CHOI, JIN; CHUNG, CHUN-MOK; HONG, EUN-SEOK; JANG, JIN-YONG; KIM, GIL-YOON; PARK, JIN-YOUNG; YANG, SEUNG-JIN
Publication of US20150212942A1 publication Critical patent/US20150212942A1/en



Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0862 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
    • G06F 12/0806 Multiuser, multiprocessor or multiprocessing cache systems
    • G06F 12/0842 Multiuser, multiprocessor or multiprocessing cache systems for multiprocessing or multitasking
    • G06F 12/0813 Multiuser, multiprocessor or multiprocessing cache systems with a network or matrix configuration
    • G06F 12/0815 Cache consistency protocols
    • G06F 12/0831 Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means
    • G06F 12/0844 Multiple simultaneous or quasi-simultaneous cache accessing
    • G06F 12/0846 Cache with multiple tag or data arrays being simultaneously accessible
    • G06F 12/0851 Cache with interleaved addressing
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/60 Details of cache memory
    • G06F 2212/6022 Using a prefetch buffer or dedicated prefetch cache
    • G06F 2212/6032 Way prediction in set-associative cache
    • G06F 2212/62 Details of cache specific to multiprocessor cache arrangements

Definitions

  • the present disclosure relates to an electronic device for accessing data using a cache memory and a method for accessing data in an electronic device.
  • An electronic device may include various processors (e.g., a Central Processing Unit (CPU), a Graphic Processing Unit (GPU), a Digital Signal Processor (DSP), and the like).
  • Each processor of an electronic device may access various types of memory to read desired data from the memory, and/or to write data to be stored in the memory, thereby performing a desired task.
  • the cache is a storage device in the form of a buffer, which is filled with the commands or programs read from a memory (e.g., a main memory), and is a buffer memory that is installed between a memory and a processor (e.g., a CPU).
  • the cache is also referred to as a cache memory or a local memory.
  • the cache memory may be accessed at a higher speed, compared with the memory (e.g., the main memory), and the processor may access the cache memory ahead of the memory. Therefore, an electronic device may store data or program commands in the cache memory, to prevent the operation of repeatedly searching for the frequently accessed data or programs.
  • a memory interleaving system may divide a cache memory into as many cache memories as there are, for example, memory modules, and install a cache memory in front of each memory module, thereby reducing the latency of the memory access time while maintaining the bandwidth.
  • a memory address space may be divided among the memory modules, so that addresses having spatial locality may undesirably be assigned to different memory modules, and the data processing performance may be affected.
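As an illustrative sketch of why interleaving splits spatial locality across memory modules (the line size and module count below are assumed values, not taken from the disclosure), consecutive cache-line addresses are assigned round-robin, so a sequential access stream alternates between the modules and the caches in front of them:

```python
LINE_SIZE = 64     # assumed cache-line size in bytes
NUM_MODULES = 2    # assumed number of interleaved memory modules

def module_for(address):
    """Map an address to the memory module (and front cache) serving it."""
    return (address // LINE_SIZE) % NUM_MODULES

# Six consecutive lines alternate between module 0 and module 1, so
# neither module's cache sees two consecutive lines of the stream.
modules = [module_for(n * LINE_SIZE) for n in range(6)]
print(modules)  # -> [0, 1, 0, 1, 0, 1]
```

Because each module's front cache observes only every other line, a conventional per-module cache cannot exploit the locality of the original sequential stream, which is the problem the shared access-related information addresses.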
  • an aspect of the present disclosure is to provide an electronic device in which separated cache memories may share information related to data access, a method for accessing data in the electronic device, and a computer-readable recording medium.
  • a method for accessing data in an electronic device includes receiving a request for the data from at least one processor by a first cache memory among a plurality of cache memories, transmitting the requested data to the at least one processor, and transmitting access-related information regarding the request to a second cache memory among the plurality of cache memories.
  • a method for accessing data in an electronic device includes receiving a request for the data from at least one processor by a first cache memory among a plurality of cache memories, determining whether the data requested by the at least one processor is present in the first cache memory, and transmitting access-related information regarding the request to a second cache memory among the plurality of cache memories, if the requested data is present in the first cache memory.
  • a method for accessing data in an electronic device includes receiving a request for the data from at least one processor by a first cache memory among a plurality of cache memories, determining whether the data requested by the at least one processor is present in the first cache memory, and transmitting access-related information regarding the request to a second cache memory among the plurality of cache memories, if the requested data is not present in the first cache memory.
  • an electronic device for accessing data includes at least one processor, a plurality of cache memories configured to transmit the data requested by the at least one processor to the at least one processor, and a plurality of memories, each of which is connected to an associated one of the cache memories to transmit the requested data through the associated one of the cache memories. At least one of the plurality of cache memories may share access-related information with other cache memories.
  • an electronic device for accessing data includes a processor, a first cache memory configured to transmit first data requested by the processor to the processor, and a second cache memory configured to transmit second data requested by the processor to the processor.
  • the first cache memory and the second cache memory may be functionally connected by a bus line to share data with each other.
  • FIG. 1 schematically illustrates an electronic device for data access according to an embodiment of the present disclosure
  • FIG. 2 illustrates address assignment to a plurality of memories or cache memories in an electronic device according to an embodiment of the present disclosure
  • FIG. 3 schematically illustrates an electronic device including cache memories capable of data sharing according to an embodiment of the present disclosure
  • FIG. 4 is a flowchart illustrating a data access procedure in an electronic device according to an embodiment of the present disclosure
  • FIG. 5 is a flowchart illustrating a data access procedure during occurrence of a cache miss according to an embodiment of the present disclosure
  • FIG. 6 is a flowchart illustrating a data access procedure during occurrence of a cache hit according to an embodiment of the present disclosure
  • FIGS. 7 , 8 , 9 , and 10 illustrate examples in which data access is handled in each component of an electronic device according to an embodiment of the present disclosure
  • FIG. 11 is a block diagram illustrating an electronic device according to an embodiment of the present disclosure.
  • FIG. 12 is a block diagram illustrating an electronic device according to an embodiment of the present disclosure.
  • FIG. 13 is a block diagram illustrating an electronic device according to an embodiment of the present disclosure.
  • An electronic device for data access may be an electronic device such as, for example, a smart phone, a tablet Personal Computer (PC), a PC, a laptop computer, a Moving Picture Experts Group (MPEG-1 or MPEG-2) Audio Layer III (MP3) player, a camera, a wearable device and the like.
  • the electronic device may be a device equipped with a communication function.
  • the electronic device may be a smart home appliance equipped with a communication function.
  • the electronic device may include various medical devices, navigation devices, Global Positioning System (GPS) receivers, cars and the like.
  • Embodiments of the present disclosure provide a method for accessing data using, for example, memory interleaving.
  • embodiments of the present disclosure provide a method for accessing data using a plurality of cache memories connected to a plurality of memories (e.g., main memories) by, for example, at least one processor.
  • cache memories may share, with each other, the information related to data access from a processor, thereby reducing the latency of the memory access time while maintaining the bandwidth.
  • the term ‘processor’ as used herein may refer to a functional unit that executes a command in an electronic device.
  • the processor may include a Central Processing Unit (CPU), a Graphic Processing Unit (GPU), a Memory Flow Controller (MFC), a Digital Signal Processor (DSP), and the like.
  • the processor may be incorporated into a display, an audio module, an embedded Multi Media Card (eMMC) and the like, and embodiments of the present disclosure will not be limited thereto.
  • the term ‘memory’ as used herein may refer to various types of storage media for storing data.
  • the processor may access the memory to read the data stored in the memory, or to write the data to be stored in the memory.
  • the memory may mean a main memory (hereinafter referred to as a memory) which is distinguishable from a cache memory, and the memory according to an embodiment of the present disclosure will not be limited thereto.
  • the ‘cache memory’ may refer to a storage device in the form of a buffer, which is connected to the memory and filled with commands or programs read from the memory.
  • the cache memory may mean a buffer memory that is installed between the memory and the processor (e.g., CPU).
  • access may be construed to include a process of writing data in the memory or searching for and reading data stored in the memory by the processor, and may refer to the overall operation between the processor and the memory.
  • access-related information may be construed to include various types of information that may be considered in an operation in which the processor accesses the memory.
  • the access-related information may be a data value that the processor requests by accessing the memory, and may be information about a logical or physical address of the memory, in which the requested data is stored. If data is stored in the memory in units of blocks, the access-related information may be information about the block to be accessed.
  • the access-related information may include information (e.g., cache miss information or cache hit information) indicating whether the requested data is present in the cache memory regarding an operation in which the processor requests data from the cache memory.
  • the access-related information may include information about the number of occurrences of a cache miss or information about the number of occurrences of a cache hit.
  • information related to memory access by the processor may be included in the access-related information according to an embodiment of the present disclosure.
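The kinds of access-related information enumerated above could be collected in a single record; one possible shape is sketched below (the class and field names are hypothetical, chosen only for illustration):

```python
from dataclasses import dataclass

@dataclass
class AccessInfo:
    address: int          # logical or physical address of the requested data
    hit: bool             # True for a cache hit, False for a cache miss
    hit_count: int = 0    # number of occurrences of a cache hit
    miss_count: int = 0   # number of occurrences of a cache miss

# Example: a record describing a single cache miss at address 0x40.
info = AccessInfo(address=0x40, hit=False, miss_count=1)
print(info.hit)  # -> False
```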
  • the term ‘cache hit’ may refer to a case in which, when the processor requests data from the cache memory, the requested data is stored in the cache memory.
  • the term ‘cache miss’ may refer to a case in which, when the processor requests data from the cache memory, the requested data is not stored in the cache memory.
  • Various embodiments of the present disclosure may provide methods in which at least one cache memory transmits various types of access-related information to another cache memory, thereby improving the performance of the processor and of the data access. For example, in accordance with an embodiment of the present disclosure, if a cache miss occurs in a specific cache memory, the cache memory may deliver information related to the cache miss to another cache memory so that the other cache memory may prepare the data in advance, thereby preventing the same cache miss from occurring in the other cache memory. If the occurrence of cache misses is reduced in this way, the data transmission speed and transmission efficiency may be improved.
  • the cache memory may deliver information related to the cache hit to another cache memory so that another cache memory may prepare the data to be requested next in advance, thereby implementing the cache memory to function as a pre-fetch buffer.
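The miss-sharing behavior described above can be sketched as a toy model; the class and method names are illustrative and not taken from the disclosure. On a miss, a cache fills the line from its own memory module and forwards the missed address to its peer, which prefetches the next sequential line (the line that interleaving maps to the peer's module):

```python
LINE = 64  # assumed cache-line size in bytes

class PeerCache:
    """Toy cache that shares access-related information with a peer."""
    def __init__(self, memory):
        self.memory = memory   # dict modeling the attached memory module
        self.lines = {}        # currently cached lines
        self.peer = None       # the other cache memory on the bus line

    def read(self, addr):
        if addr in self.lines:                 # cache hit
            return self.lines[addr]
        self.lines[addr] = self.memory[addr]   # cache miss: fill from memory
        if self.peer is not None:
            self.peer.on_peer_miss(addr)       # share the miss information
        return self.lines[addr]

    def on_peer_miss(self, addr):
        """Prefetch the next line, which interleaving maps to this cache."""
        nxt = addr + LINE
        if nxt in self.memory:
            self.lines[nxt] = self.memory[nxt]

# Two-way interleaving: module 0 holds even lines, module 1 odd lines.
cache_a = PeerCache({0: 'line0', 128: 'line2'})
cache_b = PeerCache({64: 'line1', 192: 'line3'})
cache_a.peer, cache_b.peer = cache_b, cache_a

cache_a.read(0)             # miss in A; A notifies B, B prefetches line 64
print(64 in cache_b.lines)  # -> True: B already holds the next line
```

When the processor's sequential stream reaches line 64, cache B serves it as a hit, which is the pre-fetch-buffer behavior the disclosure describes.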
  • FIGS. 1 and 2 describe the concept of a method for accessing data using memory interleaving in an electronic device to which various embodiments of the present disclosure are applied.
  • FIG. 1 schematically illustrates an electronic device for data access according to an embodiment of the present disclosure.
  • the electronic device for data access may include at least one processor 101 , an interleaver 103 , a plurality of cache memories 105 a and 105 b, and a plurality of memories 107 a and 107 b.
  • the memories 107 a and 107 b may mean main memories as described above, and in the below-described embodiments of the present disclosure, the memories 107 a and 107 b are used as memories that are distinguishable from the cache memories 105 a and 105 b.
  • the processor 101 may access the first memory 107 a or the second memory 107 b to request data, and may read the data from the first memory 107 a or the second memory 107 b and process the read data. Instead of directly accessing, for example, the first memory 107 a or the second memory 107 b to request data, the processor 101 may request data from the first cache memory 105 a or the second cache memory 105 b which is connected to their associated first memory 107 a or second memory 107 b as shown in the drawing.
  • the cache memories 105 a and 105 b may determine whether the data is stored in the cache memories 105 a and 105 b, in response to the data request, and if the requested data is stored (as described above, this is called a ‘cache hit’), the cache memories 105 a and 105 b may provide the stored data to the processor 101 . If the data requested by the processor 101 is not stored in the cache memories 105 a and 105 b (as described above, this is called a ‘cache miss’), the cache memories 105 a and 105 b may request the data from their associated memories 107 a and 107 b.
  • the cache memories 105 a and 105 b may read the data from their associated memories 107 a and 107 b, and provide the read data to the processor 101 .
  • the read data may be stored in the cache memories 105 a and 105 b.
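The basic request path just described can be illustrated with a minimal model (the names below are hypothetical): the processor asks the cache first; a hit is served from the cache, while a miss is filled from the attached memory, stored in the cache, and then returned:

```python
class SimpleCache:
    """Toy model of a cache memory in front of one main memory."""
    def __init__(self, memory):
        self.memory = memory   # dict modeling the main memory
        self.store = {}        # cached data
        self.hits = 0
        self.misses = 0

    def read(self, addr):
        if addr in self.store:
            self.hits += 1                        # cache hit: serve cached copy
        else:
            self.misses += 1                      # cache miss: read from memory
            self.store[addr] = self.memory[addr]  # keep a copy in the cache
        return self.store[addr]

cache = SimpleCache({0x10: 'payload'})
first = cache.read(0x10)    # miss: fetched from memory and cached
second = cache.read(0x10)   # hit: served directly from the cache
print(cache.misses, cache.hits)  # -> 1 1
```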
  • the processor 101 may select any one of the plurality of memories 107 a and 107 b to request data, and then fetch the data stored in the selected one of the memories 107 a and 107 b.
  • the processor 101 is not limited to a specific processor in the electronic device, and any component that requests data stored in a memory and processes the data that is read in response to the request may serve as the processor according to an embodiment of the present disclosure.
  • the processor 101 may include a CPU, GPU, MFC, DSP and the like.
  • the processor 101 may be incorporated into a display, an audio module, an eMMC or the like, and embodiments of the present disclosure are not limited thereto.
  • the first memory 107 a or the second memory 107 b may mean main memories which are distinguishable from the cache memories 105 a and 105 b, as described above.
  • the cache memories 105 a and 105 b may mean storage devices in the form of a buffer, which are filled with commands or programs read from the memories 107 a and 107 b, as described above.
  • the cache memories 105 a and 105 b may be high-speed buffer memories which are installed between the memories 107 a and 107 b and the processor 101 (e.g., a CPU).
  • the cache memories 105 a and 105 b may store the data stored in the memories 107 a and 107 b, in units of blocks.
  • the miss rate of the cache memories 105 a and 105 b may be related to the transmission line size of the cache memories 105 a and 105 b.
  • Reference will now be made to FIG. 2 to describe in more detail the function of the interleaver 103 in FIG. 1 .
  • FIG. 2 illustrates address assignment to a plurality of memories or cache memories in an electronic device according to an embodiment of the present disclosure.
  • When requesting data stored in a plurality of memories 207 a and 207 b, at least one of processors 201 a and 201 b may request the data from its associated one of the memories 207 a and 207 b, in which the requested data is stored, through an interleaver 203 . If a plurality of cache memories 205 a and 205 b are provided for their associated memories 207 a and 207 b, the at least one of processors 201 a and 201 b may request the data from an associated one of the cache memories 205 a and 205 b corresponding to the memories 207 a and 207 b, through the interleaver 203 .
  • the interleaving technique using the interleaver 203 may divide a continuous memory space into small-size memory spaces and assign them to different memories so that the bus traffic of processors may be uniformly distributed to a plurality of memories.
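As a sketch of this uniform distribution (the chunk size and module count are assumed values), dividing a contiguous address range into small chunks assigned round-robin spreads a sequential access stream evenly over the modules:

```python
from collections import Counter

CHUNK = 64    # assumed size of each interleaved memory chunk in bytes
MODULES = 2   # assumed number of memory modules

def target_module(addr):
    """Round-robin assignment of address chunks to memory modules."""
    return (addr // CHUNK) % MODULES

# 100 sequential chunk-sized accesses split evenly across both modules,
# so the bus traffic to each module is uniform.
traffic = Counter(target_module(n * CHUNK) for n in range(100))
print(traffic[0], traffic[1])  # -> 50 50
```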
  • FIG. 3 schematically illustrates an electronic device including cache memories capable of data sharing according to an embodiment of the present disclosure.
  • the electronic device for data access may include at least one processor 301 , an interleaver 303 , a plurality of cache memories 305 a and 305 b, and a plurality of memories 307 a and 307 b.
  • three or more memories may be connected to three or more cache memories according to embodiments of the present disclosure (see FIG. 11 ).
  • the number of memories may be different from the number of cache memories.
  • the processor 301 may request data from the first cache memory 305 a or the second cache memory 305 b which is connected to their associated first memory 307 a or second memory 307 b, through the interleaver 303 as described in FIG. 1 .
  • the cache memories 305 a and 305 b may determine whether data is stored in the cache memories 305 a and 305 b, in response to the data request, and if the requested data is stored, the cache memories 305 a and 305 b may provide the stored data to the processor 301 .
  • if the requested data is not stored in the cache memories 305 a and 305 b, the cache memories 305 a and 305 b may request the data from their associated memories 307 a and 307 b.
  • a case where the requested data is stored in the cache memory will be referred to as a ‘cache hit’, and a case where the requested data is not stored in the cache memory will be referred to as a ‘cache miss’.
  • At least one cache memory may be connected to at least one other cache memory (e.g., the second cache memory 305 b ), to transmit or receive information (e.g., access-related information) to/from the at least one other cache memory.
  • access-related information for the at least one cache memory may be shared with the at least one other cache memory according to an embodiment of the present disclosure.
  • Although it is shown in FIG. 3 that two cache memories are connected to each other to share information, the number of memories and the number of cache memories are not limited thereto.
  • the cache memory may be connected to another cache memory by a variety of connection means, and may be connected to at least one other cache memory through, for example, a bus line 310 .
  • the cache memory may share access-related information with another cache memory by transmitting and receiving the access-related information to/from another cache memory through the bus line.
  • the cache memory may be considered to be functionally connected to another cache memory.
  • the cache memory may be connected by, for example, a physical connection means, or may be implemented to be logically connected.
  • the first cache memory 305 a may share information (as described above, this is referred to as ‘access-related information’) related to data access, with the second cache memory 305 b.
  • the access-related information may include not only a value of the data that a processor requests from a memory by accessing the memory as described above, information about the logical or physical address in a memory, in which the requested data is stored, block information of the data to be accessed, cache miss information, cache hit information, information about the number of occurrences of a cache miss, or information about the number of occurrences of a cache hit, but also any information (e.g., various traffic-related information of a bus line) related to data access.
  • the first cache memory 305 a may deliver address information received from the interleaver 303 to one or more other cache memories (e.g., the second cache memory 305 b ). Further, in accordance with an embodiment of the present disclosure, if data related to the received address information is stored in the cache memory (e.g., if a cache hit occurs), the cache memory may deliver at least one of the address information and cache hit information to one or more other cache memories. If the data related to the received address information is not stored in the cache memory (e.g., if a cache miss occurs), the cache memory may deliver at least one of the address information and cache miss information to one or more other cache memories.
  • the occurrence of a cache hit or a cache miss in the cache memories 305 a and 305 b may be used as conditions for determining an operation in which the cache memories 305 a and 305 b deliver access-related information to another cache memory. For example, if a cache hit occurs in the first cache memory 305 a, the first cache memory 305 a may deliver address information of the access-requested data to one or more other cache memories. In addition, for example, the first cache memory 305 a may deliver information about the cache hit together with the address information.
  • Similarly, if a cache miss occurs in the first cache memory 305 a, the first cache memory 305 a may deliver address information of the access-requested data to one or more other cache memories. For example, the first cache memory 305 a may deliver information about the cache miss together with the address information.
  • In an embodiment, upon receiving address information as an example of access-related information from the first cache memory 305 a, the second cache memory 305 b may determine that a cache hit has occurred in the first cache memory 305 a with respect to the address. In another embodiment, upon receiving address information as an example of access-related information from the first cache memory 305 a, the second cache memory 305 b may determine that a cache miss has occurred in the first cache memory 305 a with respect to the address.
  • sharing of access-related information between the cache memories may be performed depending on the number of occurrences of a cache hit or cache miss in a specific cache memory. For example, if a cache hit occurs in the first cache memory 305 a, the first cache memory 305 a may count the number of occurrences of a cache hit. If the counted number of occurrences of a cache hit is greater than or equal to a certain number, the first cache memory 305 a may deliver access-related information (e.g., address information of the requested data) to one or more other cache memories (e.g., the second cache memory 305 b ).
  • the first cache memory 305 a may deliver information (e.g., the occurrence/non-occurrence of a cache hit, the number of occurrences of a cache hit, and the like) related to the cache hit together with the address information according to an embodiment of the present disclosure.
  • Similarly, if a cache miss occurs in the first cache memory 305 a, the first cache memory 305 a may count the number of occurrences of a cache miss. If the counted number of occurrences of a cache miss is greater than or equal to a certain number, the first cache memory 305 a may deliver access-related information (e.g., address information of the requested data) to one or more other cache memories (e.g., the second cache memory 305 b ).
  • the first cache memory 305 a may deliver information (e.g., the occurrence/non-occurrence of a cache miss, the number of occurrences of a cache miss, and the like) related to the cache miss together with the address information according to an embodiment of the present disclosure.
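The count-based sharing condition above can be sketched as follows; the threshold value and all names are assumed for illustration. The cache counts its misses and begins forwarding access-related information to the peer only once the count reaches the configured threshold:

```python
THRESHOLD = 3  # assumed number of cache misses before sharing starts

class CountingCache:
    """Toy cache that shares access-related info after a miss threshold."""
    def __init__(self):
        self.miss_count = 0
        self.sent_to_peer = []   # stands in for the bus line to the peer

    def record_miss(self, addr):
        self.miss_count += 1
        if self.miss_count >= THRESHOLD:
            # deliver the address plus the miss count as access-related info
            self.sent_to_peer.append((addr, self.miss_count))

cache = CountingCache()
for addr in (0, 64, 128, 192):
    cache.record_miss(addr)

print(cache.sent_to_peer)  # -> [(128, 3), (192, 4)]
```

Nothing is shared for the first two misses; sharing begins with the third, which avoids putting access-related traffic on the bus line for caches whose miss rate is still low.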
  • the cache memory 305 may read data from the address from the memory 307 before a data request for the next address is made by the processor 301 or the interleaver 303 , and then store the read data in the cache memory 305 in advance. Accordingly, it is possible to set a size (or length) of a cache line of the cache memory 305 to be greater than a given size. Therefore, regardless of, for example, the locality of the memory to be accessed, it is possible to reduce the occurrence of a cache miss and to improve the performance of the cache memory 305 according to embodiments of the present disclosure.
  • FIGS. 4 to 6 are flowcharts illustrating a data access procedure in an electronic device according to various embodiments of the present disclosure.
  • a device for performing the methods according to various embodiments of the present disclosure, which are shown in FIGS. 4 to 6 may be, for example, the electronic device shown in FIG. 3 .
  • FIG. 4 is a flowchart illustrating a data access procedure in an electronic device according to an embodiment of the present disclosure.
  • a cache memory (e.g., the first cache memory 305 a ) may receive a data request from a processor (e.g., the processor 301 ).
  • the data request may be received through an interleaver (e.g., the interleaver 303 ).
  • the data-requested cache memory (e.g., the first cache memory 305 a ) may transmit access-related information regarding the request to at least one other cache memory (e.g., the second cache memory 305 b ).
  • the transmission of access-related information in operation 403 may be performed after operation 405 , 407 or 409 according to various embodiments of the present disclosure.
  • the access-related information may include not only a value of the data that a processor requests from a memory by accessing the memory as described above, information about the logical or physical address in a memory, in which the requested data is stored, block information of the data to be accessed, cache miss information, cache hit information, information about the number of occurrences of a cache miss, or information about the number of occurrences of a cache hit, but also any information (e.g., various traffic-related information of a bus line) related to data access.
  • the cache memory may transmit the data value or address information to at least one other cache memory. For example, after it is determined in operation 405 or 407 whether the requested data is present in the cache memory, the cache memory may transmit hit information, miss information, count information, or the like to at least one other cache memory.
  • the data-requested cache memory may read the requested data from the cache memory (e.g., the first cache memory 305 a ) and transmit the read data to the processor (or the interleaver) in operation 409 .
  • the cache memory may determine that a cache miss has occurred.
  • the cache miss-occurred cache memory may deliver access-related information (e.g., cache miss information, information about the number of occurrences of a cache miss, address information of the access-requested data, or the like) associated with the requested data to one or more other cache memories (e.g., the second cache memory 305 b ) in response to the occurrence of the cache miss.
  • the cache miss-occurred cache memory (e.g., the first cache memory 305 a ) may request the data from the memory (e.g., the first memory 307 a ) that is functionally connected to the cache memory. Upon receiving the requested data from the connected memory, the cache miss-occurred cache memory may transmit the data to the processor (or the interleaver). In addition, the cache miss-occurred cache memory may store the data received from the memory (e.g., the first memory 307 a ) in the cache memory.
  • another cache memory may request the data from the memory (e.g., the second memory 307 b ) that is functionally connected to another cache memory.
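The flow of FIG. 4 (receive a request, share access-related information with a peer cache, and serve the data from the cache line or from the functionally connected memory) can be condensed into a sketch. This is a hypothetical Python model; the class, method, and field names are illustrative, not from the disclosure:

```python
# Sketch of the FIG. 4 flow: a data-requested cache checks for the data,
# delivers access-related information to peer caches, and falls back to
# its functionally connected memory on a miss.
class Cache:
    def __init__(self, name, backing_memory):
        self.name = name
        self.backing = backing_memory   # the functionally connected memory
        self.lines = {}                 # cache lines: address -> data
        self.peers = []                 # other caches that receive access info
        self.received = []              # access-related info received from peers

    def notify_peers(self, info):
        for peer in self.peers:
            peer.received.append(info)

    def handle_request(self, address):
        if address in self.lines:                        # cache hit
            self.notify_peers({"event": "hit", "address": address})
            return self.lines[address]
        self.notify_peers({"event": "miss", "address": address})
        data = self.backing[address]                     # fetch from memory
        self.lines[address] = data                       # fill the cache line
        return data

mem_a, mem_b = {0: "A0"}, {1: "B1"}
cache_a, cache_b = Cache("a", mem_a), Cache("b", mem_b)
cache_a.peers.append(cache_b)
value = cache_a.handle_request(0)   # miss: peer b is told about address 0
```

Here the peer simply records what it receives; what a real peer does with the information (e.g., pre-fetching) is discussed with FIGS. 5 and 6.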
  • FIGS. 5 and 6 illustrate procedures for transmitting access-related information according to an embodiment of the present disclosure, if transmission conditions (e.g., a case where a cache miss occurs in a cache memory, or a case where a cache hit occurs in a cache memory) of access-related information are satisfied.
  • FIG. 5 is a flowchart illustrating a data access procedure during occurrence of a cache miss according to an embodiment of the present disclosure.
  • a cache memory (e.g., the first cache memory 305 a ) may receive a data request from a processor (e.g., the processor 301 ).
  • the data request may be received through an interleaver (e.g., the interleaver 303 ).
  • the cache memory may read the requested data from the cache memory (e.g., the first cache memory 305 a ) and transmit the read data to the processor (or the interleaver) in operation 509 .
  • the cache memory may deliver access-related information (e.g., cache miss information, information about the number of occurrences of a cache miss, address information of the access-requested data, or the like) associated with the requested data to one or more other cache memories (e.g., the second cache memory 305 b ) in response to the occurrence of the cache miss in operation 505 according to an embodiment of the present disclosure.
  • the cache miss-occurred cache memory (e.g., the first cache memory 305 a ) may request the data from the memory (e.g., the first memory 307 a ) that is functionally connected to the cache memory.
  • the cache miss-occurred cache memory may transmit the requested data to the processor (or the interleaver) in operation 509 .
  • the cache miss-occurred cache memory may store the data received from the memory (e.g., the first memory 307 a ) in the cache memory.
  • although a cache memory may transmit access-related information (e.g., cache miss-related information) to another cache memory each time a cache miss occurs, the cache memory may instead count the number of occurrences of a cache miss, and transmit access-related information to another cache memory if the counted number is greater than or equal to a certain value.
  • the occurrence/non-occurrence of a cache miss or the number of occurrences of a cache miss may be set as transmission conditions of access-related information. If a cache memory transmits access-related information to another cache memory depending on the cache miss occurrence conditions in this way, the number of cache misses that occur in the cache memory may be reduced.
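The miss-count transmission condition can be sketched as follows (hypothetical Python; the threshold value, class name, and the shape of the transmitted record are illustrative assumptions):

```python
# Sketch of the miss-count condition: access-related information is sent
# to peer caches only once the number of misses reaches a threshold, and
# the counter is reset after each transmission.
class MissCountingCache:
    def __init__(self, backing, threshold):
        self.backing = backing
        self.threshold = threshold
        self.lines = {}
        self.miss_count = 0
        self.sent = []          # info actually transmitted to peers

    def read(self, address):
        if address not in self.lines:
            self.miss_count += 1
            if self.miss_count >= self.threshold:
                self.sent.append({"misses": self.miss_count, "address": address})
                self.miss_count = 0          # reset after transmission
            self.lines[address] = self.backing[address]
        return self.lines[address]

cache = MissCountingCache({0: "d0", 1: "d1", 2: "d2"}, threshold=2)
cache.read(0)   # miss 1 of 2: below the threshold, nothing sent
cache.read(1)   # miss 2 of 2: threshold reached, info transmitted
```

Setting `threshold=1` would reproduce the per-miss transmission described earlier; a larger threshold trades notification traffic against how quickly peers learn about the access pattern.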
  • FIG. 6 is a flowchart illustrating a data access procedure during occurrence of a cache hit according to an embodiment of the present disclosure.
  • a cache memory (e.g., the first cache memory 305 a ) may receive a data request from a processor (e.g., the processor 301 ).
  • the data request may be received through an interleaver (e.g., the interleaver 303 ).
  • the cache miss-occurred cache memory (e.g., the first cache memory 305 a ) may request the data from the memory (e.g., the first memory 307 a ) that is functionally connected to the cache memory, in operation 605.
  • the cache miss-occurred cache memory may transmit the data to the processor (or the interleaver) in operation 613 .
  • the cache miss-occurred cache memory may store the data received from the memory (e.g., the first memory 307 a ) in the cache memory.
  • the data-requested cache memory may count the number of occurrences of a cache hit in operation 607 .
  • the counting of the number of occurrences of a cache hit may be performed for a certain time, and may be reset after transmission of access-related information in operation 611 .
  • the cache memory may transmit access-related information (e.g., cache hit information, information about the number of occurrences of a cache hit, address information of the access-requested data, and the like) associated with the requested data to one or more other cache memories (e.g., the second cache memory 305 b ) in response to the occurrence of the cache hit in operation 611 according to an embodiment of the present disclosure.
  • the cache memory may transmit the requested data to the processor (or the interleaver).
  • although a cache memory may transmit access-related information to another cache memory if the number of occurrences of a cache hit counted by the cache memory is greater than or equal to a certain value, the cache memory may instead transmit access-related information (e.g., cache hit-related information) to another cache memory each time a cache hit occurs, regardless of the number of occurrences of a cache hit.
  • the occurrence/non-occurrence of a cache hit or the number of occurrences of a cache hit may be set as transmission conditions of access-related information. If a cache memory transmits access-related information to another cache memory depending on the cache hit occurrence conditions in this way, it is possible to use the cache memory as a pre-fetch buffer.
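The hit-count condition, with the peer acting as a pre-fetch buffer, might look like this in a toy model (hypothetical Python; the pre-filled cache, the `address + 1` next-address rule, and all names are illustrative assumptions):

```python
# Sketch of the hit-count condition: once the number of hits in one cache
# reaches a threshold, it shares the next address with a peer, which then
# pre-fetches that data and so behaves like a pre-fetch buffer.
class PrefetchBuffer:
    def __init__(self, backing):
        self.backing = backing   # stands in for the second memory
        self.lines = {}

    def prefetch(self, address):
        if address in self.backing:
            self.lines[address] = self.backing[address]

class HitCountingCache:
    def __init__(self, backing, threshold, peer):
        self.lines = dict(backing)   # pre-filled so every read hits here
        self.threshold = threshold
        self.peer = peer
        self.hit_count = 0

    def read(self, address):
        data = self.lines[address]           # cache hit (pre-filled above)
        self.hit_count += 1
        if self.hit_count >= self.threshold:
            self.peer.prefetch(address + 1)  # share next-address information
            self.hit_count = 0               # reset after transmission
        return data

buffer = PrefetchBuffer({3: "next-block"})
cache = HitCountingCache({1: "d1", 2: "d2"}, threshold=2, peer=buffer)
cache.read(1)   # hit 1 of 2: below the threshold, nothing shared yet
cache.read(2)   # hit 2 of 2: peer pre-fetches the data at address 3
```

After the second hit the peer already holds the data at address 3, so a later request routed to it could be served without touching its backing memory.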
  • a method for accessing data in an electronic device may include receiving a request for data from a processor by a first cache memory among a plurality of cache memories; transmitting the requested data to the processor; and transmitting access-related information regarding the request to a second cache memory among the plurality of cache memories.
  • the transmitting of the access-related information may include determining whether the data requested by the processor is present in the first cache memory; and transmitting the access-related information to the second cache memory, if the requested data is present in the first cache memory.
  • the transmitting of the access-related information may include counting the number of occurrences of a cache hit, if the requested data is present in the first cache memory; and transmitting the access-related information to the second cache memory, if the number of occurrences of a cache hit is greater than or equal to a certain number.
  • the access-related information may include cache hit-related information for the first cache memory.
  • the cache hit-related information may include at least one selected from information about occurrence/non-occurrence of a cache hit, address information of data for which a cache hit has occurred, next address information of data for which a cache hit has occurred, and information about the number of occurrences of a cache hit.
  • the transmitting of the access-related information may include determining whether the data requested by the processor is present in the first cache memory; and transmitting the access-related information to the second cache memory, if the requested data is not present in the first cache memory.
  • the transmitting of the access-related information may include counting the number of occurrences of a cache miss, if the requested data is not present in the first cache memory; and transmitting the access-related information to the second cache memory, if the number of occurrences of a cache miss is greater than or equal to a certain number.
  • the access-related information may include cache miss-related information for the first cache memory.
  • the cache miss-related information may include at least one selected from information about occurrence/non-occurrence of a cache miss, address information of data for which a cache miss has occurred, next address information of data for which a cache miss has occurred, and information about the number of occurrences of a cache miss.
  • a method for accessing data in an electronic device may include receiving a request for data from a processor by a first cache memory among a plurality of cache memories; determining whether the data requested by the processor is present in the first cache memory; and transmitting access-related information regarding the request to a second cache memory among the plurality of cache memories, if the requested data is present in the first cache memory.
  • a method for accessing data in an electronic device may include receiving a request for data from a processor by a first cache memory among a plurality of cache memories; determining whether the data requested by the processor is present in the first cache memory; and transmitting access-related information regarding the request to a second cache memory among the plurality of cache memories, if the requested data is not present in the first cache memory.
  • a specific cache memory may determine whether a cache hit or a cache miss has occurred, and transmit access-related information (e.g., information related to a cache miss) to another cache memory depending on the importance of the requested data, if the cache miss has occurred.
  • the cache memory may determine whether to deliver access-related information to another cache memory, depending on the importance of the requested data.
  • the transmission of access-related information may be performed by broadcasting to at least one other cache memory, without a separate connection operation with, for example, an individual cache memory.
  • the cache memory may allocate data from a memory in advance before a data request from the processor. Accordingly, it is possible to prepare for a data request from the processor and to prevent occurrence of a cache miss in advance.
  • FIGS. 7 to 10 illustrate examples in which data access is handled in each component of an electronic device according to an embodiment of the present disclosure.
  • a processor 701 may request data of a specific block size, whose address starts at ‘address 0’, through an interleaver 703 .
  • the address may be converted into another address (e.g., ‘address x’) by the interleaver 703 .
  • the interleaver 703 may request the data from a first cache memory 705 a.
  • the first cache memory 705 a may determine whether the requested data is present in the first cache memory 705 a. For example, if the requested data is not present in the first cache memory 705 a, a cache miss may occur in the first cache memory 705 a.
  • the first cache memory 705 a may deliver access-related information (e.g., an address of the requested data for which the cache miss has occurred) to a first memory 707 a according to an embodiment of the present disclosure.
  • the first cache memory 705 a may deliver access-related information (e.g., information related to the cache miss (e.g., address information of data for which a cache miss has occurred)) to a second cache memory 705 b through a path (e.g., a bus) formed between the first cache memory 705 a and the second cache memory 705 b.
  • the first cache memory 705 a may read the data requested by the processor 701 from the first memory 707 a, and fill a cache line with the read data. If the requested data is read, the first cache memory 705 a may transmit the data to the processor 701 . The processor 701 may process the data provided from the first cache memory 705 a.
  • the second cache memory 705 b may request data from a second memory 707 b using the access-related information (e.g., address information of the requested data) provided from the first cache memory 705 a.
  • the second cache memory 705 b may request the data corresponding to the provided address information, and may request the data (e.g., data of the next address or the next block, and the like) related to the data corresponding to the address information.
  • the second cache memory 705 b may read the requested data from the second memory 707 b, and fill a cache line with the read data.
  • the processor 701 may request, for processing, data at the next address (e.g., ‘address 1’) following the address ‘address 0’ of the processed data.
  • the request may be sent to the second cache memory 705 b through the interleaver 703 .
  • the ‘address 1’ may be converted into ‘address x’, and then delivered to the second cache memory 705 b.
  • since the second cache memory 705 b has filled the cache line in advance with the data to be requested next, as described in FIG. 9, based on the information (e.g., access-related information) provided from the first cache memory 705 a, a cache hit may occur in the second cache memory 705 b for the data request from the processor 701, and the second cache memory 705 b may provide the stored data to the processor 701.
  • the second cache memory 705 b may provide cache hit information of the second cache memory 705 b back to the first cache memory 705 a or any other cache memory, thereby allowing the first cache memory 705 a to prepare in advance the data corresponding to the next address of the address ‘address x’. Therefore, in accordance with an embodiment of the present disclosure, the processor 701 may receive data from the second cache memory 705 b and perform the next processing without the latency that occurs during a cache miss as described above.
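The walkthrough of FIGS. 7 to 10 can be condensed into one end-to-end toy model. This is hypothetical Python: the parity-based address mapping stands in for the interleaver's address conversion, and the names and the `address + 1` pre-fetch rule are illustrative assumptions, not the disclosed implementation:

```python
# End-to-end sketch: the interleaver routes consecutive addresses to
# alternating caches; a miss in the first cache makes the second cache
# fill its line in advance, so the processor's next request is a hit.
class SharedInfoCache:
    def __init__(self, backing):
        self.backing = backing
        self.lines = {}
        self.peer = None

    def read(self, address):
        hit = address in self.lines
        if not hit:
            self.lines[address] = self.backing[address]
            if self.peer is not None:
                # deliver the miss address so the peer prepares the next block
                self.peer.prefetch(address + 1)
        return self.lines[address], hit

    def prefetch(self, address):
        if address in self.backing:
            self.lines[address] = self.backing[address]

memory_a = {0: "block-0", 2: "block-2"}   # even addresses
memory_b = {1: "block-1", 3: "block-3"}   # odd addresses
cache_a, cache_b = SharedInfoCache(memory_a), SharedInfoCache(memory_b)
cache_a.peer, cache_b.peer = cache_b, cache_a

def interleaver(address):
    """Route the request to a cache by address parity (illustrative)."""
    return cache_a if address % 2 == 0 else cache_b

_, first_hit = interleaver(0).read(0)   # miss in cache a; b pre-fetches addr 1
_, second_hit = interleaver(1).read(1)  # hit in cache b, no memory latency
```

The first request misses and pays the memory latency once; because the miss information was shared, the request for the next address is already a hit in the other cache, which is the latency-hiding effect described above.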
  • the line size of the cache memory is adjustable. For example, whether or how many times a cache memory will deliver information related to the cache hit or cache miss to another cache memory may be adjusted depending on the characteristics of the processor. Accordingly, it is possible to optimize the data transmission by providing a different cache line size depending on the characteristics of the processor.
  • a cache memory may deliver the address of data for which a cache miss has occurred to another cache memory, and the other cache memory that has received the cache miss-related information may fill itself with the data related to the address.
  • the cache memory may be used as a pre-fetch buffer as described above. For example, when a cache memory delivers the current access-related information (e.g., cache hit information) of the processor to another cache memory, the other cache memory is enabled to pre-fetch the data to be requested next.
  • the processor may be enabled to operate as if it accesses only the cache memory without accessing the memory (e.g., the main memory).
  • the currently accessed cache memory may deliver, to another cache memory, address information of the data that the processor desires to access next.
  • the other cache memory that has received the address information may load the data, and when the processor accesses that data, the access information may again be delivered to another cache memory. By repeating this, it is possible to load data at the processing speed of the processor.
  • FIGS. 11 to 13 illustrate examples in which an electronic device is implemented in a variety of forms according to an embodiment of the present disclosure.
  • FIG. 11 is a block diagram illustrating an electronic device according to an embodiment of the present disclosure.
  • if one interleaver 1103 is connected to N cache memories 1105 - 1 to 1105 -N and the cache memories 1105 - 1 to 1105 -N are connected to memories (e.g., main memories) 1107 - 1 to 1107 -N, respectively, then information (e.g., access-related information) may be shared among the N cache memories 1105 - 1 to 1105 -N.
  • the data request may be sent to, for example, the first cache memory 1105 - 1 through the interleaver 1103 .
  • the first cache memory 1105 - 1 may determine whether data is present in the first cache memory 1105 - 1 in response to the data request, and provide the data to the processor 1101 through the first cache memory 1105 - 1 or the first memory 1107 - 1 .
  • the access-related information in the first cache memory 1105 - 1 may be transmitted to another cache memory (e.g., at least one of the second cache memory 1105 - 2 to the N-th cache memory 1105 -N).
  • FIG. 12 is a block diagram illustrating an electronic device according to another embodiment of the present disclosure.
  • a plurality of processors 1201 may be connected to a plurality of cache memories 1205 ( 1205 - 1 to 1205 -M 1 and 1205 - 1 to 1205 -M 2 ) through a plurality of interleavers 1203 ( 1203 - 1 and 1203 - 2 ).
  • the plurality of processors 1201 may request data from each of the cache memories 1205 or the memories 1207 through the plurality of interleavers 1203 - 1 and 1203 - 2 .
  • Each of the interleavers 1203 - 1 and 1203 - 2 may be connected to the plurality of cache memories 1205 - 1 to 1205 -M to interleave a data request.
  • the first interleaver 1203 - 1 may be connected to M 1 cache memories 1205 - 1 to 1205 -M 1
  • the second interleaver 1203 - 2 may be connected to M 2 cache memories 1205 - 1 to 1205 -M 2 .
  • the M 1 cache memories 1205 - 1 to 1205 -M 1 connected to the first interleaver 1203 - 1 may be connected to memories 1207 - 1 to 1207 -M 1 , respectively.
  • the cache memories 1205 may share information with each other according to various embodiments of the present disclosure.
  • the M 1 cache memories 1205 - 1 to 1205 -M 1 connected to the first interleaver 1203 - 1 may share information with each other
  • the M 2 cache memories 1205 - 1 to 1205 -M 2 connected to the second interleaver 1203 - 2 may also share information with each other.
  • the M 1 cache memories 1205 - 1 to 1205 -M 1 connected to the first interleaver 1203 - 1 may share information with the M 2 cache memories 1205 - 1 to 1205 -M 2 connected to the second interleaver 1203 - 2 .
  • FIG. 13 is a block diagram illustrating an electronic device according to yet another embodiment of the present disclosure.
  • processors that may be provided in the electronic device may include various processors such as a CPU 1301 , a GPU 1303 , an MFC 1305 , a DSP 1307 , or others 1325 .
  • the processors may use the data stored in a memory while being incorporated into modules such as a display 1309 , an audio module 1321 or an eMMC 1323 .
  • Each of the processors 1301 , 1303 , 1305 , 1307 , 1309 , 1321 , 1323 , 1325 or the like may send a request for data to be processed to a first cache memory 1313 or a second cache memory 1317 through a bus 1311 such as an address interleaver according to an embodiment of the present disclosure.
  • the first cache memory 1313 or the second cache memory 1317 may request data from a first memory 1315 or a second memory 1319 , respectively.
  • the first memory 1315 or the second memory 1319 that has received the data request may provide the data to its associated cache memory 1313 or 1317 through a first memory controller or a second memory controller, respectively.
  • the first cache memory 1313 and the second cache memory 1317 may share information with each other according to an embodiment of the present disclosure.
  • a direct path enabling communication between the cache memory controllers provided in the respective cache memories may be formed, and information (e.g., access-related information (e.g., cache miss-related information, cache hit-related information, address information of the requested data, address information of the next data of the requested data, traffic information, and the like)) may be shared through the formed path.
  • An electronic device for accessing data may include at least one processor; a plurality of cache memories configured to transmit data requested by the processor to the processor; and a plurality of memories, each of which is connected to an associated one of the cache memories to transmit the requested data through the cache memory. At least one of the plurality of cache memories may share access-related information with other cache memories.
  • At least two of the plurality of cache memories may share the access-related information with each other through a bus line.
  • a first cache memory among the plurality of cache memories may transmit the access-related information to a second cache memory, if the data requested by the processor is present in the first cache memory.
  • the first cache memory may count the number of occurrences of a cache hit if the requested data is present in the first cache memory, and transmit the access-related information to the second cache memory if the number of occurrences of a cache hit is greater than or equal to a certain number.
  • the access-related information may include cache hit-related information for the at least one cache memory.
  • the cache hit-related information may include at least one selected from information about occurrence/non-occurrence of a cache hit, address information of data for which a cache hit has occurred, next address information of data for which a cache hit has occurred, and information about the number of occurrences of a cache hit.
  • the first cache memory among the plurality of cache memories may transmit the access-related information to a second cache memory, if the data requested by the processor is not present in the first cache memory.
  • the first cache memory may count the number of occurrences of a cache miss if the requested data is not present in the first cache memory, and transmit the access-related information to the second cache memory if the number of occurrences of a cache miss is greater than or equal to a certain number.
  • the access-related information may include cache miss-related information for the first cache memory.
  • the cache miss-related information may include at least one selected from information about occurrence/non-occurrence of a cache miss, address information of data for which a cache miss has occurred, next address information of data for which a cache miss has occurred, and information about the number of occurrences of a cache miss.
  • An electronic device for accessing data may include a processor; a first cache memory configured to transmit first data requested by the processor to the processor; and a second cache memory configured to transmit second data requested by the processor to the processor.
  • the first cache memory and the second cache memory may be functionally connected by a bus line to share data with each other.
  • An embodiment described below shows the performance improvement which may occur when embodiments of the present disclosure are applied to a layer 3 (L3) cache memory.
  • a line size of the L3 cache memory may be greater than or equal to a line size of a layer 2 (L2) cache memory, and greater than or equal to a one-time data request size of each processor (e.g., a CPU, a GPU, a codec, and the like).
  • the latency which occurs when the L3 cache memory requests data from the main memory due to a cache miss in the L3 cache memory may be assumed to be 40 cycles.
  • the cache line size of L3 should be set greater than or equal to that of L2 during system configuration, so that a cache miss of L2 can be handled as one data request and answered quickly. If, however, L3 is set to be less than L2, multiple data requests must be sent for one cache miss, decreasing efficiency. For the same reason, the cache line size may not be set to be less than the data request size of the processor.
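The line-size constraint above can be checked with a short calculation (hypothetical Python; the byte sizes in the examples are illustrative, not values from the disclosure):

```python
import math

def requests_per_l2_miss(l2_line_bytes, l3_line_bytes):
    """Number of L3 data requests needed to refill one L2 cache line."""
    return math.ceil(l2_line_bytes / l3_line_bytes)

# If the L3 line is at least as large as the L2 line, one request suffices.
assert requests_per_l2_miss(64, 128) == 1
# If L3 is set smaller than L2, one L2 miss fans out into several requests,
# multiplying the per-request latency and reducing efficiency.
assert requests_per_l2_miss(128, 64) == 2
```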
  • a plurality of cache memories may share access-related information with each other, thereby reducing the latency of the memory access time and improving the data processing speed of the processor.
  • a plurality of cache memories may share access-related information with each other, thereby reducing the cache miss that occurs in a cache memory.
  • a cache miss in a cache memory may be predicted, making it possible to hide the miss penalty during data access.
  • cache memories may share access-related information with each other, so that the cache memories may be used like cache lines of a size greater than the size given for the cache memories.
  • a cache memory may deliver access-related information to another cache memory to share the access-related information, thereby allowing a cache memory to play a role similar to a pre-fetch buffer in the processor such as an image processor.
US14/602,682 2014-01-29 2015-01-22 Electronic device, and method for accessing data in electronic device Abandoned US20150212942A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020140011194A KR20150090491A (ko) 2014-01-29 2014-01-29 Electronic device and method for accessing data in electronic device
KR10-2014-0011194 2014-01-29

Publications (1)

Publication Number Publication Date
US20150212942A1 true US20150212942A1 (en) 2015-07-30

Family

ID=52465189

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/602,682 Abandoned US20150212942A1 (en) 2014-01-29 2015-01-22 Electronic device, and method for accessing data in electronic device

Country Status (4)

Country Link
US (1) US20150212942A1 (fr)
EP (1) EP2902910A1 (fr)
KR (1) KR20150090491A (fr)
WO (1) WO2015115820A1 (fr)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030009643A1 (en) * 2001-06-21 2003-01-09 International Business Machines Corp. Two-stage request protocol for accessing remote memory data in a NUMA data processing system
US20090249318A1 (en) * 2008-03-28 2009-10-01 International Business Machines Corporation Data Transfer Optimized Software Cache for Irregular Memory References
US20140108828A1 (en) * 2012-10-15 2014-04-17 Advanced Micro Devices, Inc. Semi-static power and performance optimization of data centers
US20140149679A1 (en) * 2012-11-27 2014-05-29 Nvidia Corporation Page crossing prefetches

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4959777A (en) * 1987-07-27 1990-09-25 Motorola Computer X Write-shared cache circuit for multiprocessor system
CA2000031A1 (fr) * 1988-10-20 1990-04-20 Robert W. Horst Cache with rapidly accessible information without alignment
EP0741356A1 (fr) * 1995-05-05 1996-11-06 Rockwell International Corporation Cache architecture comprising a data prefetch unit
US5649153A (en) * 1995-06-19 1997-07-15 International Business Machines Corporation Aggressive adaption algorithm for selective record caching
US6427187B2 (en) * 1998-07-31 2002-07-30 Cache Flow, Inc. Multiple cache communication
US7237068B2 (en) * 2003-01-28 2007-06-26 Sun Microsystems, Inc. Computer system employing bundled prefetching and null-data packet transmission
US7151544B2 (en) * 2003-05-16 2006-12-19 Sun Microsystems, Inc. Method for improving texture cache access by removing redundant requests
JP4477906B2 (ja) * 2004-03-12 2010-06-09 Hitachi, Ltd. Storage system
US7350030B2 (en) * 2005-06-29 2008-03-25 Intel Corporation High performance chipset prefetcher for interleaved channels
KR100654462B1 (ko) * 2005-08-24 2006-12-06 Samsung Electronics Co., Ltd. Cache method and cache system for storing file data by dividing a cache memory into memory blocks
US7596663B2 (en) * 2006-11-15 2009-09-29 Arm Limited Identifying a cache way of a cache access request using information from the microtag and from the micro TLB
JP2010020432A (ja) * 2008-07-09 2010-01-28 Nec Electronics Corp Cache memory device
US20110197031A1 (en) * 2010-02-05 2011-08-11 Nokia Corporation Update Handler For Multi-Channel Cache


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170109278A1 (en) * 2015-10-19 2017-04-20 Fujitsu Limited Method for caching and information processing apparatus
JP2017078881A (ja) * 2015-10-19 2017-04-27 富士通株式会社 Caching method, caching program, and information processing apparatus
US20180074961A1 (en) * 2016-09-12 2018-03-15 Intel Corporation Selective application of interleave based on type of data to be stored in memory
US9971691B2 (en) * 2016-09-12 2018-05-15 Intel Corporation Selevtive application of interleave based on type of data to be stored in memory

Also Published As

Publication number Publication date
KR20150090491A (ko) 2015-08-06
WO2015115820A1 (fr) 2015-08-06
EP2902910A1 (fr) 2015-08-05


Legal Events

Date Code Title Description
AS Assignment

Owner name: SAMSUNG ELECTRONICS CO., LTD., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YANG, SEUNG-JIN;KIM, GIL-YOON;PARK, JIN-YOUNG;AND OTHERS;REEL/FRAME:034789/0328

Effective date: 20150122

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION