WO2012109882A1 - Data reading method and ddr controller - Google Patents

Data reading method and DDR controller

Info

Publication number
WO2012109882A1
WO2012109882A1 · PCT/CN2011/078077
Authority
WO
WIPO (PCT)
Prior art keywords
data
read request
adjacent
reading
memory
Prior art date
Application number
PCT/CN2011/078077
Other languages
French (fr)
Chinese (zh)
Inventor
程永波
贺成洪
兰可嘉
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to CN201180001724.2A (CN102378971B)
Priority to PCT/CN2011/078077
Publication of WO2012109882A1

Links

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 — Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14 — Handling requests for interconnection or transfer
    • G06F 13/16 — Handling requests for interconnection or transfer for access to memory bus

Definitions

  • the present invention relates to the field of computers, and in particular, to a data reading method and a memory controller.
  • in a computer system, a central processing unit (CPU) reads data from a memory or writes data to a memory by sending data operation instructions to a memory controller (DDR Controller).
  • the memory controller places the received data manipulation instructions into its own cache queue and reorders them.
  • the ordering principle is: data operation instructions whose accessed memory addresses are close (in the same storage BANK or on the same address row) are arranged consecutively, so that data reads can be performed in sequence, avoiding the time delay caused by frequent switching of BANKs (storage areas) and address rows.
  • the data in the memory is read and loaded into the data response message, which is sent by the memory controller to the CACHE (cache) of the central processing unit for use by the central processing unit.
  • the CACHE in the central processor caches the data returned by the memory controller in units of one CACHE line.
  • the capacity of a CACHE line may differ from the capacity of a data response message. For example, if one CACHE line of the central processing unit is 128 bytes, while a data response message used in the computer system can return only 64 bytes of data, the central processor needs to send two data operation instructions, requesting the first 64 bytes and the last 64 bytes of the 128 bytes respectively, to match the capacity of the CACHE line.
  • the memory addresses read by the multiple data operation instructions corresponding to one CACHE line are generally contiguous, so it is not necessary to switch BANKs or address rows frequently when reading the data from memory.
  • Embodiments of the present invention provide a data reading method and a memory controller that reduce the time taken by the central processing unit to acquire data of a complete CACHE line, and improve the data processing efficiency of the computer system.
  • a method of reading data, including: after receiving a first read request issued by a central processing unit, reading first data corresponding to the first read request from the memory, and continuing to read adjacent data whose address is adjacent to that of the first data; the sum of the data amounts of the first data and the adjacent data is the capacity of a CACHE line of the central processing unit;
  • caching the adjacent data;
  • upon receiving a subsequent read request that requests the adjacent data, sending the cached adjacent data to the central processor.
  • a memory controller includes:
  • a data reading unit, configured to, after receiving a first read request issued by the central processing unit, read first data corresponding to the first read request from the memory and continue to read adjacent data whose address is adjacent to that of the first data; the sum of the data amounts of the first data and the adjacent data is the capacity of a cache line of the central processing unit;
  • An adjacent data buffer unit configured to cache the adjacent data
  • an adjacent data sending unit, configured to, upon receiving a subsequent read request that requests the adjacent data, send the cached adjacent data to the central processor.
  • with the data reading method and the memory controller provided by the embodiments of the present invention, when data is read from the memory, the adjacent data is also read out and cached, so that when the central processor requests the adjacent data, it is taken directly from the cache and sent to the central processor, avoiding repeated read operations on the memory.
  • especially in a high-performance computer system composed of multiple central processing units and multiple cascaded memories, consecutive data operation instructions from the same central processing unit may be interleaved with data operation instructions from other central processing units on the way to the memory controller, and thus cannot enter the cache queue together for timing optimization.
  • in that case, the addresses read by the different central processors are far apart, resulting in frequent switching of BANKs and of rows and columns when reading from memory.
  • by pre-fetching and caching adjacent data, interference from the data operation instructions of other central processing units is avoided and frequent memory reads are reduced, so that the time the same central processing unit takes to acquire the data of a complete CACHE line is shortened and the data processing efficiency of the computer system is improved.
  • Figure 1 is a schematic diagram of a high-performance computer system composed of multiple central processing units and multiple cascaded memories;
  • FIG. 2 is a flowchart of a method for reading data in Embodiment 1 of the present invention;
  • FIG. 3 is a flowchart of a method for reading data in Embodiment 2 of the present invention;
  • FIG. 4 is a block diagram of a memory controller in Embodiment 3 of the present invention;
  • FIG. 5 is a block diagram of another memory controller in Embodiment 3 of the present invention;
  • FIG. 6 is a schematic diagram of the internal structure of a computer using a cache chip in Embodiment 3 of the present invention.

Detailed description
  • An embodiment of the present invention provides a data reading method. As shown in FIG. 2, the method includes the following steps:
  • the central processor's data operations on the memory include read operations and write operations.
  • when the central processor wants to read the memory, it issues a first read request to the memory controller.
  • the memory controller analyzes and address-decodes the received first read request, obtaining the BANK address and the row and column addresses of the first data corresponding to the first read request in the memory. The memory controller then addresses the BANK where the first data is located, activates the row of the first data within that BANK, determines the location of the first data according to its column address, reads out the first data, and continues to read out the adjacent data adjacent to the address of the first data.
  • the memory controller transmits the read first data to the central processor, and on the other hand, caches the adjacent data.
  • the cached adjacent data is sent to the central processor when the memory controller receives a subsequent read request to read the adjacent data.
  • for example, if the capacity of one CACHE line of central processor A is 128 BYTE, and the capacity of each data response message that central processor A receives from memory controller A is 64 BYTE, then to read the data of an entire CACHE line, central processor A needs to send two data operation instructions, each requesting 64 BYTE of data, and receives two data response messages in succession.
  • the first data operation instruction issued is the first read request, and the second data operation instruction is the subsequent read request.
  • the data amount of the first data is 64 BYTE; after receiving the first read request, memory controller A loads the first data into the first data response message and sends it to the CACHE line of central processor A.
  • after memory controller A reads the first data, it continues to read the adjacent data of the first data (64 BYTE in size) and caches it. When the subsequent read request from central processor A reaches memory controller A, if it requests the adjacent data of the first data, memory controller A loads the cached adjacent data into the second data response message and sends it to the CACHE line of central processor A. By the time central processor A requests the adjacent data, memory controller A has already cached it, so it is not necessary to access the memory again for the read operation. The sum of the data amounts of the first data and the adjacent data is the capacity of the CACHE line of central processor A.
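As a rough illustration of the flow just described, the following sketch models a controller that prefetches and caches the adjacent data of a 128-BYTE CACHE line served in 64-BYTE response messages. This is a simplified model, not the patent's implementation; all class, method, and variable names are illustrative.

```python
LINE_SIZE = 128   # capacity of one CACHE line of central processor A, in bytes
MSG_SIZE = 64     # capacity of one data response message, in bytes

class PrefetchingController:
    """Hypothetical model: prefetch the rest of the CACHE line on the first read."""

    def __init__(self, memory: bytes):
        self.memory = memory
        self.prefetch_cache = {}   # address -> cached adjacent data
        self.memory_reads = 0      # counts actual accesses to memory

    def _read_memory(self, addr: int) -> bytes:
        self.memory_reads += 1
        return self.memory[addr:addr + MSG_SIZE]

    def read(self, addr: int) -> bytes:
        # Subsequent read request: serve directly from the prefetch cache.
        if addr in self.prefetch_cache:
            return self.prefetch_cache.pop(addr)
        # First read request: fetch the requested data from memory ...
        data = self._read_memory(addr)
        # ... and continue reading the adjacent data of the same CACHE line
        # (modeled as part of the same memory access, since the row is already
        # activated), caching it for the expected subsequent read request.
        line_base = addr - (addr % LINE_SIZE)
        for adj in range(line_base, line_base + LINE_SIZE, MSG_SIZE):
            if adj != addr:
                self.prefetch_cache[adj] = self.memory[adj:adj + MSG_SIZE]
        return data

memory = bytes(range(256))
ctrl = PrefetchingController(memory)
first = ctrl.read(0)     # first read request: accesses memory, prefetches 64..127
second = ctrl.read(64)   # subsequent read request: served from the cache
assert first + second == memory[:LINE_SIZE]
assert ctrl.memory_reads == 1   # the whole line cost one actual memory access
```

The point of the sketch is the final assertion: the two response messages that fill one CACHE line trigger only a single memory access, which is the saving the embodiment claims.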
  • the data reading method provided by this embodiment of the present invention reads out and caches adjacent data when reading data from memory, so that when the central processor requests the adjacent data, it is taken directly from the cache and sent to the central processor, avoiding repeated read operations on the memory.
  • especially in a high-performance computer system composed of multiple central processing units and multiple cascaded memories, consecutive data operation instructions from the same central processing unit may be interleaved with data operation instructions from other central processing units on the way to the memory controller, and thus cannot enter the cache queue together for timing optimization. In this case, the addresses that the different central processors read are far apart, resulting in frequent switching of BANKs and of rows and columns when reading from memory.
  • the method provided by this embodiment of the present invention pre-fetches and caches adjacent data, avoiding interference from the data operation instructions of other central processing units and reducing frequent memory reads, so that the time the same central processing unit takes to acquire the data of a complete CACHE line is shortened and the data processing efficiency of the computer system is improved.
  • An embodiment of the present invention provides a data reading method, as shown in FIG. 3, including the following steps:
  • 301. The memory controller receives a first read request from the central processor; the first read request requests reading of the first data.
  • 302. The memory controller analyzes and decodes the received first read request, obtaining the BANK address and the row and column addresses of the first data corresponding to the first read request in the memory. If the data corresponding to the address has already been cached in the memory controller, the first data is returned directly to the central processor; otherwise, the process proceeds to step 303.
  • 303. Read the first data from a memory, and continue to read adjacent data adjacent to the first data address.
  • after obtaining the BANK address and the row and column addresses of the first data corresponding to the first read request in the memory, the memory controller addresses the BANK where the first data is located, activates the row of the first data within that BANK, determines the location of the first data according to its column address, reads the first data, and continues to read the adjacent data adjacent to the address of the first data.
  • the memory controller transmits the read first data to the central processor and, on the other hand, caches the adjacent data.
  • for example, to read the data of an entire CACHE line, central processor B needs to send two data operation instructions, each requesting 64 BYTE of data, and receives two data response messages in succession.
  • the first data operation instruction issued is the first read request, and the second data operation instruction is the subsequent read request.
  • after receiving the first read request, memory controller B first detects whether the first data corresponding to the first read request has already been cached; the data amount of the first data is 64 BYTE.
  • if it has not been cached, memory controller B reads the first data from the memory, loads it into the first data response message, and sends it to the CACHE line of central processor B.
  • after reading the first data, memory controller B continues to read the adjacent data of the first data (64 BYTE in size) and caches it.
  • when the subsequent read request from central processor B reaches memory controller B, if it requests the adjacent data of the first data, memory controller B loads the cached adjacent data into the second data response message and sends it to the CACHE line of central processor B.
  • the two data response messages return 128 BYTE of data in total, which fills one CACHE line.
  • in another example, to read the data of an entire CACHE line, central processor B needs to send four data operation instructions, each requesting 64 BYTE of data, and receives four data response messages in succession.
  • memory controller B reads the first data, whose data amount is 64 BYTE, loads it into the first data response message, and sends it to the CACHE line of central processor B.
  • after reading the first data, memory controller B continues to read the adjacent data of the first data (192 BYTE in size) and caches it.
  • when the subsequent read requests from central processor B reach memory controller B (the subsequent read requests are three read requests, each requesting 64 BYTE), memory controller B loads the cached adjacent data into three data response messages and sends them to the CACHE line of central processor B.
  • a cache chip may be connected between the central processing unit and the memory controller. After receiving the first read request or a subsequent read request sent by the central processing unit, the cache chip detects whether it has cached the corresponding data, and, if so, sends the cached data to the central processing unit. When it has not stored the data corresponding to the first read request or the subsequent read request, the cache chip forwards that request to the memory controller.
  • after the memory controller receives the first read request forwarded by the cache chip, it detects whether the first data has been cached; when the first data is not cached, it reads the first data corresponding to the first read request from the memory, continues to read the adjacent data adjacent to the first data address, and sends the first data and the adjacent data to the cache chip.
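The cache-chip arrangement described above can be sketched as follows. This is a hypothetical model (the class names and interfaces are invented for illustration): the chip answers requests it has already cached itself and forwards misses to the memory controller, which returns the first data together with the prefetched adjacent data.

```python
MSG_SIZE = 64  # capacity of one data response message, in bytes

class SimpleController:
    """Toy memory controller: returns the requested block plus its neighbor."""

    def __init__(self, memory: bytes):
        self.memory = memory

    def read_with_adjacent(self, addr: int):
        return (self.memory[addr:addr + MSG_SIZE],
                self.memory[addr + MSG_SIZE:addr + 2 * MSG_SIZE])

class CacheChip:
    """Sits between the CPU and the memory controller, as in FIG. 6."""

    def __init__(self, controller: SimpleController):
        self.controller = controller   # downstream memory controller
        self.cache = {}                # address -> data cached on this chip
        self.forwarded = 0             # requests passed on to the controller

    def read(self, addr: int) -> bytes:
        # If the data is cached on the chip, send it straight to the CPU.
        if addr in self.cache:
            return self.cache[addr]
        # Otherwise forward the request; the controller replies with the first
        # data and its adjacent data, both of which the chip keeps.
        self.forwarded += 1
        first, adjacent = self.controller.read_with_adjacent(addr)
        self.cache[addr] = first
        self.cache[addr + MSG_SIZE] = adjacent
        return first

memory = bytes(range(256))
chip = CacheChip(SimpleController(memory))
a = chip.read(0)    # forwarded to the memory controller
b = chip.read(64)   # answered by the cache chip itself
assert a + b == memory[:128]
assert chip.forwarded == 1
```

Only the first of the two requests reaches the memory controller; the second is absorbed by the chip, which is the further reduction in read latency the embodiment attributes to the cache chip.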
  • the first read request and the subsequent read request mentioned in the embodiments of the present invention may be QPI (QuickPath Interconnect) messages.
  • the data reading method provided by this embodiment of the present invention reads out and caches adjacent data when reading data from memory, so that when the central processor requests the adjacent data, it is taken directly from the cache and sent to the central processor, avoiding repeated read operations on the memory.
  • consecutive data operation instructions from the same central processor may be interleaved with data operation instructions from other central processing units on the way to the memory controller, and thus cannot enter the cache queue together for timing optimization.
  • in that case, the addresses read by the different central processors are far apart, resulting in frequent switching of BANKs and of rows and columns when reading from memory.
  • the method provided by this embodiment of the present invention pre-fetches and caches adjacent data, avoiding interference from the data operation instructions of other central processing units and reducing frequent memory reads, so that the time the same central processing unit takes to acquire the data of a complete CACHE line is shortened and system processing efficiency is improved.
  • when a cache chip is connected, the time consumed for data reading can be further reduced and processing efficiency further improved.
  • the embodiment of the present invention provides a memory controller.
  • the memory controller includes: a data reading unit 41, an adjacent data buffer unit 42, and an adjacent data transmitting unit 43.
  • the data reading unit 41 is configured to, after receiving the first read request sent by the central processing unit, read the first data corresponding to the first read request from the memory and continue to read the adjacent data adjacent to the first data address.
  • the sum of the data amounts of the first data and the adjacent data is the capacity of the cache line of the central processing unit.
  • the adjacent data buffer unit 42 is configured to buffer the adjacent data.
  • the adjacent data transmitting unit 43 is configured to, upon receiving a subsequent read request that requests the adjacent data, send the cached adjacent data to the central processing unit.
  • the memory controller further includes a cache detecting unit 44, configured to, before the first data corresponding to the first read request is read from the memory and the adjacent data adjacent to the first data address is read, detect whether the first data has already been cached; when it is detected that the first data is not cached, the first data corresponding to the first read request is read from the memory.
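The unit decomposition of FIG. 4 and FIG. 5 (data reading unit 41, adjacent data buffer unit 42, the sending path, and the cache detecting unit 44) might be modeled as below. This is a hedged sketch under the assumption of 64-BYTE response messages; the class and method names are not from the patent.

```python
MSG_SIZE = 64  # assumed capacity of one data response message, in bytes

class AdjacentDataBufferUnit:
    """Caches the adjacent data read ahead by the data reading unit (unit 42)."""

    def __init__(self):
        self._cache = {}

    def store(self, addr: int, data: bytes):
        self._cache[addr] = data

    def fetch(self, addr: int):
        return self._cache.pop(addr, None)

class DataReadingUnit:
    """Reads the first data and continues reading the adjacent data (unit 41)."""

    def __init__(self, memory: bytes, buffer_unit: AdjacentDataBufferUnit):
        self.memory = memory
        self.buffer_unit = buffer_unit

    def read(self, addr: int) -> bytes:
        first = self.memory[addr:addr + MSG_SIZE]
        adjacent = self.memory[addr + MSG_SIZE:addr + 2 * MSG_SIZE]
        self.buffer_unit.store(addr + MSG_SIZE, adjacent)
        return first

class MemoryController:
    """Wires the units together; the cache check plays the role of unit 44."""

    def __init__(self, memory: bytes):
        self.buffer_unit = AdjacentDataBufferUnit()
        self.reading_unit = DataReadingUnit(memory, self.buffer_unit)

    def handle_read_request(self, addr: int) -> bytes:
        # Cache detecting unit: check the buffer before touching memory.
        cached = self.buffer_unit.fetch(addr)
        if cached is not None:
            return cached              # adjacent-data sending path
        return self.reading_unit.read(addr)

memory = bytes(range(256))
ctrl = MemoryController(memory)
assert ctrl.handle_read_request(0) == memory[:64]     # read + prefetch
assert ctrl.handle_read_request(64) == memory[64:128] # served from unit 42
```

Splitting the behavior across small single-purpose classes mirrors how the patent describes the controller as cooperating functional units rather than one monolithic block.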
  • a cache chip may be connected between the central processing unit and the memory controller. After receiving the first read request or a subsequent read request sent by the central processing unit, the cache chip detects whether it has cached the corresponding data, and, if so, sends the cached data to the central processing unit. When it has not stored the data corresponding to the first read request or the subsequent read request, the cache chip forwards that request to the memory controller. As shown in Figure 6, the cache chip is connected between the central processor and the memory controller, and the memory controller is directly connected to the memory to operate on the data in the memory.
  • the data reading unit 41 is further configured to, after receiving the first read request forwarded by the cache chip, read the first data corresponding to the first read request from the memory and continue to read the adjacent data adjacent to the first data address.
  • the adjacent data sending unit 43 is further configured to send the first data and the adjacent data to the cache chip.
  • the first read request and the subsequent read request mentioned in this embodiment of the present invention may be QPI (QuickPath Interconnect) messages.
  • the memory controller provided by this embodiment of the present invention reads out and caches adjacent data when reading data from memory, so that when the central processor requests the adjacent data, it is taken directly from the cache and sent to the central processor, avoiding repeated read operations on the memory.
  • consecutive data operation instructions from the same central processor may be interleaved with data operation instructions from other central processing units on the way to the memory controller, and thus cannot enter the cache queue together for timing optimization.
  • in that case, the addresses read by the different central processors are far apart, resulting in frequent switching of BANKs and of rows and columns when reading from memory.
  • the method provided by this embodiment of the present invention pre-fetches and caches adjacent data, avoiding interference from the data operation instructions of other central processing units and reducing frequent memory reads, so that the time the same central processing unit takes to acquire the data of a complete CACHE line is shortened and system processing efficiency is improved.
  • when the cache chip is connected, the time consumed for data reading can be further reduced and processing efficiency further improved.
  • the present invention can be implemented by means of software plus the necessary general-purpose hardware, or entirely by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present invention, in essence or in the part that contributes to the prior art, may be embodied in the form of a software product stored in a readable storage medium, such as a computer floppy disk, hard disk, or optical disk, which includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the methods described in the various embodiments of the present invention.

Abstract

Provided are a data reading method and a DDR controller. The method includes: after receiving a first read request sent by a central processing unit, reading first data corresponding to the first read request from the DDR memory and continuing to read adjacent data whose address neighbors that of the first data, where the sum of the data volumes of the first data and the adjacent data is the cache line capacity of the central processing unit; caching the adjacent data; and, upon receiving a subsequent read request that requests the adjacent data, sending the cached adjacent data to the central processing unit. The technical solution, which relates to the computer field, is mainly applied to computer systems and can reduce the time required for a central processing unit to obtain the data of a complete cache line, improving the data processing efficiency of a computer system.

Description

Data reading method and memory controller

Technical Field

The present invention relates to the field of computers, and in particular to a data reading method and a memory controller.

Background Art

In a computer system, a central processing unit (CPU) reads data from a memory or writes data to a memory by sending data operation instructions to a memory controller (DDR Controller). The memory controller places the received data operation instructions into its own cache queue and reorders them. The ordering principle is: data operation instructions whose accessed memory addresses are close (in the same storage BANK or on the same address row) are arranged consecutively, so that data reads can be performed in sequence, avoiding the time delay caused by frequent switching of BANKs (storage areas) and address rows. After the data in the memory is read out, it is loaded into a data response message and sent by the memory controller to the CACHE of the central processing unit for use by the central processing unit.

Typically, the CACHE in the central processor caches the data returned by the memory controller in units of one CACHE line. The capacity of a CACHE line may differ from the capacity of a data response message. For example, if one CACHE line of the central processing unit is 128 bytes, while a data response message used in the computer system can return only 64 bytes of data, the central processor needs to send two data operation instructions, requesting the first 64 bytes and the last 64 bytes of the 128 bytes respectively, to match the capacity of the CACHE line.

In implementing the above technical solution, the inventors found that the prior art has at least the following problem: the memory addresses read by the multiple data operation instructions corresponding to one CACHE line are generally contiguous, so it is not necessary to switch BANKs or address rows frequently when reading the data from memory. In practice, however, by the time they reach the cache queue of the memory controller, other data operation instructions may have been inserted among the multiple data operation instructions corresponding to one CACHE line. This is especially so in a high-performance computer system composed of multiple central processing units and multiple cascaded memories (as shown in Figure 1), where different central processors take different times to access different memories and use different routing schemes, so that other data operation instructions are readily inserted among the multiple consecutive data operation instructions corresponding to the same CACHE line. As a result, later-arriving instructions among the data operation instructions corresponding to the same CACHE line cannot enter the cache queue for timing optimization, so the time the central processor takes to acquire the data of a complete CACHE line increases, reducing system processing efficiency.
Summary of the Invention

Embodiments of the present invention provide a data reading method and a memory controller, which reduce the time the central processing unit takes to acquire the data of a complete CACHE line and improve the data processing efficiency of the computer system.

To achieve the above object, the embodiments of the present invention adopt the following technical solutions:

A data reading method, including:

after receiving a first read request issued by a central processing unit, reading first data corresponding to the first read request from the memory, and continuing to read adjacent data whose address is adjacent to that of the first data, where the sum of the data amounts of the first data and the adjacent data is the capacity of a CACHE line of the central processing unit;

caching the adjacent data; and

upon receiving a subsequent read request that requests the adjacent data, sending the cached adjacent data to the central processor.

A memory controller, including:

a data reading unit, configured to, after receiving a first read request issued by a central processing unit, read first data corresponding to the first read request from the memory and continue to read adjacent data whose address is adjacent to that of the first data, where the sum of the data amounts of the first data and the adjacent data is the capacity of a cache line of the central processing unit;

an adjacent data cache unit, configured to cache the adjacent data; and

an adjacent data sending unit, configured to, upon receiving a subsequent read request that requests the adjacent data, send the cached adjacent data to the central processor.

With the data reading method and the memory controller provided by the embodiments of the present invention, when data is read from the memory, the adjacent data is also read out and cached, so that when the central processor requests the adjacent data, it is taken directly from the cache and sent to the central processor, avoiding repeated read operations on the memory. Especially in a high-performance computer system composed of multiple central processing units and multiple cascaded memories, consecutive data operation instructions from the same central processing unit may be interleaved with data operation instructions from other central processing units on the way to the memory controller, and thus cannot enter the cache queue together for timing optimization; in that case, the addresses read by the different central processors are far apart, causing frequent switching of BANKs and of rows and columns when reading from memory. With the method provided by the embodiments of the present invention, adjacent data can be fetched and cached in advance, avoiding interference from the data operation instructions of other central processing units and reducing frequent memory reads, so that the time the same central processing unit takes to acquire the data of a complete CACHE line is shortened and the data processing efficiency of the computer system is improved.
Brief Description of the Drawings

Figure 1 is a schematic diagram of a high-performance computer system composed of multiple central processing units and multiple cascaded memories;

Figure 2 is a flowchart of a data reading method in Embodiment 1 of the present invention;

Figure 3 is a flowchart of a data reading method in Embodiment 2 of the present invention;

Figure 4 is a block diagram of a memory controller in Embodiment 3 of the present invention;

Figure 5 is a block diagram of another memory controller in Embodiment 3 of the present invention;

Figure 6 is a schematic diagram of the internal structure of a computer using a cache chip in Embodiment 3 of the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

Embodiment 1:
本发明实施例提供了一种数据读取的方法, 如图 2所示, 所述方 法包括以下步骤:  An embodiment of the present invention provides a data reading method. As shown in FIG. 2, the method includes the following steps:
101. After receiving a first read request issued by the central processor, read first data corresponding to the first read request from the memory, and continue to read adjacent data whose address is adjacent to that of the first data.
The central processor's data operations on the memory include read operations and write operations. When the central processor needs to read the memory, it issues a first read request to the memory controller. The memory controller analyzes and address-decodes the received first read request, thereby obtaining the BANK address and the row and column addresses, in the memory, of the first data corresponding to the first read request. The memory controller then addresses the BANK in which the first data is located, activates the row in which the first data is located within that BANK, determines the position of the first data from its column address, reads out the first data, and continues to read out the adjacent data whose address is adjacent to that of the first data.
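The address decoding described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the field widths (3 BANK bits, 14 row bits, 10 column bits) are assumptions chosen only for the example, and a real DDR controller derives them from the geometry of the attached DRAM devices.

```python
def decode_address(addr, col_bits=10, row_bits=14, bank_bits=3):
    """Split a flat memory address into (bank, row, column) fields.

    The field widths here are illustrative assumptions, not the
    patent's layout.
    """
    col = addr & ((1 << col_bits) - 1)
    row = (addr >> col_bits) & ((1 << row_bits) - 1)
    bank = (addr >> (col_bits + row_bits)) & ((1 << bank_bits) - 1)
    return bank, row, col

# Two addresses 64 bytes apart in the same row differ only in the
# column field, so reading them back-to-back needs no BANK or row switch.
b1, r1, c1 = decode_address(0x12345)
b2, r2, c2 = decode_address(0x12345 + 64)
```

This is why reading the adjacent data "in the same pass" is cheap: as long as the adjacent data falls in the same row of the same BANK, no extra ACTIVATE is needed.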
102. Cache the adjacent data.
The memory controller sends the first data that was read to the central processor and, on the other hand, caches the adjacent data.
103. Upon receiving a subsequent read request that requests the adjacent data, send the cached adjacent data to the central processor.
When the memory controller receives a subsequent read request requesting the adjacent data, it sends the already-cached adjacent data to the central processor.
For example, suppose the capacity of one CACHE line of central processor A is 128 bytes, and the capacity of a data response message that central processor A receives from memory controller A is 64 bytes. To read an entire CACHE line of data, central processor A must send two data operation instructions, each requesting 64 bytes of data, and then receive two data response messages in succession. The first data operation instruction issued is the first read request, and the second is the subsequent read request. The first data is 64 bytes; after receiving the first read request, memory controller A loads it into the first data response message and sends it to the CACHE line of central processor A. After reading the first data, memory controller A continues to read the adjacent data of the first data (64 bytes in size) and caches it. When the subsequent read request from central processor A reaches memory controller A, if the subsequent read request requests the adjacent data of the first data, memory controller A loads the cached adjacent data into the second data response message and sends it to the CACHE line of central processor A. By the time central processor A requests the adjacent data, memory controller A has already cached it in advance, so there is no need to access the memory again for a read operation. The sum of the data amounts of the first data and the adjacent data equals the capacity of the CACHE line of central processor A.
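The split of one CACHE-line fill into a first response plus prefetched adjacent data reduces to simple arithmetic. A minimal sketch under the numbers used in the example (the function name is illustrative):

```python
def prefetch_plan(cache_line_bytes, response_bytes):
    """Return (requests_needed, adjacent_prefetch_bytes) for one line fill.

    The first response carries the requested first data; the rest of the
    CACHE line is read in the same pass and cached as adjacent data.
    """
    assert cache_line_bytes % response_bytes == 0
    requests = cache_line_bytes // response_bytes
    adjacent = cache_line_bytes - response_bytes
    return requests, adjacent

# Central processor A from the example: 128-byte line, 64-byte responses.
print(prefetch_plan(128, 64))  # (2, 64): two requests, 64 bytes prefetched
```

The invariant is the one stated in the text: first data plus adjacent data together equal exactly one CACHE line.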
With the data reading method provided by this embodiment of the present invention, when data is read from the memory, adjacent data is also read out and cached, so that when the central processor requests the adjacent data it can be fetched directly from the cache and sent to the central processor, avoiding repeated read operations on the memory. In particular, in a high-performance computer system composed of multiple central processors and multiple cascaded memories, consecutive data operation instructions from the same central processor may, on arrival at the memory controller, be interleaved with data operation instructions from other central processors and therefore cannot enter the cache queue together for timing optimization. In that case the different central processors request data at addresses that are far apart, causing frequent switching of BANKs and of rows and columns when the memory is read. With the method provided by this embodiment, adjacent data can be fetched and cached in advance, avoiding interference from the data operation instructions of other central processors and reducing frequent memory accesses, so that the time taken by one central processor to obtain a complete CACHE line of data is shortened and the data processing efficiency of the computer system is improved.
Embodiment 2:
An embodiment of the present invention provides a data reading method. As shown in Figure 3, the method includes the following steps:
301. Receive a first read request issued by the central processor.
The memory controller receives the first read request from the central processor. The first read request requests the first data.
302. Detect whether the first data has already been cached.
The memory controller analyzes and address-decodes the received first read request to obtain the BANK address and the row and column addresses, in the memory, of the first data corresponding to the first read request. If the data at that address has already been cached in the memory controller, the first data is returned to the central processor directly; otherwise, go to step 303.

303. Read the first data from the memory, and continue to read the adjacent data adjacent to the first data address.
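Steps 302 to 304 amount to a hit check placed in front of the DRAM access. A minimal sketch, assuming a dict-based prefetch cache and 64-byte responses; the names and structures are illustrative, not the patent's:

```python
def handle_read(addr, prefetch_cache, dram_read):
    """Serve a read request from the controller's prefetch cache on a
    hit (step 302); otherwise read the first data plus its adjacent
    data from memory (step 303) and cache the adjacent data (step 304)."""
    if addr in prefetch_cache:               # step 302: already cached?
        return prefetch_cache.pop(addr)      # return directly, no DRAM access
    first, adjacent = dram_read(addr)        # step 303: read first + adjacent
    prefetch_cache[addr + len(first)] = adjacent  # step 304: cache adjacent
    return first

cache = {}
reads = []
def fake_dram(addr):                         # stand-in for the DRAM access
    reads.append(addr)
    return b"A" * 64, b"B" * 64

first = handle_read(0x100, cache, fake_dram)   # miss: goes to DRAM
second = handle_read(0x140, cache, fake_dram)  # hit: served from the cache
```

After both requests the DRAM has been touched only once, which is exactly the saving the embodiment describes.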
After obtaining the BANK address and the row and column addresses of the first data corresponding to the first read request, the memory controller addresses the BANK in which the first data is located, activates the row in which the first data is located within that BANK, determines the position of the first data from its column address, reads out the first data, and continues to read out the adjacent data whose address is adjacent to that of the first data.
304. Cache the adjacent data.
On the one hand, the memory controller sends the first data that was read to the central processor; on the other hand, it caches the adjacent data.
305. Upon receiving a subsequent read request that requests the adjacent data, send the cached adjacent data to the central processor.
The above flow is illustrated by the following example. Suppose the capacity of one CACHE line of central processor B is 128 bytes, and the capacity of a data response message that central processor B receives from memory controller B is 64 bytes. To read an entire CACHE line of data, central processor B must send two data operation instructions, each requesting 64 bytes of data, and then receive two data response messages in succession. The first data operation instruction issued is the first read request, and the second is the subsequent read request. After receiving the first read request, memory controller B first detects whether the first data corresponding to the first read request has been cached; the first data is 64 bytes. If the first data is not cached, memory controller B reads the first data from the memory, loads it into the first data response message, and sends it to the CACHE line of central processor B. In addition, after reading the first data, memory controller B continues to read the adjacent data of the first data (64 bytes in size) and caches it. When the subsequent read request from central processor B reaches memory controller B, if the subsequent read request requests the adjacent data of the first data, memory controller B loads the cached adjacent data into the second data response message and sends it to the CACHE line of central processor B. The two data response messages thus return 128 bytes of data, exactly filling one CACHE line. Consider another case: when the capacity of one CACHE line of central processor B is 256 bytes, central processor B must send four data operation instructions, each requesting 64 bytes of data, and receive four data response messages in succession, in order to read the entire CACHE line. Memory controller B reads the first data, 64 bytes in size, loads it into the first data response message, and sends it to the CACHE line of central processor B. After reading the first data, memory controller B continues to read the adjacent data of the first data (192 bytes in size) and caches it. When the subsequent read requests from central processor B reach memory controller B (here the subsequent read requests are three read requests, each requesting 64 bytes), memory controller B loads the cached adjacent data into three data response messages and sends them to the CACHE line of central processor B.
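The 256-byte case can be checked with the same bookkeeping: one DRAM pass fills the whole line, the first response goes straight back, and the remaining responses are served from the prefetch cache. A sketch with illustrative names only:

```python
def split_line_fill(line_bytes, resp_bytes=64):
    """Model one CACHE-line fill: the first response comes straight from
    DRAM; the remaining responses are served from the prefetch cache."""
    assert line_bytes % resp_bytes == 0
    total = line_bytes // resp_bytes
    return {
        "dram_reads": 1,                 # one pass reads the whole line
        "first_responses": 1,
        "cached_responses": total - 1,   # subsequent requests hit the cache
        "prefetched_bytes": line_bytes - resp_bytes,
    }

plan = split_line_fill(256)
# one DRAM pass, three cached responses carrying the 192 prefetched bytes
```

Without the prefetch, the same line fill would cost four separate DRAM accesses, each exposed to BANK and row switches caused by interleaved requests from other processors.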
Further, in an embodiment of the present invention, a cache chip may be connected between the central processor and the memory controller. After receiving the first read request or a subsequent read request issued by the central processor, the cache chip detects whether it has cached the corresponding data, and sends the data cached in itself to the central processor. In addition, when the cache chip itself does not store the data corresponding to the first read request or subsequent read request, it forwards the first read request or subsequent read request to the memory controller.
After the memory controller receives the first read request forwarded by the cache chip, it detects whether the first data has been cached; if the first data is not cached, it reads the first data corresponding to the first read request from the memory, continues to read the adjacent data adjacent to the first data address, and sends the first data and the adjacent data to the cache chip.
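The two-level lookup formed by the cache chip in front of the memory controller can be sketched as follows. The class and method names are assumptions for illustration; only the forwarding behavior comes from the text:

```python
class Controller:
    """Stand-in memory controller: every read costs one DRAM pass and
    returns (first data, adjacent data)."""
    def __init__(self):
        self.dram_reads = 0

    def read(self, addr):
        self.dram_reads += 1
        return b"\x00" * 64, b"\xff" * 64

class CacheChip:
    """Sits between the CPU and the memory controller: answers from its
    own cache when it can, otherwise forwards to the controller."""
    def __init__(self, controller):
        self.controller = controller
        self.lines = {}

    def read(self, addr):
        if addr in self.lines:                        # hit in the cache chip
            return self.lines[addr]
        first, adjacent = self.controller.read(addr)  # forward on a miss
        self.lines[addr] = first                      # keep both pieces
        self.lines[addr + len(first)] = adjacent
        return first

ctrl = Controller()
chip = CacheChip(ctrl)
chip.read(0)    # miss: forwarded, first + adjacent cached in the chip
chip.read(64)   # hit: served by the chip, the controller is not consulted
```

The second request never reaches the memory controller at all, which is the extra saving the cache chip contributes on top of the controller's own prefetch cache.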
In practical applications, the first read request and the subsequent read request mentioned in the embodiments of the present invention may be QPI (QuickPath Interconnect) messages.
With the data reading method provided by this embodiment of the present invention, when data is read from the memory, adjacent data is also read out and cached, so that when the central processor requests the adjacent data it can be fetched directly from the cache and sent to the central processor, avoiding repeated read operations on the memory. In a high-performance computer system composed of multiple central processors and multiple cascaded memories, consecutive data operation instructions from the same central processor may, on arrival at the memory controller, be interleaved with data operation instructions from other central processors and therefore cannot enter the cache queue together for timing optimization. In that case the different central processors request data at addresses that are far apart, causing frequent switching of BANKs and of rows and columns when the memory is read. With the method provided by this embodiment, adjacent data can be fetched and cached in advance, avoiding interference from the data operation instructions of other central processors and reducing frequent memory accesses, so that the time taken by one central processor to obtain a complete CACHE line of data is shortened and the processing efficiency of the system is improved.
In addition, this embodiment of the present invention connects a cache chip, which can further reduce the time consumed by data reading and improve processing efficiency.
Embodiment 3:
An embodiment of the present invention provides a memory controller. As shown in Figure 4, the memory controller includes: a data reading unit 41, an adjacent data caching unit 42, and an adjacent data sending unit 43.
The data reading unit 41 is configured to, after receiving a first read request issued by the central processor, read first data corresponding to the first read request from the memory and continue to read adjacent data adjacent to the first data address.
The sum of the data amounts of the first data and the adjacent data equals the capacity of a cache line of the central processor.
The adjacent data caching unit 42 is configured to cache the adjacent data.
The adjacent data sending unit 43 is configured to, upon receiving a subsequent read request that requests the adjacent data, send the cached adjacent data to the central processor.
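The division into units 41 to 43 can be mirrored in a sketch. The unit boundaries follow the text of Figure 4; everything else (method names, the dict-based cache, the 64-byte response size) is an illustrative assumption:

```python
class MemoryController:
    """Data reading unit (41), adjacent data caching unit (42), and
    adjacent data sending unit (43) from Figure 4, as plain methods."""

    def __init__(self, dram_read):
        self._dram_read = dram_read
        self._adjacent = {}        # unit 42: the adjacent-data cache

    def on_first_read(self, addr, resp_bytes=64):
        # unit 41: read the first data, continue reading adjacent data
        first, adjacent = self._dram_read(addr)
        self._adjacent[addr + resp_bytes] = adjacent
        return first

    def on_subsequent_read(self, addr):
        # unit 43: send the cached adjacent data to the central processor
        return self._adjacent.pop(addr, None)

mc = MemoryController(lambda a: (b"F" * 64, b"J" * 64))
mc.on_first_read(0x200)
data = mc.on_subsequent_read(0x240)   # served from the cache, not DRAM
```

The cache detection unit 44 introduced below would sit as a guard at the top of `on_first_read`, checking the cache before the DRAM access is issued.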
Further, as shown in Figure 5, the memory controller also includes a cache detection unit 44, configured to detect, before the first data corresponding to the first read request is read from the memory and the adjacent data adjacent to the first data address is read, whether the first data has been cached, and to read the first data corresponding to the first read request from the memory when detecting that the first data is not cached.
Further, in an embodiment of the present invention, a cache chip may be connected between the central processor and the memory controller. After receiving the first read request or a subsequent read request issued by the central processor, the cache chip detects whether it has cached the corresponding data and sends the data cached in itself to the central processor. In addition, when the cache chip itself does not store the data corresponding to the first read request or subsequent read request, it forwards the first read request or subsequent read request to the memory controller. As shown in Figure 6, the cache chip is connected between the central processor and the memory controller, and the memory controller is directly connected to the memory and operates on the data in the memory.
When the memory controller is connected to a cache chip, the data reading unit 41 is further configured to, after receiving the first read request forwarded by the cache chip, read the first data corresponding to the first read request from the memory and continue to read the adjacent data adjacent to the first data address. The adjacent data sending unit 43 is further configured to send the first data and the adjacent data to the cache chip. In practical applications, the first read request and the subsequent read request mentioned in the embodiments of the present invention may be QPI messages.
With the memory controller provided by this embodiment of the present invention, when data is read from the memory, adjacent data is also read out and cached, so that when the central processor requests the adjacent data it can be fetched directly from the cache and sent to the central processor, avoiding repeated read operations on the memory. In a high-performance computer system composed of multiple central processors and multiple cascaded memories, consecutive data operation instructions from the same central processor may, on arrival at the memory controller, be interleaved with data operation instructions from other central processors and therefore cannot enter the cache queue together for timing optimization. In that case the different central processors request data at addresses that are far apart, causing frequent switching of BANKs and of rows and columns when the memory is read. With the method provided by this embodiment, adjacent data can be fetched and cached in advance, avoiding interference from the data operation instructions of other central processors and reducing frequent memory accesses, so that the time taken by one central processor to obtain a complete CACHE line of data is shortened and the processing efficiency of the system is improved.
In addition, this embodiment of the present invention connects a cache chip, which can further reduce the time consumed by data reading and improve processing efficiency.
Through the description of the above embodiments, those skilled in the art can clearly understand that the present invention may be implemented by software plus the necessary general-purpose hardware, or of course by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a readable storage medium, such as a computer floppy disk, hard disk, or optical disk, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and such changes or substitutions shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims

1. A data reading method, characterized in that it comprises:
after receiving a first read request issued by a central processor, reading first data corresponding to the first read request from a memory, and continuing to read adjacent data adjacent to the first data address, wherein the sum of the data amounts of the first data and the adjacent data is the capacity of a CACHE line of the central processor;
caching the adjacent data; and
upon receiving a subsequent read request that requests the adjacent data, sending the cached adjacent data to the central processor.
2. The method according to claim 1, characterized in that, before the first data corresponding to the first read request is read from the memory and the adjacent data adjacent to the first data address is read, the method further comprises:
detecting whether the first data has been cached; and when detecting that the first data is not cached, reading the first data corresponding to the first read request from the memory.
3. The method according to claim 1, characterized in that the first read request and the subsequent read request comprise QPI (QuickPath Interconnect) protocol messages.
4. The method according to any one of claims 1 to 3, characterized in that a cache chip is connected between the memory controller and the central processor, the cache chip being configured to: after receiving the first read request or a subsequent read request issued by the central processor, send data cached in itself to the central processor; and when it does not itself store the data corresponding to the first read request or subsequent read request, forward the first read request or subsequent read request to the memory controller.
5. The method according to claim 4, characterized in that, after the first read request or subsequent read request sent by the cache chip is received, the method further comprises:
reading the first data corresponding to the first read request from the memory, continuing to read the adjacent data adjacent to the first data address, and sending the first data or the adjacent data to the cache chip; or
reading the adjacent data adjacent to the first data address from the memory, and sending the first data or the adjacent data to the cache chip.
6. A memory controller, characterized in that it comprises: a data reading unit, configured to, after receiving a first read request issued by a central processor, read first data corresponding to the first read request from a memory and continue to read adjacent data adjacent to the first data address, wherein the sum of the data amounts of the first data and the adjacent data is the capacity of a cache line of the central processor;
an adjacent data caching unit, configured to cache the adjacent data; and
an adjacent data sending unit, configured to, upon receiving a subsequent read request that requests the adjacent data, send the cached adjacent data to the central processor.
7. The memory controller according to claim 6, characterized in that it further comprises: a cache detection unit, configured to detect, before the first data corresponding to the first read request is read from the memory and the adjacent data adjacent to the first data address is read, whether the first data has been cached, and to read the first data corresponding to the first read request from the memory when detecting that the first data is not cached.
8. The memory controller according to claim 6, characterized in that the data reading unit uses QPI (QuickPath Interconnect) protocol messages in processing the first read request and the subsequent read request.
9. The memory controller according to any one of claims 6 to 8, characterized in that it further comprises a cache chip, the cache chip being connected between the memory controller and the central processor and configured to: after receiving the first read request or a subsequent read request issued by the central processor, send data cached in itself to the central processor; and when it does not itself store the data corresponding to the first read request or subsequent read request, forward the first read request or subsequent read request to the memory controller.
10. The memory controller according to claim 9, characterized in that the data reading unit is further configured to, after receiving the first read request or subsequent read request forwarded by the cache chip, read the first data corresponding to the first read request from the memory and continue to read the adjacent data adjacent to the first data address, or read the adjacent data adjacent to the first data address;
and the adjacent data sending unit is further configured to send the first data and the adjacent data to the cache chip.
PCT/CN2011/078077 2011-08-05 2011-08-05 Data reading method and ddr controller WO2012109882A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201180001724.2A CN102378971B (en) 2011-08-05 2011-08-05 Method for reading data and memory controller
PCT/CN2011/078077 WO2012109882A1 (en) 2011-08-05 2011-08-05 Data reading method and ddr controller


Publications (1)

Publication Number Publication Date
WO2012109882A1 true WO2012109882A1 (en) 2012-08-23

Family

ID=45796243

Country Status (2)

Country Link
CN (1) CN102378971B (en)
WO (1) WO2012109882A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102378971B (en) * 2011-08-05 2014-03-12 华为技术有限公司 Method for reading data and memory controller
CN103064762B (en) * 2012-12-25 2015-12-02 华为技术有限公司 Heavily delete restoration methods and the device of Backup Data
EP2992438B1 (en) 2013-04-30 2019-08-28 Hewlett-Packard Enterprise Development LP Memory network
CN107589958B (en) * 2016-07-07 2020-08-21 瑞芯微电子股份有限公司 Multi-memory shared parallel data read-write system among multiple controllers and write-in and read-out method thereof
CN107274923A (en) * 2017-05-24 2017-10-20 记忆科技(深圳)有限公司 The method and solid state hard disc of order reading flow performance in a kind of raising solid state hard disc
CN109669897B (en) * 2017-10-13 2023-11-17 华为技术有限公司 Data transmission method and device
KR102482035B1 (en) * 2017-11-30 2022-12-28 에스케이하이닉스 주식회사 Memory controller, memory system and operating method thereof
CN109582508B (en) 2018-12-29 2023-12-26 西安紫光国芯半导体股份有限公司 Data backup and recovery method for NVDIMM, NVDIMM controller and NVDIMM
CN109901797A (en) * 2019-02-25 2019-06-18 深圳忆联信息系统有限公司 Data pre-head method, device, computer equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1499382A (en) * 2002-11-05 2004-05-26 华为技术有限公司 Method for implementing cache in high efficiency in redundancy array of inexpensive discs
CN1652091A (en) * 2004-02-07 2005-08-10 华为技术有限公司 Data preacquring method for use in data storage system
CN1858720A (en) * 2005-10-28 2006-11-08 中国人民解放军国防科学技术大学 Method realizing priority reading memory based on cache memory line shifting
CN101122888A (en) * 2006-08-09 2008-02-13 国际商业机器公司 Method and system for writing and reading application data
CN102378971A (en) * 2011-08-05 2012-03-14 华为技术有限公司 Method for reading data and memory controller

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009230374A (en) * 2008-03-21 2009-10-08 Fujitsu Ltd Information processor, program, and instruction sequence generation method

Also Published As

Publication number Publication date
CN102378971B (en) 2014-03-12
CN102378971A (en) 2012-03-14

Similar Documents

Publication Publication Date Title
WO2012109882A1 (en) Data reading method and ddr controller
US6378055B1 (en) Memory accessing and controlling method
JP4517237B2 (en) Memory hub with internal row caching and access method.
EP3028162B1 (en) Direct access to persistent memory of shared storage
US6272609B1 (en) Pipelined memory controller
CN101388824B (en) File reading method and system under sliced memory mode in cluster system
US20070079044A1 (en) Packet Combiner for a Packetized Bus with Dynamic Holdoff time
WO2011144175A1 (en) Data prefetching method, node and system for distributed hash table dht memory system
KR20170026116A (en) high performance transaction-based memory systems
WO2018232736A1 (en) Memory access technology and computer system
JP6287571B2 (en) Arithmetic processing device, information processing device, and control method of arithmetic processing device
KR20130121105A (en) Memory controllers, systems, and methods for applying page management policies based on stream transaction information
KR20060120272A (en) Scalable bus structure
US20050253858A1 (en) Memory control system and method in which prefetch buffers are assigned uniquely to multiple burst streams
TWI281609B (en) Apparatus, method, and system for reducing latency of memory devices
WO2014206230A1 (en) Memory access method and memory controller
WO2018125400A1 (en) System memory having point-to-point link that transports compressed traffic
EP3289462B1 (en) Multiple read and write port memory
WO2011157067A1 (en) Method and apparatus for reading data with synchronous dynamic random access memory controller
US7076627B2 (en) Memory control for multiple read requests
WO2020038466A1 (en) Data pre-fetching method and device
WO2014206220A1 (en) Data writing method and memory system
WO2013185660A1 (en) Instruction storage device of network processor and instruction storage method for same
JP5911548B1 (en) Apparatus, method, and computer program for scheduling access request to shared memory
JP2007018222A (en) Memory access control circuit

Legal Events

Code	Title	Description
WWE	WIPO information: entry into national phase
	Ref document number: 201180001724.2
	Country of ref document: CN
121	EP: the EPO has been informed by WIPO that EP was designated in this application
	Ref document number: 11858619
	Country of ref document: EP
	Kind code of ref document: A1
NENP	Non-entry into the national phase
	Ref country code: DE
122	EP: PCT application non-entry in European phase
	Ref document number: 11858619
	Country of ref document: EP
	Kind code of ref document: A1