WO2017173844A1 - Method, Device and System for Directory Reading - Google Patents


Info

Publication number
WO2017173844A1
Authority
WO
WIPO (PCT)
Prior art keywords
request information
directory
read
cache block
cache
Prior art date
Application number
PCT/CN2016/109580
Other languages
English (en)
French (fr)
Inventor
周玉龙
童元满
李仁刚
Original Assignee
浪潮电子信息产业股份有限公司
Priority date
Filing date
Publication date
Application filed by 浪潮电子信息产业股份有限公司
Publication of WO2017173844A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806 Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/084 Multiuser, multiprocessor or multiprocessing cache systems with a shared cache

Definitions

  • The present invention relates to the field of communications technologies, and in particular to a method, device, and system for directory reading.
  • As business volume and complexity grow, users place ever higher demands on server performance, and to keep user services running normally some servers include multiple processor nodes. In such a server, each processor node includes at least one CPU, each CPU is equipped with a certain amount of memory, and each processor node further includes at least one node controller (NC); the CPUs in different processor nodes are interconnected through the node controllers so that they can access each other's memory.
  • To increase the speed of reading data in memory, each memory caches a directory cache containing at least two directories corresponding to the data in that memory. When data in the memory is read, the storage location of the required data is obtained by reading the directory cache, and each read of the in-memory directory cache returns at least two directories.
  • At present, for any processor node, when a CPU in another processor node reads the directory cache in that node's memory, every directory-read request corresponds to one read of the directory cache; when multiple CPUs in multiple processor nodes send directory-read requests to the same memory at the same time, the occupation of the memory bus bandwidth increases and the delay of the directory-read operation becomes large.
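  • To make the data layout described above concrete, the following is a minimal illustrative sketch in Python. The class names, the 32-directory block size, and the integer storage locations are assumptions for illustration only; the patent targets hardware (node controllers and CPUs) and does not prescribe these details.

```python
from dataclasses import dataclass
from typing import Dict, List

DIRECTORIES_PER_BLOCK = 32  # assumed block size, taken from the 32-directory example later in the text


@dataclass
class DirectoryEntry:
    """One directory: records where a piece of data is stored."""
    directory_id: int
    storage_location: int


@dataclass
class CacheBlock:
    """A cache block read from the directory cache; it holds at least two directories."""
    block_id: int
    entries: Dict[int, DirectoryEntry]  # keyed by directory_id


class DirectoryCache:
    """Baseline behaviour described above: every directory-read request
    triggers one read of the directory cache, which returns a whole block."""

    def __init__(self, blocks: List[CacheBlock]) -> None:
        self._blocks = {b.block_id: b for b in blocks}

    def read_block_for(self, directory_id: int) -> CacheBlock:
        # Directories are assumed to be numbered from 1 and packed
        # DIRECTORIES_PER_BLOCK to a block.
        return self._blocks[(directory_id - 1) // DIRECTORIES_PER_BLOCK]
```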
  • Embodiments of the present invention provide a method, device, and system for directory reading, which can reduce the delay of the directory-read operation.
  • An embodiment of the present invention provides a method for directory reading, including: receiving first request information for reading the directory cache; determining a first directory to be read by the first request information; and judging whether there is at least one second request information being executed whose second directory to be read is located on the same cache block as the first directory.
  • If yes, the first request information is cached into a preset buffer area, and after executing the second request information has fetched the second cache block including the first directory and the second directory, at least two directories, including the first directory and the second directory, are sent to the corresponding request-information senders according to the second cache block and the request information in the buffer area.
  • Sending at least two directories, including the first directory and the second directory, to the corresponding request-information senders according to the second cache block and the request information in the buffer area includes: obtaining the second directory from the second cache block and sending it to the request-information sender corresponding to the second request information; and, for each piece of request information in the buffer area, judging whether the directory it is to read is in the second cache block and, if so, sending that directory to the request-information sender corresponding to that request information.
  • If there is no second request information being executed, the first request information is executed directly, the first cache block including the first directory is obtained from the directory cache, and at least one directory, including the first directory, is sent to the corresponding request-information senders according to the first cache block and the request information in the buffer area.
  • Sending at least one directory, including the first directory, to the corresponding request-information senders according to the first cache block and the request information in the buffer area includes: obtaining the first directory from the first cache block and sending it to the request-information sender corresponding to the first request information; and, for each piece of request information in the buffer area, judging whether the directory it is to read is in the first cache block and, if so, sending that directory to the request-information sender corresponding to that request information.
  • For each piece of request information in the buffer area, after the directory it is to read has been sent to the request-information sender corresponding to that request information, the request information is deleted from the buffer area.
  • An embodiment of the present invention further provides a device for directory reading, including: a receiving unit, a determining unit, a judging unit, and an executing unit;
  • the receiving unit is configured to receive first request information for reading a directory cache;
  • the determining unit is configured to determine the first directory to be read by the first request information received by the receiving unit;
  • the judging unit is configured to judge whether there is at least one second request information being executed, where the second directory to be read by the second request information is located on the same cache block as the first directory determined by the determining unit;
  • the executing unit is configured to, according to the judgment result of the judging unit, if yes, cache the first request information into a preset buffer area and, after executing the second request information has fetched the second cache block including the first directory and the second directory, send at least two directories, including the first directory and the second directory, to the corresponding request-information senders according to the second cache block and the request information in the buffer area.
  • The executing unit includes: a sending subunit and a judging subunit;
  • the sending subunit is configured to obtain the second directory from the second cache block and send the second directory to the request-information sender corresponding to the second request information;
  • the judging subunit is configured to judge, for each piece of request information in the buffer area, whether the directory to be read by that request information is in the second cache block;
  • the sending subunit is further configured to, according to the judgment result of the judging subunit, if yes, send the directory to be read by that request information to the request-information sender corresponding to that request information.
  • The executing unit is further configured to, according to the judgment result of the judging unit, if no, execute the first request information directly, obtain the first cache block including the first directory from the directory cache, and send at least one directory, including the first directory, to the corresponding request-information senders according to the first cache block and the request information in the buffer area.
  • In that case, the sending subunit is further configured to obtain the first directory from the first cache block and send it to the request-information sender corresponding to the first request information; the judging subunit is further configured to judge, for each piece of request information in the buffer area, whether the directory to be read by that request information is in the first cache block; and the sending subunit is further configured to, if so, send that directory to the request-information sender corresponding to that request information.
  • An embodiment of the present invention further provides a system for directory reading, including: a request-information responder, at least one request-information sender, and any directory reading device provided by the embodiments of the present invention;
  • the request-information responder is configured to store the directory cache;
  • the request-information sender is configured to send request information to the receiving unit in the directory reading device and to receive the directory sent by the executing unit in the directory reading device.
  • Embodiments of the present invention provide a method, device, and system for directory reading. Because directories are read from the directory cache in the form of cache blocks, each cache block containing at least two directories, if the first directory to be read by newly received request information is on the same cache block as the second directory to be fetched by second request information currently being executed, the second cache block obtained by executing the second request information already includes the first directory, and the first request information does not need to be executed again to read the first directory; one read of the directory cache obtains the directories to be read by multiple pieces of request information, which reduces the bandwidth occupied when reading the directory cache and thus reduces the delay of the directory-read operation.
  • FIG. 1 is a flowchart of a method for reading a directory according to an embodiment of the present invention
  • FIG. 2 is a flowchart of a method for processing request information according to an embodiment of the present invention
  • FIG. 3 is a flowchart of a method for processing a cache block according to an embodiment of the present invention
  • FIG. 4 is a schematic diagram of a device for reading a directory according to an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of a device for reading a directory according to another embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a system for directory reading according to an embodiment of the present invention.
  • an embodiment of the present invention provides a method for directory reading, which may include the following steps:
  • Step 101: receiving first request information for reading a directory cache.
  • Step 102: determining a first directory to be read by the first request information.
  • Step 103: judging whether there is at least one second request information being executed, where the second directory to be read by the second request information is located on the same cache block as the first directory; if yes, performing step 104.
  • Step 104: caching the first request information into a preset buffer area and, after executing the second request information has fetched the second cache block including the first directory and the second directory, sending at least two directories, including the first directory and the second directory, to the corresponding request-information senders according to the second cache block and the request information in the buffer area.
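  • The steps above hinge on deciding whether two directories lie on the same cache block. The patent does not specify how this is determined; assuming directories are numbered consecutively from 1 and packed 32 to a block, as in the example embodiments below, one plausible test is the following sketch (the block size and numbering rule are assumptions, not part of the patent text):

```python
DIRECTORIES_PER_BLOCK = 32  # assumption drawn from the 32-directory example below


def same_cache_block(directory_a: int, directory_b: int,
                     per_block: int = DIRECTORIES_PER_BLOCK) -> bool:
    """True if both directories fall inside the same directory-cache block."""
    return (directory_a - 1) // per_block == (directory_b - 1) // per_block


# With this numbering, directories 1 and 2 share a block (so a new request for
# directory 1 would be buffered in step 104), while directories 1 and 50 do not.
assert same_cache_block(1, 2)
assert not same_cache_block(1, 50)
```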
  • An embodiment of the present invention provides a directory reading method. Because directories are read from the directory cache in the form of cache blocks, each cache block containing at least two directories, if the first directory to be read by the newly received first request information is on the same cache block as the second directory to be fetched by the second request information currently being executed, the second cache block obtained by executing the second request information already includes the first directory, and the first request information does not need to be executed again to read it; one read of the directory cache obtains the directories to be read by multiple pieces of request information, which reduces the bandwidth occupied when reading the directory cache and thus reduces the delay of the directory-read operation.
  • In an embodiment of the present invention, after executing the second request information has fetched the second cache block including the first directory and the second directory, the second directory is obtained from the second cache block and sent to the request-information sender corresponding to the second request information; in addition, it is judged in turn whether the directory to be read by each piece of request information in the buffer area is on the second cache block, and if so, that directory is sent to the corresponding request-information sender.
  • In this way, each time a piece of request information is executed, the other pieces of request information that this execution can replace are determined while the completion of the original request is guaranteed, so that the utilization of each execution is maximized, multiple pieces of request information are merged into one execution, the bandwidth occupied by directory cache accesses is reduced, the read speed is increased, and the delay of the directory-read operation is therefore reduced.
  • In an embodiment of the present invention, if the directory to be read by every piece of request information being executed is not on the same cache block as the first directory, the first request information is executed directly: the first cache block including the first directory is read from the directory cache, the first directory is sent to the request-information sender corresponding to the first request information, and any directory that is included in the first cache block and is to be read by request information in the buffer area is sent to the corresponding request-information sender.
  • In this way, when the directory to be read by the newly received request information is not on the same cache block as the directory to be read by any request information being executed, the newly received request information is executed directly, which guarantees that every piece of request information is executed in time and that the directory reading method remains feasible.
  • In an embodiment of the present invention, after the first request information has been executed directly and the first cache block including the first directory has been obtained, besides sending the first directory to the request-information sender corresponding to the first request information, it is also judged in turn whether the directory to be read by each piece of request information in the buffer area is in the first cache block, and if so, that directory is sent to the corresponding request-information sender.
  • In this way, every time the directory cache is read, the cache block obtained is matched against the request information in the buffer area, so that each read of the directory cache is used to the maximum extent; this further reduces the bandwidth occupied when reading the directory cache, further reduces the delay of the directory-read operation, and improves the efficiency of directory reading.
  • In an embodiment of the present invention, for each piece of request information in the buffer area, once the request information has been matched against a fetched cache block and the directory it is to read has been sent to its sender, the request information is deleted from the buffer area; this avoids matching the same request information repeatedly and re-sending the directory it is to read, which would waste system resources, and improves the soundness of the directory reading method.
  • The directory reading method provided by the embodiments of the present invention has two aspects: a method for processing request information and a method for processing cache blocks. To make the technical solution of the embodiments of the present invention clearer, these two aspects are described separately below.
  • As shown in FIG. 2, an embodiment of the present invention provides a method for processing request information, including:
  • Step 201: receiving request information for reading the directory cache.
  • In an embodiment of the present invention, in a server including multiple processor nodes, when a CPU in one processor node needs to read data in memory managed by another processor node, it first needs to read the directory cache corresponding to the data in that memory; the request information for reading the directory cache sent by each CPU is received in real time.
  • For example, a server includes four processor nodes, processor node 1 to processor node 4. When CPU1 in processor node 1 wants to read data 1 in memory 1 on processor node 2, CPU1 first needs to read the directory cache of memory 1 to obtain the address of data 1, so CPU1 sends request information 1 for reading the directory cache stored in memory 1.
  • Step 202: determining the target directory to be read by the request information.
  • In an embodiment of the present invention, after the request information sent by a CPU is received, the request information is parsed to determine the target directory it is to read.
  • For example, after request information 1 sent by CPU1 is received, request information 1 is parsed and the target directory to be read by request information 1 is determined to be directory 1, which corresponds to data 1.
  • Step 203: judging whether the directory to be read by at least one piece of request information being executed is located on the same cache block as the target directory; if yes, performing step 206, otherwise performing step 204.
  • In an embodiment of the present invention, after the target directory to be read by the newly received request information has been obtained, the newly received request information is matched against each piece of request information currently being executed, and it is judged whether the directory to be read by at least one of them is located on the same cache block as the target directory. If so, the cache block being read by one of the executing requests already includes the target directory, the newly received request information does not need to be executed again, and step 206 is performed accordingly; if not, none of the cache blocks being read by the currently executing requests includes the target directory, the target directory needs to be read separately, and step 204 is performed accordingly.
  • For example, after the directory to be read by request information 1 is determined to be directory 1, the pieces of request information currently being executed are obtained: request information 2, request information 3, and request information 4. It is judged in turn whether the directory each of them is to read is on the same cache block as directory 1. If none of the three is, step 204 is performed for request information 1; if directory 2, to be read by request information 2, is located on the same cache block as directory 1, step 206 is performed for request information 1.
  • Step 204: executing the request information directly.
  • In an embodiment of the present invention, when it is judged that the directory to be read by every piece of request information being executed is not on the same cache block as the target directory, the request information is executed directly and the directory cache is read.
  • For example, after it is judged that the directories to be read by request information 2, request information 3, and request information 4 are all not on the same cache block as directory 1, request information 1 is executed and the directory cache stored in memory 1 is read.
  • Step 205: obtaining the cache block including the target directory, and ending the current flow.
  • In an embodiment of the present invention, when the newly received request information is executed, the cache block including the directory to be read by that request information is read from the directory cache.
  • For example, cache block 1, which includes directory 1, is read from the directory cache stored in memory 1; besides directory 1, cache block 1 also includes directory 2 to directory 32.
  • Step 206: caching the request information into the buffer area.
  • In an embodiment of the present invention, after it is judged that the directory to be read by one piece of request information being executed is located on the same cache block as the target directory, the request information corresponding to the target directory is cached into the buffer area. For example, after it is judged that directory 2, to be read by the executing request information 2, is located on the same cache block as directory 1, request information 1 is cached into the buffer area.
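  • A minimal sketch of this request-processing flow (steps 201 to 206) is given below in Python. It is illustrative only: the Request fields, the block-numbering rule, and the print placeholder for the directory-cache read are assumptions rather than details taken from the patent, and a real implementation would live in node-controller or CPU hardware rather than software.

```python
from collections import deque
from dataclasses import dataclass, field
from typing import Deque, Set

DIRECTORIES_PER_BLOCK = 32  # assumed block size, matching the 32-directory example


def block_of(directory_id: int) -> int:
    """Directories are assumed to be numbered from 1 and packed 32 per block."""
    return (directory_id - 1) // DIRECTORIES_PER_BLOCK


@dataclass
class Request:
    sender: str        # e.g. "node1/CPU1", the request-information sender
    directory_id: int  # target directory to be read


@dataclass
class RequestProcessor:
    in_flight_blocks: Set[int] = field(default_factory=set)     # blocks currently being fetched
    buffer_area: Deque[Request] = field(default_factory=deque)  # the preset buffer area (step 206)

    def on_request(self, req: Request) -> None:
        """Steps 201-206: buffer the request if an in-flight read already covers its block."""
        target_block = block_of(req.directory_id)        # step 202: determine the target directory's block
        if target_block in self.in_flight_blocks:         # step 203: same block as an executing request?
            self.buffer_area.append(req)                  # step 206: cache it in the buffer area
        else:
            self.in_flight_blocks.add(target_block)       # step 204: execute the request directly
            self.read_directory_cache(target_block)       # step 205: fetch the block (completes later)

    def read_directory_cache(self, block_id: int) -> None:
        # Placeholder for the actual read of the in-memory directory cache.
        print(f"reading cache block {block_id} from the directory cache")
```

  • In the worked example above, request information 1 (directory 1) arriving while request information 2 (directory 2, same block) is in flight would land in buffer_area, whereas it would trigger its own cache-block read if only requests 3 and 4, targeting other blocks, were executing.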
  • As shown in FIG. 3, an embodiment of the present invention provides a method for processing a cache block, including:
  • Step 301: parsing the cache block to obtain the directories it includes.
  • In an embodiment of the present invention, after a piece of request information has been executed and the cache block including the directory to be read by that request information has been read from the directory cache, the fetched cache block is parsed to obtain the directories included in the cache block.
  • For example, as in the embodiment shown in FIG. 2, when request information 1 is executed, cache block 1 is read from the directory cache in memory 1; cache block 1 is parsed, and the 32 directories it includes, directory 1 to directory 32, are obtained.
  • Step 302: sending the directory to be read by the request information whose execution fetched the cache block to the request-information sender corresponding to that request information.
  • In an embodiment of the present invention, after the directories included in the cache block have been obtained, the directory to be read by the request information that was executed to fetch the cache block is sent to the request-information sender corresponding to that request information, completing that request's task of reading the directory cache.
  • For example, after the 32 directories included in cache block 1 are obtained, directory 1, to be read by request information 1 whose execution fetched cache block 1, is sent to the sender of request information 1, that is, to CPU1 in processor node 1, and CPU1 thereby obtains the storage address of data 1.
  • Step 303: judging in turn whether the directory to be read by each piece of request information in the buffer area is on the cache block; if yes, performing step 304, otherwise ending the current flow.
  • For example, as in the embodiment shown in FIG. 2, after request information 1 has been cached into the buffer area, request information 2 is executed and cache block 2 is obtained, from which the 32 directories, directory 1 to directory 32, are obtained. The buffer area contains two pieces of request information, request information 1 and request information 5, and it is judged in turn whether the directories they are to read are among directory 1 to directory 32. The directory to be read by request information 1 is directory 1, which is the same as directory 1 in cache block 2, so step 304 is performed for request information 1; the directory to be read by request information 5 is directory 50, which is not among the 32 directories included in cache block 2, so the current flow ends for request information 5.
  • Step 304: sending the directory to be read to the corresponding request-information sender.
  • In an embodiment of the present invention, for any piece of request information in the buffer area, when the directories obtained in step 301 include the directory to be read by that request information, that directory is sent to the request-information sender corresponding to that request information.
  • For example, after it is judged that the 32 directories included in cache block 2 include directory 1, to be read by request information 1 in the buffer area, directory 1 is sent to the request-information sender corresponding to request information 1, that is, directory 1 is sent to CPU1 in processor node 1.
  • Step 305: deleting the request information corresponding to the sent directory from the buffer area.
  • In an embodiment of the present invention, for any piece of request information in the buffer area, after the directory it is to read has been sent to its sender, the request information is deleted from the buffer area.
  • It should be noted that the embodiment shown in FIG. 3 only describes the case in which request information 1 is cached into the buffer area. The case in which request information 1 is executed directly is handled similarly to request information 2: request information 1 is executed and cache block 1 is obtained, the directories are obtained from cache block 1, directory 1 to be read by request information 1 is sent to CPU1 in processor node 1, and it is then judged whether the directory to be read by each piece of request information in the buffer area is among the directories included in cache block 1; if so, the directory to be read is sent to the corresponding request-information sender, and the request information corresponding to the sent directory is deleted from the buffer area.
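  • The complementary cache-block side (steps 301 to 305) can be sketched as follows. Again this is only an illustrative model: the reply tuple shape, the example storage locations, and the in-place list update are choices made for brevity here, not details stated in the patent.

```python
from typing import Dict, List, Tuple

Reply = Tuple[str, int, int]  # (sender, directory_id, storage_location)


def on_block_returned(block_entries: Dict[int, int],    # directory_id -> storage location
                      issuing_request: Tuple[str, int],  # (sender, directory_id) that triggered the read
                      buffer_area: List[Tuple[str, int]]) -> List[Reply]:
    """Steps 301-305: answer the issuing request, then drain matching buffered requests.

    Matched buffered requests are removed from buffer_area in place (step 305);
    unmatched ones stay buffered for a later cache block.
    """
    replies: List[Reply] = []
    sender, wanted = issuing_request
    replies.append((sender, wanted, block_entries[wanted]))               # step 302

    still_buffered: List[Tuple[str, int]] = []
    for buf_sender, buf_dir in buffer_area:                               # step 303
        if buf_dir in block_entries:
            replies.append((buf_sender, buf_dir, block_entries[buf_dir]))  # step 304
        else:
            still_buffered.append((buf_sender, buf_dir))
    buffer_area[:] = still_buffered                                       # step 305
    return replies


# Worked example mirroring the embodiment: cache block 2 holds directories 1..32,
# request information 2 (directory 2) triggered the read, and the buffer area holds
# request information 1 (directory 1) and request information 5 (directory 50).
block2 = {d: 0x1000 + d for d in range(1, 33)}
buffer_area = [("node1/CPU1", 1), ("node3/CPU1", 50)]
print(on_block_returned(block2, ("node1/CPU2", 2), buffer_area))
print(buffer_area)  # the request for directory 50 stays buffered
```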
  • It should further be noted that the embodiments of the present invention divide the whole directory reading process into a method for processing request information and a method for processing cache blocks only to explain the process of directory reading more clearly; in an actual service implementation, there is no strict order between the steps of the embodiments shown in FIG. 2 and FIG. 3.
  • An embodiment of the present invention provides a device for directory reading, which can be implemented in software. As shown in FIG. 4, as a device in the logical sense it is formed by the CPU of the equipment it resides on reading the corresponding computer program instructions from non-volatile memory into memory and running them.
  • The device includes: a receiving unit 401, a determining unit 402, a judging unit 403, and an executing unit 404;
  • the receiving unit 401 is configured to receive first request information for reading a directory cache;
  • the determining unit 402 is configured to determine the first directory to be read by the first request information received by the receiving unit 401;
  • the judging unit 403 is configured to judge whether there is at least one second request information being executed, where the second directory to be read by the second request information is located on the same cache block as the first directory determined by the determining unit 402;
  • the executing unit 404 is configured to, according to the judgment result of the judging unit 403, if yes, cache the first request information into a preset buffer area and, after executing the second request information has fetched the second cache block including the first directory and the second directory, send at least two directories, including the first directory and the second directory, to the corresponding request-information senders according to the second cache block and the request information in the buffer area.
  • In an embodiment of the present invention, the executing unit 404 includes: a sending subunit 4041 and a judging subunit 4042;
  • the sending subunit 4041 is configured to obtain the second directory from the second cache block and send the second directory to the request-information sender corresponding to the second request information;
  • the judging subunit 4042 is configured to judge, for each piece of request information in the buffer area, whether the directory to be read by that request information is in the second cache block;
  • the sending subunit 4041 is further configured to, according to the judgment result of the judging subunit 4042, if yes, send the directory to be read by that request information to the request-information sender corresponding to that request information.
  • The executing unit 404 is further configured to, according to the judgment result of the judging unit 403, if no, execute the first request information directly, obtain the first cache block including the first directory from the directory cache, and send at least one directory, including the first directory, to the corresponding request-information senders according to the first cache block and the request information in the buffer area.
  • In that case, the sending subunit 4041 is further configured to obtain the first directory from the first cache block and send it to the request-information sender corresponding to the first request information; the judging subunit 4042 is further configured to judge, for each piece of request information in the buffer area, whether the directory to be read by that request information is in the first cache block; and the sending subunit 4041 is further configured to, according to the judgment result of the judging subunit 4042, if yes, send that directory to the request-information sender corresponding to that request information.
  • As shown in FIG. 6, an embodiment of the present invention provides a system for directory reading, including: a request-information responder 601, at least one request-information sender 603, and any directory reading device 602 provided by the above embodiments;
  • the request-information responder 601 is configured to store the directory cache;
  • the request-information sender 603 is configured to send request information to the receiving unit in the directory reading device 602 and to receive the directory sent by the executing unit in the directory reading device 602.
  • In an embodiment of the present invention, the directory reading device 602 may be arranged on a node control chip connecting different processor nodes, so as to achieve efficient directory cache reading between different processor nodes, or it may be arranged inside a CPU, so that the CPU can read the in-memory directory cache efficiently.
  • The method, device, and system for directory reading provided by the embodiments of the present invention have at least the following beneficial effects:
  • 1. Because directories are read from the directory cache in the form of cache blocks, each cache block containing at least two directories, if the first directory to be read by the newly received first request information is on the same cache block as the second directory to be fetched by the second request information currently being executed, the second cache block obtained by executing the second request information already includes the first directory; the first request information does not need to be executed again to read the first directory, one read of the directory cache obtains the directories to be read by multiple pieces of request information, the bandwidth occupied when reading the directory cache is reduced, and the delay of the directory-read operation is therefore reduced.
  • 2. If the directory to be read by every piece of request information being executed is not on the same cache block as the directory to be read by the newly received request information, the newly received request information is executed directly and its directory is obtained from the directory cache, which guarantees that the request information sent by every request-information sender is executed and that the directory reading method remains feasible.
  • 3. For request information that is executed directly, once it has fetched a cache block, it is likewise judged whether the directory to be read by each piece of request information in the buffer area is in that cache block, and the directory to be read by each successfully matched piece of request information is sent to the corresponding request-information sender; every cache block is thus used to the maximum extent, which further lowers the bandwidth for reading the directory cache and raises the rate at which the directory cache is read.
  • 4. For any piece of request information in the buffer area, after the directory it is to read has been sent to the request-information sender corresponding to that request information, the request information is deleted from the buffer area; this avoids repeatedly matching completed request information and re-sending the directory it is to read, which would waste system resources, and improves the soundness of the directory reading method.

Abstract

A method, device, and system for directory reading. The method includes: receiving first request information for reading a directory cache (101); determining a first directory to be read by the first request information (102); judging whether there is at least one second request information being executed, where the second directory to be read by the second request information is located on the same cache block as the first directory (103); if yes, caching the first request information into a preset buffer area and, after executing the second request information has fetched the second cache block including the first directory and the second directory, sending at least two directories, including the first directory and the second directory, to the corresponding request-information senders according to the second cache block and the request information in the buffer area (104). The method can reduce the delay of the directory-read operation.

Description

Specification
Title of Invention: Method, Device and System for Directory Reading
Technical Field
[0001] The present invention relates to the field of communications technologies, and in particular to a method, device, and system for directory reading.
Background Art
[0002] As business volume and business complexity increase, users place ever higher demands on server performance. To keep user services running normally, some servers include multiple processor nodes. In a server including multiple processor nodes, each processor node includes at least one CPU, each CPU is equipped with a certain amount of memory, and each processor node further includes at least one node controller (NC); the CPUs in different processor nodes are interconnected through the node controllers so that they can access each other's memory. To increase the speed of reading data in memory, each memory caches a directory cache containing at least two directories corresponding to the data in that memory; when data in the memory is read, the storage location of the required data is obtained by reading the directory cache, and each read of the in-memory directory cache returns at least two directories.
[0003] Currently, for any processor node, when a CPU in another processor node reads the directory cache in that node's memory, each directory-read request corresponds to one read of the directory cache.
[0004] With this prior-art way of reading the in-memory directory cache, every directory-read request requires one read of the in-memory directory cache. When multiple CPUs in multiple processor nodes send directory-read requests to the same memory at the same time, the occupation of the memory bus bandwidth increases, causing a large delay in the directory-read operation.
Technical Problem
[0005] Embodiments of the present invention provide a method, device, and system for directory reading, which can reduce the delay of the directory-read operation.
Solution to Problem
Technical Solution
[0006] An embodiment of the present invention provides a method for directory reading, including: [0007] receiving first request information for reading a directory cache;
[0008] determining a first directory to be read by the first request information;
[0009] judging whether there is at least one second request information being executed, where the second directory to be read by the second request information is located on the same cache block as the first directory;
[0010] if yes, caching the first request information into a preset buffer area and, after executing the second request information has fetched the second cache block including the first directory and the second directory, sending at least two directories, including the first directory and the second directory, to the corresponding request-information senders according to the second cache block and the request information in the buffer area.
[0011] Preferably, sending at least two directories, including the first directory and the second directory, to the corresponding request-information senders according to the second cache block and the request information in the buffer area includes:
[0012] obtaining the second directory from the second cache block and sending the second directory to the request-information sender corresponding to the second request information;
[0013] for each piece of request information in the buffer area, judging whether the directory to be read by that request information is in the second cache block, and if yes, sending the directory to be read by that request information to the request-information sender corresponding to that request information.
[0014] Preferably, if there is no second request information being executed, the first request information is executed directly, the first cache block including the first directory is obtained from the directory cache, and at least one directory, including the first directory, is sent to the corresponding request-information senders according to the first cache block and the request information in the buffer area.
[0015] Preferably, sending at least one directory, including the first directory, to the corresponding request-information senders according to the first cache block and the request information in the buffer area includes:
[0016] obtaining the first directory from the first cache block and sending the first directory to the request-information sender corresponding to the first request information;
[0017] for each piece of request information in the buffer area, judging whether the directory to be read by that request information is in the first cache block, and if yes, sending the directory to be read by that request information to the request-information sender corresponding to that request information.
[0018] Preferably, for each piece of request information in the buffer area, after the directory to be read by that request information has been sent to the request-information sender corresponding to that request information, that request information is deleted from the buffer area.
[0019] An embodiment of the present invention further provides a device for directory reading, including: a receiving unit, a determining unit, a judging unit, and an executing unit;
[0020] the receiving unit is configured to receive first request information for reading a directory cache;
[0021] the determining unit is configured to determine the first directory to be read by the first request information received by the receiving unit;
[0022] the judging unit is configured to judge whether there is at least one second request information being executed, where the second directory to be read by the second request information is located on the same cache block as the first directory determined by the determining unit;
[0023] the executing unit is configured to, according to the judgment result of the judging unit, if yes, cache the first request information into a preset buffer area and, after executing the second request information has fetched the second cache block including the first directory and the second directory, send at least two directories, including the first directory and the second directory, to the corresponding request-information senders according to the second cache block and the request information in the buffer area.
[0024] Preferably, the executing unit includes: a sending subunit and a judging subunit;
[0025] the sending subunit is configured to obtain the second directory from the second cache block and send the second directory to the request-information sender corresponding to the second request information;
[0026] the judging subunit is configured to judge, for each piece of request information in the buffer area, whether the directory to be read by that request information is in the second cache block;
[0027] the sending subunit is further configured to, according to the judgment result of the judging subunit, if yes, send the directory to be read by that request information to the request-information sender corresponding to that request information.
[0028] Preferably, the executing unit is further configured to, according to the judgment result of the judging unit, if no, execute the first request information directly, obtain the first cache block including the first directory from the directory cache, and send at least one directory, including the first directory, to the corresponding request-information senders according to the first cache block and the request information in the buffer area.
[0029] Preferably, the executing unit includes: a sending subunit and a judging subunit;
[0030] the sending subunit is configured to obtain the first directory from the first cache block and send the first directory to the request-information sender corresponding to the first request information;
[0031] the judging subunit is configured to judge, for each piece of request information in the buffer area, whether the directory to be read by that request information is in the first cache block;
[0032] the sending subunit is further configured to, according to the judgment result of the judging subunit, if yes, send the directory to be read by that request information to the request-information sender corresponding to that request information.
[0033] An embodiment of the present invention further provides a system for directory reading, including: a request-information responder, at least one request-information sender, and any directory reading device provided by the embodiments of the present invention;
[0034] the request-information responder is configured to store the directory cache;
[0035] the request-information sender is configured to send request information to the receiving unit in the directory reading device and to receive the directory sent by the executing unit in the directory reading device.
Advantageous Effects of Invention
Advantageous Effects
[0036] Embodiments of the present invention provide a method, device, and system for directory reading. Because directories are read from the directory cache in the form of cache blocks, each cache block containing at least two directories, if the first directory to be read by newly received first request information is on the same cache block as the second directory to be fetched by second request information currently being executed, the second cache block obtained by executing the second request information already includes the first directory, and the first request information does not need to be executed again to read the first directory; one read of the directory cache obtains the directories to be read by multiple pieces of request information, which reduces the bandwidth occupied when reading the directory cache and thus reduces the delay of the directory-read operation.
Brief Description of Drawings
Description of Drawings
[0037] To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and a person of ordinary skill in the art may obtain other drawings from them without creative effort.
[0038] FIG. 1 is a flowchart of a method for directory reading according to an embodiment of the present invention;
[0039] FIG. 2 is a flowchart of a method for processing request information according to an embodiment of the present invention;
[0040] FIG. 3 is a flowchart of a method for processing a cache block according to an embodiment of the present invention;
[0041] FIG. 4 is a schematic diagram of a device for directory reading according to an embodiment of the present invention;
[0042] FIG. 5 is a schematic diagram of a device for directory reading according to another embodiment of the present invention;
[0043] FIG. 6 is a schematic diagram of a system for directory reading according to an embodiment of the present invention.
Embodiments of the Invention
[0044] To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
[0045] As shown in FIG. 1, an embodiment of the present invention provides a method for directory reading, which may include the following steps:
[0046] Step 101: receiving first request information for reading a directory cache;
[0047] Step 102: determining a first directory to be read by the first request information;
[0048] Step 103: judging whether there is at least one second request information being executed, where the second directory to be read by the second request information is located on the same cache block as the first directory; if yes, performing step 104;
[0049] Step 104: caching the first request information into a preset buffer area and, after executing the second request information has fetched the second cache block including the first directory and the second directory, sending at least two directories, including the first directory and the second directory, to the corresponding request-information senders according to the second cache block and the request information in the buffer area.
[0050] An embodiment of the present invention provides a method for directory reading. Because directories are read from the directory cache in the form of cache blocks, each cache block containing at least two directories, if the first directory to be read by the newly received first request information is on the same cache block as the second directory to be fetched by the second request information currently being executed, the second cache block obtained by executing the second request information already includes the first directory, and the first request information does not need to be executed again to read the first directory; one read of the directory cache obtains the directories to be read by multiple pieces of request information, which reduces the bandwidth occupied when reading the directory cache and thus reduces the delay of the directory-read operation.
[0051] In an embodiment of the present invention, after executing the second request information has fetched the second cache block including the first directory and the second directory, on the one hand the second directory is obtained from the second cache block and sent to the request-information sender corresponding to the second request information; on the other hand, it is judged in turn whether the directory to be read by each piece of request information in the buffer area is on the second cache block, and if so, that directory is sent to the corresponding request-information sender. In this way, each time a piece of request information is executed, the other pieces of request information that this execution can replace are determined while the completion of the original request is guaranteed, so that the utilization of each execution is maximized, multiple pieces of request information are merged into one execution, the bandwidth occupied by directory cache accesses is reduced, the read speed is increased, and the delay of the directory-read operation is therefore reduced.
[0052] In an embodiment of the present invention, if the directory to be read by every piece of request information being executed is not on the same cache block as the first directory, the first request information is executed directly: the first cache block including the first directory is read from the directory cache, the first directory is sent to the request-information sender corresponding to the first request information, and any directory that is included in the first cache block and is to be read by request information in the buffer area is sent to the corresponding request-information sender. In this way, when the directory to be read by newly received request information is not on the same cache block as the directory to be read by any request information being executed, the newly received request information is executed directly, which guarantees that every piece of request information is executed in time and that the directory reading method remains feasible.
[0053] In an embodiment of the present invention, after the first request information has been executed directly and the first cache block including the first directory has been obtained, besides sending the first directory to the request-information sender corresponding to the first request information, it is also judged in turn whether the directory to be read by each piece of request information in the buffer area is in the first cache block, and if so, that directory is sent to the corresponding request-information sender. In this way, every time the directory cache is read, the cache block obtained is matched against the request information in the buffer area, so that each read of the directory cache is used to the maximum extent; this further reduces the bandwidth occupied when reading the directory cache, further reduces the delay of the directory-read operation, and improves the efficiency of directory reading.
[0054] In an embodiment of the present invention, for each piece of request information in the buffer area, when the request information has been successfully matched against a fetched cache block and the directory it is to read has been sent to the corresponding request-information sender, the request information is deleted from the buffer area, so as to avoid matching the same request information repeatedly and re-sending the directory it is to read, which would waste system resources; this improves the soundness of the directory reading method.
[0055] To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and specific embodiments.
[0056] The directory reading method provided by the embodiments of the present invention has two aspects: a method for processing request information and a method for processing cache blocks. To make the technical solution of the embodiments of the present invention clearer, the two aspects are described separately below.
[0057] As shown in FIG. 2, an embodiment of the present invention provides a method for processing request information, including:
[0058] Step 201: receiving request information for reading the directory cache.
[0059] In an embodiment of the present invention, in a server including multiple processor nodes, when a CPU in one processor node needs to read data in memory managed by another processor node, it first needs to read the directory cache corresponding to the data in that memory; the request information for reading the directory cache sent by each CPU is received in real time.
[0060] For example, a server includes four processor nodes, processor node 1 to processor node 4. When CPU1 in processor node 1 wants to read data 1 in memory 1 on processor node 2, CPU1 first needs to read the directory cache of memory 1 to obtain the address of data 1, so CPU1 sends request information 1 for reading the directory cache stored in memory 1.
[0061] Step 202: determining the target directory to be read by the request information.
[0062] In an embodiment of the present invention, after the request information sent by a CPU is received, the request information is parsed to determine the target directory to be read by the request information.
[0063] For example, after request information 1 sent by CPU1 is received, request information 1 is parsed, and the target directory to be read by request information 1 is determined to be directory 1, which corresponds to data 1.
[0064] Step 203: judging whether the directory to be read by at least one piece of request information being executed is located on the same cache block as the target directory; if yes, performing step 206, otherwise performing step 204.
[0065] In an embodiment of the present invention, after the target directory to be read by the newly received request information has been obtained, the newly received request information is matched against each piece of request information currently being executed, and it is judged whether the directory to be read by at least one piece of request information being executed is located on the same cache block as the target directory. If yes, the cache block being read by one of the executing pieces of request information includes the target directory, the newly received request information does not need to be executed again, and step 206 is performed accordingly; if no, none of the cache blocks read by the pieces of request information currently being executed includes the target directory, the target directory needs to be read separately, and step 204 is performed accordingly.
[0066] For example, after the directory to be read by request information 1 is determined to be directory 1, the pieces of request information currently being executed are obtained; suppose three executing pieces of request information are obtained, namely request information 2, request information 3, and request information 4. It is judged in turn whether the directories to be read by request information 2, request information 3, and request information 4 are located on the same cache block as directory 1. If the directories to be read by the three pieces of request information are all not on the same cache block as directory 1, step 204 is performed for request information 1; if directory 2, to be read by request information 2, is located on the same cache block as directory 1, step 206 is performed for request information 1.
[0067] Step 204: executing the request information directly.
[0068] In an embodiment of the present invention, when it is judged that the directory to be read by every piece of request information being executed is not on the same cache block as the target directory, the request information is executed directly and the directory cache is read.
[0069] For example, after it is judged that the directories to be read by request information 2, request information 3, and request information 4 are all not on the same cache block as directory 1, request information 1 is executed and the directory cache stored in memory 1 is read.
[0070] Step 205: obtaining the cache block including the target directory, and ending the current flow.
[0071] In an embodiment of the present invention, when the newly received request information is executed, the cache block including the directory to be read by that request information is read from the directory cache.
[0072] For example, cache block 1, which includes directory 1, is read from the directory cache stored in memory 1; besides directory 1, cache block 1 also includes directory 2 to directory 32.
[0073] Step 206: caching the request information into the buffer area.
[0074] In an embodiment of the present invention, after it is judged that the directory to be read by one piece of request information being executed is located on the same cache block as the target directory, the request information corresponding to the target directory is cached into the buffer area.
[0075] For example, after it is judged that directory 2, to be read by the executing request information 2, is located on the same cache block as directory 1, request information 1 is cached into the buffer area.
[0076] As shown in FIG. 3, an embodiment of the present invention provides a method for processing a cache block, including:
[0077] Step 301: parsing the cache block to obtain the directories it includes.
[0078] In an embodiment of the present invention, after a piece of request information has been executed and the cache block including the directory to be read by that request information has been read from the directory cache, the fetched cache block is parsed to obtain the directories included in the cache block.
[0079] For example, in the embodiment shown in FIG. 2, when request information 1 is executed, cache block 1 is read from the directory cache in memory 1; cache block 1 is parsed, and the 32 directories included in cache block 1, namely directory 1 to directory 32, are obtained.
[0080] Step 302: sending the directory to be read by the request information whose execution fetched the cache block to the request-information sender corresponding to that request information.
[0081] In an embodiment of the present invention, after the directories included in the cache block have been obtained, the directory to be read by the request information executed to fetch the cache block is sent to the request-information sender corresponding to that request information, completing that request information's task of reading the directory cache.
[0082] For example, after the 32 directories included in cache block 1 are obtained, directory 1, to be read by request information 1 whose execution fetched cache block 1, is sent to the sender of request information 1, that is, to CPU1 in processor node 1, and CPU1 obtains the storage address of data 1.
[0083] Step 303: judging in turn whether the directory to be read by each piece of request information in the buffer area is on the cache block; if yes, performing step 304, otherwise ending the current flow.
[0084] In an embodiment of the present invention, for each piece of request information in the buffer area, it is judged whether the directory to be read by that request information is among the directories obtained in step 301; if yes, step 304 is performed accordingly, and if no, the current flow ends.
[0085] For example, in the embodiment shown in FIG. 2, after request information 1 has been cached into the buffer area, request information 2 is executed and cache block 2 is obtained, and the 32 directories, directory 1 to directory 32, are obtained from cache block 2. The buffer area contains two pieces of request information, namely request information 1 and request information 5. It is judged in turn whether the directories to be read by request information 1 and request information 5 are among the 32 directories, directory 1 to directory 32. The directory to be read by request information 1 is directory 1, which is the same as directory 1 in cache block 2, so step 304 is performed for request information 1; the directory to be read by request information 5 is directory 50, which is not among the 32 directories included in cache block 2, so the current flow ends for request information 5.
[0086] Step 304: sending the directory to be read to the corresponding request-information sender.
[0087] In an embodiment of the present invention, for any piece of request information in the buffer area, when the directories obtained in step 301 include the directory to be read by that request information, that directory is sent to the request-information sender corresponding to that request information.
[0088] For example, after it is judged that the 32 directories included in cache block 2 include directory 1, to be read by request information 1 in the buffer area, directory 1 is sent to the request-information sender corresponding to request information 1, that is, directory 1 is sent to CPU1 in processor node 1.
[0089] Step 305: deleting the request information corresponding to the sent directory from the buffer area.
[0090] In an embodiment of the present invention, for any piece of request information in the buffer area, after the directory to be read by that request information has been sent to the request-information sender corresponding to that request information, that request information is deleted from the buffer area.
[0091] For example, after directory 1, to be read by request information 1 in the buffer area, has been sent to CPU1 in processor node 1, request information 1 is deleted from the buffer area.
[0092] It should be noted that the embodiment shown in FIG. 3 only describes the case, in the embodiment shown in FIG. 2, in which request information 1 is cached into the buffer area. The case in which request information 1 is executed directly is handled similarly to request information 2: request information 1 is executed and cache block 1 is obtained, the directories are obtained from cache block 1, directory 1 to be read by request information 1 is sent to CPU1 in processor node 1, and it is then judged whether the directory to be read by each piece of request information in the buffer area is among the directories included in cache block 1; if yes, the directory to be read is sent to the corresponding request-information sender, and the request information corresponding to the sent directory is deleted from the buffer area.
[0093] It should further be noted that the embodiments of the present invention divide the whole directory reading process into a method for processing request information and a method for processing cache blocks only to explain the directory reading process more clearly; in an actual service implementation, there is no strict order between the steps performed in the embodiments shown in FIG. 2 and FIG. 3.
[0094] An embodiment of the present invention provides a device for directory reading, which can be implemented in software. As shown in FIG. 4, as a device in the logical sense it is formed by the CPU of the equipment it resides on reading the corresponding computer program instructions from non-volatile memory into memory and running them. The device includes: a receiving unit 401, a determining unit 402, a judging unit 403, and an executing unit 404;
[0095] the receiving unit 401 is configured to receive first request information for reading a directory cache;
[0096] the determining unit 402 is configured to determine the first directory to be read by the first request information received by the receiving unit 401;
[0097] the judging unit 403 is configured to judge whether there is at least one second request information being executed, where the second directory to be read by the second request information is located on the same cache block as the first directory determined by the determining unit 402;
[0098] the executing unit 404 is configured to, according to the judgment result of the judging unit 403, if yes, cache the first request information into a preset buffer area and, after executing the second request information has fetched the second cache block including the first directory and the second directory, send at least two directories, including the first directory and the second directory, to the corresponding request-information senders according to the second cache block and the request information in the buffer area.
[0099] In an embodiment of the present invention, as shown in FIG. 5, the executing unit 404 includes: a sending subunit 4041 and a judging subunit 4042;
[0100] the sending subunit 4041 is configured to obtain the second directory from the second cache block and send the second directory to the request-information sender corresponding to the second request information;
[0101] the judging subunit 4042 is configured to judge, for each piece of request information in the buffer area, whether the directory to be read by that request information is in the second cache block;
[0102] the sending subunit 4041 is further configured to, according to the judgment result of the judging subunit 4042, if yes, send the directory to be read by that request information to the request-information sender corresponding to that request information.
[0103] In an embodiment of the present invention, the executing unit 404 is further configured to, according to the judgment result of the judging unit 403, if no, execute the first request information directly, obtain the first cache block including the first directory from the directory cache, and send at least one directory, including the first directory, to the corresponding request-information senders according to the first cache block and the request information in the buffer area.
[0104] In an embodiment of the present invention, as shown in FIG. 5, when the executing unit 404 includes the sending subunit 4041 and the judging subunit 4042:
[0105] the sending subunit 4041 is further configured to obtain the first directory from the first cache block and send the first directory to the request-information sender corresponding to the first request information;
[0106] the judging subunit 4042 is further configured to judge, for each piece of request information in the buffer area, whether the directory to be read by that request information is in the first cache block;
[0107] the sending subunit 4041 is further configured to, according to the judgment result of the judging subunit 4042, if yes, send the directory to be read by that request information to the request-information sender corresponding to that request information.
[0108] Because the information exchange between the units in the above device and their execution processes are based on the same concept as the method embodiments of the present invention, reference may be made to the description of the method embodiments of the present invention for the specific details, which are not repeated here.
[0109] As shown in FIG. 6, an embodiment of the present invention provides a system for directory reading, including: a request-information responder 601, at least one request-information sender 603, and any directory reading device 602 provided by the above embodiments;
[0110] the request-information responder 601 is configured to store the directory cache;
[0111] the request-information sender 603 is configured to send request information to the receiving unit in the directory reading device 602 and to receive the directory sent by the executing unit in the directory reading device 602.
[0112] In the embodiments of the present invention, the directory reading device 602 may be arranged on a node control chip connecting different processor nodes, so as to achieve efficient directory cache reading between different processor nodes, or it may be arranged inside a CPU, so that the CPU can read the in-memory directory cache efficiently.
[0113] The method, device, and system for directory reading provided by the embodiments of the present invention have at least the following beneficial effects:
[0114] 1. In the embodiments of the present invention, because directories are read from the directory cache in the form of cache blocks, each cache block containing at least two directories, if the first directory to be read by the newly received first request information is on the same cache block as the second directory to be fetched by the second request information currently being executed, the second cache block obtained by executing the second request information already includes the first directory, and the first request information does not need to be executed again to read the first directory; one read of the directory cache obtains the directories to be read by multiple pieces of request information, which reduces the bandwidth occupied when reading the directory cache and thus reduces the delay of the directory-read operation.
[0115] 2. In the embodiments of the present invention, if the directory to be read by every piece of request information being executed is not on the same cache block as the directory to be read by the newly received request information, the newly received request information is executed directly and the directory it is to read is obtained from the directory cache, which guarantees that the request information sent by every request-information sender is executed and that the directory reading method remains feasible.
[0116] 3. In the embodiments of the present invention, for request information that is executed directly, after that request information has fetched a cache block, it is likewise judged whether the directory to be read by each piece of request information in the buffer area is in that cache block, and the directory to be read by each successfully matched piece of request information is sent to the corresponding request-information sender; in this way every cache block is used to the maximum extent, which further lowers the bandwidth for reading the directory cache and raises the rate at which the directory cache is read.
[0117] 4. In the embodiments of the present invention, for any piece of request information in the buffer area, after the directory to be read by that request information has been sent to the request-information sender corresponding to that request information, that request information is deleted from the buffer area; this avoids repeatedly matching completed request information and re-sending the directory it is to read, which would waste system resources, and improves the soundness of the directory reading method.
[0118] It should be noted that, in this document, relational terms such as first and second are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes that element.
[0119] A person of ordinary skill in the art can understand that all or part of the steps for implementing the above method embodiments can be completed by hardware related to program instructions; the aforementioned program may be stored in a computer-readable storage medium, and when executed, the program performs the steps of the above method embodiments; the aforementioned storage medium includes media capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
[0120] Finally, it should be noted that the above are only preferred embodiments of the present invention, intended merely to illustrate the technical solutions of the present invention and not to limit the protection scope of the present invention. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims

[Claim 1] A method for directory reading, comprising:
receiving first request information for reading a directory cache;
determining a first directory to be read by the first request information;
judging whether there is at least one second request information being executed, wherein a second directory to be read by the second request information is located on the same cache block as the first directory;
if yes, caching the first request information into a preset buffer area and, after executing the second request information has fetched a second cache block comprising the first directory and the second directory, sending at least two directories, comprising the first directory and the second directory, to the corresponding request-information senders according to the second cache block and the request information in the buffer area.
[Claim 2] The method according to claim 1, wherein sending at least two directories, comprising the first directory and the second directory, to the corresponding request-information senders according to the second cache block and the request information in the buffer area comprises:
obtaining the second directory from the second cache block and sending the second directory to the request-information sender corresponding to the second request information;
for each piece of request information in the buffer area, judging whether the directory to be read by that request information is in the second cache block, and if yes, sending the directory to be read by that request information to the request-information sender corresponding to that request information.
[Claim 3] The method according to claim 1, wherein, if there is no second request information being executed, the first request information is executed directly, a first cache block comprising the first directory is obtained from the directory cache, and at least one directory, comprising the first directory, is sent to the corresponding request-information senders according to the first cache block and the request information in the buffer area.
[Claim 4] The method according to claim 3, wherein sending at least one directory, comprising the first directory, to the corresponding request-information senders according to the first cache block and the request information in the buffer area comprises:
obtaining the first directory from the first cache block and sending the first directory to the request-information sender corresponding to the first request information;
for each piece of request information in the buffer area, judging whether the directory to be read by that request information is in the first cache block, and if yes, sending the directory to be read by that request information to the request-information sender corresponding to that request information.
[Claim 5] The method according to any one of claims 1 to 4, wherein, for each piece of request information in the buffer area, after the directory to be read by that request information has been sent to the request-information sender corresponding to that request information, that request information is deleted from the buffer area.
[Claim 6] A device for directory reading, comprising: a receiving unit, a determining unit, a judging unit, and an executing unit; wherein
the receiving unit is configured to receive first request information for reading a directory cache;
the determining unit is configured to determine a first directory to be read by the first request information received by the receiving unit;
the judging unit is configured to judge whether there is at least one second request information being executed, wherein a second directory to be read by the second request information is located on the same cache block as the first directory determined by the determining unit;
the executing unit is configured to, according to the judgment result of the judging unit, if yes, cache the first request information into a preset buffer area and, after executing the second request information has fetched a second cache block comprising the first directory and the second directory, send at least two directories, comprising the first directory and the second directory, to the corresponding request-information senders according to the second cache block and the request information in the buffer area.
[Claim 7] The device according to claim 6, wherein the executing unit comprises: a sending subunit and a judging subunit;
the sending subunit is configured to obtain the second directory from the second cache block and send the second directory to the request-information sender corresponding to the second request information;
the judging subunit is configured to judge, for each piece of request information in the buffer area, whether the directory to be read by that request information is in the second cache block;
the sending subunit is further configured to, according to the judgment result of the judging subunit, if yes, send the directory to be read by that request information to the request-information sender corresponding to that request information.
[Claim 8] The device according to claim 6, wherein the executing unit is further configured to, according to the judgment result of the judging unit, if no, execute the first request information directly, obtain a first cache block comprising the first directory from the directory cache, and send at least one directory, comprising the first directory, to the corresponding request-information senders according to the first cache block and the request information in the buffer area.
[Claim 9] The device according to claim 8, wherein the executing unit comprises: a sending subunit and a judging subunit;
the sending subunit is configured to obtain the first directory from the first cache block and send the first directory to the request-information sender corresponding to the first request information;
the judging subunit is configured to judge, for each piece of request information in the buffer area, whether the directory to be read by that request information is in the first cache block;
the sending subunit is further configured to, according to the judgment result of the judging subunit, if yes, send the directory to be read by that request information to the request-information sender corresponding to that request information.
[Claim 10] A system for directory reading, comprising: a request-information responder, at least one request-information sender, and the device for directory reading according to any one of claims 6 to 9; wherein
the request-information responder is configured to store the directory cache;
the request-information sender is configured to send request information to the receiving unit in the device for directory reading, and to receive the directory sent by the executing unit in the device for directory reading.
PCT/CN2016/109580 2016-04-05 2016-12-13 Method, device and system for directory reading WO2017173844A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610204376.3 2016-04-05
CN201610204376.3A CN105912477B (zh) 2016-04-05 2016-04-05 一种目录读取的方法、装置及系统

Publications (1)

Publication Number Publication Date
WO2017173844A1 true WO2017173844A1 (zh) 2017-10-12

Family

ID=56745137

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/109580 WO2017173844A1 (zh) 2016-04-05 2016-12-13 一种目录读取的方法、装置及系统

Country Status (2)

Country Link
CN (1) CN105912477B (zh)
WO (1) WO2017173844A1 (zh)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105912477B (zh) * 2016-04-05 2019-01-01 浪潮电子信息产业股份有限公司 一种目录读取的方法、装置及系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101354682A (zh) * 2008-09-12 2009-01-28 中国科学院计算技术研究所 一种用于解决多处理器访问目录冲突的装置和方法
CN103544269A (zh) * 2013-10-17 2014-01-29 华为技术有限公司 目录的存储方法、查询方法及节点控制器
CN104899160A (zh) * 2015-05-30 2015-09-09 华为技术有限公司 一种缓存数据控制方法、节点控制器和系统
CN105912477A (zh) * 2016-04-05 2016-08-31 浪潮电子信息产业股份有限公司 一种目录读取的方法、装置及系统

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7337280B2 (en) * 2005-02-10 2008-02-26 International Business Machines Corporation Data processing system and method for efficient L3 cache directory management
CN1328670C (zh) * 2005-03-30 2007-07-25 中国人民解放军国防科学技术大学 目录协议对多处理器结点内脏数据共享的支持方法


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110765086A (zh) * 2019-10-25 2020-02-07 浪潮电子信息产业股份有限公司 一种小文件的目录读取方法、系统、电子设备及存储介质
CN110765086B (zh) * 2019-10-25 2022-08-02 浪潮电子信息产业股份有限公司 一种小文件的目录读取方法、系统、电子设备及存储介质

Also Published As

Publication number Publication date
CN105912477A (zh) 2016-08-31
CN105912477B (zh) 2019-01-01

Similar Documents

Publication Publication Date Title
WO2018059222A1 (zh) 一种文件切片上传方法、装置及云存储系统
WO2018076793A1 (zh) 一种NVMe数据读写方法及NVMe设备
WO2017114206A1 (zh) 短链接处理方法、装置及短链接服务器
AU2014235793B2 (en) Automatic tuning of virtual data center resource utilization policies
CN106534243B (zh) 基于http协议的缓存、请求、响应方法及相应装置
US20110289126A1 (en) Content delivery network
WO2012139474A1 (zh) 数据的获取方法、设备和系统
WO2015067117A1 (zh) 一种云上传方法及系统、调度设备、客户端
TW201514745A (zh) 客戶端應用的登錄方法及其相應的伺服器
WO2017185633A1 (zh) Cdn服务器及其缓存数据的方法
WO2014078989A1 (zh) 消息处理方法及服务器
WO2012034518A1 (zh) 一种提供包含网页地址的消息的方法和系统
WO2021007752A1 (zh) 内容分发网络中的回源方法及相关装置
WO2016173441A1 (zh) 服务器缓存处理方法、装置及系统
WO2015062228A1 (zh) 一种访问共享内存的方法和装置
WO2013044628A1 (zh) 在Nginx上实现云缓存的REST接口的方法和系统
WO2015027806A1 (zh) 一种内存数据的读写处理方法和装置
WO2022007470A1 (zh) 一种数据传输的方法、芯片和设备
US20160337467A1 (en) Method and system for information exchange utilizing an asynchronous persistent store protocol
WO2020056850A1 (zh) 一种基于http协议的数据请求方法和服务器
CN113419824A (zh) 数据处理方法、装置、系统及计算机存储介质
WO2017032152A1 (zh) 将数据写入存储设备的方法及存储设备
CN113422793A (zh) 数据传输方法、装置、电子设备及计算机存储介质
CN115964319A (zh) 远程直接内存访问的数据处理方法及相关产品
CN111200637B (zh) 一种缓存的处理方法及装置

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16897773

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 16897773

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 160519)
