WO2015067004A9 - Concurrent access request processing method and device - Google Patents

Concurrent access request processing method and device

Info

Publication number
WO2015067004A9
WO2015067004A9 (PCT/CN2014/075558)
Authority
WO
WIPO (PCT)
Prior art keywords
access
engine
concurrent
hash
access requests
Prior art date
Application number
PCT/CN2014/075558
Other languages
French (fr)
Chinese (zh)
Other versions
WO2015067004A1 (en)
Inventor
童燕群
李成林
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Publication of WO2015067004A1 publication Critical patent/WO2015067004A1/en
Publication of WO2015067004A9 publication Critical patent/WO2015067004A9/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/17Details of further file system functions
    • G06F16/176Support for shared access to files; File sharing support
    • G06F16/1767Concurrency control, e.g. optimistic or pessimistic approaches

Definitions

  • The present invention relates to the field of computer data storage technologies, and in particular, to a method and apparatus for processing concurrent access requests.
  • Object storage technology based on HTTP (Hypertext Transfer Protocol) is developing rapidly, and object storage based on the two-layer business model of containers and objects is increasingly widely used.
  • a container can be understood as a special top-level directory.
  • An object can be a file or a directory, and the object belongs to the container.
  • User data is usually stored in containers as objects, using an architecture in which an upper-layer application builds object storage on top of an underlying distributed storage engine.
  • The number of objects in a container is unrestricted. When a container holds very many objects, the index table between the container and its objects becomes very large, and an ordinary stand-alone database can no longer meet the storage requirement; therefore a weakly consistent storage engine is generally chosen, maintaining the index as a B-tree structure.
  • Figure 1 shows a simple B-tree structure
  • Figure 2 shows an object storage system based on the underlying distributed storage engine architecture.
  • The sub data blocks N1, N2, N3, ... of the B-tree in Fig. 1 can each be stored on one or more of the child nodes 1, 2, 3, ... of the distributed storage engine in Fig. 2.
  • For example, sub data block N1 may be stored on child node 2, child node 4, and child node 6.
  • When a record is written into the B-tree structure, a "read-modify-write" process is generally employed.
  • On the object storage system of Fig. 2, built on the underlying distributed storage engine, this means that a client initiates an access request for a data resource; after the data resource is obtained, it is modified and then written back to the storage engine.
  • The access procedure is: an HTTP-based client sends an access request for a data resource; the upper-layer application parses the request to obtain the metadata of the relevant container and object as well as the data resource to be accessed; the engine access agent of the upper-layer application then requests the data resource from the underlying distributed storage engine.
  • When multiple clients add objects to the same container simultaneously, sub data blocks in the B-tree structure become hot spots. For example, multiple clients may need to write sub data block N1 into the B-tree at the same time, causing N1 to become a hot spot.
  • In the object storage system of Fig. 2, this appears as multiple clients initiating concurrent access requests, so that multiple engine access agents simultaneously request access to the storage-engine child nodes holding sub data block N1, causing concurrent access conflicts in the underlying storage engine.
  • As shown in Fig. 3, if sub data block N1 is stored on child node 6, the upper-layer applications APP1, APP2, and APP3 each request access to child node 6 simultaneously through their respective engine access agents.
  • When a weakly consistent storage engine maintains the index list as a B-tree and a concurrent access conflict occurs, the underlying distributed storage engine returns a data-conflict response to the upper-layer application, which then decides, according to the specific service, whether to rewrite.
  • With many concurrent conflicts, however, the write performance of the B-tree suffers; rewriting may fail to complete, ultimately causing objects in the container to be lost.
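The lost-update risk of concurrent "read-modify-write" described above can be played through with a toy model. This is a sketch for illustration only: the in-memory engine, block names, and methods below are stand-ins, not anything defined by the patent.

```python
# Two clients perform "read-modify-write" against the same B-tree
# sub data block on a stand-in storage engine with no concurrency
# control; the second write silently overwrites the first.

class FakeStorageEngine:
    """Stand-in for the underlying distributed storage engine."""

    def __init__(self):
        self.blocks = {"N1": {"objects": []}}

    def read(self, block_id):
        # Return a copy, as a remote read would.
        return {"objects": list(self.blocks[block_id]["objects"])}

    def write(self, block_id, value):
        self.blocks[block_id] = value

engine = FakeStorageEngine()

# Both clients read the same version of sub data block N1 ...
copy_a = engine.read("N1")
copy_b = engine.read("N1")

# ... each modifies its private copy ...
copy_a["objects"].append("object-from-client-A")
copy_b["objects"].append("object-from-client-B")

# ... and writes back: client A's object is lost.
engine.write("N1", copy_a)
engine.write("N1", copy_b)
print(engine.blocks["N1"]["objects"])
```

Serializing the two requests, as the claimed method does, would make client A's write visible before client B's read begins, so neither update would be lost.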
  • Embodiments of the present invention provide a method and apparatus for processing concurrent access requests to avoid concurrent access conflicts.
  • In a first aspect, a method for processing concurrent access requests is provided, including: receiving at least two concurrent access requests for the same data resource and sorting the at least two concurrent access requests; and accessing the same data resource sequentially according to the sorted concurrent access requests.
  • the concurrent access request is sorted, including:
  • the at least two concurrent access requests are ordered in an engine access proxy.
  • Sorting the at least two concurrent access requests includes: associating each data resource on the storage engine with a hash key value to form a hash space; dividing the hash space into N sub-hash spaces, where N is the number of engine access agents in the storage system; allocating the N sub-hash spaces to the N engine access agents, so that each agent is assigned one sub-hash space and different agents are assigned different sub-hash spaces; and determining, according to the sub-hash space to which the hash value of the same data resource belongs, the engine access agent that sorts the requests, obtaining the sorting engine access agent.
  • An access request that is not on the sorting engine access agent is routed to the sorting engine access agent, which sorts the at least two concurrent access requests.
  • the method further includes:
  • the same data resource is cached in an engine access proxy that sorts the concurrent access requests.
  • In a second aspect, a processing apparatus for concurrent access requests is provided, including a receiving unit, a sorting unit, and an access unit, where
  • the receiving unit is configured to receive at least two concurrent access requests for the same data resource, and transmit the at least two concurrent access requests to the sorting unit;
  • the sorting unit is configured to receive the at least two concurrent access requests transmitted by the receiving unit, and sort the at least two concurrent access requests, and transmit the sorted concurrent access request to the access unit;
  • the access unit is configured to receive the sorted concurrent access request transmitted by the sorting unit, and sequentially access the same data resource according to the sorted concurrent access request.
  • the sorting unit is specifically configured to: in an engine access proxy, sort the at least two concurrent access requests.
  • The sorting unit is specifically configured to: associate each data resource on the storage engine with a hash key value to form a hash space; divide the hash space into N sub-hash spaces, where N is the number of engine access agents; allocate the sub-hash spaces so that each engine access agent is assigned a different one; and determine, from the sub-hash space to which the hash value of the same data resource belongs, the sorting engine access agent.
  • An access request that is not on the sorting engine access agent is routed to the sorting engine access agent, which sorts the at least two concurrent access requests.
  • The sorting unit is further configured to: monitor the number of engine access agents, and re-divide the hash space when the number of engine access agents changes.
  • a cache unit is further included, where
  • the cache unit is configured to cache the same data resource in an engine access proxy that sorts the concurrent access requests.
  • With the concurrent access request processing method provided by the first aspect and the concurrent access request processing apparatus provided by the second aspect, after at least two concurrent access requests for the same data resource are received, the requests are sorted and the same data resource is accessed sequentially in the sorted order. This ensures that only one request accesses the corresponding data resource at any given time, thereby avoiding concurrent access conflicts.
  • FIG. 1 is a schematic diagram of an organization structure of an index table based on a B-tree structure in the prior art
  • FIG. 2 is a schematic diagram of an architecture of an object storage system based on an underlying distributed storage engine in the prior art
  • FIG. 3 is a schematic diagram of an access violation occurring in a concurrent access request in the prior art
  • FIG. 4 is a schematic flowchart of a method for processing a concurrent access request according to an embodiment of the present invention
  • FIG. 5 is a flowchart of a method for scheduling concurrent access requests according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of hash space division in an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of a process of sorting concurrent access requests in an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a concurrent access request processing apparatus according to an embodiment of the present invention
  • FIG. 9 is another schematic structural diagram of a concurrent access request processing apparatus according to an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of a controller according to an embodiment of the present invention;
  • FIG. 11 is another schematic structural diagram of a controller according to an embodiment of the present invention;
  • FIG. 12 is still another schematic structural diagram of a controller according to an embodiment of the present invention. Detailed description
  • Different clients may initiate concurrent access requests to the same data resource, so the upper-layer application may receive at least two concurrent access requests for the same data resource.
  • The at least two concurrent access requests are handled by different APPs (upper-layer applications) and engine access agents.
  • the at least two concurrent access requests are processed in the upper layer application to ensure that only one request accesses the corresponding data resource at the same time, thereby preventing concurrent access conflicts.
  • a first embodiment of the present invention provides a method for processing a concurrent access request, as shown in FIG. 4, including:
  • S101. Receive at least two concurrent access requests for the same data resource.
  • Access requests for the data resource are initiated by different HTTP-based clients; that is, at least two concurrent access requests for the same data resource exist.
  • the upper application receives at least two concurrent access requests for the same data resource.
  • S102. Sort the received at least two concurrent access requests.
  • The at least two concurrent access requests for the same data resource received by the upper-layer application may be sorted directly on the different engine access agents that request data resources from the underlying storage engine, or the access requests on different engine access agents may be routed to a single engine access agent for sorting.
  • The embodiment of the present invention preferably routes access requests on different engine access agents to one engine access agent for sorting, so that no state queries are required between different engine access agents when the concurrent access requests are sorted.
  • The at least two concurrent access requests for the same data resource are sorted, and the data resource is then accessed sequentially in the sorted order, ensuring that only one request accesses the data resource at a time and thereby avoiding concurrent access conflicts.
  • When multiple clients add objects to a container at the same time, the sub data blocks of the index relation table become hot spots; that is, more than one access request asks to access the index table between the container and the object.
  • The concurrent access requests for the same index table are sorted to ensure that the same index table undergoes only one read or write operation at a time, thereby avoiding concurrent access conflicts.
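One way to realize this per-index-table serialization (our interpretation; the patent gives no code) is a FIFO queue per data resource at the sorting agent, drained one request at a time:

```python
# Sorting agent sketch: one FIFO queue per data resource, so the same
# index table sees only one read/write operation at a time.
from collections import defaultdict, deque

class SortingAgent:
    def __init__(self):
        self.queues = defaultdict(deque)  # resource id -> queued requests

    def enqueue(self, resource_id, request):
        self.queues[resource_id].append(request)

    def drain(self, resource_id, storage):
        """Apply queued requests to the storage in arrival order."""
        queue = self.queues[resource_id]
        while queue:
            request = queue.popleft()
            request(storage)  # exactly one request touches the resource

storage = {"N1": []}
agent = SortingAgent()
for name in ("APP1", "APP2", "APP3"):
    agent.enqueue("N1", lambda s, n=name: s["N1"].append(n))
agent.drain("N1", storage)
print(storage["N1"])  # all three writes applied in order, none lost
```

Because the queue is drained strictly in arrival order, the "read-modify-write" of one request completes before the next begins, which is exactly the serialization the text requires.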
  • In one engine access agent, the at least two concurrent access requests for the same data resource are sorted: access requests arriving on different engine access agents are routed to a single engine access agent, which performs the sorting.
  • the process of sorting at least two concurrent access requests is as shown in FIG. 5, including:
  • S201. Each data resource on the storage engine is hashed according to a hash algorithm that is uniform across the upper-layer applications.
  • Each data resource thus corresponds to one hash key value, and these values constitute a hash space.
  • The hash space is a one-dimensional space that is sufficiently large relative to the number of upper-layer APPs.
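A minimal sketch of S201, assuming MD5 as the uniform hash algorithm and a 2^32 one-dimensional space; both are illustrative choices, not specified by the patent:

```python
# Every data resource maps deterministically to one hash key value in
# a fixed one-dimensional hash space; because the algorithm is uniform
# across upper-layer APPs, every APP computes the same key.
import hashlib

HASH_SPACE = 2 ** 32  # size of the one-dimensional hash space

def hash_key(resource_id: str) -> int:
    digest = hashlib.md5(resource_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % HASH_SPACE

key = hash_key("container-A/index-block-N1")
assert 0 <= key < HASH_SPACE
assert key == hash_key("container-A/index-block-N1")  # deterministic
```

Any deterministic hash shared by all APPs would do; what matters is that every APP maps the same resource to the same point of the hash space.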
  • S202. Partition the hash space obtained in S201 into N sub-hash spaces.
  • The number of sub-hash spaces equals the number of engine access agents corresponding to the upper-layer applications: if there are N upper-layer applications, the hash space is divided into N parts, yielding N sub-hash spaces.
  • For example, if the upper-layer applications are APP1, APP2, and APP3, the hash space partitioning process is as shown in FIG. 6.
  • The partitioning may use an equal-division method or an unequal-division method, as long as the number of sub-hash spaces equals the number of engine access agents corresponding to the upper-layer applications.
  • If the number of upper-layer applications is N and the hash space can be divided evenly into N parts, the embodiment of the present invention may divide the hash space equally into N sub-hash spaces of the same size. If the hash space cannot be divided evenly, the unequal-division method may be used: divide the hash space into N - 1 equal sub-hash spaces and make the remaining hash space the final sub-hash space, again obtaining N sub-hash spaces in total.
  • S203. A simple cluster can be established among the engine access agents of the upper-layer applications; each engine access agent is given a sequence number, and a sub-hash space is allocated according to each agent's own sequence number.
  • Each sub-hash space divided in S202 is allocated to one engine access agent, so that each engine access agent is assigned one sub-hash space and different engine access agents are assigned different sub-hash spaces.
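Steps S202/S203 can be sketched as follows. This is one reading of the text, not the patent's code; the last agent absorbing any remainder mirrors the "unequal division" fallback described above.

```python
HASH_SPACE = 2 ** 32

def divide_hash_space(n_agents: int):
    """Return, keyed by agent sequence number, the [lo, hi) sub-hash
    space allocated to each engine access agent."""
    size = HASH_SPACE // n_agents
    spaces = {}
    for agent_no in range(n_agents):
        lo = agent_no * size
        # Last agent absorbs any remainder (unequal-division case).
        hi = HASH_SPACE if agent_no == n_agents - 1 else lo + size
        spaces[agent_no] = (lo, hi)
    return spaces

spaces = divide_hash_space(3)
# Sub-hash spaces are disjoint, contiguous, and cover the whole space.
assert spaces[0][0] == 0 and spaces[2][1] == HASH_SPACE
assert all(spaces[i][1] == spaces[i + 1][0] for i in range(2))
```

The disjointness of the sub-spaces is what guarantees that each hash value, and therefore each data resource, belongs to exactly one engine access agent.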
  • S204. Determine the hash value of the data resource that the at least two concurrent access requests ask to access, and determine, according to the sub-hash space to which that hash value belongs and the agents' sub-hash space allocation, the engine access agent that sorts the at least two concurrent access requests, obtaining the sorting engine access agent.
  • each data resource corresponds to a hash value
  • the hash value belongs to a sub-hash space obtained by dividing in S202.
  • each sub-hash space is assigned to an engine access agent. Therefore, the engine access agent that sorts the at least two concurrent access requests may be determined according to the sub-hash space to which the hash value of the same data resource accessed by the at least two concurrent access requests belongs.
  • the determined engine access agent is referred to as a sorting engine access agent through which the at least two concurrent access requests are sorted.
  • S205. Forward the concurrent access requests that are not on the sorting engine access agent to the sorting engine access agent.
  • Each access request in the embodiment of the present invention is sent to an upper-layer application, so different access requests may arrive on different engine access agents.
  • The access requests that are not on the determined sorting engine access agent are routed to it, and the sorting engine access agent sorts all the concurrent access requests routed to it. For example, suppose there are currently three concurrent access requests that need to access the data resource stored on child node 6 of the storage engine.
  • Suppose the hash value obtained by hashing that data resource belongs to the sub-hash space allocated to the engine access agent corresponding to APP1.
  • The access requests on the engine access agents corresponding to APP2 and APP3 are then each routed to the engine access agent corresponding to APP1.
  • The three concurrent access requests are sorted by the engine access agent corresponding to APP1, as shown in FIG. 7.
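The APP1/APP2/APP3 example can be played through end to end. The hash function, space size, and agent names below are illustrative assumptions; the point is that every agent independently computes the same sorting agent for the resource and forwards its request there.

```python
import hashlib

HASH_SPACE = 2 ** 32
AGENTS = ["APP1-agent", "APP2-agent", "APP3-agent"]

def sorting_agent_for(resource_id: str) -> str:
    """S204: pick the agent whose sub-hash space contains the
    resource's hash value."""
    h = int.from_bytes(hashlib.md5(resource_id.encode()).digest()[:4],
                       "big") % HASH_SPACE
    size = HASH_SPACE // len(AGENTS)
    return AGENTS[min(h // size, len(AGENTS) - 1)]

# S205: requests arriving on three different agents are all routed to
# the single sorting agent determined by the resource's hash value.
queues = {agent: [] for agent in AGENTS}
for source in AGENTS:
    queues[sorting_agent_for("sub-node-6/N1")].append(
        f"request-from-{source}")

counts = sorted(len(q) for q in queues.values())
assert counts == [0, 0, 3]  # one agent's queue holds all three requests
```

No coordination between agents is needed: the shared hash function and the shared sub-space allocation are enough for all agents to agree on the routing target.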
  • In this way, requests for the index table of the same container are routed to the engine access agent of the same application for sorting, and the index table is accessed sequentially according to the sorted concurrent access requests, ensuring that no access conflicts occur and thus avoiding data conflicts with the underlying distributed storage engine.
  • The number of engine access agents is also monitored; when the number changes, the hash space is re-divided, accommodating the cases where an upper-layer application node exits or newly joins.
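A hedged sketch of this monitoring step; the patent only says the hash space is re-divided when the agent count changes, so the concrete re-division policy below is our assumption:

```python
HASH_SPACE = 2 ** 32

def divide_hash_space(n_agents: int):
    size = HASH_SPACE // n_agents
    return [(i * size, HASH_SPACE if i == n_agents - 1 else (i + 1) * size)
            for i in range(n_agents)]

class AgentMonitor:
    """Re-divides the hash space whenever the agent count changes."""

    def __init__(self, agent_count: int):
        self.agent_count = agent_count
        self.spaces = divide_hash_space(agent_count)

    def on_membership_change(self, new_count: int):
        if new_count != self.agent_count:  # agent count changed
            self.agent_count = new_count
            self.spaces = divide_hash_space(new_count)

monitor = AgentMonitor(3)
monitor.on_membership_change(2)  # an upper-layer application node exits
assert len(monitor.spaces) == 2
assert monitor.spaces[-1][1] == HASH_SPACE  # space still fully covered
```

After a re-division, requests for a given resource may route to a different sorting agent, but all agents again agree on which one.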
  • A read/write cache may also be added to the engine access agent that sorts the concurrent access requests, caching the data resources to improve access speed.
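Because every request for a resource passes through its sorting agent, that agent can cache the resource without cross-agent invalidation. A minimal sketch (the write-through policy is our assumption; the patent only says a cache may be added):

```python
class CachingSortingAgent:
    """Sorting agent with a read/write cache in front of the engine."""

    def __init__(self, engine: dict):
        self.engine = engine        # stand-in for the storage engine
        self.cache = {}
        self.engine_reads = 0

    def read(self, resource_id):
        if resource_id not in self.cache:
            self.engine_reads += 1  # cache miss: fetch from the engine
            self.cache[resource_id] = self.engine[resource_id]
        return self.cache[resource_id]

    def write(self, resource_id, value):
        self.cache[resource_id] = value   # write-through
        self.engine[resource_id] = value

agent = CachingSortingAgent({"N1": "v0"})
agent.read("N1")             # miss: goes to the engine
agent.read("N1")             # hit: served from the cache
agent.write("N1", "v1")
assert agent.read("N1") == "v1"
assert agent.engine_reads == 1  # the engine was read only once
```

The cache stays coherent precisely because the routing in S204/S205 makes this agent the only writer and reader of the resource.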
  • the embodiment of the present invention provides a concurrent access request processing device.
  • As shown in FIG. 8, the concurrent access request processing device provided by the embodiment of the present invention includes a receiving unit 801, a sorting unit 802, and an access unit 803, wherein
  • the receiving unit 801 is configured to receive at least two concurrent access requests for the same data resource, and transmit at least two concurrent access requests to the sorting unit 802.
  • the sorting unit 802 is configured to receive at least two concurrent access requests transmitted by the receiving unit 801, and sort the at least two concurrent access requests, and transmit the sorted concurrent access requests to the access unit 803;
  • the access unit 803 is configured to receive the sorted concurrent access request transmitted by the sorting unit 802, and sequentially access the same data resource according to the sorted concurrent access request.
  • the sorting unit 802 is configured to sort at least two concurrent access requests in an engine access proxy.
  • the sorting unit 802 is specifically configured to:
  • determine the engine access agent that sorts the at least two concurrent access requests according to the sub-hash space to which the hash value of the same data resource accessed by the at least two concurrent access requests belongs, obtaining the sorting engine access agent;
  • An access request that is not on the sort engine access proxy is routed to the sort engine access proxy, and the sort engine accesses the proxy to sort at least two concurrent access requests.
  • The sorting unit 802 in the embodiment of the present invention is further configured to: monitor the number of engine access agents, and re-divide the hash space when the number changes.
  • the concurrent access request processing apparatus further includes a cache unit 804, as shown in FIG. 9, wherein the cache unit 804 is configured to cache the same data resource in an engine access proxy that sorts concurrent access requests.
  • When there are at least two concurrent access requests for the same data resource, the concurrent access request processing apparatus sorts the at least two concurrent access requests and accesses the same data resource sequentially according to the sorted requests, ensuring that only one request accesses the corresponding data resource at a time and thereby avoiding concurrent access conflicts.
  • The above-mentioned concurrent access request processing device provided by the embodiment of the present invention may be an independent component, or may be integrated into other components.
  • For example, the device may itself be an engine access agent, or may be a new component integrated into an engine access agent. It should be noted that, for the function implementation and interaction of the modules/units of the concurrent access request processing apparatus in the embodiment of the present invention, reference may further be made to the description of the related method embodiments.
  • The embodiment of the present invention further provides a controller, which can be applied to an object storage service based on the container-and-object two-layer service model, as shown in FIG. 10.
  • The controller includes a processor 1001 and an I/O interface 1002, where the processor 1001 is configured to receive at least two concurrent access requests for the same data resource, sort the received at least two concurrent access requests, and transmit the sorted concurrent access requests to the I/O interface 1002;
  • the I/O interface 1002 is configured to receive the sorted concurrent access request transmitted by the processor 1001, and output the sorted concurrent access request.
  • processor 1001 is configured to sort at least two concurrent access requests in an engine access agent.
  • The processor 1001 is specifically configured to: associate each data resource on the storage engine with a hash key value to form a hash space; divide the hash space into N sub-hash spaces, where N is the number of engine access agents; allocate one sub-hash space to each engine access agent, so that each engine access agent is assigned a sub-hash space and different engine access agents are assigned different sub-hash spaces; determine, according to the sub-hash space to which the hash value of the same data resource accessed by the at least two concurrent access requests belongs, the engine access agent that sorts the at least two concurrent access requests, obtaining the sorting engine access agent; and route the access requests that are not on the sorting engine access agent to the sorting engine access agent, which sorts the at least two concurrent access requests.
  • The controller in the embodiment of the present invention further includes a monitor 1003. As shown in FIG. 11, the monitor 1003 monitors the number of engine access agents and, when the number of engine access agents changes, sends the processor 1001 an instruction to re-divide the hash space.
  • The controller further includes a buffer 1004, which is used to cache the same data resource in the engine access agent in which the processor 1001 sorts the concurrent access requests.
  • With the controller provided by the embodiment of the present invention, when there are at least two concurrent access requests for the same data resource, the requests are sorted and the same data resource is accessed sequentially according to the sorted order; this guarantees that only one request accesses the corresponding data resource at a time, thereby avoiding concurrent access conflicts.

Abstract

Disclosed are a concurrent access request processing method and device for avoiding concurrent access conflicts. The method comprises: receiving at least two concurrent access requests for the same data resource and sorting the at least two concurrent access requests; and accessing the same data resource sequentially according to the sorted concurrent access requests. The present invention ensures that only one request accesses the corresponding data resource at any one time, thereby avoiding concurrent access conflicts.

Description

一种并发访问请求的处理方法及装置 本申请要求于 2013年 11月 07日提交中国专利局,申请号 201310549721.3 发明名称^ 一种并发访问请求的处理方法及装置"的中国专利申请的优先权, 其全部内容通过引用结合在本申请中。 技术领域  Method and apparatus for processing concurrent access requests. The present application claims priority to Chinese patent application filed on November 7, 2013, the Chinese Patent Office, Application No. 201310549721.3, the name of the invention, and the processing method and apparatus of a concurrent access request. The entire contents thereof are incorporated herein by reference.
本发明涉及计算机数据存储技术领域,尤其涉及一种并发访问请求的处理 方法及装置。 背景技术  The present invention relates to the field of computer data storage technologies, and in particular, to a method and apparatus for processing concurrent access requests. Background technique
基于 HTTP ( Hyper Text Transfer Protocol ,超文本传输协议)的对象存储 技术迅速发展,而以容器 Container和对象 Object的两层业务模型为基础的对 象存储技术,应用越来越广泛。  The object storage technology based on HTTP (Hyper Text Transfer Protocol) is rapidly developing, and the object storage technology based on the two-layer business model of container Container and object Object is more and more widely used.
容器可以理解成一个特殊的顶层目录,对象可以是一个文件或者一个目 录,对象隶属于容器。 通常用户数据以对象方式、 采用上层应用在底层分布式 存储引擎上面建立对象存储的技术架构,存储于容器中。 而容器中对象的个数 是不加限制的,当容器内对象非常多时,会存在一个非常庞大的容器与对象之 间的索引关系表。 普通的单机数据库已经无法满足存储要求,因此一般选择基 于弱一致性的存储引擎采用 B树结构进行维护。  A container can be understood as a special top-level directory. An object can be a file or a directory, and the object belongs to the container. Usually, user data is stored in a container in an object mode, using a technical architecture in which an upper layer application establishes object storage on the underlying distributed storage engine. The number of objects in the container is unrestricted. When there are many objects in the container, there will be a very large index table between the container and the object. Ordinary stand-alone databases can no longer meet the storage requirements. Therefore, the storage engine based on weak consistency is generally maintained in a B-tree structure.
图 1所示为一种简易的 B树结构;图 2所示为基于底层分布式存储引擎架 设的对象存储系统。 图 1中 B树结构的子数据块 Nl、 N2、 N3... ...可存储于图 Figure 1 shows a simple B-tree structure; Figure 2 shows an object storage system based on the underlying distributed storage engine architecture. The sub-blocks N1, N2, N3, ... of the B-tree structure in Fig. 1 can be stored in the figure.
2中分布式存储引擎中的子节点 1、 2、 3... ...中的一个或多个上,例如可以将 子数据块 N1存储在子节点 2、 子节点 4和子节点 6上。 向 B树结构中写入记 录时,一般采用" 读取-修改-写入" 的过程。 在图 2中基于底层分布式存储引 擎架设的对象存储系统上,则体现为客户端对数据资源发起访问请求,当访问 得到数据资源后,对数据资源进行修改后再写回访问存储引擎中。 该发起访问 请求过程为:基于 HTTP协议的客户端发送访问数据资源的访问请求,上层应 用从接收到的访问请求中分析得出相关的容器和对象的元数据以及需要访问 的数据资源,继而由上层应用对应的引擎访问代理向底层的分布式存储引擎请 求数据资源。 On one or more of the child nodes 1, 2, 3, ... in the distributed storage engine, for example, the child data block N1 may be stored on the child node 2, the child node 4, and the child node 6. When writing a record to a B-tree structure, the process of "read-modify-write" is generally employed. In Figure 2, based on the object storage system set up by the underlying distributed storage engine, it is reflected that the client initiates an access request to the data resource when accessing After the data resource is obtained, the data resource is modified and then written back to the access storage engine. The initiating access request process is: the client based on the HTTP protocol sends an access request for accessing the data resource, and the upper layer application analyzes the metadata of the related container and the object and the data resource to be accessed from the received access request, and then The corresponding engine access proxy of the upper application requests data resources from the underlying distributed storage engine.
在进行对象存储时,当存在多个客户端同时向同一容器内添加对象时,会 造成 B树结构中的子数据块成为热点。例如可能存在多个客户端同时需要将子 数据块 N1写入 B树中,此时则会导致 B树结构中的子数据块 N1成为热点。 体现在图 2中的基于底层分布式存储引擎架设的对象存储系统上,则为多个客 户端发起并发访问请求,使得多个引擎访问代理同时请求访问存储引擎中存储 子数据块 N1的子节点,造成底层存储引擎的并发访问冲突。 如图 3所示,假 如子数据块 N1存储在子节点 6上,则上层应用 APP1、 APP2和 APP3将分别 通过各自对应的引擎访问代理同时请求访问子节点 6。  When object storage is performed, when there are multiple clients adding objects to the same container at the same time, the sub-blocks in the B-tree structure become hot spots. For example, there may be multiple clients that need to write the sub-block N1 into the B-tree at the same time, which will cause the sub-block N1 in the B-tree structure to become a hot spot. In the object storage system based on the underlying distributed storage engine erected in FIG. 2, a concurrent access request is initiated for multiple clients, so that multiple engine access agents simultaneously request access to the child nodes of the storage engine that store the sub-block N1. , causing concurrent access violations of the underlying storage engine. As shown in FIG. 3, if the sub-block N1 is stored on the sub-node 6, the upper-layer applications APP1, APP2, and APP3 will simultaneously request access to the sub-node 6 through their respective engine access agents.
基于弱一致性的存储引擎采用 B树结构进行维护索引列表时,当底层存储 引擎发生并发访问冲突时,底层分布式存储引擎会向上层应用返回数据冲突响 应,并由上层应用根据具体的业务来选择重新写入。然而当有多个并发冲突时, 则会影响 B树结构的写入性能,甚至导致无法完成数据的重新写入,最终导致 容器内的对象丟失。  When the storage engine based on the weak consistency uses the B-tree structure to maintain the index list, when the underlying storage engine has a concurrent access conflict, the underlying distributed storage engine returns a data conflict response to the upper layer application, and the upper layer application according to the specific service. Choose to rewrite. However, when there are multiple concurrency conflicts, it will affect the write performance of the B-tree structure, and even cause the data to be rewritten, eventually resulting in the loss of objects in the container.
因此,在基于底层分布式存储引擎架设的对象存储系统中,对于成为热点 的数据资源进行并发访问请求时,如何避免并发访问冲突至关重要。 发明内容  Therefore, in an object storage system based on the underlying distributed storage engine, it is important to avoid concurrent access conflicts when concurrent access requests are made to data resources that become hotspots. Summary of the invention
本发明实施例提供一种并发访问请求的处理方法及装置,以避免并发访问 冲突。  Embodiments of the present invention provide a method and apparatus for processing concurrent access requests to avoid concurrent access conflicts.
第一方面,提供一种并发访问请求的处理方法,包括:  In a first aspect, a method for processing a concurrent access request is provided, including:
接收对同一数据资源的至少两个并发访问请求,并对所述至少两个并发访 问请求进行排序; 依照排序后的并发访问请求,依次访问所述同一数据资源。 Receiving at least two concurrent access requests for the same data resource, and sorting the at least two concurrent access requests; The same data resource is accessed in turn according to the sorted concurrent access request.
结合第一方面,在第一种可能的实现方式中,对所述并发访问请求进行排 序,包括:  In conjunction with the first aspect, in a first possible implementation, the concurrent access request is sorted, including:
在一个引擎访问代理中,对所述至少两个并发访问请求进行排序。  The at least two concurrent access requests are ordered in an engine access proxy.
结合第一方面,在第二种可能的实现方式中,所述对所述至少两个并发访 问请求进行排序,包括:  In conjunction with the first aspect, in a second possible implementation, the sorting the at least two concurrent access requests includes:
将存储引擎上的每个数据资源对应哈希键值,构成哈希空间;  Corresponding to a hash key value for each data resource on the storage engine to form a hash space;
划分哈希空间为 N个子哈希空间,其中 N为存储系统中引擎访问代理的 数目 ;  Divide the hash space into N sub-hash spaces, where N is the number of engine access agents in the storage system;
将 N个子哈希空间分配给 N个引擎访问代理,使每一引擎访问代理被分 配一个子哈希空间,且不同引擎访问代理被分配的子哈希空间不同;  Allocating N sub-hash spaces to N engine access agents, so that each engine access agent is assigned a sub-hash space, and different sub-hash spaces allocated by different engine access agents are different;
根据所述至少两个并发访问请求访问的同一数据资源的哈希值所属的子 哈希空间,确定对所述至少两个并发访问请求进行排序的引擎访问代理,得到 排序引擎访问代理;  Determining an engine accessing agent that sorts the at least two concurrent access requests according to a sub-hash space to which the hash value of the same data resource accessed by the at least two concurrent access requests belongs, to obtain a sorting engine accessing the proxy;
将不处于所述排序引擎访问代理上的访问请求,路由到所述排序引擎访问 代理上,由所述排序引擎访问代理,对所述至少两个并发访问请求进行排序。  An access request that is not on the sorting engine access agent is routed to the sorting engine access agent, and the sorting engine accesses the agent to sort the at least two concurrent access requests.
With reference to the second possible implementation of the first aspect, in a third possible implementation, after the hash space is divided into N sub-hash spaces, the method further includes:

monitoring the number of engine access agents, and re-dividing the hash space when the number of engine access agents changes.
With reference to the first possible implementation of the first aspect, in a fourth possible implementation, the same data resource is cached in the engine access agent that sorts the concurrent access requests.
According to a second aspect, an apparatus for processing concurrent access requests is provided, including a receiving unit, a sorting unit, and an access unit, where

the receiving unit is configured to receive at least two concurrent access requests for the same data resource and transmit the at least two concurrent access requests to the sorting unit;

the sorting unit is configured to receive the at least two concurrent access requests transmitted by the receiving unit, sort the at least two concurrent access requests, and transmit the sorted concurrent access requests to the access unit; and

the access unit is configured to receive the sorted concurrent access requests transmitted by the sorting unit and access the same data resource in sequence according to the sorted concurrent access requests.
With reference to the second aspect, in a first possible implementation, the sorting unit is specifically configured to sort the at least two concurrent access requests in one engine access agent.
With reference to the second aspect, in a second possible implementation, the sorting unit is specifically configured to:

map each data resource on the storage engine to a hash key, the keys forming a hash space;

divide the hash space into N sub-hash spaces, where N is the number of engine access agents in the storage system;

allocate the N sub-hash spaces to the N engine access agents, so that each engine access agent is allocated one sub-hash space and different engine access agents are allocated different sub-hash spaces;

determine, according to the sub-hash space to which the hash value of the same data resource accessed by the at least two concurrent access requests belongs, the engine access agent that is to sort the at least two concurrent access requests, to obtain a sorting engine access agent; and

route the access requests that are not on the sorting engine access agent to the sorting engine access agent, so that the sorting engine access agent sorts the at least two concurrent access requests.
With reference to the second possible implementation of the second aspect, in a third possible implementation, the sorting unit is further configured to:

monitor the number of engine access agents, and re-divide the hash space when the number of engine access agents changes.
With reference to the first possible implementation of the second aspect, in a fourth possible implementation, the apparatus further includes a cache unit, where

the cache unit is configured to cache the same data resource in the engine access agent that sorts the concurrent access requests.

With the concurrent access request processing method provided in the first aspect and the concurrent access request processing apparatus provided in the second aspect of the present invention, after at least two concurrent access requests for the same data resource are received, the at least two concurrent access requests are sorted, and the same data resource is accessed in sequence according to the sorted concurrent access requests. This ensures that only one request accesses the corresponding data resource at any moment, thereby avoiding concurrent access conflicts.

DRAWINGS
FIG. 1 is a schematic diagram of the organization of an index table based on a B-tree structure in the prior art;

FIG. 2 is a schematic diagram of the architecture of an object storage system built on an underlying distributed storage engine in the prior art;

FIG. 3 is a schematic diagram of an access conflict between concurrent access requests in the prior art;

FIG. 4 is a schematic flowchart of a concurrent access request processing method according to an embodiment of the present invention;

FIG. 5 is a flowchart of a method for sorting concurrent access requests according to an embodiment of the present invention;

FIG. 6 is a schematic diagram of hash space division according to an embodiment of the present invention;

FIG. 7 is a schematic diagram of a process of sorting concurrent access requests according to an embodiment of the present invention;

FIG. 8 is a schematic structural diagram of a concurrent access request processing apparatus according to an embodiment of the present invention;

FIG. 9 is another schematic structural diagram of a concurrent access request processing apparatus according to an embodiment of the present invention;

FIG. 10 is a schematic diagram of a controller according to an embodiment of the present invention;

FIG. 11 is another schematic diagram of a controller according to an embodiment of the present invention;

FIG. 12 is still another schematic diagram of a controller according to an embodiment of the present invention.

DETAILED DESCRIPTION
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention.
In an object storage system built on an underlying distributed storage engine, different clients may initiate concurrent access requests for the same data resource, so an upper-layer application receives at least two concurrent access requests for that data resource, which are handled by different APPs (upper-layer applications) and engine access agents. In the embodiments of the present invention, the at least two concurrent access requests are processed in the upper-layer application to ensure that only one request accesses the corresponding data resource at any moment, thereby avoiding concurrent access conflicts.
Embodiment 1
Embodiment 1 of the present invention provides a concurrent access request processing method. As shown in FIG. 4, the method includes:
S101: Receive at least two concurrent access requests for the same data resource.
Specifically, for a data resource under hotspot access in an object storage system built on an underlying distributed storage engine, different HTTP-based clients initiate access requests for the data resource, that is, at least two concurrent access requests for the same data resource exist. The upper-layer application receives the at least two concurrent access requests for the same data resource.
S102: Sort the received at least two concurrent access requests.
Specifically, in this embodiment of the present invention, the at least two concurrent access requests for the same data resource received by the upper-layer application may be sorted directly on the different engine access agents that request the data resource from the underlying storage engine, or the access requests on different engine access agents may be routed to one engine access agent for sorting. In the embodiments of the present invention, routing the access requests on different engine access agents to one engine access agent for sorting is preferred, so that no state query between different engine access agents is needed when the concurrent access requests are sorted.
S103: Access the data resource in sequence according to the sorted concurrent access requests.
In this embodiment of the present invention, the at least two concurrent access requests for the same data resource are sorted, and the data resource is accessed in sequence according to the sorted concurrent access requests. This ensures that only one request accesses the data resource at any moment, thereby avoiding concurrent access conflicts.
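By way of illustration only (this sketch is not part of the original disclosure), the order-then-access-in-sequence flow of S101 to S103 can be expressed as a per-resource FIFO queue drained by a single worker, so that concurrent read-modify-write requests are applied one at a time. Python is used here, and all names (`ResourceSerializer`, `submit`, and so on) are illustrative:

```python
import threading
import queue

class ResourceSerializer:
    """Orders concurrent requests for one data resource and applies them one by one."""
    def __init__(self):
        self.requests = queue.Queue()   # FIFO order stands in for the "sorted" sequence
        self.log = []                   # (request_id, result) in execution order
        self.worker = threading.Thread(target=self._drain, daemon=True)
        self.worker.start()

    def submit(self, request_id, operation):
        self.requests.put((request_id, operation))

    def _drain(self):
        while True:
            request_id, operation = self.requests.get()
            self.log.append((request_id, operation()))  # only one request runs at a time
            self.requests.task_done()

serializer = ResourceSerializer()
counter = {"value": 0}

def increment():
    v = counter["value"]          # read-modify-write that would race without ordering
    counter["value"] = v + 1
    return counter["value"]

for i in range(3):                # three "concurrent" requests for the same resource
    serializer.submit(i, increment)
serializer.requests.join()        # wait until every queued request has been applied
print(counter["value"])           # 3
```

Because a single worker drains the queue, no increment is lost; without this serialization, two clients could read the same old value and one update would overwrite the other.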
Embodiment 2
Embodiment 2 of the present invention describes in detail, with reference to a practical application, the concurrent access request processing method of Embodiment 1.
For object storage technologies based on the two-layer container-and-object business model, when more than one client adds objects to one container at the same time, the child data blocks of the index relation table become a hotspot, that is, more than one access request asks to access the index table between the container and the objects. In this embodiment of the present invention, the concurrent access requests for the same index table are sorted so that the same index table undergoes only one read or write operation at any moment, thereby avoiding concurrent access conflicts.
This embodiment of the present invention describes the sorting of access requests in detail; for the other steps of processing the concurrent access requests, refer to Embodiment 1, and details are not repeated here.
In this embodiment of the present invention, the at least two concurrent access requests for the same data resource are sorted in one engine access agent: the access requests on different engine access agents are routed to one engine access agent for sorting, so that no state query between different engine access agents is needed when the concurrent access requests are sorted.
In this embodiment of the present invention, the process of sorting the at least two concurrent access requests in one engine access agent is shown in FIG. 5 and includes:
S201: Map each data resource on the storage engine to a hash key, the keys forming a hash space.

In this embodiment of the present invention, when more than one access request asks to access the index table between the container and the objects, the data resources on the storage engine are hashed according to a hash algorithm that is uniform across the upper-layer applications. Each data resource corresponds to a hash key, and the keys form a hash space: a one-dimensional space that is sufficiently large relative to the number of upper-layer applications (APPs).
S202: Divide the hash space obtained in S201 into N sub-hash spaces.
In this embodiment of the present invention, the number of sub-hash spaces obtained by the division equals the number of engine access agents corresponding to the upper-layer applications. If the number of upper-layer applications is N, the hash space is divided into N parts, yielding N sub-hash spaces. For example, when the upper-layer applications are APP1, APP2, and APP3, the hash space division process is shown in FIG. 6.
Further, when the hash space is divided in this embodiment of the present invention, the division may be equal or unequal, provided that the number of sub-hash spaces equals the number of engine access agents corresponding to the upper-layer applications. For example, if the number of upper-layer applications is N and the hash space can be divided evenly into N parts, equal division may be used, yielding N sub-hash spaces of equal size. If the hash space cannot be divided evenly into N parts, unequal division may be used: the hash space is divided into N - 1 sub-hash spaces of equal size, and the remaining hash space forms one more sub-hash space, finally yielding N sub-hash spaces.
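By way of illustrative example (not part of the original disclosure), the equal/unequal division described above can be sketched in Python; the function name and the choice of half-open intervals are illustrative:

```python
def partition_hash_space(space_size, n_agents):
    """Split [0, space_size) into n_agents contiguous sub-hash spaces.

    When space_size divides evenly, all sub-spaces are equal; otherwise the
    first n_agents - 1 sub-spaces are equal and the last one takes the
    remainder, matching the unequal-division variant described above.
    """
    base = space_size // n_agents
    bounds = []
    start = 0
    for i in range(n_agents):
        end = start + base if i < n_agents - 1 else space_size
        bounds.append((start, end))   # half-open interval [start, end)
        start = end
    return bounds

print(partition_hash_space(2**16, 3))
# [(0, 21845), (21845, 43690), (43690, 65536)]
```

The intervals are contiguous and cover the whole space, so every hash key falls into exactly one sub-hash space, which is what lets each key be owned by exactly one engine access agent.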
S203: Allocate the N sub-hash spaces to the N engine access agents.
In this embodiment of the present invention, a simple cluster may be established among the engine access agents of the upper-layer applications: each engine access agent is numbered, and one sub-hash space is allocated to each engine access agent according to its number. The N sub-hash spaces obtained in S202 are allocated to the N engine access agents, each engine access agent is allocated one sub-hash space, and different engine access agents are allocated different sub-hash spaces.
S204: Determine the hash value of the data resource that the at least two requests access, and determine, according to the sub-hash spaces allocated to the engine access agents, the engine access agent that is to sort the at least two concurrent access requests, to obtain a sorting engine access agent.
After the data resources are hashed in this embodiment of the present invention, each data resource corresponds to one hash value, which belongs to one of the sub-hash spaces obtained in S202, and each sub-hash space is allocated to one engine access agent. Therefore, the engine access agent that is to sort the at least two concurrent access requests can be determined according to the sub-hash space to which the hash value of the same data resource accessed by the at least two concurrent access requests belongs. The determined engine access agent is called the sorting engine access agent, and the sorting engine access agent sorts the at least two concurrent access requests.
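Determining the sorting engine access agent from a resource's hash value can be illustrated as follows (a sketch, not part of the original disclosure; the use of MD5, the 16-bit hash space, and all names are illustrative assumptions). Because the hash algorithm is uniform across the upper-layer applications, every application computes the same owner for the same resource:

```python
import hashlib

SPACE_SIZE = 2**16  # illustrative one-dimensional hash space

def resource_key(resource_name):
    """Uniform hash shared by all upper-layer applications (illustrative choice)."""
    digest = hashlib.md5(resource_name.encode("utf-8")).digest()
    return int.from_bytes(digest[:2], "big")  # key in [0, SPACE_SIZE)

def sorting_agent(resource_name, bounds):
    """Return the index of the engine access agent whose sub-hash space owns the key."""
    key = resource_key(resource_name)
    for agent_index, (start, end) in enumerate(bounds):
        if start <= key < end:
            return agent_index
    raise ValueError("key outside hash space")

# Sub-hash spaces as they would be allocated to three engine access agents.
bounds = [(0, 21845), (21845, 43690), (43690, 65536)]
owner = sorting_agent("container-1/index-table", bounds)
# Every application computes the same owner, so all concurrent requests for
# this resource can be routed to one agent for ordering.
assert owner == sorting_agent("container-1/index-table", bounds)
print(0 <= owner < 3)  # True
```

No state query between agents is needed: ownership is a pure function of the resource's hash value and the shared sub-hash-space allocation.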
S205: Forward the concurrent access requests that are not on the sorting engine access agent to the sorting engine access agent.
In this embodiment of the present invention, every access request is sent to an upper-layer application (APP), so different access requests arrive on different engine access agents. After the sorting engine access agent is determined in S204, the access requests that are not on the determined sorting engine access agent are routed to it, and the sorting engine access agent sorts all the concurrent access requests routed to it. For example, suppose three concurrent access requests need to access a data resource located on storage engine child node 6, and the hash value of that data resource belongs to the sub-hash space of the engine access agent corresponding to APP1. The access requests on the engine access agents corresponding to APP2 and APP3 are then routed to the engine access agent corresponding to APP1, which sorts the three concurrent access requests, as shown in FIG. 7.
In this embodiment of the present invention, through the foregoing processing of concurrent access requests, the requests for the index table of the same container are routed to the engine access agent of one application for sorting, and the index table is accessed in sequence according to the sorted concurrent access requests. This ensures that there is no access conflict, thereby avoiding data conflicts in the underlying distributed storage engine.
Further, in this embodiment of the present invention, after the sub-hash spaces are divided in S202, the number of engine access agents is monitored. When the number of engine access agents changes, the hash space is re-divided, to adapt to upper-layer application nodes exiting or newly joining.
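The monitoring-and-re-division step can be sketched as follows (illustrative only, not part of the original disclosure; `Membership` and `on_change` are assumed names for whatever mechanism watches the agent cluster):

```python
def partition_hash_space(space_size, n_agents):
    # Equal sub-hash spaces, with any remainder folded into the last one.
    base = space_size // n_agents
    return [(i * base, (i + 1) * base if i < n_agents - 1 else space_size)
            for i in range(n_agents)]

class Membership:
    """Re-divides the hash space whenever the set of engine access agents changes."""
    def __init__(self, space_size, agents):
        self.space_size = space_size
        self.agents = list(agents)
        self.bounds = partition_hash_space(space_size, len(self.agents))

    def on_change(self, agents):
        # Called by the monitor when an agent joins or exits.
        if len(agents) != len(self.agents):
            self.bounds = partition_hash_space(self.space_size, len(agents))
        self.agents = list(agents)

m = Membership(60, ["APP1", "APP2", "APP3"])
print(m.bounds)                 # [(0, 20), (20, 40), (40, 60)]
m.on_change(["APP1", "APP2"])   # one upper-layer application node exits
print(m.bounds)                 # [(0, 30), (30, 60)]
```

After re-division, resource ownership is recomputed from the new bounds, so the remaining agents again cover the whole hash space between them.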
Still further, because data is written to an index-table child data block of the B-tree structure on the basis of the previously completed data, in this embodiment of the present invention, when one engine access agent sorts the concurrent access requests, a read/write cache may be added to the engine access agent that sorts the concurrent access requests, to cache the data resource and improve the access speed of the data resource.
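A minimal sketch of such a read/write cache in the sorting engine access agent follows (illustrative only, not part of the original disclosure; the dict-backed store stands in for the underlying storage engine). Because the sorting agent serializes all access to the resource, its cached copy cannot be made stale by a concurrent writer elsewhere:

```python
class SortingAgentCache:
    """Read/write cache held by the agent that orders requests for a resource."""
    def __init__(self, backing_store):
        self.backing_store = backing_store  # stands in for the distributed storage engine
        self.cache = {}

    def read(self, resource):
        if resource not in self.cache:
            self.cache[resource] = self.backing_store.get(resource)  # fill on first read
        return self.cache[resource]

    def write(self, resource, value):
        self.cache[resource] = value            # the next ordered request sees it at once
        self.backing_store[resource] = value    # write through to the engine

store = {"index-table": ["obj1"]}
agent = SortingAgentCache(store)
# A write built on the previously completed data, as with a B-tree child block.
agent.write("index-table", agent.read("index-table") + ["obj2"])
print(store["index-table"])  # ['obj1', 'obj2']
```

Subsequent requests in the sorted sequence read the cached copy instead of fetching the child data block from the engine again, which is where the access-speed improvement comes from.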
Embodiment 3
Based on the concurrent access request processing methods provided in Embodiment 1 and Embodiment 2, an embodiment of the present invention provides a concurrent access request processing apparatus. As shown in FIG. 8, the apparatus includes a receiving unit 801, a sorting unit 802, and an access unit 803, where
the receiving unit 801 is configured to receive at least two concurrent access requests for the same data resource and transmit the at least two concurrent access requests to the sorting unit 802;

the sorting unit 802 is configured to receive the at least two concurrent access requests transmitted by the receiving unit 801, sort the at least two concurrent access requests, and transmit the sorted concurrent access requests to the access unit 803; and

the access unit 803 is configured to receive the sorted concurrent access requests transmitted by the sorting unit 802 and access the same data resource in sequence according to the sorted concurrent access requests.

Specifically, in this embodiment of the present invention, the sorting unit 802 is configured to sort the at least two concurrent access requests in one engine access agent.
The sorting unit 802 is specifically configured to:

map each data resource on the storage engine to a hash key, the keys forming a hash space;

divide the hash space into N sub-hash spaces, where N is the number of engine access agents in the storage system;

allocate the N sub-hash spaces to the N engine access agents, so that each engine access agent is allocated one sub-hash space and different engine access agents are allocated different sub-hash spaces;

determine, according to the sub-hash space to which the hash value of the same data resource accessed by the at least two concurrent access requests belongs, the engine access agent that is to sort the at least two concurrent access requests, to obtain a sorting engine access agent; and

route the access requests that are not on the sorting engine access agent to the sorting engine access agent, so that the sorting engine access agent sorts the at least two concurrent access requests.
Further, in this embodiment of the present invention, the sorting unit 802 is further configured to:

monitor the number of engine access agents, and re-divide the hash space when the number of engine access agents changes.
The concurrent access request processing apparatus provided in this embodiment of the present invention further includes a cache unit 804, as shown in FIG. 9, where the cache unit 804 is configured to cache the same data resource in the engine access agent that sorts the concurrent access requests.
With the concurrent access request processing apparatus provided in this embodiment of the present invention, when at least two concurrent access requests for the same data resource exist, the at least two concurrent access requests are sorted, and the same data resource is accessed in sequence according to the sorted concurrent access requests. This ensures that only one request accesses the corresponding data resource at any moment, thereby avoiding concurrent access conflicts.
The concurrent access request processing apparatus provided in this embodiment of the present invention may be an independent component or may be integrated into another component; for example, it may be an engine access agent, or a new component integrated into an engine access agent.

It should be noted that, for the function implementation and interaction of the modules/units of the concurrent access request processing apparatus in this embodiment of the present invention, further reference may be made to the description of the related method embodiments.
Embodiment 4
Based on the concurrent access request processing method and apparatus provided in the embodiments of the present invention, an embodiment of the present invention provides a controller, which may be applied to an object storage service based on the two-layer container-and-object business model. As shown in FIG. 10, the controller includes a processor 1001 and an I/O interface 1002, where the processor 1001 is configured to receive at least two concurrent access requests for the same data resource, sort the received at least two concurrent access requests, and transmit the sorted concurrent access requests to the I/O interface 1002; and
the I/O interface 1002 is configured to receive the sorted concurrent access requests transmitted by the processor 1001 and output the sorted concurrent access requests.
Further, the processor 1001 is configured to sort the at least two concurrent access requests in one engine access agent.
The processor 1001 is specifically configured to: map each data resource on the storage engine to a hash key, the keys forming a hash space; divide the hash space into N sub-hash spaces, where N is the number of engine access agents; allocate the N sub-hash spaces to the N engine access agents, so that each engine access agent is allocated one sub-hash space and different engine access agents are allocated different sub-hash spaces; determine, according to the sub-hash space to which the hash value of the same data resource accessed by the at least two concurrent access requests belongs, the engine access agent that is to sort the at least two concurrent access requests, to obtain a sorting engine access agent; and route the access requests that are not on the sorting engine access agent to the sorting engine access agent, so that the sorting engine access agent sorts the at least two concurrent access requests.
Further, in this embodiment of the present invention, the controller further includes a monitor 1003. As shown in FIG. 11, the monitor 1003 monitors the number of engine access agents and, when the number of engine access agents changes, sends the processor 1001 an instruction to re-divide the hash space.
Still further, in this embodiment of the present invention, the controller further includes a buffer 1004. As shown in FIG. 12, the buffer 1004 is configured to cache the same data resource in the engine access agent in which the processor 1001 sorts the concurrent access requests.
With the controller provided in this embodiment of the present invention, when at least two concurrent access requests for the same data resource exist, the at least two concurrent access requests are sorted, and the same data resource is accessed in sequence according to the sorted concurrent access requests. This ensures that only one request accesses the corresponding data resource at any moment, thereby avoiding concurrent access conflicts.
Apparently, a person skilled in the art can make various modifications and variations to the present invention without departing from the spirit and scope of the present invention. The present invention is intended to cover these modifications and variations provided that they fall within the scope of the claims of the present invention and their equivalent technologies.

Claims

1. A method for processing concurrent access requests, comprising:

receiving at least two concurrent access requests for the same data resource, and sorting the at least two concurrent access requests; and

accessing the same data resource in sequence according to the sorted concurrent access requests.
2. The method according to claim 1, wherein sorting the concurrent access requests comprises:

sorting the at least two concurrent access requests in one engine access agent.
3. The method according to claim 1, wherein sorting the at least two concurrent access requests comprises:

mapping each data resource on a storage engine to a hash key, the keys forming a hash space;

dividing the hash space into N sub-hash spaces, wherein N is the number of engine access agents in a storage system;

allocating the N sub-hash spaces to the N engine access agents, so that each engine access agent is allocated one sub-hash space and different engine access agents are allocated different sub-hash spaces;

determining, according to the sub-hash space to which the hash value of the same data resource accessed by the at least two concurrent access requests belongs, the engine access agent that is to sort the at least two concurrent access requests, to obtain a sorting engine access agent; and

routing the access requests that are not on the sorting engine access agent to the sorting engine access agent, and sorting, by the sorting engine access agent, the at least two concurrent access requests.
4. The method according to claim 3, wherein after the hash space is divided into N sub-hash spaces, the method further comprises:

monitoring the number of engine access agents, and re-dividing the hash space when the number of engine access agents changes.
5. The method according to claim 2, further comprising:

caching the same data resource in the engine access agent that sorts the concurrent access requests.
6. An apparatus for processing concurrent access requests, comprising a receiving unit, a sorting unit, and an access unit, wherein:

the receiving unit is configured to receive at least two concurrent access requests for the same data resource, and transmit the at least two concurrent access requests to the sorting unit;

the sorting unit is configured to receive the at least two concurrent access requests transmitted by the receiving unit, sort the at least two concurrent access requests, and transmit the sorted concurrent access requests to the access unit; and

the access unit is configured to receive the sorted concurrent access requests transmitted by the sorting unit, and access the same data resource sequentially according to the sorted concurrent access requests.
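The three-unit pipeline of claim 6 can be illustrated with a small sketch: concurrent requests are received, ordered, and then applied to the shared resource one at a time. All names here are illustrative, and FIFO arrival order stands in for whatever ordering the sorting unit applies; the patent does not prescribe this implementation.

```python
import threading
import queue

class ConcurrentAccessDevice:
    """Receiving unit enqueues requests; the queue's order plays the role
    of the sorting unit's output; the access unit drains the queue and
    touches the shared data resource strictly one request at a time."""

    def __init__(self):
        self._q = queue.Queue()      # ordered buffer between the units
        self._log = []               # stands in for the shared data resource
        worker = threading.Thread(target=self._access_unit, daemon=True)
        worker.start()

    def receive(self, request_id):   # receiving unit
        self._q.put(request_id)

    def _access_unit(self):          # access unit: strictly sequential
        while True:
            req = self._q.get()
            self._log.append(req)    # serialized access, no data races
            self._q.task_done()

device = ConcurrentAccessDevice()
threads = [threading.Thread(target=device.receive, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
device._q.join()                     # wait until every request was applied
assert sorted(device._log) == list(range(8))
```

The point of the design is that however many requests arrive concurrently, the data resource only ever sees them one after another, so no request observes a half-applied update from another.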
7. The apparatus according to claim 6, wherein the sorting unit is specifically configured to:

sort the at least two concurrent access requests in one engine access proxy.
8. The apparatus according to claim 6, wherein the sorting unit is specifically configured to:

map each data resource on the storage engine to a hash key value, to form a hash space;

divide the hash space into N sub-hash spaces, where N is the number of engine access proxies in the storage system;

allocate the N sub-hash spaces to N engine access proxies, so that each engine access proxy is assigned one sub-hash space and different engine access proxies are assigned different sub-hash spaces;

determine, according to the sub-hash space to which the hash value of the same data resource accessed by the at least two concurrent access requests belongs, the engine access proxy that sorts the at least two concurrent access requests, to obtain a sorting engine access proxy; and

route the access requests that are not on the sorting engine access proxy to the sorting engine access proxy, where the sorting engine access proxy sorts the at least two concurrent access requests.
9. The apparatus according to claim 8, wherein the sorting unit is further configured to: monitor the number of engine access proxies, and re-divide the hash space when the number of engine access proxies changes.
10. The apparatus according to claim 7, further comprising a caching unit, wherein the caching unit is configured to cache the same data resource in the engine access proxy that sorts the concurrent access requests.
PCT/CN2014/075558 2013-11-07 2014-04-17 Concurrent access request processing method and device WO2015067004A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310549721.3 2013-11-07
CN201310549721.3A CN103634374B (en) 2013-11-07 2013-11-07 Method and device for processing concurrent access requests

Publications (2)

Publication Number Publication Date
WO2015067004A1 WO2015067004A1 (en) 2015-05-14
WO2015067004A9 true WO2015067004A9 (en) 2015-09-03

Family

ID=50214990

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/075558 WO2015067004A1 (en) 2013-11-07 2014-04-17 Concurrent access request processing method and device

Country Status (2)

Country Link
CN (1) CN103634374B (en)
WO (1) WO2015067004A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103634374B (en) * 2013-11-07 2017-04-12 华为技术有限公司 Method and device for processing concurrent access requests
CN105354328B (en) * 2015-11-25 2019-03-26 南京莱斯信息技术股份有限公司 A kind of system and method solving the access conflict of NoSQL database concurrency
CN106649141B (en) * 2016-11-02 2019-10-18 郑州云海信息技术有限公司 A kind of storage interactive device and storage system based on ceph
CN113253933B (en) 2017-04-17 2024-02-09 伊姆西Ip控股有限责任公司 Method, apparatus, and computer readable storage medium for managing a storage system
CN111600940B (en) * 2020-05-06 2022-11-11 中国银行股份有限公司 Distributed session management method and system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101064604B (en) * 2006-04-29 2012-04-18 西门子公司 Remote access process, system and equipment
US9026748B2 (en) * 2011-01-11 2015-05-05 Hewlett-Packard Development Company, L.P. Concurrent request scheduling
WO2011113390A2 (en) * 2011-04-27 2011-09-22 华为技术有限公司 Method and device for improving user access speed of mobile broadband internet
CN103297456B (en) * 2012-02-24 2016-09-28 阿里巴巴集团控股有限公司 Access method and the distributed system of resource is shared under a kind of distributed system
CN102739440A (en) * 2012-05-24 2012-10-17 大唐移动通信设备有限公司 Method and device for accessing hardware device
CN102999377B (en) * 2012-11-30 2015-06-10 北京东方通科技股份有限公司 Service concurrent access control method and device
CN103634374B (en) * 2013-11-07 2017-04-12 华为技术有限公司 Method and device for processing concurrent access requests

Also Published As

Publication number Publication date
WO2015067004A1 (en) 2015-05-14
CN103634374B (en) 2017-04-12
CN103634374A (en) 2014-03-12

Similar Documents

Publication Publication Date Title
EP3361387B1 (en) Data transmission method, equipment and system
CN105144121B (en) Cache content addressable data block is for Storage Virtualization
US9590915B2 (en) Transmission of Map/Reduce data in a data center
US20160132541A1 (en) Efficient implementations for mapreduce systems
US9998531B2 (en) Computer-based, balanced provisioning and optimization of data transfer resources for products and services
CN103155524B (en) The system and method for IIP address is shared between the multiple cores in multiple nucleus system
TW201220197A (en) for improving the safety and reliability of data storage in a virtual machine based on cloud calculation and distributed storage environment
JP6275119B2 (en) System and method for partitioning a one-way linked list for allocation of memory elements
JP2019139759A (en) Solid state drive (ssd), distributed data storage system, and method of the same
WO2015067004A9 (en) Concurrent access request processing method and device
WO2010072083A1 (en) Web application based database system and data management method therof
KR20210075845A (en) Native key-value distributed storage system
CN103312624A (en) Message queue service system and method
US11080207B2 (en) Caching framework for big-data engines in the cloud
JP2005056077A (en) Database control method
US9483523B2 (en) Information processing apparatus, distributed processing system, and distributed processing method
WO2016101662A1 (en) Data processing method and relevant server
CN107493309B (en) File writing method and device in distributed system
US20200394077A1 (en) Map reduce using coordination namespace hardware acceleration
WO2017113277A1 (en) Data processing method, device, and system
Ren et al. Design, implementation, and evaluation of a NUMA-aware cache for iSCSI storage servers
US11714573B1 (en) Storage optimization in a distributed object store
US10824640B1 (en) Framework for scheduling concurrent replication cycles
WO2016197607A1 (en) Method and apparatus for realizing route lookup
KR101512647B1 (en) Method For Choosing Query Processing Engine

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 14860733; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase in: Ref country code: DE
122 Ep: pct application non-entry in european phase (Ref document number: 14860733; Country of ref document: EP; Kind code of ref document: A1)