CN114595167A - Distributed cache system, method and device - Google Patents

Distributed cache system, method and device

Info

Publication number
CN114595167A
CN114595167A (application CN202210138924.2A)
Authority
CN
China
Prior art keywords
storage node
cache
address space
node
segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210138924.2A
Other languages
Chinese (zh)
Inventor
杨丰 (Yang Feng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN202210138924.2A priority Critical patent/CN114595167A/en
Publication of CN114595167A publication Critical patent/CN114595167A/en
Pending legal-status Critical Current

Classifications

    All classifications fall under G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06F ELECTRIC DIGITAL DATA PROCESSING:
    • G06F12/0871 Allocation or management of cache space (under G06F12/0866, caches for peripheral storage systems, e.g. disk cache)
    • G06F12/06 Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
    • G06F12/0873 Mapping of cache memory to specific storage devices or parts thereof
    • G06F3/061 Improving I/O performance (under G06F3/0601, interfaces specially adapted for storage systems)
    • G06F3/0631 Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • G06F3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present specification provide a distributed cache system, method, and apparatus. The distributed cache system includes a management node, a plurality of storage nodes storing cache resources, and a client. The management node obtains cache resource access indexes of each storage node and, based on these indexes, determines a first storage node and a target address space segment that meet a preset overload condition and a second storage node that meets a preset scheduling condition; it sends a creation request for the target address space segment to the second storage node and, after receiving a response message of successful creation returned by the second storage node, sends a mapping establishment request to the client. After receiving the mapping establishment request, the client establishes the link mapping among the target disk, the target address space segment, and the second storage node. This scheme improves the balance of cache resource utilization in the distributed cache system.

Description

Distributed cache system, method and device
Technical Field
Embodiments of the present specification relate to the field of computer technology, and in particular to a distributed cache system.
Background
With the development of computer technology, the cache resources of a distributed cache system comprising a plurality of storage nodes are commonly used to accelerate the read/write requests of various items and thereby improve request-processing efficiency. How the cache resources of the distributed cache system are allocated is therefore crucial.
In the related art, each storage node generally holds an equal amount of cache resources, for example an identical Cache module. Each storage node then corresponds uniformly to a number of address space segments, and the read/write requests for each address space segment access the cache resources of the corresponding storage node, thereby accelerating reads and writes.
In practice, however, different items issue different numbers of read/write requests, so access to the cache resources of the nodes easily becomes unbalanced: some storage nodes are overloaded, while the caches of other storage nodes are underutilized. A more reliable solution is therefore needed.
Disclosure of Invention
In view of this, embodiments of the present specification provide a distributed cache system. One or more embodiments of the present specification also relate to a distributed caching method, a distributed caching apparatus, a computing device, a computer-readable storage medium, and a computer program, so as to address technical deficiencies in the prior art.
According to a first aspect of embodiments of the present specification, there is provided a distributed cache system, including: the system comprises a management node, a plurality of storage nodes and a client, wherein the storage nodes store cache resources;
the management node is configured to obtain cache resource access indexes of each storage node and, based on these indexes, determine a first storage node and a target address space segment that meet a preset overload condition and a second storage node that meets a preset scheduling condition; send a creation request for the target address space segment to the second storage node; and, after receiving a response message of successful creation returned by the second storage node, send a mapping establishment request to the client;
the client is configured to, after receiving the mapping establishment request, create a new link mapping of the target address space segment for a target disk, where the new link mapping is a link mapping among the target disk, the target address space segment created by the second storage node, and the second storage node.
According to a second aspect of the embodiments of the present specification, there is provided a distributed caching method applied to a management node, including:
obtaining cache resource access indexes of each storage node;
determining, based on the cache resource access indexes of the storage nodes, a first storage node and a target address space segment that meet a preset overload condition and a second storage node that meets a preset scheduling condition;
sending a creation request for the target address space segment to the second storage node;
and, after receiving a response message of successful creation returned by the second storage node, sending a mapping establishment request to a client, the mapping establishment request instructing the client to create, upon receipt, a new link mapping of the target address space segment for a target disk, where the new link mapping is a link mapping among the target disk, the target address space segment created by the second storage node, and the second storage node.
According to a third aspect of the embodiments of the present specification, there is provided a distributed caching method applied to a client, including:
receiving a mapping establishment request sent by a management node, wherein the mapping establishment request is sent after the management node obtains cache resource access indexes of each storage node, determines, based on these indexes, a first storage node and a target address space segment that meet a preset overload condition and a second storage node that meets a preset scheduling condition, sends a creation request for the target address space segment to the second storage node, and receives a response message of successful creation returned by the second storage node;
and creating, according to the mapping establishment request, a new link mapping of the target address space segment for the target disk, wherein the new link mapping is a link mapping among the target disk, the target address space segment created by the second storage node, and the second storage node.
According to a fourth aspect of the embodiments of the present specification, there is provided a distributed cache apparatus, applied to a management node, including:
the load performance monitoring module is configured to obtain cache resource access indexes of each storage node, and to determine, based on these indexes, a first storage node and a target address space segment that meet a preset overload condition and a second storage node that meets a preset scheduling condition;
a load balancing scheduling module configured to send a creation request for the target address space segment to the second storage node and, after receiving a response message of successful creation returned by the second storage node, to send a mapping establishment request to a client, the mapping establishment request instructing the client to create, upon receipt, a new link mapping of the target address space segment for a target disk, where the new link mapping is a link mapping among the target disk, the target address space segment created by the second storage node, and the second storage node.
According to a fifth aspect of embodiments in the present specification, there is provided a distributed cache apparatus, applied to a client, including:
the request receiving module is configured to receive a mapping establishment request sent by a management node, wherein the mapping establishment request is sent after the management node obtains cache resource access indexes of each storage node, determines, based on these indexes, a first storage node and a target address space segment that meet a preset overload condition and a second storage node that meets a preset scheduling condition, sends a creation request for the target address space segment to the second storage node, and receives a response message of successful creation returned by the second storage node;
a mapping establishing module configured to create, according to the mapping establishment request, a new link mapping of the target address space segment for a target disk, wherein the new link mapping is a link mapping among the target disk, the target address space segment created by the second storage node, and the second storage node.
According to a sixth aspect of embodiments herein, there is provided a computing device comprising:
a memory and a processor;
the memory is configured to store computer-executable instructions and the processor is configured to execute the computer-executable instructions, which when executed by the processor implement the steps of the distributed caching method described above.
According to a seventh aspect of embodiments herein, there is provided a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the steps of the distributed caching method described above.
According to an eighth aspect of embodiments herein, there is provided a computer program, wherein the computer program, when executed on a computer, causes the computer to perform the steps of the distributed caching method described above.
One embodiment of this specification implements a distributed cache system including a management node, a plurality of storage nodes storing cache resources, and a client. The management node is configured to obtain cache resource access indexes of each storage node and, based on these indexes, determine a first storage node and a target address space segment that meet a preset overload condition and a second storage node that meets a preset scheduling condition; send a creation request for the target address space segment to the second storage node; and, after receiving a response message of successful creation returned by the second storage node, send a mapping establishment request to the client. The client is configured to, after receiving the mapping establishment request, create a new link mapping of the target address space segment for the target disk, where the new link mapping is a link mapping among the target disk, the target address space segment created by the second storage node, and the second storage node. Read/write requests access cache resources according to the link mapping. A new link mapping for load-balanced scheduling can thus be established based on load performance monitoring of the storage nodes, and the read/write requests processed by the target address space segment can be dispatched from the overloaded first storage node to the second storage node by means of the new link mapping for that segment. In this way, read/write requests that would have accessed the cache resources of the first storage node instead access the cache resources of the second storage node, achieving balanced scheduling of storage node load and improving the balance of cache resource utilization in the distributed cache system.
Drawings
Fig. 1 is a schematic structural diagram of a distributed cache system in the related art;
fig. 2 is a schematic structural diagram of a distributed cache system according to an embodiment of the present disclosure;
fig. 3 is a flowchart of a distributed caching method applied to a distributed caching system according to an embodiment of the present specification;
fig. 4 is a diagram illustrating an application scenario of a distributed cache system according to an embodiment of the present specification;
FIG. 5 is a flowchart of a distributed caching method applied to a management node according to an embodiment of the present specification;
fig. 6 is a flowchart of a distributed caching method applied to a client according to an embodiment of the present specification;
fig. 7 is a schematic structural diagram of a distributed cache apparatus applied to a management node according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a distributed caching apparatus applied to a client according to an embodiment of the present specification;
fig. 9 is a block diagram of a computing device according to an embodiment of the present disclosure.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present specification. However, this specification may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; those skilled in the art may make similar generalizations without departing from the spirit and scope of the present disclosure.
The terminology used in this description of one or more embodiments is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, etc. may be used in one or more embodiments herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of one or more embodiments of the present description, "first" may also be referred to as "second" and, similarly, "second" may also be referred to as "first." Depending on the context, the word "if" as used herein may be interpreted as "when," "upon," or "in response to a determination."
First, the terms used in one or more embodiments of the present specification are explained.
Cache: a cache function module in a computing/storage system architecture. Illustratively, based on the spatial and temporal locality of program or IO access, a high-speed front-end cache storage medium such as a Cache can be used to accelerate access to a relatively slow back-end storage medium, so as to meet the user's performance SLA requirements for the computing/storage system. An SLA (Service Level Agreement) is an agreement or contract, accepted by both parties, between a service-providing enterprise and a client concerning the quality, level, and performance of the service.
IO request: a read/write request. I/O is an abbreviation of input/output. For a disk in a storage system, data can be read from the disk through an output port or written to the disk through an input port; hence each read or write request is called an IO request, abbreviated IO.
Segment: a segment of (possibly discontinuous) address space of a block device such as a disk, referred to here as an address space segment. Illustratively, suppose Disk x is 256 GiB in size, the stripe granularity is 128 KiB, and Disk x is managed in segments by 8 storage nodes. The overall 256 GiB space of Disk x can then be scattered evenly, at 128 KiB stripe granularity, across the 8 storage service nodes, each node managing a 32 GiB Segment composed of multiple non-contiguous 128 KiB stripes of the disk's address space. Here, an address space denotes the amount of storage occupied by any computer entity, such as a peripheral, a file, a server, or a networked computer; an address space may be physical or virtual.
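The striping arithmetic in this example can be sketched as follows. This is an illustrative sketch only: the round-robin stripe placement, function names, and constants reflect the 256 GiB / 128 KiB / 8-node figures above, not an actual layout algorithm prescribed by the patent.

```python
# Assumed round-robin placement of 128 KiB stripes over 8 storage nodes.
STRIPE = 128 * 1024          # 128 KiB stripe granularity
DISK_SIZE = 256 * 1024**3    # 256 GiB disk
NUM_NODES = 8

def node_for_offset(byte_offset: int) -> int:
    """Map a byte offset on the disk to the storage node whose Segment holds it."""
    stripe_index = byte_offset // STRIPE
    return stripe_index % NUM_NODES  # round-robin placement (an assumption)

segment_size = DISK_SIZE // NUM_NODES
print(segment_size // 1024**3)        # each node manages a 32 GiB Segment
print(node_for_offset(0))             # stripe 0 lands on node 0
print(node_for_offset(STRIPE))        # stripe 1 lands on node 1
print(node_for_offset(8 * STRIPE))    # stripe 8 wraps back to node 0
```

Each node's 32 GiB Segment is thus made up of every eighth 128 KiB stripe of the disk, which is why the Segment is a non-contiguous slice of the disk's address space.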
Referring to fig. 1, fig. 1 is a schematic structural diagram of a distributed cache system in the related art, which specifically includes: a host and a plurality of storage nodes;
The host is used to run item A0 through item An, and may therefore comprise at least one client. A specified number of block devices can be allocated to each item so that the capacity of those block devices handles the item's read/write requests; for example, item A0 is allocated disk D0 and disk D1. On this basis, to speed up read/write processing through the distributed cache, the total capacity of a disk may be divided, at a preset stripe granularity such as 128 KiB, into a plurality of non-contiguous address space segments that are evenly distributed among the plurality of storage nodes. For example, disk D0 (Disk0) of item A0 in fig. 1 is divided into a plurality of discontinuous address space segments distributed across storage node 0 through storage node m, namely address space segment S0-0, address space segment S0-1, ... address space segment S0-m. Disk D1 is divided into address space segment S1-0, address space segment S1-1, ... address space segment S1-m. The disks of item A1 through item An, for example disks D2 through Dy, are divided in the same way, only over different disks. Thus, when a disk processes a read/write request of an item, the request accesses an address space segment and, through it, the cache resource (e.g., the Cache) of the storage node to which that segment belongs.
In the distributed cache system shown in fig. 1, different items issue different numbers of read/write requests, so the numbers of requests accessing the cache resources of different storage nodes easily become uneven, leaving the storage nodes unevenly loaded. Referring to fig. 1, when the number of read/write requests of item An exceeds a threshold, disk Dx becomes overloaded: the requests processed through its address space segments Sx-0 and Sx-1 (Segment x-0 and Segment x-1) exceed the threshold, the Cache resources of storage node 0 and storage node 1 become insufficient, and the Cache hit rate of the requests processed by Disk x drops. In the system shown in fig. 1, the cache resources of each storage node can be regarded as a high-speed performance layer and the remaining capacity as a low-speed capacity layer. When the cache hit rate falls, some read/write requests fall through to low-speed capacity accesses, so the performance of Disk x degrades sharply; in severe cases it can even drop to zero for some period, i.e., the access hit rate is zero, greatly degrading the item's SLA. Meanwhile, relatively idle storage nodes, such as storage node m, still exist in the distributed cache system. A more reliable solution is therefore needed: a distributed cache system with a higher balance of cache resource utilization.
To this end, in the present specification, a distributed caching system is provided, and the present specification relates to a distributed caching method, a distributed caching apparatus, a computing device, and a computer-readable storage medium, which are described in detail in the following embodiments one by one.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a distributed cache system provided in an embodiment of the present specification, which specifically includes:
a management node 202, a plurality of storage nodes 206 storing cache resources, and a client 204.
In a particular implementation, the client 204 may be configured to run target items, similar to the host in the distributed cache system of fig. 1 above. There may be one or more target items, for example item A0 through item An. The structure of a storage node 206 may be set according to the specific application scenario; for example, a storage node 206 may or may not include a low-speed capacity layer. The management node 202 may likewise be one or more computing devices; this embodiment is not limited in this respect.
The management node 202 is configured to obtain cache resource access indexes of the storage nodes 206 and, based on these indexes, determine a first storage node and a target address space segment that meet a preset overload condition and a second storage node that meets a preset scheduling condition; and to send a creation request for the target address space segment to the second storage node and, after receiving a response message of successful creation returned by the second storage node, send a mapping establishment request to the client.
In a specific application, the cache resource access indexes are generated when read/write requests access the cache resources of a storage node and can represent the utilization of those cache resources. For example, the cache resource access indexes may include at least one of the following: a node cache access index of the storage node, and a segment cache access index of each address space segment corresponding to the storage node. The management node 202 may obtain the cache resource access indexes of each storage node 206 in various ways. For ease of understanding and reasonable layout, the cache resource access indexes and the manner of obtaining them are described in detail below in an optional embodiment.
The first storage node and the target address space segment meeting the preset overload condition indicates that the read/write requests assigned to the target address space segment overload their access to the cache resources of the first storage node; the second storage node meeting the preset scheduling condition indicates that the cache resources of the second storage node are relatively idle. Read/write requests that access the cache resources of the first storage node can therefore be scheduled to the second storage node. To do so, the link mapping, i.e., the access path by which read/write requests reach cache resources, is adjusted: a new link mapping is established among the target disk to which the target address space segment belongs, the target address space segment, and the second storage node.
The client 204 is configured to, after receiving the mapping establishment request, create a new link mapping of the target address space segment for the target disk, where the new link mapping is a link mapping between the target disk, the target address space segment created by the second storage node, and the second storage node.
In a specific application, the link mapping represents the access path by which a read/write request reaches cache resources: according to the link mapping, a read/write request can be sent to the corresponding address space segment for processing, and that processing can access the cache resources of the corresponding storage node, improving processing efficiency through the cache. Thus, in an alternative embodiment, the client 204 is further configured to:
for a read/write request of the target item, determine the target disk that processes the read/write request;
look up the new link mapping containing the target disk;
and send the read/write request, according to the new link mapping, to the target address space segment created by the second storage node for processing.
Here, the target disk may be a disk of the client running the target item. Accordingly, determining the target disk that processes the read/write request may include: obtaining an item identifier contained in the read/write request of the target item, and using it to look up the target disk corresponding to the target item in a pre-established correspondence between disks and items. As in the distributed cache system shown in fig. 1, the target disk is divided into a plurality of address space segments, and the target address space segment is one of them; the difference lies in the link mapping. In this embodiment the read/write request accesses cache resources through the new link mapping, ensuring load-balanced scheduling of the storage nodes and improving the balance of cache resource utilization.
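The three client-side steps above (determine the target disk, look up the new link mapping, dispatch the request) can be sketched as follows. All class, field, and item names here are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical client-side routing via the link mapping described above.
from dataclasses import dataclass

@dataclass(frozen=True)
class LinkMapping:
    disk: str        # target disk, e.g. "Disk-x"
    segment: str     # target address space segment, e.g. "Segment-x0"
    node: str        # storage node now serving the segment's cache

class Client:
    def __init__(self, item_to_disk, link_mappings):
        self.item_to_disk = item_to_disk        # item identifier -> disk
        self.link_mappings = link_mappings      # disk -> LinkMapping

    def route(self, item_id: str) -> LinkMapping:
        disk = self.item_to_disk[item_id]       # step 1: determine target disk
        return self.link_mappings[disk]         # step 2: look up link mapping

client = Client(
    item_to_disk={"An": "Disk-x"},
    # after rebalancing, Segment-x0 of Disk-x is served by storage node m
    link_mappings={"Disk-x": LinkMapping("Disk-x", "Segment-x0", "node-m")},
)
print(client.route("An").node)  # step 3 dispatches the request to node-m's cache
```

Keeping the mapping keyed by disk matches the lookup order in the text: item identifier to disk first, then disk to the new link mapping.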
In one embodiment of the present description, read/write requests access cache resources according to the link mapping. A new link mapping for load-balanced scheduling can thus be established based on load performance monitoring of the storage nodes, and the read/write requests processed by the target address space segment can be dispatched from the overloaded first storage node to the second storage node by means of the new link mapping for that segment. Requests that originally accessed the cache resources of the first storage node thereby access the cache resources of the second storage node instead, achieving balanced scheduling of storage node load and improving the balance of cache resource utilization in the distributed cache system.
For ease of understanding, the following description takes the application of the distributed cache system shown in fig. 3 as an example. Referring to fig. 3, fig. 3 is a flowchart of a distributed caching method applied to a distributed cache system according to an embodiment of the present disclosure; the method specifically includes the following steps:
S302, the management node acquires the cache resource access index of each storage node;
S304, the management node determines, based on the cache resource access indexes of the storage nodes, a first storage node and a target address space segment meeting a preset overload condition, and a second storage node meeting a preset scheduling condition;
S306, the management node sends a creation request for the target address space segment to the second storage node;
S308, the second storage node returns a creation-success response message to the management node;
S310, the management node sends a mapping establishment request to the client;
S312, the client creates a new link mapping of the target address space segment for the target disk, where the new link mapping is the link mapping among the target disk, the target address space segment created by the second storage node, and the second storage node.
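The steps S302-S312 above can be sketched with simplified stand-in objects; all class and method names here are illustrative assumptions, not part of the patent:

```python
# Illustrative sketch of S302-S312: an overloaded node's segment workload is
# rescheduled to an under-loaded node and the client's link mapping is updated.
class StorageNode:
    def __init__(self, name):
        self.name = name
        self.segments = set()

    def create_segment(self, segment):          # S306/S308: handle create request
        self.segments.add(segment)
        return "created"

class Client:
    def __init__(self):
        self.link_map = {}                      # disk -> (segment, storage node)

    def establish_mapping(self, disk, segment, node):   # S310/S312
        self.link_map[disk] = (segment, node.name)

def rebalance(indexes, first, second, segment, client, disk):
    # S302/S304: indexes already gathered; first is overloaded, second is idle
    if indexes[first.name] > indexes[second.name]:
        if second.create_segment(segment) == "created":
            client.establish_mapping(disk, segment, second)

node1, node_m = StorageNode("node-1"), StorageNode("node-m")
cli = Client()
rebalance({"node-1": 0.95, "node-m": 0.40}, node1, node_m, "segment-x-1", cli, "disk-x")
print(cli.link_map)  # {'disk-x': ('segment-x-1', 'node-m')}
```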
The technical solution of this embodiment is the same as that of the distributed cache system described above; for details not described in this embodiment, refer to the description of the technical solution of the distributed cache system.
In an optional embodiment, the cache resource access index includes a node cache access index and a segment cache access index; the preset overload condition includes a first node cache access threshold and a segment cache access condition; and the preset scheduling condition includes a second node cache access threshold;
a management node 202 further configured to:
determining, from the storage nodes, a first storage node whose node cache access index is greater than the first node cache access threshold and a second storage node whose node cache access index is smaller than the second node cache access threshold;
and determining, according to the segment cache access index, a target address space segment meeting the segment cache access condition from the address space segments of the first storage node.
Illustratively, the node cache access index cache_occupancy of storage node 1 is greater than a first node cache access threshold χ, which may be 90% and may be modified, for example through a configuration file, according to specific application requirements; storage node 1 may therefore be determined to be the first storage node. The node cache access index cache_occupancy of storage node m is smaller than a second node cache access threshold γ, which may be 60% and may likewise be modified through a configuration file according to specific application requirements; storage node m may therefore be determined to be the second storage node. This embodiment thus realizes the determination of the first storage node, the second storage node, and the target address space segment through diversified cache resource access indexes and corresponding conditions.
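The node classification just described reduces to two threshold comparisons; a minimal sketch, assuming the default thresholds from the text (90% and 60%) and illustrative node names:

```python
# Sketch: select overloaded first-node candidates (index > chi) and
# scheduling-target second-node candidates (index < gamma).
FIRST_THRESHOLD = 0.90   # chi, default 90%, configurable
SECOND_THRESHOLD = 0.60  # gamma, default 60%, configurable

def classify_nodes(cache_occupancy):
    """cache_occupancy: {node name: node cache access index in [0, 1]}"""
    first = [n for n, v in cache_occupancy.items() if v > FIRST_THRESHOLD]
    second = [n for n, v in cache_occupancy.items() if v < SECOND_THRESHOLD]
    return first, second

indexes = {"node-1": 0.95, "node-2": 0.75, "node-m": 0.40}
print(classify_nodes(indexes))  # (['node-1'], ['node-m'])
```

Note that node-2, whose index falls between the two thresholds, is neither overloaded nor a scheduling target and is left alone.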
also, the segment cache access conditions may be varied. This is explained in more detail below in the form of alternative embodiments.
In an alternative embodiment, the segment cache access condition includes a segment cache access threshold and a cache hit threshold, and the cache resource access index further includes a cache resource hit index;
a management node further configured to:
and determining, from the address space segments of the first storage node, a target address space segment whose segment cache access index is smaller than the segment cache access threshold and whose cache resource hit index is smaller than the cache hit threshold.
Illustratively, the cache resource hit index Segment_cache_hit of address space segment x-1 is smaller than the cache hit threshold α, which may be 30% and may be modified, for example through a configuration file, according to specific application requirements; and the segment cache access index Segment_cache_occupancy of address space segment x-1 is smaller than the segment cache access threshold β, which may be 4% and may likewise be modified according to specific application requirements.
In this embodiment, the target address space segment is determined by both the segment cache access index and the cache resource hit index. This reduces an unbalanced condition in which, because of a relatively large number of read/write requests, one address space segment whose cache resource hit index exceeds the cache hit threshold while its segment cache access index remains below the segment cache access threshold is allocated the cache resources, and the other address space segments lack, or even have no, cache resources to access. This embodiment can therefore further improve the balance of cache resource utilization.
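The conjunctive condition of this embodiment (access index below β AND hit index below α) can be sketched as follows, using the default thresholds named above; segment names and the data layout are illustrative:

```python
# Sketch: pick target segments whose cache usage AND hit rate are both low,
# i.e. segments that hold cache resources without benefiting from them.
ALPHA = 0.30  # cache hit threshold, default 30%, configurable
BETA = 0.04   # segment cache access threshold, default 4%, configurable

def pick_target_segments(segments):
    """segments: {name: (segment_cache_occupancy, segment_cache_hit)}"""
    return [name for name, (occ, hit) in segments.items()
            if occ < BETA and hit < ALPHA]

segs = {"segment-x-1": (0.02, 0.10),   # low usage, low hit rate -> target
        "segment-x-2": (0.30, 0.80)}   # well-used, high hit rate -> keep
print(pick_target_segments(segs))  # ['segment-x-1']
```

The disjunctive variant of the next embodiment would simply replace `and` with `or` in the predicate.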
In another alternative embodiment, the segment cache access condition includes a segment cache access threshold or a cache hit threshold, and the cache resource access index further includes a cache resource hit index;
a management node further configured to:
and determining, from the address space segments of the first storage node, a target address space segment whose segment cache access index is smaller than the segment cache access threshold or whose cache resource hit index is smaller than the cache hit threshold.
This embodiment determines the target address space segment using a single index and condition; the efficiency of determining the target address space segment can therefore be improved, and the overload problem reduced to some extent.
In an optional embodiment, the management node is further configured to:
broadcasting an acquisition request aiming at the cache resource access index to each storage node;
a storage node further configured to:
determining the access index of the cache resource, generating response information containing the access index of the cache resource, and sending the response information to the management node.
Illustratively, the CacheMaster broadcasts a Ping RPC Request to the CacheServer of each storage service node, and each CacheServer returns its cache resource access index in a Ping RPC Response. Ping (Packet Internet Groper) is a program for testing network connectivity; RPC stands for Remote Procedure Call. The cache resource access index may specifically be determined by statistics over cache resource access parameters; for ease of understanding and reasonable layout, this is described later in the form of an optional embodiment.
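The Ping RPC exchange can be sketched with simplified in-process stand-ins for the two roles; the class and method names are hypothetical, and a real deployment would carry these messages over an RPC framework:

```python
# Simplified stand-in for the Ping RPC exchange: the CacheMaster broadcasts
# a request and each CacheServer answers with its cache resource access indexes.
class CacheServer:
    def __init__(self, indexes):
        self._indexes = indexes

    def ping(self):                      # models the Ping RPC Response
        return dict(self._indexes)

class CacheMaster:
    def broadcast_ping(self, servers):   # models the Ping RPC Request broadcast
        return {name: srv.ping() for name, srv in servers.items()}

servers = {"node-1": CacheServer({"cache_occupancy": 0.95}),
           "node-m": CacheServer({"cache_occupancy": 0.40})}
replies = CacheMaster().broadcast_ping(servers)
print(replies["node-1"])  # {'cache_occupancy': 0.95}
```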
In another optional embodiment, the management node is further configured to:
broadcasting an acquisition request aiming at the cache resource access index to each storage node;
a storage node further configured to:
determining cache resource access parameters, generating response information containing the cache resource access parameters, and sending the response information to the management node;
a management node further configured to:
and counting the cache resource access parameters in the response information to obtain cache resource access indexes.
In this embodiment, the management node performs the statistics on the cache resource access parameters, which reduces the resource occupation of the storage nodes, further improves storage node performance, and facilitates cache resource utilization.
In an optional embodiment, the cache resource access index includes at least one of the following: a node cache access index of a storage node, a segment cache access index of an address space segment on the storage node, and a cache resource hit index of an address space segment on the storage node;
the node cache access index includes: the ratio of the accessed cache resource capacity of the storage node to the total cache resource capacity of the storage node;
the segment cache access index includes: the ratio of the cache resource capacity of the storage node accessed by the address space segment to the capacity of the address space segment;
the cache resource hit index includes: the ratio of the number of cache resource hits on the storage node caused by read/write requests processed by the address space segment to the total number of read/write requests processed by the address space segment.
This embodiment provides determination modes for the node cache access index of a storage node, the segment cache access index of an address space segment, and the cache resource hit index of an address space segment; the parameters in each determination mode can be regarded as cache resource access parameters. Illustratively:
the node cache access index, i.e. the overall cache resource occupancy of a storage node: cache_occupancy = cache resource occupied capacity of the storage node ÷ total cache resource capacity;
the segment cache access index, i.e. the cache resource occupation proportion of each Segment: Segment_cache_occupancy = cache resource occupied capacity of the Segment (the amount of cache applied for by its IO requests) ÷ Segment capacity (the size of the Segment itself);
the cache resource hit index: Segment_cache_hit = number of times the Segment's IO requests hit the cache ÷ total number of the Segment's IO requests.
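Written out as code, the three determination modes above amount to simple ratios (a sketch; function names are illustrative, capacities are in bytes, hits and requests are raw counts):

```python
# The three cache resource access indexes as defined above.
def node_cache_occupancy(used_bytes, total_bytes):
    """Overall cache resource occupancy of a storage node."""
    return used_bytes / total_bytes

def segment_cache_occupancy(segment_cache_bytes, segment_bytes):
    """Cache resource occupation proportion of one Segment."""
    return segment_cache_bytes / segment_bytes

def segment_cache_hit(hits, total_requests):
    """Hit rate of one Segment's IO requests against the cache."""
    return hits / total_requests

print(node_cache_occupancy(95, 100))    # 0.95
print(segment_cache_occupancy(4, 100))  # 0.04
print(segment_cache_hit(3, 10))         # 0.3
```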
In an optional embodiment, the management node 202 is further configured to:
sending a delete request for the target address space segment to the first storage node;
a first storage node configured to:
and deleting the target address space segment in the first storage node according to the deletion request, and returning deletion success feedback aiming at the target address space segment to the management node.
Illustratively, the CacheServer of storage node 1 receives the destroy request for Segment x-1 and, after the destroy (i.e., the deletion in this embodiment) is completed, returns a destroy-success feedback to the CacheMaster. In this way, the memory resources occupied in the client by a useless link mapping can be saved, and access errors that a useless link mapping may cause can be reduced.
The following further describes the distributed caching method with reference to fig. 4, taking an application of the distributed cache system provided in this specification as an example. Fig. 4 shows an application scenario example of a distributed cache system provided in an embodiment of the present specification. The distributed cache system shown in fig. 4 includes: MultiClient, a multi-path client mainly responsible for maintaining and managing the data link mappings from the host to each Segment of each storage node; CacheMaster, the cache resource load-balancing scheduling center, i.e., the management node, mainly responsible for monitoring the overall cache resource occupancy of each storage service node and the cache resource occupancy and hit rate of the related Segments, and for initiating balanced scheduling among storage nodes for the related Segments when the three meet certain conditions (explained in the following flow description); and CacheServer, the cache function service module providing the cache service, mainly responsible for providing IO acceleration capability for the Segments of a storage node.
The distributed caching method applied to the distributed caching system shown in fig. 4 specifically includes the following steps:
S1, the CacheMaster broadcasts a Ping RPC Request to the CacheServer of each storage service node (Ping, Packet Internet Groper, is a program for testing network connectivity; RPC stands for Remote Procedure Call), and each CacheServer returns, in its Ping RPC Response, the node's overall cache resource occupancy cache_occupancy, the cache resource occupation proportion of each Segment, and the hit rate Segment_cache_hit of each Segment. These key quantities are calculated as follows: cache_occupancy = cache resource occupied capacity of the node ÷ total cache resource capacity; Segment_cache_occupancy = cache resource occupied capacity of the Segment ÷ Segment capacity; Segment_cache_hit = number of times the Segment's IO requests hit the cache ÷ total number of the Segment's IO requests.
S2, the node cache access index cache_occupancy of storage node 1 is greater than the first node cache access threshold χ (default 90%, configuration modification supported), the Segment_cache_hit of Segment x-1 is lower than the threshold α (default 30%, configuration modification supported), and the Segment_cache_occupancy of Segment x-1 is lower than the threshold β (default 4%, configuration modification supported); meanwhile, the node cache access index cache_occupancy of storage node m is smaller than the second node cache access threshold γ (default 60%, configuration modification supported).
S3, the CacheMaster receives the cache occupancy and Segment hit-rate conditions returned by each CacheServer; if the returned indexes meet the conditions in S2, the CacheMaster prepares to initiate balanced scheduling for Segment x-1;
S4, the CacheMaster sends a request to storage node m to create a new Segment x-1;
S5, the CacheServer of storage node m receives the creation request for Segment x-1 and, after the creation is completed, returns a Segment-creation success to the CacheMaster;
S6, after receiving the Segment x-1 creation-success response returned by the CacheServer of storage node m, the CacheMaster sends a request to the MultiClient of the host to establish a new link mapping of Segment x-1 for Disk x;
S7, the MultiClient creates the new link mapping of Segment x-1 for Disk x, updates the link mapping information, and issues new related IO to the new Segment x-1 located on storage node m;
S8, the CacheMaster sends a deletion request for the original Segment x-1 to the CacheServer of storage node 1;
S9, the CacheServer of storage node 1 receives the destroy request for Segment x-1 and, after the destroy is completed, returns a Segment-destroy success feedback to the CacheMaster.
Referring to fig. 5, fig. 5 is a flowchart illustrating a distributed caching method applied to a management node according to an embodiment of the present specification, which specifically includes the following steps.
S502, obtaining the cache resource access index of each storage node;
S504, determining, based on the cache resource access indexes of the storage nodes, a first storage node and a target address space segment meeting a preset overload condition, and a second storage node meeting a preset scheduling condition;
S506, sending a creation request for the target address space segment to the second storage node;
S508, after receiving a creation-success response message returned by the second storage node, sending a mapping establishment request to the client, where the mapping establishment request is used by the client, after receiving it, to create a new link mapping of the target address space segment for the target disk, the new link mapping being the link mapping among the target disk, the target address space segment created by the second storage node, and the second storage node.
In one embodiment of the present description, the new link mapping is the link mapping among the target disk, the target address space segment created by the second storage node, and the second storage node, and read/write requests access cache resources according to the link mapping. A new link mapping for load-balancing scheduling can therefore be established based on load performance monitoring of the storage nodes, and, for the target address space segment, the read/write requests it processes can be dispatched from the overloaded first storage node to the second storage node using the new link mapping. A read/write request that would have accessed cache resources in the first storage node thus accesses cache resources in the second storage node instead, realizing balanced scheduling of storage node load and improving the balance of cache resource utilization in the distributed cache system.
The foregoing is an exemplary scheme of a distributed caching method according to this embodiment. It should be noted that the technical solution of the distributed caching method and the technical solution of the distributed caching system belong to the same concept, and details that are not described in detail in the technical solution of the distributed caching method can be referred to the description of the technical solution of the distributed caching system.
Referring to fig. 6, fig. 6 is a flowchart illustrating a distributed caching method applied to a client according to an embodiment of the present specification, which includes the following steps.
S602, receiving a mapping establishment request sent by a management node, where the mapping establishment request is sent after the management node acquires the cache resource access index of each storage node, determines, based on those indexes, a first storage node and a target address space segment meeting a preset overload condition and a second storage node meeting a preset scheduling condition, sends a creation request for the target address space segment to the second storage node, and receives a creation-success response message returned by the second storage node;
S604, creating, according to the mapping establishment request, a new link mapping of the target address space segment for the target disk, where the new link mapping is the link mapping among the target disk, the target address space segment created by the second storage node, and the second storage node.
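The client-side half of this method amounts to replacing the stale entry in the disk's link mapping so that subsequent IO routes to the second storage node; a minimal sketch with hypothetical names:

```python
# Sketch of S602-S604 on the client: swap the disk's link mapping from the
# old segment on the first node to the new segment on the second node.
class MultiClient:
    def __init__(self):
        self.link_map = {}   # disk -> (address space segment, storage node)

    def handle_mapping_request(self, disk, new_segment, second_node):
        # replace any stale mapping so new IO is issued to the second node
        self.link_map[disk] = (new_segment, second_node)

cli = MultiClient()
cli.link_map["disk-x"] = ("segment-x-1", "node-1")           # old mapping
cli.handle_mapping_request("disk-x", "segment-x-1", "node-m")  # S604
print(cli.link_map["disk-x"])  # ('segment-x-1', 'node-m')
```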
In one embodiment of the present description, the new link mapping is the link mapping among the target disk, the target address space segment created by the second storage node, and the second storage node, and read/write requests access cache resources according to the link mapping. A new link mapping for load-balancing scheduling can therefore be established based on load performance monitoring of the storage nodes, and, for the target address space segment, the read/write requests it processes can be dispatched from the overloaded first storage node to the second storage node using the new link mapping. A read/write request that would have accessed cache resources in the first storage node thus accesses cache resources in the second storage node instead, realizing balanced scheduling of storage node load and improving the balance of cache resource utilization in the distributed cache system.
The foregoing is a schematic scheme of a distributed caching method according to this embodiment. It should be noted that the technical solution of the distributed caching method and the technical solution of the distributed caching system belong to the same concept, and details that are not described in detail in the technical solution of the distributed caching method can be referred to the description of the technical solution of the distributed caching system.
Corresponding to the foregoing method embodiment, the present specification further provides an embodiment of a distributed cache apparatus, and fig. 7 illustrates a schematic structural diagram of a distributed cache apparatus applied to a management node according to an embodiment of the present specification. As shown in fig. 7, the apparatus includes:
a load performance monitoring module 702, configured to obtain the cache resource access index of each storage node, and to determine, based on the cache resource access indexes of the storage nodes, a first storage node and a target address space segment meeting a preset overload condition and a second storage node meeting a preset scheduling condition;
a load balancing scheduling module 704, configured to send a creation request for the target address space segment to the second storage node and, after receiving a creation-success response message returned by the second storage node, to send a mapping establishment request to the client, where the mapping establishment request is used by the client, after receiving it, to create a new link mapping of the target address space segment for the target disk, the new link mapping being the link mapping among the target disk, the target address space segment created by the second storage node, and the second storage node.
In one embodiment of the present description, the new link mapping is the link mapping among the target disk, the target address space segment created by the second storage node, and the second storage node, and read/write requests access cache resources according to the link mapping. A new link mapping for load-balancing scheduling can therefore be established based on load performance monitoring of the storage nodes, and, for the target address space segment, the read/write requests it processes can be dispatched from the overloaded first storage node to the second storage node using the new link mapping. A read/write request that would have accessed cache resources in the first storage node thus accesses cache resources in the second storage node instead, realizing balanced scheduling of storage node load and improving the balance of cache resource utilization in the distributed cache system.
The foregoing is an exemplary scheme of a distributed cache apparatus in this embodiment. It should be noted that the technical solution of the distributed caching apparatus and the technical solution of the distributed caching method applied to the management node belong to the same concept, and details of the technical solution of the distributed caching apparatus, which are not described in detail, can be referred to the description of the technical solution of the distributed caching method applied to the management node.
Fig. 8 shows a schematic structural diagram of a distributed caching apparatus applied to a client according to an embodiment of the present specification. As shown in fig. 8, the apparatus includes:
a request receiving module 802, configured to receive a mapping establishment request sent by a management node, where the mapping establishment request is sent after the management node acquires the cache resource access index of each storage node, determines, based on those indexes, a first storage node and a target address space segment meeting a preset overload condition and a second storage node meeting a preset scheduling condition, sends a creation request for the target address space segment to the second storage node, and receives a creation-success response message returned by the second storage node;
a mapping establishing module 804 configured to create a new link mapping of the target address space segment for the target disk according to the mapping establishing request, wherein the new link mapping is a link mapping between the target disk, the target address space segment created by the second storage node, and the second storage node.
In one embodiment of the present description, the new link mapping is the link mapping among the target disk, the target address space segment created by the second storage node, and the second storage node, and read/write requests access cache resources according to the link mapping. A new link mapping for load-balancing scheduling can therefore be established based on load performance monitoring of the storage nodes, and, for the target address space segment, the read/write requests it processes can be dispatched from the overloaded first storage node to the second storage node using the new link mapping. A read/write request that would have accessed cache resources in the first storage node thus accesses cache resources in the second storage node instead, realizing balanced scheduling of storage node load and improving the balance of cache resource utilization in the distributed cache system.
The foregoing is an exemplary scheme of a distributed caching apparatus applied to a client according to this embodiment. It should be noted that the technical solution of the distributed caching apparatus and the technical solution of the distributed caching method applied to the client belong to the same concept, and details of the technical solution of the distributed caching apparatus, which are not described in detail, can be referred to the description of the technical solution of the distributed caching method applied to the client.
FIG. 9 illustrates a block diagram of a computing device, according to one embodiment of the present description. Components of the computing device 900 include, but are not limited to, a memory 910 and a processor 920. The processor 920 is coupled to the memory 910 via a bus 930, and a database 950 is used to store data.
Computing device 900 also includes an access device 940 that enables computing device 900 to communicate via one or more networks 960. Examples of such networks include a public switched telephone network (PSTN), a local area network (LAN), a wide area network (WAN), a personal area network (PAN), or a combination of communication networks such as the Internet. Access device 940 may include one or more of any type of network interface, wired or wireless (e.g., a network interface controller (NIC)), such as an IEEE 802.11 wireless local area network (WLAN) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a near field communication (NFC) interface, and so forth.
In one embodiment of the present description, the above-described components of computing device 900, as well as other components not shown in FIG. 9, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device structure shown in FIG. 9 is for purposes of example only and is not limiting as to the scope of the description. Those skilled in the art may add or replace other components as desired.
Computing device 900 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), a mobile phone (e.g., smartphone), a wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 900 may also be a mobile or stationary server.
Wherein the processor 920 is configured to execute computer-executable instructions, which when executed by the processor, implement the steps of the distributed caching method described above.
The above is an illustrative scheme of a computing device of the present embodiment. It should be noted that the technical solution of the computing device and the technical solution of the distributed caching method belong to the same concept, and details that are not described in detail in the technical solution of the computing device can be referred to the description of the technical solution of the distributed caching method.
An embodiment of the present specification further provides a computer-readable storage medium storing computer-executable instructions, which when executed by a processor implement the steps of the distributed caching method described above.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium belongs to the same concept as the technical solution of the above-mentioned distributed caching method, and details that are not described in detail in the technical solution of the storage medium can be referred to the description of the technical solution of the above-mentioned distributed caching method.
An embodiment of the present specification further provides a computer program, wherein when the computer program is executed in a computer, the computer is caused to execute the steps of the distributed caching method.
The above is an illustrative scheme of a computer program of the present embodiment. It should be noted that the technical solution of the computer program and the technical solution of the distributed caching method belong to the same concept, and details that are not described in detail in the technical solution of the computer program can be referred to the description of the technical solution of the distributed caching method.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
It should be noted that, for simplicity of description, the foregoing method embodiments are presented as a series of acts; however, those skilled in the art will appreciate that the present embodiment is not limited by the order of acts described, because some steps may be performed in other orders or simultaneously according to the present embodiment. Further, those skilled in the art will also appreciate that the embodiments described in this specification are preferred embodiments, and the acts and modules involved are not necessarily required by every embodiment of the specification.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
The preferred embodiments of the present specification disclosed above are intended only to aid in describing the specification. Alternative embodiments are not described exhaustively, and the invention is not limited to the precise embodiments disclosed; obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the embodiments and their practical application, thereby enabling others skilled in the art to understand and utilize the embodiments. The specification is limited only by the claims and their full scope and equivalents.

Claims (13)

1. A distributed cache system, comprising: a management node, a plurality of storage nodes for storing cache resources, and a client;
the management node is configured to obtain cache resource access indexes of each storage node, and to determine, based on the cache resource access indexes of each storage node, a first storage node and a target address space segment that meet a preset overload condition and a second storage node that meets a preset scheduling condition; send a creation request for the target address space segment to the second storage node, and, after receiving a response message indicating successful creation returned by the second storage node, send a mapping establishment request to the client;
the client is configured to create, after receiving the mapping establishment request, a new link mapping of the target address space segment for a target disk, where the new link mapping is a link mapping among the target disk, the target address space segment created by the second storage node, and the second storage node.
2. The system of claim 1, wherein the cache resource access index comprises: a node cache access index and a segment cache access index; the preset overload condition comprises: a first node cache access threshold and a segment cache access condition; and the preset scheduling condition comprises: a second node cache access threshold;
the management node is further configured to:
determine, from the storage nodes, a first storage node whose node cache access index is greater than the first node cache access threshold and a second storage node whose node cache access index is less than the second node cache access threshold;
and determine, according to the segment cache access index, a target address space segment that meets the segment cache access condition among the address space segments of the first storage node.
3. The system of claim 2, wherein the segment cache access condition comprises: a segment cache access threshold and a cache hit threshold; and the cache resource access index further comprises: a cache resource hit index;
the management node is further configured to:
determine, from the address space segments of the first storage node, a target address space segment whose segment cache access index is less than the segment cache access threshold and whose cache resource hit index is less than the cache hit threshold.
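As a non-limiting illustration of the selection logic recited in claims 2 and 3, the sketch below picks the first storage node, the second storage node, and the target address space segment from reported indexes. All identifiers, data-structure shapes, and threshold values are hypothetical assumptions for illustration, not taken from the claims.

```python
# Sketch of the node/segment selection in claims 2-3.
# Field names ("node_access", "seg_access", "hit_ratio") are assumptions.

def select_nodes_and_segment(nodes, first_threshold, second_threshold,
                             seg_access_threshold, hit_threshold):
    """nodes: list of dicts, each with a 'node_access' index and a
    'segments' list of dicts carrying 'seg_access' and 'hit_ratio'."""
    # Claim 2: first storage node exceeds the first node cache access
    # threshold; second storage node falls below the second threshold.
    first = next((n for n in nodes if n["node_access"] > first_threshold), None)
    second = next((n for n in nodes if n["node_access"] < second_threshold), None)
    if first is None or second is None:
        return None  # no overloaded node, or no node eligible for scheduling
    # Claim 3: target segment has both a low segment cache access index
    # and a low cache resource hit index.
    target = next((s for s in first["segments"]
                   if s["seg_access"] < seg_access_threshold
                   and s["hit_ratio"] < hit_threshold), None)
    return first, second, target
```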
4. The system of any one of claims 1 to 3, wherein the management node is further configured to:
broadcast an acquisition request for the cache resource access index to each storage node;
and the storage node is further configured to:
determine the cache resource access index, generate response information containing the cache resource access index, and send the response information to the management node.
5. The system of claim 1, wherein the cache resource access index comprises at least one of: a node cache access index of the storage node, a segment cache access index of an address space segment with respect to the storage node, and a cache resource hit index of an address space segment with respect to the storage node;
the node cache access index comprises: the ratio of the cache resource access volume of the storage node to the total cache resource capacity of the storage node;
the segment cache access index comprises: the ratio of the cache resource access volume of the address space segment on the storage node to the capacity of the address space segment;
the cache resource hit index comprises: the ratio of the number of cache resource hits on the storage node caused by the read/write requests processed by the address space segment to the total number of read/write requests processed by the address space segment.
6. The system of any one of claims 1-3 and 5, wherein the management node is further configured to:
send a deletion request for the target address space segment to the first storage node;
and the first storage node is configured to:
delete the target address space segment in the first storage node according to the deletion request, and return deletion success feedback for the target address space segment to the management node.
7. The system of any one of claims 1-3 and 5, wherein the client is further configured to:
determine, for a read/write request of a target item, a target disk for processing the read/write request;
search for the new link mapping containing the target disk;
and send the read/write request, according to the new link mapping, to the target address space segment created by the second storage node for processing.
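The client-side routing of claim 7 can be sketched as below, assuming a hypothetical mapping structure from target disk to a (storage node, address space segment) pair; the request and return shapes are likewise illustrative assumptions.

```python
# Sketch of claim 7's client routing: look up the new link mapping for
# the request's target disk, then address the request to the target
# address space segment on the second storage node.

def route_request(request, link_maps):
    """request: dict with 'target_disk' and 'payload'.
    link_maps: dict mapping disk id -> (storage_node_id, segment_id)."""
    disk = request["target_disk"]
    node, segment = link_maps[disk]  # the new link mapping for this disk
    # Return the routing decision; a real client would now send the
    # payload to that node/segment over the network.
    return {"node": node, "segment": segment, "payload": request["payload"]}
```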
8. A distributed caching method, applied to a management node, comprising:
obtaining cache resource access indexes of each storage node;
determining, based on the cache resource access indexes of each storage node, a first storage node and a target address space segment that meet a preset overload condition and a second storage node that meets a preset scheduling condition;
sending a creation request for the target address space segment to the second storage node;
and after receiving a response message indicating successful creation returned by the second storage node, sending a mapping establishment request to a client, wherein the mapping establishment request is used by the client to create, after receiving the mapping establishment request, a new link mapping of the target address space segment for a target disk, the new link mapping being a link mapping among the target disk, the target address space segment created by the second storage node, and the second storage node.
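The method of claim 8 can be sketched end to end with in-memory stubs standing in for the storage node and client interactions; every class and method name here is a hypothetical placeholder, not an API from the patent.

```python
# Stubbed flow of claim 8: the second storage node creates the target
# address space segment, and only on success does the client establish
# the new link mapping. All identifiers are illustrative assumptions.

class StorageNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.segments = set()

    def create_segment(self, segment_id):
        # The second storage node creates the target address space
        # segment and returns a success response.
        self.segments.add(segment_id)
        return True

class Client:
    def __init__(self):
        self.link_maps = {}

    def build_link_map(self, disk, node, segment):
        # New link mapping: target disk -> (second node, new segment).
        self.link_maps[disk] = (node, segment)

def rebalance(second_node, client, disk, target_segment):
    # Send the creation request to the second storage node.
    if second_node.create_segment(target_segment):
        # On a success response, send the mapping establishment request
        # so the client creates the new link mapping.
        client.build_link_map(disk, second_node.node_id, target_segment)
        return True
    return False
```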
9. A distributed caching method, applied to a client, comprising:
receiving a mapping establishment request sent by a management node, wherein the mapping establishment request is sent by the management node after the management node obtains cache resource access indexes of each storage node, determines, based on the cache resource access indexes of each storage node, a first storage node and a target address space segment that meet a preset overload condition and a second storage node that meets a preset scheduling condition, sends a creation request for the target address space segment to the second storage node, and receives a response message indicating successful creation returned by the second storage node;
and creating, according to the mapping establishment request, a new link mapping of the target address space segment for a target disk, wherein the new link mapping is a link mapping among the target disk, the target address space segment created by the second storage node, and the second storage node.
10. A distributed caching device, applied to a management node, comprising:
a load performance monitoring module configured to obtain cache resource access indexes of each storage node, and to determine, based on the cache resource access indexes of each storage node, a first storage node and a target address space segment that meet a preset overload condition and a second storage node that meets a preset scheduling condition;
a load balancing scheduling module configured to send a creation request for the target address space segment to the second storage node, and, after receiving a response message indicating successful creation returned by the second storage node, to send a mapping establishment request to a client, wherein the mapping establishment request is used by the client to create, after receiving the mapping establishment request, a new link mapping of the target address space segment for a target disk, the new link mapping being a link mapping among the target disk, the target address space segment created by the second storage node, and the second storage node.
11. A distributed caching device, applied to a client, comprising:
a mapping establishment request receiving module configured to receive a mapping establishment request sent by a management node, wherein the mapping establishment request is sent by the management node after the management node obtains cache resource access indexes of each storage node, determines, based on the cache resource access indexes of each storage node, a first storage node and a target address space segment that meet a preset overload condition and a second storage node that meets a preset scheduling condition, sends a creation request for the target address space segment to the second storage node, and receives a response message indicating successful creation returned by the second storage node;
a mapping establishment module configured to create, according to the mapping establishment request, a new link mapping of the target address space segment for a target disk, wherein the new link mapping is a link mapping among the target disk, the target address space segment created by the second storage node, and the second storage node.
12. A computing device, comprising:
a memory and a processor;
the memory is configured to store computer-executable instructions, and the processor is configured to execute the computer-executable instructions which, when executed by the processor, implement the steps of the distributed caching method of claim 8 or claim 9.
13. A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the steps of the distributed caching method of claim 8 or claim 9.
CN202210138924.2A 2022-02-15 2022-02-15 Distributed cache system, method and device Pending CN114595167A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210138924.2A CN114595167A (en) 2022-02-15 2022-02-15 Distributed cache system, method and device

Publications (1)

Publication Number Publication Date
CN114595167A true CN114595167A (en) 2022-06-07

Family

ID=81806165

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210138924.2A Pending CN114595167A (en) 2022-02-15 2022-02-15 Distributed cache system, method and device

Country Status (1)

Country Link
CN (1) CN114595167A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116048413A (en) * 2023-02-08 2023-05-02 苏州浪潮智能科技有限公司 IO request processing method, device and system for multipath storage and storage medium
CN117194439A (en) * 2023-11-07 2023-12-08 杭州优云科技有限公司 Method for creating resource storage system, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103761298A (en) * 2014-01-20 2014-04-30 华东师范大学 Distributed-architecture-based entity matching method
US20160026667A1 (en) * 2014-07-22 2016-01-28 Oracle International Corporation Memory-aware joins based in a database cluster
US9880933B1 (en) * 2013-11-20 2018-01-30 Amazon Technologies, Inc. Distributed in-memory buffer cache system using buffer cache nodes
CN113377530A (en) * 2021-05-31 2021-09-10 阿里巴巴新加坡控股有限公司 Load balancing method, system and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination