CN115729438A - Data access method, device and storage medium

Info

Publication number
CN115729438A
Authority
CN
China
Prior art keywords
application
cache
data
task
resource
Prior art date
Legal status
Pending
Application number
CN202111010014.8A
Other languages
Chinese (zh)
Inventor
李秀桥
孙宏伟
丁肇辉
高帅
江喆
陈强
Current Assignee
XFusion Digital Technologies Co Ltd
Original Assignee
XFusion Digital Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by XFusion Digital Technologies Co Ltd
Priority to CN202111010014.8A
Priority to PCT/CN2022/095010 (WO2023029610A1)
Publication of CN115729438A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806 Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0811 Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies
    • G06F12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, for peripheral storage systems, e.g. disk cache
    • G06F12/0871 Allocation or management of cache space
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A data access method includes the following steps: scheduling cache resources for a first application according to a cache policy for the first application submitted by a user, and prefetching data into the cache resources of the first application according to mapping directory information submitted by the user; subsequently, while the first application runs, accessing the cache resources of the first application according to the cache policy. In this way, the user's requirements are perceived, the application's resource usage is controlled according to those requirements, and application performance is improved.

Description

Data access method, device and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a data access method, an apparatus, and a storage medium.
Background
In data centers, large cluster systems are often employed to provide a shared application execution environment for multiple users. Such a cluster system typically includes a management node and a plurality of computing nodes. For any application to be run, the management node allocates a corresponding computing node to the application, and that computing node then runs the application. As a result, the user cannot control how the application uses the computing node's resources, so application performance may fail to meet the user's requirements.
Disclosure of Invention
The present application provides a data access method, apparatus, and storage medium that can control an application's resource usage according to a user's requirements, so that application performance meets those requirements. The technical solutions are as follows:
In a first aspect, a data access method is provided, the method including: receiving a cache configuration request for a first application submitted by a user, where the cache configuration request includes a cache policy and mapping directory information, the cache policy indicates the cache requirement of the first application, and the mapping directory information is information about a first directory in a storage system in which application data of the first application is stored; scheduling cache resources for the first application according to the cache policy; prefetching application data of the first application into the cache resources of the first application according to the mapping directory information; and, while the first application is running, accessing the cache resources of the first application according to the cache policy.
As can be seen from the above description, cache resources are scheduled for the first application according to the cache policy for the first application submitted by the user, and data is prefetched into the cache resources of the first application according to the mapping directory information submitted by the user. Subsequently, while the first application runs, its cache resources are accessed according to the cache policy. The data access method provided by the present application can therefore perceive the user's requirements and control the application's resource usage accordingly, which improves application performance.
In a possible implementation manner, the implementation process of scheduling a cache resource for the first application according to the cache policy includes: determining resource demand information of each task in a plurality of tasks of the first application according to the cache policy, wherein the resource demand information comprises the size of cache space required by each task and the type of a storage medium; and allocating a cache space for each task according to the resource demand information of each task.
In the present application, the first application may be divided into a plurality of tasks, so a user may specify resource requirement information for each task of the first application, which controls the cluster system to allocate a cache space to the corresponding task according to that information. The resource requirement information of different tasks may be the same or different.
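To make this task-granularity scheduling concrete, the following Python sketch derives per-task resource demands from a cache policy and records a cache-space reservation for each task. The field names, the ResourceDemand layout, and the dictionary-based policy format are assumptions chosen for illustration; the embodiment only requires that the policy convey the cache-space size and storage-medium type per task.

```python
from dataclasses import dataclass

@dataclass
class ResourceDemand:
    task_id: str
    cache_size_mb: int          # size of the cache space required by the task
    media_type: str             # e.g. "DRAM" or "SCM"

def schedule_cache_resources(cache_policy: dict) -> dict:
    """Derive per-task demands from the cache policy and reserve space.

    `cache_policy["tasks"]` is a hypothetical layout; the patent only states
    that the policy indicates a size and storage-medium type per task.
    """
    allocations = {}
    for task in cache_policy["tasks"]:
        demand = ResourceDemand(task["id"], task["cache_size_mb"], task["media_type"])
        # Reserve a cache space of the requested size on the requested medium.
        allocations[demand.task_id] = {
            "media": demand.media_type,
            "size_mb": demand.cache_size_mb,
        }
    return allocations

# Example: two tasks with different demands (demands may also be identical).
policy = {"tasks": [
    {"id": "task-0", "cache_size_mb": 4096, "media_type": "DRAM"},
    {"id": "task-1", "cache_size_mb": 8192, "media_type": "SCM"},
]}
print(schedule_cache_resources(policy))
```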
In a possible implementation manner, the mapping directory information includes a directory path of the first directory, and the implementation process of prefetching the application data of the first application into the cache resource of the first application according to the mapping directory information includes: determining a directory identifier of a subdirectory corresponding to each task in a plurality of tasks of the first application; acquiring data under the subdirectory corresponding to each task stored under the first directory from the storage system according to the directory path of the first directory and the directory identifier of the subdirectory corresponding to each task; and storing the data under the subdirectory corresponding to each task into the cache resource of the first application.
Because the data of the first application is prefetched into the cache resources allocated to the first application based on the mapping directory information specified by the user, the user does not need to copy data explicitly, which reduces the complexity of use.
In a possible implementation manner, the process of accessing the cache resources of the first application according to the cache policy includes: when the cache policy includes a hierarchical caching policy, caching different types of task data of each task of the first application into storage media of the corresponding types within the cache resources of the first application according to the hierarchical caching policy; and when the cache policy includes a data consistency policy, performing a locking operation on the accessed task data whenever task data in any cache space is accessed.
In the present application, data can be cached and accessed in the cache resources of the first application according to the data caching and access policies within the cache policy specified by the user. For example, caching data according to the hierarchical caching policy improves data access performance and saves cache space, while accessing data according to the data consistency policy ensures that the data remains correct during access. The user may also flexibly customize other policies, so data caching and access behavior can be set flexibly.
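As a minimal sketch of the data consistency policy mentioned above, the snippet below wraps every read and write of cached task data in a lock. The per-key lock granularity is an assumption; the text only requires that accessed task data be locked while it is accessed.

```python
import threading
from collections import defaultdict

# One lock per cached key; per-key granularity is an illustrative choice.
_locks = defaultdict(threading.Lock)
_cache = {}

def write_with_consistency(key: str, value: bytes):
    """Lock the accessed task data for the duration of the write."""
    with _locks[key]:
        _cache[key] = value

def read_with_consistency(key: str):
    """Lock the accessed task data for the duration of the read."""
    with _locks[key]:
        return _cache.get(key)

write_with_consistency("processor1/part-0", b"...")
print(read_with_consistency("processor1/part-0"))
```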
In a possible implementation manner, before accessing the cache resource of the first application according to the cache policy, the method further includes: acquiring an input/output (IO) request; and if the data accessed by the IO request is the data under the first directory indicated by the mapping directory information, executing the step of accessing the cache resource of the first application according to the cache policy.
In the present application, by setting the mapping directory information, accesses to the first directory indicated by that information can be intercepted directly and served from the cache resources allocated to the first application, which improves access efficiency while keeping the whole process transparent to the user.
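The interception described above can be pictured as a simple path check performed before an IO request is served. The sketch below assumes the mapping directory information is a plain directory path; the function name and return values are illustrative.

```python
import os

def route_io_request(io_path: str, mapped_dir: str) -> str:
    """Return "cache" if the request targets data under the mapped first
    directory, otherwise fall through to the storage system."""
    # Normalise both paths so the prefix comparison is reliable.
    io_path = os.path.normpath(io_path)
    mapped_dir = os.path.normpath(mapped_dir)
    if os.path.commonpath([io_path, mapped_dir]) == mapped_dir:
        return "cache"     # serve from the first application's cache resources
    return "storage"       # ordinary access to the storage system

print(route_io_request("/share/app1/input/part-0", "/share/app1"))   # cache
print(route_io_request("/share/other/data", "/share/app1"))          # storage
```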
In one possible implementation, the method further includes: acquiring bandwidth requirements of data to be migrated to the storage system in cache resources of each of a plurality of applications, wherein the plurality of applications comprise the first application; allocating IO bandwidth to the data to be migrated in the cache resource of the first application according to the bandwidth requirement; and storing the data to be migrated in the cache resource of the first application into the storage system according to the IO bandwidth.
In the present application, when a computing node detects that the amount of data in the cache resources it has allocated to an application reaches the second threshold, it may report to the management node the bandwidth requirement for the to-be-migrated data of the application it runs. The management node can then allocate IO bandwidth for migrating that data according to the bandwidth requirements collected from all computing nodes, thereby controlling how much data each computing node migrates to the storage system. This prevents the application data traffic from different computing nodes from exceeding the available bandwidth of the storage system and causing I/O bandwidth contention, gives every application orderly access to the storage system from a global view, and reduces application performance problems caused by IO contention. In addition, based on the mapping directory information specified by the user, the cluster system can complete data copying automatically, so the user does not need to copy data manually, which reduces the complexity of operation.
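One way to picture the management node's side of this is a proportional split of the storage system's available bandwidth across the collected requirements. The proportional policy in the sketch below is an assumption; the text only states that IO bandwidth is allocated according to the collected bandwidth requirements.

```python
def allocate_io_bandwidth(demands_mbps: dict, available_mbps: float) -> dict:
    """Split available storage-system bandwidth across per-application demands.

    demands_mbps maps an application id to the bandwidth it asked for.
    If the total demand fits, every application gets what it asked for;
    otherwise bandwidth is scaled down proportionally (illustrative policy).
    """
    total = sum(demands_mbps.values())
    if total <= available_mbps:
        return dict(demands_mbps)
    scale = available_mbps / total
    return {app: mbps * scale for app, mbps in demands_mbps.items()}

# Three applications compete for 1000 MB/s of storage-system bandwidth.
print(allocate_io_bandwidth({"app1": 600, "app2": 400, "app3": 500}, 1000.0))
```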
In a second aspect, a data access method is provided, the method including: a management node receives a cache configuration request for a first application submitted by a user, where the cache configuration request includes a cache policy and mapping directory information, the cache policy indicates the cache requirement of the first application, and the mapping directory information is information about a first directory in a storage system in which application data of the first application is stored; the management node schedules cache resources for the first application from a target computing node according to the cache policy; and the management node controls the target computing node to prefetch the application data of the first application into the cache resources of the first application according to the mapping directory information, and controls the target computing node to access the cache resources of the first application according to the cache policy while the first application is running.
In this application, the management node schedules cache resources for the first application according to the cache policy for the first application submitted by the user, and controls the computing node to prefetch data into the cache resources of the first application according to the mapping directory information submitted by the user. Subsequently, while the first application runs, the computing node is controlled to access the cache resources of the first application according to the cache policy. The user's requirements can therefore be perceived and the application's resource usage controlled accordingly, which improves application performance.
In a possible implementation manner, the scheduling, by the management node, of cache resources for the first application from a target computing node according to the cache policy includes: the management node obtains resource requirement information of each task in a plurality of tasks of the first application from the cache policy, where the resource requirement information includes the size of the cache space required by each task and the type of storage medium; the management node allocates target computing nodes for executing the tasks of the first application according to the resource requirement information; and the management node sends the cache policy to the target computing node to instruct the target computing node to allocate, from its own cache space, a cache space for the corresponding task according to the resource requirement information in the cache policy.
In the present application, the management node can control how the first application uses the computing node's resources according to the cache policy specified by the user, so that resource usage on the computing node is under the user's control and the application performance of the first application better meets the user's requirements.
In a possible implementation manner, the mapping directory information includes a directory path of the first directory, and the process in which the management node controls the target computing node to prefetch the application data of the first application into the cache resources of the first application according to the mapping directory information may be: the management node sends the directory path of the first directory to the target computing node to instruct the target computing node to prefetch, from the storage system according to the directory path, the data under the subdirectories of the respective tasks stored under the first directory, and to store the obtained data into the cache resources of the first application.
In the present application, the management node controls the computing node to prefetch the data of the first application into the cache resources allocated to the first application based on the mapping directory information specified by the user, so the user does not need to copy data explicitly, which reduces the complexity of use.
In one possible implementation, the method further includes: the management node receives bandwidth requirements of data to be migrated to the storage system in cache resources of each application sent by a plurality of computing nodes, wherein the computing nodes comprise the target computing node; according to the bandwidth requirement, allocating IO bandwidth to the data to be migrated in the cache resource of the first application; and sending the IO bandwidth allocated to the first application to the target computing node to indicate the target computing node to store the data to be migrated in the cache resource of the first application to the storage system according to the IO bandwidth allocated to the first application.
In the present application, the management node can allocate IO bandwidth for migrating the to-be-migrated data of the application run by each computing node according to the bandwidth requirements collected from the computing nodes, thereby controlling how much data each computing node migrates to the storage system. This prevents the application data traffic from different computing nodes from exceeding the available bandwidth of the storage system and causing I/O bandwidth contention, gives every application orderly access to the storage system from a global view, and reduces application performance problems caused by IO contention.
In a third aspect, a data access method is provided, the method including: a computing node receives a cache policy and mapping directory information of a first application, where the cache policy indicates the cache requirement of the first application, and the mapping directory information is information about a first directory in a storage system in which application data of the first application is stored; the computing node allocates cache resources for the first application according to the cache policy, and prefetches application data of the first application into the cache resources of the first application according to the mapping directory information; and, while the first application is running, the computing node accesses the cache resources of the first application according to the cache policy.
In the present application, the computing node can allocate corresponding cache resources to the first application according to the cache policy specified by the user, so that the first application's use of the computing node's resources, and therefore its application performance, meets the user's requirements. On this basis, while the first application runs, data can be accessed directly in the cache resources that the computing node allocated to it, which reduces accesses to the storage system and contention among computing nodes. Moreover, after allocating the cache resources to the first application, the computing node can prefetch the application data of the first application under the first directory in the storage system into those cache resources according to the mapping directory information, so the user does not need to copy data manually, which reduces the complexity of operation.
In a possible implementation manner, when allocating cache resources for the first application according to the cache policy, the computing node obtains resource requirement information of each task in a plurality of tasks of the first application from the cache policy, where the resource requirement information includes the size of the cache space required by each task and the type of storage medium the cache space comprises; and the computing node allocates, from its own cache resources, a cache space for a first task running on the computing node according to the resource requirement information of that task, where the first task is any one of the plurality of tasks that runs on the computing node.
In a possible implementation manner, the implementation process of prefetching the application data of the first application into the cache resource of the first application according to the mapping directory information includes: determining a directory identifier of a subdirectory corresponding to the first task; acquiring data under the subdirectory corresponding to the first task stored under the first directory from a storage system according to the directory path of the first directory and the directory identifier of the subdirectory corresponding to the first task; and storing the data under the subdirectory corresponding to the first task into the cache space of the first task.
In a possible implementation manner, if the caching policy includes a hierarchical caching policy, the process of storing the data under the subdirectory corresponding to the first task into the cache space of the first task includes: storing different data into different types of storage media according to the data type of the data in the subdirectory corresponding to the first task.
In this application, caching data according to the hierarchical caching policy allows different types of data to be stored in suitable storage media, which improves data access performance and saves cache space.
In a possible implementation manner, the process of accessing the cache resource of the first application according to the cache policy while the first application is running may include: obtaining an IO request, and if the data accessed by the IO request is data under the first directory indicated by the mapping directory information, accessing the cache resource of the first application according to the IO request and the cache policy.
In the present application, by setting the mapping directory information, accesses to the first directory indicated by that information can be intercepted directly and served from the cache resources allocated to the first application, which improves access efficiency while keeping the whole process transparent to the user.
In a possible implementation manner, if the cache policy includes a data consistency policy, when the computing node accesses data in the cache resource of the first application, a locking operation is performed on the accessed data, so that the accuracy of the data in the data access process is ensured.
In a possible implementation manner, when the computing node detects that the amount of data in the cache resource allocated to the first application by the computing node reaches a reference threshold, sending a bandwidth requirement of the first application to a management node, where the bandwidth requirement is used to indicate a bandwidth required for migrating data to be migrated in the cache resource of the first application on the computing node to a storage system; receiving IO bandwidth allocated by the management node for the data to be migrated in the cache resource of the first application, and migrating the data to be migrated in the cache resource of the first application to a storage system according to the allocated IO bandwidth.
In the present application, the computing node requests IO bandwidth for the first application by sending the first application's bandwidth requirement to the management node. Because the management node collects the bandwidth requirements of all computing nodes at the same time, I/O bandwidth contention is avoided when each computing node migrates data according to the IO bandwidth allocated by the management node, and every application accesses the storage system in an orderly manner from a global view, which reduces application performance problems caused by IO contention.
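From the computing node's side, the same flow might look like the sketch below: detect that the cached data volume has reached the reference threshold, report a bandwidth requirement, and then drain data to the storage system no faster than the granted rate. The callback interfaces and the sleep-based pacing are assumptions made for illustration.

```python
import time

def migrate_if_needed(cached_bytes: int, threshold_bytes: int,
                      request_bandwidth, write_chunk, chunks):
    """Sketch of the compute-node side of cache-to-storage migration.

    `request_bandwidth` asks the management node for IO bandwidth (bytes/s)
    and `write_chunk` persists one chunk to the storage system; both are
    callbacks supplied by the caller in this illustration.
    """
    if cached_bytes < threshold_bytes:
        return
    granted_bps = request_bandwidth(cached_bytes)     # bandwidth requirement -> grant
    for chunk in chunks:                              # data to be migrated
        write_chunk(chunk)
        time.sleep(len(chunk) / granted_bps)          # crude pacing at the granted rate

# Illustrative use with stub callbacks and two 1 MB chunks.
migrate_if_needed(
    cached_bytes=512 * 2**20, threshold_bytes=256 * 2**20,
    request_bandwidth=lambda need: 100 * 2**20,       # management node grants 100 MB/s
    write_chunk=lambda chunk: None,                   # stand-in for a storage-system write
    chunks=[b"\0" * 2**20, b"\0" * 2**20],
)
```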
In a fourth aspect, a data access apparatus is provided, which has the function of implementing the behavior of the data access method in the first aspect. The data access device comprises at least one module, and the at least one module is used for implementing the data access method provided by the first aspect.
In a fifth aspect, a data access device is provided, where the data access device has a function of implementing the behavior of the data access method in the second aspect, and the data access device includes at least one module, where the at least one module is used to implement the data access method provided in the second aspect.
In a sixth aspect, a data access device is provided, where the data access device has a function of implementing the behavior of the data access method in the third aspect, and the data access device includes at least one module, where the at least one module is used to implement the data access method provided in the third aspect.
In a seventh aspect, a cluster system is provided, where the cluster system includes a management node and a computing node, where the management node and the computing node each include a processor and a memory, and the memory is used to store a program that supports the cluster system to execute the data access method provided in the first aspect, and store data involved in implementing the data access method provided in the first aspect. The processor is configured to execute programs stored in the memory.
In an eighth aspect, a management node is provided, where the structure of the management node includes a processor and a memory, and the memory is used to store a program that supports the management node to execute the data access method provided in the second aspect, and store data used to implement the data access method provided in the second aspect. The processor is configured to execute programs stored in the memory.
In a ninth aspect, a computing node is provided, where the structure of the computing node includes a processor and a memory, and the memory is used for storing a program for supporting the computing node to execute the data access method provided by the third aspect, and storing data used for implementing the data access method provided by the third aspect. The processor is configured to execute programs stored in the memory.
In a tenth aspect, there is provided a computer-readable storage medium having stored therein instructions, which, when run on a computer, cause the computer to perform the data access method of the first or second or third aspect described above.
In an eleventh aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the data access method of the first or second or third aspect.
Drawings
FIG. 1 is a system architecture diagram of a data center provided in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a computer device provided in an embodiment of the present application;
FIG. 3 is a flowchart of a data access method provided in an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a data access device provided in an embodiment of the present application;
FIG. 5 is a schematic structural diagram of another data access device provided in an embodiment of the present application;
FIG. 6 is a schematic structural diagram of another data access device provided in an embodiment of the present application.
Detailed Description
For ease of understanding, the system architecture related to the embodiments of the present application will be described first.
The data access method provided by the application can be applied to a data center, and the data center can provide a shared application execution environment for a plurality of users. Among other things, the applications running in the data center may be data intensive applications such as high performance computing applications, big data applications, and so on.
Illustratively, referring to fig. 1, the data center includes a cluster system 10 and a storage system 11, and the cluster system 10 and the storage system 11 establish a communication connection. The cluster system 10 is configured to provide an execution environment for a plurality of applications, and the storage system 11 is configured to store application data of the plurality of applications.
Referring to fig. 1, the cluster system 10 may include a management node 101 and a plurality of computing nodes 102, and the management node 101 and each computing node 102 may communicate through a wired network or a wireless network, and the computing nodes 102 may also communicate with each other through a wired network or a wireless network. In this embodiment of the present application, the management node 101 is configured to allocate, according to a cache policy of an application specified by a user, a computing node 102 for executing the application to the application, and send the cache policy and mapping directory information specified by the user to the computing node 102.
After receiving the user-specified cache policy and mapping directory information sent by the management node 101, the computing node 102 allocates cache resources for the application from its own cache resources according to the cache policy, and prefetches the application data stored in the storage system 11 into the application's cache resources according to the mapping directory information. The computing node 102 then runs the application and, while the application is running, accesses the application's cache resources according to the cache policy. The cache resources of the computing node 102 refer to the storage media included in the computing node 102. For example, the cache resources of the computing node 102 may include dynamic random access memory (DRAM), storage class memory (SCM), a solid state disk (SSD), and other types of storage media, which are not limited in this embodiment of the present application.
It should be noted that the management node 101 may allocate multiple computing nodes 102 to execute the application, in which case each computing node 102 runs one or more tasks of the application.
Through the above method, the management node 101 may schedule cache resources in the computing node 102 for each application to be run by different users according to user requirements, thereby controlling the corresponding computing node 102 to run the corresponding application.
In the process that the computing node 102 runs an application, after detecting that the amount of data cached in a cache resource of a certain application reaches a second threshold, the computing node 102 may send, to the management node 101, a bandwidth requirement of data to be migrated into the storage system 11 in the cache resource of the application, so as to request the management node 101 to allocate IO bandwidth to the data to be migrated of the application.
After receiving the bandwidth requirement of each application sent by one or more computing nodes 102, the management node 101 may allocate an IO bandwidth to the data to be migrated of each application according to the bandwidth requirement, and send the allocated IO bandwidth to the corresponding computing node 102. Accordingly, after receiving the IO bandwidth allocated by the management node 101 for the data to be migrated of the application, the computing node 102 may send the data to be migrated of the application to the storage system 11 for storage according to the IO bandwidth.
The storage system 11 includes a plurality of storage nodes 111. Wherein, each storage node 111 and each computing node 102 can be in wired or wireless communication. Each storage node 111 is configured to receive an IO request of the computing node 102, where when the IO request is a read request sent by the computing node 102 according to the mapping directory information, the storage node 111 obtains application data according to the read request and returns the application data to the computing node 102, so that the computing node 102 caches the application data in a cache resource allocated to the application. When the IO request is a write request carrying data to be migrated of an application, the storage node 111 may perform persistent storage on the data to be migrated according to the write request.
It should be noted that, in one possible implementation, the storage node 111 may include a control unit, a network card, and a plurality of storage devices. The control unit is configured to communicate with the computing node 102 through the network card and to access the storage devices according to IO requests of the computing node 102. The storage devices may include storage class memory (SCM), solid state disks (SSDs), and other types of storage devices, which are not limited in this embodiment of the present application.
Optionally, in this embodiment of the present application, the data center may further provide a login node through which the user submits the caching policy and mapping directory information. The user submits the caching policy and mapping directory information of the application to be run to the management node 101 through the login node, so that the management node 101 schedules resources for the application accordingly.
Each of the management node 101, the computing node 102, the storage node 111, and the login node described above may be a separate computer device. The login node may be a terminal device, such as a notebook computer, a desktop computer, a tablet computer, a smart phone, and the like. The management node 101 and the computing node 102 may be terminal devices or servers. The storage node 111 may be a server.
Fig. 2 is a schematic structural diagram of a computer device according to an embodiment of the present application. The management node and the computing node in the system architecture shown in fig. 1 can be implemented by the computer device. Referring to fig. 2, the computer device may include one or more processors 201, a communication bus 202, a main memory 203, and one or more communication interfaces 204.
The processor 201 may be a general-purpose Central Processing Unit (CPU), a Network Processor (NP), a microprocessor, or one or more integrated circuits such as an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof for implementing the disclosed aspects. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.
A communication bus 202 is used to transfer information between the above components. The communication bus 202 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The main memory 203 may be, but is not limited to, a read-only memory (ROM), a Random Access Memory (RAM), or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. When the main Memory 203 is a RAM, the main Memory may be a Dynamic Random Access Memory (DRAM), an SCM, or the like. The main memory 203 may be stand-alone and connected to the processor 201 through the communication bus 202. Main memory 203 may also be integrated with processor 201.
The communication interface 204 uses any transceiver or the like for communicating with other devices or communication networks. The communication interface 204 includes a wired communication interface, and may also include a wireless communication interface. The wired communication interface may be an ethernet interface, for example. The ethernet interface may be an optical interface, an electrical interface, or a combination thereof. The wireless communication interface may be a Wireless Local Area Network (WLAN) interface, a cellular network communication interface, or a combination thereof.
In some embodiments, the computer device may also include other storage media 205, for example, the other storage media 205 may include a mechanical hard disk, a solid state hard disk, and the like.
In some embodiments, the computer device may include multiple processors, such as processor 201 and processor 206 shown in fig. 2. Each of these processors may be a single-core processor or a multi-core processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In a particular embodiment, the computer device may further include an output device 207 and an input device 208. The output device 207 communicates with the processor 201 and may display information in a variety of ways. For example, the output device 207 may be a liquid crystal display (LCD), a light emitting diode (LED) display device, a cathode ray tube (CRT) display device, or a projector. The input device 208 communicates with the processor 201 and may receive user input in a variety of ways. For example, the input device 208 may be a mouse, a keyboard, a touch screen device, or a sensing device.
In some embodiments, main memory 203 is used for storing kernels (kernel), program code for executing aspects of the present application, and other instructions and data, and processor 201 may execute the program code stored in main memory 203. The program code may include one or more software modules, and the computer device may implement the data access method provided in the embodiment of fig. 3 below through the processor 201 and the program code in the main memory 203.
In the data access method provided by the present application, the user can flexibly customize a cache policy for an application, and the cluster system schedules cache resources for the application according to the cache policy submitted by the user. In other words, the cluster system perceives the user's requirements and controls the application's resource usage accordingly, which improves application performance. In addition, the cluster system can prefetch data into the application's cache resources according to the mapping directory information submitted by the user, which speeds up data access and reduces the complexity of use. Furthermore, the management node in the cluster system can allocate IO bandwidth to the applications running on the computing nodes by collecting the bandwidth requirements of each computing node, which reduces application performance problems caused by IO contention. The embodiments of the present application are described in further detail below with reference to the accompanying drawings.
Fig. 3 is a flowchart of a data access method according to an embodiment of the present application. The method may be applied in a cluster system in a data center as shown in fig. 1, see fig. 3, the method comprising the steps of:
step 301: the management node receives a cache configuration request which is submitted by a user and aims at a first application, the cache configuration request comprises a cache policy and mapping directory information, the cache policy is used for indicating cache requirements of the first application, and the mapping directory information is information of a first directory where application data of the first application stored in a storage system are located.
In this embodiment of the present application, the user enters the cache policy and mapping directory information for the first application on the login node. The login node generates a cache configuration request for the first application from the cache policy and mapping directory information entered by the user, and sends the cache configuration request to the management node. The cache configuration request carries the cache policy and the mapping directory information, and the first application is the application the user intends to run. The management node then receives the cache configuration request for the first application sent by the login node.
Illustratively, a command line tool is deployed on the login node, and the user can input the cache policy and the mapping directory information of the first application in a command line interface of the command line tool displayed by the login node. The login node acquires the caching strategy and the mapping directory information of the first application input by the user in the command line interface.
Optionally, a service configuration client may also be deployed on the login node, and the user may input the cache policy and the mapping directory information of the first application in an interface of the service configuration client displayed by the login node. Correspondingly, the login node can acquire the cache policy and the mapping directory information of the first application through the service configuration client, and further generate a cache configuration request of the first application.
It should be noted that the caching policy of the first application may include resource requirement information and a data caching and accessing policy of the first application.
In an embodiment of the present application, the first application may be divided into a plurality of tasks to be executed by a plurality of computing nodes. In this case, the resource requirement information of the first application specified by the user may include resource requirement information for each task of the first application, and the resource requirement information of the tasks may be the same or different. The resource requirement information may include computing resource requirement information and cache resource requirement information. The computing resource requirement information indicates the computing resources required by each task of the first application, for example, the number of processor cores and the processor clock frequency required to run the task. The cache resource requirement information includes the size of the cache space required by each task of the first application, and may further include the types of storage media making up that cache space; for example, the cache space required by a task may include two different storage media, DRAM and SCM. Optionally, the cache resource requirement information may further include the topology of the cache space required by each task, that is, how the storage media at each level that make up the cache space are laid out on the corresponding computing node.
Optionally, the resource requirement information of the first application may also be directly used to indicate the resource requirement of the first application, that is, the resource requirement information is not the above-mentioned resource requirement information at the task granularity level, but is the resource requirement information at the application granularity level.
The data caching and access policy indicates how the application data of the first application is cached and accessed. For example, the data caching and access policies may include a hierarchical caching policy that instructs different types of application data of the first application to be cached in different types of storage media. As another example, they may include a data consistency policy indicating that, when any data in the cache resources of the first application is accessed, a locking operation is performed on the accessed data to ensure data consistency. As yet another example, they may include a security level policy indicating access rights to the data in the application's cache resources. The data caching and access policies may further include other policies flexibly customized by the user, to better meet user requirements and improve application performance.
The mapping directory information is information of a first directory in which application data of a first application stored in the storage system is located. Illustratively, the mapping directory information may be a directory path of a first directory in the storage system. Alternatively, the mapping directory information may also be other information that can be used to indicate a storage location of the application data of the first application in the storage system, which is not limited in this embodiment of the present application.
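Putting the pieces of step 301 together, a cache configuration request might be laid out roughly as follows. Every field name here is an assumption chosen for illustration; the embodiment prescribes only what information the request carries, not a concrete format.

```python
# Hypothetical serialization of a cache configuration request for the first application.
cache_configuration_request = {
    "application": "first-app",
    "cache_policy": {
        # Resource requirement information at task granularity.
        "tasks": [
            {"id": "task-0", "cpu_cores": 8, "cache_size_mb": 4096,
             "media": ["DRAM", "SCM"]},
            {"id": "task-1", "cpu_cores": 8, "cache_size_mb": 4096,
             "media": ["DRAM", "SCM"]},
        ],
        # Data caching and access policies.
        "hierarchical_caching": True,     # hot data and metadata to faster media
        "data_consistency": True,         # lock data while it is accessed
        "security_level": "private",      # access rights to cached data
    },
    # Information of the first directory in the storage system.
    "mapping_directory_info": {"directory_path": "/storage/first-app/input"},
}
```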
Step 302: and the management node distributes and executes a target computing node of the first application according to the cache strategy.
After receiving a cache configuration request of a first application, a management node allocates a target computing node for executing the first application from a plurality of computing nodes according to resource demand information included in a cache policy in the cache configuration request.
For example, if the caching policy includes resource requirement information at a task granularity level, the management node may obtain the resource requirement information of each task of the multiple tasks of the first application from the caching policy, and allocate a target computing node for executing each task of the first application according to the resource requirement information.
As described in step 301, the resource requirement information of each task may include the task's computing resource requirement information and cache resource requirement information. The management node collects and updates the computing resource and cache resource usage of each computing node in real time. Based on the computing resource requirement information of each task and the most recently updated computing resource usage of each computing node, it first determines, from the plurality of computing nodes, candidate computing nodes that can satisfy the computing resources required to run the tasks of the first application. It then uses the cache resource requirement information of each task and the most recently updated cache resource usage of each candidate computing node to further determine, from the candidates, the computing nodes that can satisfy the cache resource requirements of the tasks of the first application, and takes the finally determined computing nodes as the target computing nodes.
For example, the management node may determine the remaining computing resources on each computing node from the most recently updated resources occupied by the applications running on it, and then select, from the plurality of computing nodes, candidate computing nodes whose remaining computing resources satisfy the computing resource requirement of a task of the first application. Next, based on each candidate's most recently updated remaining cache space and the types of storage media making up that space, it selects from the candidates the computing nodes whose remaining cache space is larger than the cache space required by the task and contains the storage media the task requires, and thereby obtains the target computing node.
Optionally, the management node may also determine a candidate computing node from the multiple computing nodes according to the cache resource demand information of each task, and then determine a target computing node from the candidate computing node according to the computing resource demand information of each task of the first application, which is not described herein again.
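A minimal sketch of this two-stage filtering (computing resources first, then cache resources, or the reverse as noted above) could look like the following; the node and demand record layouts are assumptions.

```python
def pick_target_nodes(nodes, task_demand):
    """Filter nodes whose remaining resources can host one task.

    `nodes` is a list of dicts describing each computing node's most recently
    reported free resources; `task_demand` carries the task's computing and
    cache resource requirement information.
    """
    # Stage 1: candidate nodes that satisfy the computing-resource demand.
    candidates = [n for n in nodes if n["free_cores"] >= task_demand["cpu_cores"]]
    # Stage 2: among candidates, require enough free cache space on the
    # requested storage-media types.
    return [
        n for n in candidates
        if n["free_cache_mb"] >= task_demand["cache_size_mb"]
        and set(task_demand["media"]).issubset(n["media"])
    ]

nodes = [
    {"name": "node-a", "free_cores": 16, "free_cache_mb": 8192, "media": {"DRAM", "SCM"}},
    {"name": "node-b", "free_cores": 4,  "free_cache_mb": 2048, "media": {"DRAM"}},
]
demand = {"cpu_cores": 8, "cache_size_mb": 4096, "media": ["DRAM", "SCM"]}
print([n["name"] for n in pick_target_nodes(nodes, demand)])   # ['node-a']
```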
It should be noted that, through the above method, the management node can determine the computing node that runs each task of the first application. The tasks may run on different computing nodes, in which case there are multiple target computing nodes; all tasks may run on the same computing node, in which case there is a single target computing node; or some tasks may run on one target computing node and the rest on another, in which case there are again multiple target computing nodes.
In another implementation manner, if the caching policy includes the resource requirement information at the application granularity level, the management node may directly allocate a target computing node to the first application according to the resource requirement information of the first application, and the implementation manner may refer to the implementation manner in which the target computing node is allocated to each task in the foregoing, which is not described herein again in this embodiment of the present application.
Optionally, the management node may also determine resource demand information of each task in the multiple tasks of the first application according to the resource demand information of the first application and the task division principle of the first application, and allocate a target computing node to each task by the method for allocating a target computing node described in the foregoing.
Step 303: and the management node sends the caching strategy and the mapping directory information to the target computing node.
After determining the target computing node for executing each task of the first application, the management node may issue, to the target computing node, the cache policy and the mapping directory information of each task of the first application, so as to control the target computing node to allocate cache resources to each task of the first application according to the cache policy, and access the cache resources of the first application according to the mapping directory information and the cache policy.
Optionally, when there are multiple target computing nodes and the caching policies of the tasks are the same, the management node may issue the cache policy and the mapping directory information to every target computing node. When there are multiple target computing nodes and the resource requirement information of the tasks in the cache policy differs, the management node may combine the data caching and access policies in the cache policy with the resource requirement information of each task to form a per-task cache policy, and then issue the mapping directory information and each task's cache policy to the target computing node corresponding to that task, where the target computing node corresponding to a task is the target computing node that runs the task.
Optionally, while issuing the caching policy and the mapping directory information to the target computing node, the management node may also issue an identifier of a task to be run to each target computing node, so as to indicate which task of the first application the target computing node is to run. Wherein the task identification can uniquely identify the task.
After each target computing node receives the caching policy and the mapping directory information sent by the management node, the first application may be executed through the following steps 304 to 306.
Step 304: and the target computing node allocates cache resources for the first application according to the cache strategy.
After receiving the cache policy and the mapping directory information issued by the management node, the target computing node firstly allocates cache resources for the first application according to the cache policy.
The target computing node obtains the cache resource requirement information from the received cache policy, and then allocates cache resources for the task of the first application it is to execute according to that information. The following description takes one target computing node as an example; for convenience, it is referred to as the first target computing node.
For example, if a first task runs on a first target computing node, the first target computing node obtains cache resource demand information of the first task from a received cache policy, and then allocates a cache space meeting the cache resource demand for the first task in its cache resource according to the cache resource demand information of the first task.
Optionally, when the cache resource requirement information of every task is the same and one task of the first application runs on each target computing node, the first target computing node may allocate, from its own cache resources, a cache space of the size indicated by the cache resource requirement information as a cache space of the first application. The allocated cache space may serve the first task of the first application running on the first target computing node, that is, store the task data of the first task, or it may serve another task of the first application running on another target computing node, that is, store that other task's data.
Each target computing node allocates a cache space for the tasks of the first application according to the cache policy issued by the management node, so that the cache spaces of the first application's tasks on all target computing nodes together form the cache resources of the first application.
Optionally, if the resource requirement information in the cache policy is at the application granularity level and the management node directly issues the resource requirement information of the first application to the target computing node, there is a single target computing node. In this case, after receiving the cache resource requirement information of the first application, the target computing node may allocate a cache space for the first application from its own cache resources according to that information, so the cache resources of the first application are located on one computing node.
Step 305: and the target computing node prefetches the application data of the first application into the cache resource of the first application according to the mapping directory information.
After allocating the corresponding cache resource to the first application, the target computing node may obtain the application data of the first application from the storage system according to the mapping directory information, and further cache the application data in the cache resource of the first application. The first target computing node is still used as an example for explanation.
In a first possible case, if the first target computing node allocates a cache space for a first task of the first application that it executes, and the mapping directory information is a directory path of a first directory in the storage system, the first target computing node may determine the directory identifier of the subdirectory corresponding to the first task, acquire, from the storage system, the data in the subdirectory corresponding to the first task stored under the first directory according to the directory path of the first directory and the directory identifier of that subdirectory, and then store the acquired data into the cache space allocated to the first task.
The first target computing node may obtain, according to the task identifier of the first task, the directory identifier of the subdirectory corresponding to the first task from a preset correspondence between task identifiers and directory identifiers of subdirectories. Alternatively, the first target computing node may generate, according to the task identifier of the first task, the directory identifier of the subdirectory corresponding to the first task by using a preset rule. For example, if the task number of the first task is 1 and the preset rule for generating the directory identifier of a task's subdirectory is "processor" + task number, the directory identifier of the subdirectory corresponding to the first task obtained according to the preset rule is processor1.
Then, the first target computing node may obtain, from the first directory stored in the storage system, data under the subdirectory having the same directory identifier as the subdirectory corresponding to the first task, that is, task data of the first task, according to the directory path of the first directory, and then store the task data of the first task in the cache space of the first task.
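A minimal sketch of this first prefetch case is given below, assuming the preset rule "processor" + task number and a load_from_storage callable standing in for the storage-system read interface; both are illustrative assumptions rather than fixed elements of this embodiment.

```python
# Illustrative sketch: derive the subdirectory identifier of a task from the preset rule and
# copy that subdirectory's data from the storage system into the task's cache space.
import os

def subdirectory_for_task(task_number: int, prefix: str = "processor") -> str:
    # preset rule: directory identifier = prefix + task number, e.g. "processor1"
    return f"{prefix}{task_number}"

def prefetch_task_data(first_directory: str, task_number: int, load_from_storage, task_cache: dict):
    subdir = subdirectory_for_task(task_number)
    subdir_path = os.path.join(first_directory, subdir)
    # fetch every item below the task's subdirectory and place it in the task's cache space
    for relative_path, content in load_from_storage(subdir_path):
        task_cache[os.path.join(subdir_path, relative_path)] = content
    return task_cache

# Usage with a stand-in loader that would normally read from the shared storage system.
fake_loader = lambda path: [("input.bin", b"\x00" * 16)]
cache_space = prefetch_task_data("/apps/first_app", 1, fake_loader, {})
```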
It should be noted that, when storing the task data of the first task in the cache space of the first task, if the cache policy further includes a data caching and access policy, and the data caching and access policy includes a hierarchical cache policy, the first target computing node may further store different data in different types of storage media according to the data type of the task data of the first task.
For example, hot-spot data whose access frequency is higher than a first threshold in the task data of the first task is stored in a higher-performance memory (which may also be referred to as a storage medium), and data whose access frequency is lower than the first threshold is stored in a relatively lower-performance storage medium. As another example, metadata and data other than metadata may be stored in different types of storage media. The first threshold may be set according to a service requirement, according to the processing efficiency of the task data, according to the processing capability of the system, or may be an empirical value.
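The following sketch shows one possible form of such hierarchical placement, assuming a numeric access-frequency threshold and two tiers named fast and slow; these names and the threshold value are assumptions for illustration only.

```python
# Minimal sketch of hierarchical caching: task data whose access frequency exceeds the first
# threshold goes to the faster medium, everything else to the slower one.
def place_in_tiers(task_data, access_frequency, first_threshold=100):
    """task_data: {key: bytes}; access_frequency: {key: accesses per unit time}."""
    fast_tier, slow_tier = {}, {}
    for key, value in task_data.items():
        if access_frequency.get(key, 0) > first_threshold:
            fast_tier[key] = value      # hot data -> higher-performance memory
        else:
            slow_tier[key] = value      # cold data -> lower-performance storage medium
    return fast_tier, slow_tier

hot, cold = place_in_tiers({"metadata": b"m", "payload": b"p"},
                           {"metadata": 500, "payload": 3})
```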
In addition, the first target computing node may execute one task or multiple tasks of the first application. When the first target computing node executes multiple tasks of the first application, the data in the subdirectory corresponding to each task to be executed may be prefetched into the cache space of the corresponding task in the manner described above.
In a second possible case, if the cache resource demand information of each task is the same and each target computing node allocates, from its own cache resource, a cache space whose size equals the size indicated by the cache resource demand information, the first target computing node may obtain, from the storage system, the data under the first directory indicated by the mapping directory information, and then perform a hash operation on the directory path of each piece of obtained data to obtain the hash value corresponding to that data. The target computing node whose node identifier matches the hash value is then determined from the plurality of target computing nodes. If the matching target computing node is the first target computing node itself, the first target computing node stores the data into the cache space it allocated to the first application. If the matching target computing node is another target computing node, for example a second target computing node, the first target computing node may send the data to the second target computing node, and the second target computing node, after receiving the data, stores the data into the cache space it allocated for the first application.
When storing the data into the cache space allocated to the first application, the data may be stored into the corresponding type of storage medium according to the hierarchical cache policy included in the data caching and access policy, with reference to the method described above.
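A possible sketch of this second prefetch case is shown below; the SHA-1 hash, the modulo mapping from hash value to node identifier, and the send_to_node callable are assumptions, since the embodiment only requires that the data be stored on the target computing node whose node identifier matches the hash value.

```python
# Sketch: the directory path of each prefetched data item is hashed, and the hash selects
# which target computing node caches the item in its cache space for the first application.
import hashlib

def owner_node(directory_path: str, node_ids: list) -> str:
    digest = hashlib.sha1(directory_path.encode()).hexdigest()
    return node_ids[int(digest, 16) % len(node_ids)]

def distribute_prefetched_item(directory_path, data, local_node, node_ids, local_cache, send_to_node):
    target = owner_node(directory_path, node_ids)
    if target == local_node:
        local_cache[directory_path] = data          # this node matches: cache locally
    else:
        send_to_node(target, directory_path, data)  # forward to the matching target node

# Example: three target computing nodes share the first application's cache resource.
nodes = ["node-a", "node-b", "node-c"]
distribute_prefetched_item("/apps/first_app/processor1/input.bin", b"...", "node-a",
                           nodes, {}, lambda n, p, d: None)
```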
In a third possible scenario, if the target computing node allocates a cache space for the first application according to the cache resource information of the first application at the application granularity level, the target computing node may directly obtain data under the first directory indicated by the mapping directory information from the storage system, and store the data in the cache space allocated for the first application.
The above are some possible implementations of prefetching the application data of the first application in the embodiment of the present application. Optionally, the target computing node may also prefetch the task data of each task of the first application by combining the above mechanisms, or prefetch the data of the first application in other implementation manners, which is not limited in this embodiment of the present application.
In addition, it should be noted that the data prefetched from the first directory of the storage system may be all data in the first directory or may be partial data in the first directory. For each task, all data of the task may be pre-fetched, or partial data of the task may be pre-fetched, which is not limited in this embodiment of the present application. When partial data is prefetched, more important data can be prefetched according to the access frequency of the data or other information capable of indicating the importance degree of the data.
Step 306: the target computing node accesses the cache resource of the first application according to the cache policy in the process of running the first application.
After allocating the cache resource for the first application through steps 304 and 305 and prefetching the application data of the first application into the cache resource of the first application, the target computing node starts the running script of the first application, thereby starting to run the first application.
The first target computing node is still taken as an example for explanation. The first target computing node starts the running script of the first application and executes the first task of the first application distributed to the first target computing node.
During execution of the first task, the first target computing node may need to read application data of the first application, or write data generated during execution of the task into a cache resource of the first application. Based on this, the first target computing node may generate an IO request according to an operation to be executed, where the IO request may be a read request or a write request. Also, the IO request may include a directory path of a directory in which the accessed target data resides.
After obtaining the IO request, the first target computing node may first compare the directory path of the accessed target data with the mapping directory information, and if the directory path of the accessed target data includes the mapping directory information, may determine that the target data to be accessed currently is data in the first directory. In this case, since the data in the first directory is prefetched into the cache resource of the first application in step 305, the first target computing node may directly access the cache resource of the first application.
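A minimal sketch of this directory check might look as follows, assuming POSIX-style directory paths; the function name is illustrative.

```python
# Minimal sketch of the check performed on each IO request: if the directory path of the
# target data falls under the first directory given by the mapping directory information,
# the cached copy is used; otherwise the request goes to the storage system.
import os

def is_cached_path(target_path: str, mapping_directory: str) -> bool:
    target = os.path.normpath(target_path)
    mapped = os.path.normpath(mapping_directory)
    return target == mapped or target.startswith(mapped + os.sep)

assert is_cached_path("/apps/first_app/processor1/input.bin", "/apps/first_app")
assert not is_cached_path("/apps/other_app/data.bin", "/apps/first_app")
```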
If each target computing node prefetched data by the method described in the first possible case in step 305, then after determining that the data to be accessed by the IO request is data in the first directory, the first target computing node accesses the cache resource of the first application according to the IO request.
It should be noted that, if the IO request is a read request, the first target computing node may first search for target data from the cache space of the first task, and if the target data is hit in the cache space of the first task, obtain the target data. If the target data fails to hit in the cache space of the first task, the IO request is sent to other target compute nodes. After receiving the IO request, the other target computing nodes search target data from a cache space of a task allocated to the first application, return the target data to the first target computing node if the target data is hit, and return a notification message to the first target computing node to notify the first target computing node that the data acquisition fails if the target data is not hit. If each of the other target computing nodes fails to hit the target data, the first target computing node may retrieve the target data from the storage system. Alternatively, if the IO request is a write request, the first target compute node may write the target data into the cache space of the first task.
As can be seen from the above description, when the first target computing node fails to hit target data in the cache space of the first task according to the IO request, the IO request may be sent to other target computing nodes. Similarly, after the IO request is generated by other target computing nodes, if the data to be accessed cannot be hit in the cache space allocated to the first application by the other target computing nodes, the IO request may also be sent to the first target computing node. In this case, the first target computing node may also receive an IO request sent by another target computing node, and access the cache space of the first task according to the IO request. Optionally, the IO request may be sent between the computing nodes through Remote Direct Memory Access (RDMA) technology to access the cache space of the other side.
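The read path described above could be sketched as follows; the peer_lookup and storage_read callables stand in for the inter-node (for example RDMA-based) lookup and the storage-system read interface, which are not specified here, so this is an illustrative sketch rather than a definitive implementation.

```python
# Sketch of the read path: probe the first task's local cache space, then query the other
# target computing nodes, and finally fall back to the storage system.
def read_target_data(key, local_cache, peers, peer_lookup, storage_read):
    if key in local_cache:                 # hit in the first task's cache space
        return local_cache[key]
    for peer in peers:                     # forward the IO request to other target nodes
        data = peer_lookup(peer, key)
        if data is not None:
            return data
    return storage_read(key)               # every node missed: read from the storage system

# Usage with stand-in callables.
value = read_target_data("/apps/first_app/processor2/part.bin",
                         {}, ["node-b", "node-c"],
                         lambda peer, key: None,
                         lambda key: b"from-storage")
```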
Optionally, if each target computing node prefetched data by the method introduced in the second possible case in step 305, the first target computing node may perform a hash operation on the directory path of the target data to be accessed by the IO request to obtain a hash value corresponding to the target data, and determine the target computing node whose node identifier matches that hash value. If the determined target computing node is the first target computing node itself, the first target computing node accesses the cache space it allocated to the first application to read or write the target data.
Optionally, if the determined target computing node is another target computing node, the first target computing node sends the IO request to the determined target computing node, and the determined target computing node accesses the cache space allocated to the first application by itself to read or write the target data.
In the case that the target data cannot be hit in the cache space of the first application, the corresponding target computing node may also obtain the target data from the storage system.
Alternatively, if the target computing node prefetched data by the method described in the third possible case of step 305, then since there is only one target computing node, the target computing node may access the cache space it allocated for the first application according to the IO request. For the access method, refer to the foregoing implementations; details are not described herein again in this embodiment of the present application.
Optionally, if the data caching and accessing policy in the caching policy further includes a data consistency policy, in this step, when a certain target computing node modifies, deletes, or writes target data in the caching resource of the first application according to the IO request, the target computing node may further perform a locking operation on the target data, so as to prevent other target computing nodes from accessing the target data, and ensure data consistency.
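A simplified sketch of such a locking operation is given below; a real cluster would need a distributed lock visible to all target computing nodes, whereas threading.Lock is used here only to keep the example self-contained.

```python
# Sketch of the data consistency policy: a per-key lock is taken before target data in the
# first application's cache resource is modified, deleted, or written, so that other target
# computing nodes cannot access it concurrently.
import threading
from collections import defaultdict

_locks = defaultdict(threading.Lock)

def write_with_lock(cache: dict, key: str, value: bytes):
    with _locks[key]:          # lock the target data for the duration of the update
        cache[key] = value

write_with_lock({}, "/apps/first_app/processor1/output.bin", b"result")
```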
Step 307: the method comprises the steps that a plurality of computing nodes send bandwidth requirements of data to be migrated to a storage system in cache resources of applications running by the computing nodes to a management node, the computing nodes comprise target computing nodes, and the applications comprise first applications.
In this embodiment of the present application, when a computing node detects that the amount of data cached in the cache space it has allocated to a certain application reaches the second threshold, it may send the bandwidth requirement of that application to the management node. Accordingly, the management node may receive, in real time, the bandwidth requirements of the respective applications sent by the respective computing nodes. A bandwidth requirement is used for indicating the bandwidth required for migrating the data to be migrated in the cache resource of the corresponding application on the corresponding computing node to the storage system. Illustratively, the bandwidth requirement may include the data amount of the data to be migrated in the cache resource of the corresponding application on the computing node. Optionally, it may also include other information such as an application identifier, which is not limited in this embodiment of the present application. In addition, the second threshold may be preset according to the size of the cache space allocated to the application on the computing node; for example, the second threshold may be a preset proportion of the total capacity of the cache space allocated to the application, such as 80% of that cache space, or another value, which is not limited in this embodiment of the present application.
The plurality of computing nodes include a target computing node, that is, when the target computing node detects that the amount of data cached in the cache space allocated for the first application by itself reaches the second threshold, the bandwidth requirement of the first application may be sent to the management node. At this time, the bandwidth requirement of the first application is used for indicating the bandwidth required for migrating the data to be migrated in the cache space of the first application on the target computing node to the storage system.
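The following sketch illustrates the threshold check on one computing node, assuming the second threshold is 80% of the allocated cache space and that send_to_manager stands in for the channel to the management node; both are assumptions for illustration.

```python
# Sketch of step 307 on one computing node: when the data cached for an application reaches
# the second threshold, the node reports a bandwidth requirement carrying the amount of data
# it wants to migrate to the storage system.
def maybe_request_bandwidth(app_id, cached_bytes, allocated_bytes, send_to_manager,
                            second_threshold_ratio=0.8):
    if cached_bytes >= second_threshold_ratio * allocated_bytes:
        requirement = {"app": app_id, "bytes_to_migrate": cached_bytes}
        send_to_manager(requirement)
        return requirement
    return None

maybe_request_bandwidth("first_app", cached_bytes=900 * 2**20, allocated_bytes=2**30,
                        send_to_manager=print)
```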
Step 308: the management node allocates IO bandwidth for the data to be migrated in the cache resource of the first application according to the bandwidth requirement.
After receiving bandwidth requirements of each application sent by a plurality of computing nodes including a target computing node, a management node may allocate corresponding IO bandwidth to data to be migrated of each application according to a bandwidth required by data to be migrated of a corresponding application indicated by the bandwidth requirements of each application and a current remaining bandwidth of a storage system.
For example, the management node may calculate the ratio of the bandwidths required by the applications, and then allocate IO bandwidth to each application according to that ratio and the current remaining bandwidth of the storage system. If the current remaining bandwidth of the storage system is not greater than the total bandwidth required by all the applications, the IO bandwidth allocated to each application will be less than the bandwidth it requires; if the current remaining bandwidth of the storage system is greater than the total bandwidth required by all the applications, the IO bandwidth allocated to each application may be equal to the bandwidth it requires.
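A minimal sketch of this proportional allocation, under the assumption that no application is granted more than it requested, is shown below; the numbers are illustrative.

```python
# Sketch: each application's share of the storage system's remaining bandwidth follows the
# ratio of its requested bandwidth, and no application receives more than it asked for.
def allocate_io_bandwidth(requests: dict, remaining_bandwidth: float) -> dict:
    total_requested = sum(requests.values())
    if total_requested <= remaining_bandwidth:
        return dict(requests)                      # enough headroom: grant every request in full
    scale = remaining_bandwidth / total_requested  # otherwise scale every request proportionally
    return {app: bw * scale for app, bw in requests.items()}

# Example: 100 MB/s remaining, 150 MB/s requested in total -> each grant is scaled by 2/3.
print(allocate_io_bandwidth({"first_app": 90.0, "second_app": 60.0}, 100.0))
```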
The management node may also allocate IO bandwidth to the data to be migrated of each application by using other principles, which is not limited in this embodiment of the present application.
In addition, the IO bandwidth allocated to the data to be migrated of each application can indicate the maximum data amount of the data that is allowed to be migrated by each application per unit time. For example, when the IO bandwidth allocated for the data to be migrated of the first application is 30MB/s, this indicates that the target computing node is allowed to migrate the cached data of the first application to the storage system by 30MB at most every second.
Since the plurality of applications include the first application, the management node may allocate IO bandwidth to the data to be migrated in the cache resource of the first application by the above method.
Step 309: the management node sends the IO bandwidth allocated for the first application to the target computing node.
After allocating the IO bandwidth to the data to be migrated in the cache resource of each application, the management node may send the IO bandwidth allocated to the corresponding application to the corresponding computing node.
For example, the management node may send IO bandwidth allocated for data to be migrated in the cache resource of the first application to the target compute node.
Step 310: the target computing node stores the data to be migrated in the cache resource of the first application into the storage system according to the IO bandwidth allocated to the first application.
The IO bandwidth allocated to the first application indicates the amount of cached data of the first application that the target computing node is allowed to migrate this time. Based on this, the target computing node acquires, from the cache space it allocated to the first application, an amount of data not larger than that allowed by the IO bandwidth allocated by the management node, and then migrates the data to be migrated to the storage system for persistent storage according to the mapping directory information specified by the user. Migrating data to the storage system according to the mapping directory information is the reverse of prefetching data from the storage system according to the mapping directory information; for the specific implementation, refer to the foregoing description, and details are not described herein again in this embodiment of the present application.
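The bandwidth-limited migration of step 310 could be sketched as follows, assuming the IO bandwidth is interpreted as a per-second byte budget and storage_write stands in for the storage-system write interface; both are assumptions for illustration.

```python
# Sketch of step 310: per unit time, the node migrates at most the amount of cached data
# allowed by the IO bandwidth granted for the first application, writing it back under the
# directory given by the mapping directory information.
import time

def migrate_with_bandwidth(to_migrate: dict, io_bandwidth_bytes_per_s: int, storage_write):
    pending = list(to_migrate.items())
    while pending:
        budget = io_bandwidth_bytes_per_s          # bytes allowed in this one-second window
        start = time.monotonic()
        while pending and budget > 0:
            path, data = pending.pop(0)
            storage_write(path, data)              # persist back to the storage system
            budget -= len(data)
        sleep_for = 1.0 - (time.monotonic() - start)
        if pending and sleep_for > 0:
            time.sleep(sleep_for)

migrate_with_bandwidth({"/apps/first_app/processor1/out.bin": b"x" * 1024},
                       io_bandwidth_bytes_per_s=30 * 2**20, storage_write=lambda p, d: None)
```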
After the target computing node starts to run the first application, whenever it detects that the amount of data in the cache space it allocated to the first application reaches the second threshold, it may apply to the management node for IO bandwidth through the above steps 307 to 310, and migrate the data in the cache space of the first application to the storage system according to that IO bandwidth. When the first application finishes running and all the data in the cache space of the first application has been migrated to the storage system, the target computing node may release the cache space allocated to the first application.
In the embodiment of the present application, cache resources are scheduled for the first application according to the cache policy for the first application submitted by the user, and data is prefetched into the cache resources of the first application according to the mapping directory information submitted by the user. Subsequently, in the process of running the first application, the cache resource of the first application is accessed according to the cache policy. Therefore, the method and the apparatus can sense the requirements of the user and further control the resource usage of the application according to those requirements, so that the application performance is improved.
Secondly, in the embodiment of the present application, the data of the first application stored in the storage system may be prefetched into the cache resource allocated to the first application according to the mapping directory information specified by the user. Thus, in the subsequent process of running the first application, if the data to be accessed is data under the directory indicated by the mapping directory information, the cache resource allocated to the first application may be accessed directly, which increases the data access speed. Moreover, because the data in the storage system is prefetched according to the mapping directory information specified by the user, the complexity of use for the user is reduced.
Thirdly, in the embodiment of the present application, data may be cached and accessed in the cache resource of the first application according to the data caching and access policy in the caching policy specified by the user. For example, the data may be cached according to the hierarchical caching policy, which improves data access performance and saves cache-space resources, and the data may be accessed according to the data consistency policy, which ensures the accuracy of the data during data access. In addition, the user can flexibly customize other policies, so that the data caching and access modes can be set flexibly.
Finally, in the embodiment of the present application, when a computing node detects that the amount of data in the cache resource it has allocated to an application reaches the second threshold, it may report to the management node the bandwidth requirement of the data to be migrated of the application it runs. The management node can allocate, according to the collected bandwidth requirements of the computing nodes, the IO bandwidth used for migrating the data to be migrated of the application run by each computing node, so as to control the amount of data each computing node migrates to the storage system. This avoids I/O bandwidth competition caused by the application data traffic of different computing nodes exceeding the available bandwidth of the storage system, achieves orderly access of the applications to the storage system from a global view, and reduces application performance problems caused by IO competition. In addition, based on the mapping directory information specified by the user, the cluster system can complete the data copying automatically, without requiring the user to copy the data manually, which reduces the operation complexity for the user.
It should be noted that, in the above embodiments, the step related to the management node may be implemented separately as a data access method on the management node side, and the step related to the computing node side may be implemented separately as a data access method on the computing node side.
The method for providing data access according to the embodiment of the present application is described in detail above with reference to fig. 1 to 3, and the data access device provided according to the embodiment of the present application is described below with reference to fig. 4 to 6.
Referring to fig. 4, an embodiment of the present application provides a data access apparatus 400, where the apparatus may be applied in a cluster system, where the apparatus 400 includes:
a receiving module 401, configured to execute step 301 in the foregoing embodiment;
a scheduling module 402 for performing steps 302-304 in the above embodiments;
a prefetch module 403, configured to execute step 305 in the foregoing embodiment;
an accessing module 404, configured to perform step 306 in the foregoing embodiments.
It should be understood that the data access apparatus 400 according to the embodiment of the present application may be implemented by a central processing unit (CPU), an application-specific integrated circuit (ASIC), or a programmable logic device (PLD), where the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. When the data access method shown in fig. 3 is implemented by software, the data access apparatus 400 and its modules may also be software modules.
Optionally, the scheduling module 402 is mainly configured to:
determining resource demand information of each task in a plurality of tasks of the first application according to the cache policy, wherein the resource demand information comprises the size of cache space required by each task and the type of a storage medium;
and allocating a cache space for each task according to the resource demand information of each task.
Optionally, the mapping directory information includes a directory path of the first directory, and the pre-fetching module 403 is mainly configured to:
determining a directory identifier of a subdirectory corresponding to each task in a plurality of tasks of a first application;
acquiring data under the subdirectory corresponding to each task stored under the first directory from the storage system according to the directory path of the first directory and the directory identifier of the subdirectory corresponding to each task;
and storing the data under the subdirectory corresponding to each task into the cache resource of the first application.
Optionally, the access module 404 is mainly configured to:
when the caching strategy comprises a hierarchical caching strategy, caching different types of task data of each task of the first application into corresponding types of storage media in the caching resource of the first application according to the hierarchical caching strategy;
when the cache policy comprises a data consistency policy, when the task data in any cache space is accessed, locking operation is carried out on the accessed task data.
Optionally, the apparatus 400 is further configured to:
acquiring an input/output (IO) request;
and if the data accessed by the IO request is the data in the first directory indicated by the mapping directory information, executing a step of accessing the cache resource of the first application according to the cache policy.
Optionally, the apparatus 400 is further configured to:
acquiring bandwidth requirements of data to be migrated to a storage system in cache resources of each application in a plurality of applications, wherein the plurality of applications comprise a first application;
allocating IO bandwidth for data to be migrated in the cache resource of the first application according to the bandwidth requirement;
and storing the data to be migrated in the cache resource of the first application to a storage system according to the IO bandwidth.
The data access apparatus 400 according to the embodiment of the present application may correspondingly perform the method described in the embodiment of the present application, and the above and other operations and/or functions of each unit in the data access apparatus 400 are respectively for implementing corresponding processes performed by corresponding nodes in each method in fig. 3, and for brevity, are not described again here.
In summary, in the embodiment of the present application, a cache resource is scheduled for a first application according to a cache policy submitted by a user for the first application, and data is prefetched into the cache resource of the first application according to mapping directory information submitted by the user. Subsequently, in the process of running the first application, the cache resource of the first application is accessed according to the cache policy. Therefore, the apparatus can sense the requirements of the user and further control the resource usage of the application according to those requirements, so that the application performance is improved.
Referring to fig. 5, an embodiment of the present application provides an apparatus 500 for data access, where the apparatus 500 may be applied in a management node, and the apparatus 500 includes:
a receiving module 501, configured to execute step 301 in the foregoing embodiment;
the scheduling module 502 is configured to perform the operations of sending the cache policy to the target computing node in step 302 and step 303 in the foregoing embodiments, so as to control the target computing node to perform step 304 to step 306.
It should be understood that the apparatus 500 according to the embodiment of the present application may be implemented by a central processing unit (CPU), an application-specific integrated circuit (ASIC), or a programmable logic device (PLD), where the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. When the data access method shown in fig. 3 is implemented by software, the apparatus 500 and its modules may also be software modules.
Optionally, the scheduling module 502 is mainly configured to:
acquiring resource demand information of each task in a plurality of tasks of a first application from a cache strategy, wherein the resource demand information comprises the size of a cache space required by each task and the type of a storage medium;
distributing target computing nodes for executing all tasks of the first application according to the resource demand information;
and sending the cache strategy to the target computing node to instruct the target computing node to allocate cache space for the corresponding task from the own cache space according to the resource demand information in the cache strategy.
Optionally, the mapping directory information includes a directory path of the first directory, and the scheduling module 502 is mainly configured to:
and sending the directory path of the first directory to the target computing node to instruct the target computing node to pre-fetch data under the subdirectories of each task stored under the first directory from the storage system according to the directory path of the first directory, and storing the obtained data into the cache resource of the first application.
Optionally, the apparatus 500 is further configured to:
receiving bandwidth requirements of data to be migrated to a storage system in cache resources of each application sent by a plurality of computing nodes, wherein the plurality of computing nodes comprise target computing nodes;
allocating IO bandwidth for data to be migrated in the cache resource of the first application according to the bandwidth requirement;
and sending the IO bandwidth allocated to the first application to the target computing node to indicate the target computing node to store the data to be migrated in the cache resource of the first application into the storage system according to the IO bandwidth allocated to the first application.
The apparatus 500 according to the embodiment of the present application may correspond to performing the method described in the embodiment of the present application, and the above and other operations and/or functions of each unit in the apparatus 500 are respectively for implementing corresponding processes performed by corresponding nodes in each method in fig. 3, and for brevity, are not described again here.
In summary, in the embodiment of the present application, the management node schedules the cache resource for the first application according to the cache policy for the first application submitted by the user, and controls the computing node to prefetch data into the cache resource of the first application according to the mapping directory information submitted by the user. Subsequently, in the process of running the first application, the computing node is controlled to access the cache resource of the first application according to the cache policy. Therefore, the apparatus can sense the requirements of the user and further control the resource usage of the application according to those requirements, so that the application performance is improved.
Referring to fig. 6, the present application further provides a data access apparatus 600, as shown in fig. 6, where the data access apparatus 600 may be applied in a computing node, and the data access apparatus 600 includes:
a receiving module 601, configured to receive a caching policy and mapping directory information of a first application specified by a user, where the caching policy is used to indicate a caching requirement of the first application, and the mapping directory information is information of a first directory in which application data of the first application stored in a storage system is located;
an assigning module 602 for performing step 304 in the foregoing embodiments;
a prefetch module 603 configured to perform step 305 in the foregoing embodiment;
an accessing module 604, configured to perform step 306 in the foregoing embodiments.
It should be understood that the data access apparatus 600 according to the embodiment of the present application may be implemented by a central processing unit (CPU), an application-specific integrated circuit (ASIC), or a programmable logic device (PLD), where the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. When the data access method shown in fig. 3 is implemented by software, the data access apparatus 600 and its modules may also be software modules.
Optionally, the allocating module 602 is mainly configured to:
acquiring resource demand information of each task in a plurality of tasks of a first application from a cache strategy, wherein the resource demand information comprises the size of a cache space required by each task and the type of a storage medium;
according to the resource demand information of each task, allocating a cache space for a first task from the cache resource of the computing node, wherein the first task is any one of the plurality of tasks of the first application that runs on the computing node.
Optionally, the pre-fetch module 603 is primarily configured to:
determining a directory identifier of a subdirectory corresponding to the first task;
acquiring data under the subdirectory corresponding to the first task stored under the first directory from the storage system according to the directory path of the first directory and the directory identifier of the subdirectory corresponding to the first task;
and storing the data under the subdirectory corresponding to the first task into the cache space of the first task.
Optionally, if the cache policy comprises a hierarchical cache policy, the prefetch module is further configured to: and storing different data into different types of storage media according to the data type of the data in the subdirectory corresponding to the first task.
Optionally, the accessing module 604 is mainly configured to:
obtaining an IO request;
and if the data accessed by the IO request is the data under the first directory indicated by the mapping directory information, accessing the cache resource of the first application according to the IO request and the cache policy.
Optionally, if the caching policy includes a data consistency policy, the access module is mainly configured to:
when data in the cache resource of the first application is accessed, a locking operation is performed on the accessed data.
Optionally, the apparatus 600 is further configured to: when detecting that the data volume in the cache resource that the computing node has allocated for the first application reaches a reference threshold, send a bandwidth requirement of the first application to a management node, where the bandwidth requirement is used for indicating the bandwidth required for migrating the data to be migrated in the cache resource of the first application on the computing node to the storage system; and receive the IO bandwidth allocated by the management node for the data to be migrated in the cache resource of the first application, and migrate the data to be migrated in the cache resource of the first application to the storage system according to the allocated IO bandwidth.
The apparatus 600 according to the embodiment of the present application may correspond to perform the method described in the embodiment of the present application, and the above and other operations and/or functions of each unit in the apparatus 600 are respectively for implementing corresponding processes performed by corresponding nodes in each method in fig. 3, and are not described herein again for brevity.
In the embodiment of the present application, the computing node can allocate corresponding cache resources to the first application according to the cache policy of the first application specified by the user, so that the first application's use of the resources of the computing node meets the user requirement, and the application performance of the first application can therefore meet the user requirement. On this basis, in the process of running the first application, data access can be performed directly in the cache resources allocated to the first application by the computing node, which reduces accesses to the storage system and reduces competition among computing nodes. In addition, after the cache resources are allocated to the first application, the computing node can prefetch the application data of the first application under the first directory in the storage system into the first cache space according to the mapping directory information, and the user does not need to copy the data manually, which reduces the operation complexity.
The present application further provides a data access system, which includes a management node and a computing node. For the connection manner between the management node and the computing node, refer to the connection manner between the management node and the computing node in the system shown in fig. 1; for the structures of the management node and the computing node, refer to the structure of the computer device shown in fig. 2. In the data access system, the management node is configured to implement the functions of the management node in the data access method shown in fig. 3, and the computing node is configured to implement the functions of the computing node in the data access method shown in fig. 3, which are not described herein again in this embodiment of the present application.
It should be noted that: in the data access device provided in the foregoing embodiment, when data is read and written, only the division of the functional modules is illustrated, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the data access device and the data access method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments and are not described herein again.
In the above embodiments, the implementation may be wholly or partly realized by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may be wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device, such as a server or a data center, that integrates one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a digital versatile disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description should not be taken as limiting the embodiments of the present application, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the embodiments of the present application should be included in the scope of the embodiments of the present application.

Claims (13)

1. A method of data access, the method comprising:
receiving a cache configuration request for a first application submitted by a user, wherein the cache configuration request comprises a cache policy and mapping directory information, the cache policy is used for indicating the cache requirement of the first application, and the mapping directory information is information of a first directory in which application data of the first application stored in a storage system is located;
scheduling cache resources for the first application according to the cache policy;
according to the mapping directory information, application data of the first application are prefetched into cache resources of the first application;
and in the process of running the first application, accessing the cache resource of the first application according to the cache policy.
2. The method of claim 1, wherein scheduling cache resources for the first application according to the cache policy comprises:
determining resource demand information of each task in a plurality of tasks of the first application according to the cache policy, wherein the resource demand information comprises the size of cache space required by each task and the type of a storage medium;
and allocating a cache space for each task according to the resource demand information of each task.
3. The method of claim 1, wherein the mapping directory information comprises a directory path of the first directory, and wherein prefetching application data of the first application into cache resources of the first application according to the mapping directory information comprises:
determining a directory identifier of a subdirectory corresponding to each task in a plurality of tasks of the first application;
acquiring data under the subdirectory corresponding to each task stored under the first directory from the storage system according to the directory path of the first directory and the directory identifier of the subdirectory corresponding to each task;
and storing the data under the subdirectory corresponding to each task into the cache resource of the first application.
4. The method of claim 2 or 3, wherein the accessing the cache resource of the first application according to the cache policy comprises:
when the caching strategy comprises a hierarchical caching strategy, caching different types of task data of each task of the first application into corresponding types of storage media in the caching resources of the first application according to the hierarchical caching strategy;
when the cache policy comprises a data consistency policy, locking operation is performed on accessed task data when the data in any cache space is accessed.
5. The method according to any one of claims 1-4, wherein before accessing the cache resource of the first application according to the cache policy, further comprising:
acquiring an input/output (IO) request;
and if the data accessed by the IO request is the data under the first directory indicated by the mapping directory information, executing the step of accessing the cache resource of the first application according to the cache policy.
6. The method according to any one of claims 1-5, further comprising:
acquiring bandwidth requirements of data to be migrated to the storage system in cache resources of each of a plurality of applications, wherein the plurality of applications comprise the first application;
according to the bandwidth requirement, allocating IO bandwidth to the data to be migrated in the cache resource of the first application;
and storing the data to be migrated in the cache resource of the first application into the storage system according to the IO bandwidth.
7. A data access apparatus, the apparatus comprising:
a receiving module, configured to receive a cache configuration request for a first application submitted by a user, wherein the cache configuration request comprises a cache policy and mapping directory information, the cache policy is used for indicating the cache requirement of the first application, and the mapping directory information is information of a first directory in which application data of the first application stored in a storage system is located;
the scheduling module is used for scheduling cache resources for the first application according to the cache strategy;
the prefetching module is used for prefetching the application data of the first application into the cache resource of the first application according to the mapping directory information;
and the access module is used for accessing the cache resource of the first application according to the cache strategy in the process of running the first application.
8. The apparatus of claim 7, wherein the scheduling module is configured to:
determining resource demand information of each task in the multiple tasks of the first application according to the cache policy, wherein the resource demand information comprises the size of a cache space required by each task and the type of a storage medium;
and allocating a cache space for each task according to the resource demand information of each task.
9. The apparatus of claim 7, wherein the mapping directory information comprises a directory path of the first directory, and wherein the prefetch module is configured to:
determining a directory identifier of a subdirectory corresponding to each task in a plurality of tasks of the first application;
acquiring data under the subdirectory corresponding to each task stored under the first directory from the storage system according to the directory path of the first directory and the directory identifier of the subdirectory corresponding to each task;
and storing the data under the subdirectory corresponding to each task into the cache resource of the first application.
10. The apparatus according to claim 8 or 9, wherein the access module is configured to:
when the caching strategy comprises a hierarchical caching strategy, caching different types of task data of each task of the first application into corresponding types of storage media in the caching resources of the first application according to the hierarchical caching strategy;
and when the cache policy comprises a data consistency policy, when the task data in any cache space is accessed, locking operation is performed on the accessed task data.
11. The apparatus of any of claims 7-10, wherein the apparatus is further configured to:
acquiring an input/output (IO) request;
and if the data accessed by the IO request is the data under the first directory indicated by the mapping directory information, executing the step of accessing the cache resource of the first application according to the cache policy.
12. The apparatus according to any one of claims 7-11, wherein the apparatus is further configured to:
acquiring bandwidth requirements of data to be migrated to the storage system in cache resources of each of a plurality of applications, wherein the plurality of applications comprise the first application;
allocating IO bandwidth to the data to be migrated in the cache resource of the first application according to the bandwidth requirement;
and storing the data to be migrated in the cache resource of the first application into the storage system according to the IO bandwidth.
13. A computer-readable storage medium storing instructions that, when executed on a computer, cause the computer to perform the data access method of any of claims 1-6.