WO2018032519A1 - Resource allocation method and apparatus, and NUMA system - Google Patents

Resource allocation method and apparatus, and NUMA system

Info

Publication number
WO2018032519A1
Authority
WO
WIPO (PCT)
Prior art keywords
file
access request
accessed
resource
resource partition
Prior art date
Application number
PCT/CN2016/096113
Other languages
English (en)
French (fr)
Inventor
Huang Yongbing (黄永兵)
Xu Jun (徐君)
Wang Yuangang (王元钢)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2016/096113 priority Critical patent/WO2018032519A1/zh
Priority to CN201680004180.8A priority patent/CN107969153B/zh
Publication of WO2018032519A1 publication Critical patent/WO2018032519A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L61/00Network arrangements, protocols or services for addressing or naming

Definitions

  • the present invention relates to the field of communications technologies, and in particular, to a resource allocation method and apparatus, and a NUMA system.
  • when the file system is running, it not only needs to access the storage medium, but also needs to make full use of Dynamic Random Access Memory (DRAM) to cache part of the data; this becomes complicated in a computer system that contains multiple Non-Uniform Memory Access (NUMA) architecture nodes.
  • the processing resource and the storage resource managed by the file system may be partitioned according to the NUMA structure.
  • a NUMA node may be used as a resource partition.
  • Each NUMA node includes a processor core and a storage medium in a storage resource, and a DRAM as a cache.
  • Subdirectories in the file system are assigned sequentially to different resource partitions. All files in a subdirectory can only use resources in the resource partition to which the subdirectory belongs; that is, when processing files mapped to a resource partition, only the DRAM and processor cores in that resource partition can be used, and cross-partition access is not possible.
  • Embodiments of the present invention provide a resource allocation method and apparatus, and a NUMA system, which can solve the problem of low file processing efficiency.
  • an embodiment of the present invention provides a resource allocation method, which is applied to a computer system having a non-uniform memory access architecture NUMA, where the computer system includes multiple NUMA nodes, and multiple NUMA nodes are connected by interconnecting devices.
  • Each NUMA node includes at least one processor core. The method includes: a first NUMA node of the plurality of NUMA nodes acquires the process to which a file access request belongs, where the file access request is used to access a target file; then, according to file information of files accessed by at least some processes in the computer system during a preset time period, the first NUMA node determines a resource partition for processing the file access request, and allocates the process to which the file access request belongs to that resource partition for processing.
  • the computer system includes at least two resource partitions, each resource partition includes at least one processor core and at least one storage unit, the processor core and the storage unit in one resource partition are located on the same NUMA node, and one NUMA node includes at least one resource partition. Since the file information of files accessed by at least some processes in the preset time period can reflect which files may be accessed simultaneously, determining the resource partition according to this information can avoid assigning file access requests for files that may be accessed simultaneously to the same resource partition, so that different resource partitions can process file access requests for different simultaneously accessed files in parallel, thereby improving the efficiency of file processing.
  • the file access request includes the file identifier of the target file. Before determining the resource partition for the file access request according to the file information of files accessed by at least some processes in the preset time period, it is necessary to determine, according to the file identifier and the resource allocation mapping relationship, that the target file has not been accessed, where the resource allocation mapping relationship includes the file identifiers of files that have been accessed and information about the resource partitions that process the file access requests of those files.
  • if the target file has been accessed, the resource partition of the process that processes the file access request may be determined directly according to the resource allocation mapping relationship.
  • when the process to which the file access request belongs did not access any file before accessing the target file, the at least some processes are at least some processes other than the process to which the file access request belongs; when the process to which the file access request belongs accessed other files before accessing the target file, the at least some processes are the process to which the file access request belongs.
  • determine the resource partition for processing the file access request according to the directory to which the target file belongs: try to allocate different resource partitions to files in different subdirectories under the same directory; if the number of subdirectories in the same directory is greater than the number of resource partitions, allow one resource partition to contain multiple subdirectories.
  • the resource partition number of the file access request can be determined according to the hash value of the directory string of the target file.
  • the large-file ratio of the files accessed by at least some processes in the preset time period is the ratio of the number of files whose size exceeds a preset value to the total number of files accessed by those processes in the preset time period. If the files accessed by at least some processes in the preset time period are stored in different subdirectories under the same directory, files in different subdirectories under the same directory may be accessed concurrently; determining the resource partition of the file access request according to the directory to which the target file belongs can then assign file access requests for files in different subdirectories to different resource partitions, so that files in different subdirectories can be accessed in parallel, which can improve the efficiency of file processing.
  • the file identifier of the target file carried in the access request determines the resource partition for processing the file access request, where the file identifier is the file index node (inode) number, which indicates a file in the file system; the inode number is a unique label that distinguishes different files in the file system.
  • the resource partition number of the file access request can be determined according to the hash value of the inode number of the target file.
  • the resource partition for processing the file access request is determined according to the information of the process to which the file access request belongs. If the proportion of large files among the files accessed by at least some processes in the preset time period does not exceed the preset threshold, most of the files accessed in the preset time period are small files, and the time required to access small files is short, so there is no need to migrate the process to which the file access request belongs: the resource partition allocated for the process is still the resource partition where the processor core running the process is located, which avoids process migration.
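  • as a rough illustration, the large-file-ratio test described above might be sketched as follows (the threshold values and function names are assumptions for illustration, not taken from the patent):

```python
LARGE_FILE_BYTES = 64 * 1024 * 1024  # hypothetical cutoff for a "large" file
LARGE_RATIO_THRESHOLD = 0.5          # hypothetical preset threshold

def large_file_ratio(accessed_file_sizes):
    """Ratio of large files among the files accessed in the preset time period."""
    if not accessed_file_sizes:
        return 0.0
    large = sum(1 for size in accessed_file_sizes if size > LARGE_FILE_BYTES)
    return large / len(accessed_file_sizes)

def keep_process_local(accessed_file_sizes):
    """If mostly small files were accessed, keep the process on the partition
    whose processor core is already running it (avoid migration)."""
    return large_file_ratio(accessed_file_sizes) <= LARGE_RATIO_THRESHOLD
```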
  • the storage resource in the computer system needs to be partitioned. This is done by determining the respective concurrent access granularities of the SCM storage unit, the DRAM memory unit, and the Flash storage unit in the storage resource, and the number of storage subunits that can be accessed concurrently at each granularity; then determining the number of processor cores in the storage resource; and finally dividing the storage resource into at least two resource partitions according to the numbers of concurrently accessible storage subunits of the SCM storage unit, the DRAM memory unit, and the Flash storage unit and the number of processor cores.
  • the resource partition includes at least one processor core, at least one DRAM subunit, at least one SCM subunit, and at least one Flash subunit, and the at least one processor core, DRAM subunit, SCM subunit, and Flash subunit in one resource partition are located on one NUMA node. Since the processor cores in different resource partitions can run in parallel, and the storage subunits in different resource partitions can be accessed in parallel, file access requests for files in different resource partitions can be processed in parallel when received, which can improve the efficiency of file processing.
  • the embodiment of the present invention provides a resource allocation device, which can implement the function of the first NUMA node in the foregoing method embodiment, and the function can be implemented by using hardware or by executing corresponding software by hardware.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • the apparatus includes a processor and a transceiver configured to support the apparatus to perform the corresponding functions of the above methods.
  • the transceiver is used to support communication between the device and other network elements.
  • the apparatus can also include a memory for coupling with the processor that retains the program instructions and data necessary for the apparatus.
  • an embodiment of the present invention provides a communication system, where the system includes the resource allocation apparatus and the apparatus that can carry the storage resource.
  • an embodiment of the present invention provides a computer storage medium for storing computer software instructions for use in the resource allocation apparatus, including a program designed to perform the above aspects.
  • the embodiment of the present invention may determine, according to the file information of files accessed by at least some processes in a preset time period, a resource partition for processing a file access request. Because this file information can reflect which files may be accessed at the same time, determining resource partitions based on it can avoid assigning file access requests that may simultaneously access different files to the same resource partition, so that different resource partitions can process, in parallel, file access requests for different simultaneously accessed files, which improves the efficiency of file processing.
  • FIG. 1 is a schematic structural diagram of a distributed storage system according to an embodiment of the present invention.
  • FIG. 2 is a schematic structural diagram of a storage node according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a NUMA node according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a processor in a NUMA node according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of a resource allocation method according to an embodiment of the present invention.
  • FIG. 6 is a flowchart of another resource allocation method according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of a resource partition provided by an embodiment of the present invention.
  • FIG. 8 is a flowchart of another resource allocation method according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of a logical structure of a resource allocation apparatus according to an embodiment of the present invention.
  • the single-machine storage system can be expanded into a distributed storage system, as shown in FIG. 1.
  • the system includes a storage client and a plurality of storage nodes.
  • the storage nodes are interconnected by a high-speed network, and the plurality of storage nodes form a distributed storage system.
  • the storage client can access the storage resources on the distributed storage system through the network.
  • Each storage node includes at least one NUMA node, and the storage space on the storage node is uniformly managed and allocated by the distributed file system; the distributed file system can concurrently access files stored in different storage nodes, and files stored in different NUMA nodes of the same storage node can also be accessed concurrently.
  • a storage node contains at least one processor, and each processor corresponds to a NUMA node. A NUMA node contains a processor, together with the memory resources and SCM resources mounted on the memory bus; the Flash in FIG. 1 is used to replace traditional hard drives.
  • Flash, memory (DRAM) and Storage Class Memory (SCM) all contain many memory chips, with the ability to process concurrently.
  • the concurrent granularity of the SCM storage unit and the DRAM memory unit is, from coarse to fine: NUMA node, memory channel, Rank, and Bank. That is to say, if the data corresponding to several requests belongs to different banks, these requests can be executed concurrently.
  • the concurrent granularity of a Flash storage unit can be a queue or a Die. Some Flash storage units provide multiple storage queues, and requests from different queues can be executed concurrently without interference. For the processor, each processor core is its unit of concurrency: different processor cores can execute concurrently. Concurrent access means that requests accessing different memory chips can be processed by different memory chips at the same time, rather than being executed serially.
  • the architecture of the NUMA node in FIG. 2 is as shown in FIG. 3.
  • the NUMA node includes a flash memory 301, a nonvolatile memory SCM 302, a dynamic random access memory DRAM 303, a processor 304, an I/O bus 305, and a memory bus 306.
  • the flash memory 301, the nonvolatile memory SCM 302, and the dynamic random access memory DRAM 303 constitute a storage unit of the NUMA node.
  • the flash memory 301, the nonvolatile memory SCM 302, the dynamic random access memory DRAM 303, and the processor are connected by an I/O bus 305 and a memory bus 306.
  • the flash memory 301 is a kind of memory chip that not only has the properties of electrically erasable programmable read-only memory (EEPROM) but also has the advantage of reading data quickly, and its data is not lost on power failure.
  • the non-volatile memory SCM 302 is a storage-class memory in which new data can be written directly without erasing old data. It is a new type of high-performance non-volatile memory.
  • the dynamic random access memory DRAM 303 can only store data for a short time, so it is used to cache data.
  • the processor 304 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits for executing an application. After receiving a file access request, the processor 304 may execute the process to which the file access request belongs and, according to the file access request, perform read and write operations on files in the storage unit of the NUMA node through the memory bus and the I/O bus.
  • the embodiment of the present invention is applied to a computer system having NUMA.
  • the computer system includes a plurality of NUMA nodes, and a plurality of NUMA nodes are connected by interconnecting devices, and each NUMA node includes at least one processor core.
  • connections and information interactions can be performed between NUMA nodes through interconnected devices.
  • the processor in each NUMA node can access the entire computer system.
  • the CPUs in the computer system can be interconnected via an interconnect bus.
  • the processor in each NUMA node includes four CPU cores as an example.
  • FIG. 4 only illustrates the structure of the processor in the NUMA node.
  • each NUMA node has four CPU cores in its processor. Different CPUs are interconnected via an interconnect bus.
  • a common interconnect bus is the Quick-Path Interconnect (QPI).
  • Processors of different NUMA nodes can be interconnected by connecting devices such as network controllers to be able to communicate with each other.
  • a dedicated node in the NUMA system can be used as a management node to implement management functions for the entire system.
  • a NUMA node acting as a management node can allocate access requests to other NUMA nodes for processing, so that data can be stored on those NUMA nodes.
  • a dedicated management node may not be provided in the NUMA system, and each NUMA node can process the service and implement a part of the management function.
  • NUMA node 1 can handle the access request it receives, and NUMA node 1 can also act as a management node, and the access request it receives is assigned to other NUMA nodes (for example, NUMA node 2) for processing.
  • the function of a specific NUMA node is not limited.
  • the process is a basic unit that is dynamically executed by the operating system, and is also a basic allocation unit of resources.
  • a thread, also known as a lightweight process, is an entity in a process. A thread does not own system resources of its own; it only has the few resources essential to its operation, but it can share all the resources owned by the process with the other threads of the same process.
  • the operating system allocates the running processor core to the process/thread according to a certain strategy.
  • the application can specify the desired processor core by setting the affinity of the process.
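  • on Linux, for example, process affinity can be set from Python via `os.sched_setaffinity` (a platform-specific sketch; the patent does not name a particular API):

```python
import os

def pin_to_core(core_id):
    """Restrict the calling process to a single processor core.
    os.sched_setaffinity is Linux-only; pid 0 means the current process."""
    os.sched_setaffinity(0, {core_id})

# The currently allowed core set can be queried the same way:
allowed_cores = os.sched_getaffinity(0)
```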
  • the embodiment of the present invention provides a resource allocation method. As shown in FIG. 5, the method includes:
  • the file access request is used to access the target file, and the file access request may be a file creation request, a file read/write request, or a file deletion request.
  • the NUMA node that is the management node in the NUMA system can obtain the process to which the file access request belongs, and allocate a resource partition for the process.
  • the computer system includes at least two resource partitions, each of the resource partitions includes at least one processor core and at least one storage unit, and the processor core and the storage unit in one resource partition are located on the same NUMA node and on one NUMA node. Includes at least one resource partition.
  • the at least two resource partitions are used to process different processes.
  • if the process to which the file access request belongs did not access any file before accessing the target file, the process is a new process and the file information of files accessed by this process cannot be obtained, so the resource partition must be determined according to the file information of files accessed, within the preset time period, by processes other than this process; when the process to which the file access request belongs accessed other files before accessing the target file, the resource partition can be determined according to the file information of the files accessed by this process.
  • the file information of at least part of the process access file is a large file proportion in a file accessed by at least part of the process, and at least part of the files accessed by the process are in different subdirectories under the same directory.
  • any one of the resource partitions may be instructed to process the file access request, or the processor core with a relatively low utilization rate may be selected to process the file access request.
  • the method for resource allocation acquires the process to which a file access request belongs, then determines the resource partition for processing the file access request according to the file information of files accessed by at least some processes in the computer system within a preset time period, and allocates the process to which the file access request belongs to that resource partition for processing.
  • in the prior art, an operation on a file in a subdirectory can only be processed using the processor core and storage resources in the resource partition to which the subdirectory belongs, resulting in low file-processing efficiency. Determining the resource partition for processing a file access request according to the file information of files accessed by at least some processes in the preset time period avoids this: because this file information can reflect which files may be accessed simultaneously, determining the resource partition according to it can avoid assigning file access requests that may access different files at the same time to the same resource partition, so that different resource partitions can process file access requests for different files at the same time, thereby improving the efficiency of file processing.
  • the file access request includes the file identifier of the target file. Before performing the foregoing steps 502 and 503, it is necessary to determine whether the target file has been accessed. If it has not been accessed, steps 502 and 503 are followed to allocate a resource partition for the process that processes the file access request. If it has been accessed, the resource partition of the process that processes the file access request can be determined directly according to the resource allocation mapping relationship.
  • whether the target file is accessed may be determined according to the file identifier and the resource allocation mapping relationship.
  • the resource allocation mapping relationship includes the file identifier of the file that has been accessed and the resource partition of the file access request for processing the file that has been accessed.
  • the resource partition for processing the file access request of a file that has been accessed is the resource partition storing that file, that is, the resource partition of the process that processes the file access request of that file.
  • the information of the resource partition may be a resource partition number, the file identifier may be a file name, and the resource allocation mapping relationship may be expressed in the form of a table; for example, a table entry may map the file name "/mnt/nvfs/test" to resource partition 1.
  • the first file access request for a file is generally a file creation request; that is, each time a file is created, a mapping relationship between the file and a resource partition is generated. The file named "/mnt/nvfs/test" in the example above is stored in resource partition 1, and the resource allocation mapping relationship can thus also be used to indicate the mapping between the file name of a created file and the resource partition number of the resource partition to which the created file belongs.
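  • a minimal sketch of the lookup against such a mapping table (the structure and names here are illustrative assumptions):

```python
# Resource allocation mapping: file identifier -> resource partition number.
# Mirrors the example above: the file "/mnt/nvfs/test" maps to resource partition 1.
resource_allocation_map = {"/mnt/nvfs/test": 1}

def partition_for(file_id):
    """Return the partition already handling this file, or None if the file
    has not been accessed yet and a partition must still be chosen."""
    return resource_allocation_map.get(file_id)

def record_creation(file_id, partition_no):
    """On file creation, record the new file -> partition mapping."""
    resource_allocation_map[file_id] = partition_no
```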
  • the storage resource in the computer system needs to be partitioned. The method for resource partitioning is described below.
  • the concurrent granularity of the SCM storage unit and the DRAM memory unit mounted on the memory bus is NUMA, memory channel, Rank, and Bank in order from coarse to fine.
  • the minimum concurrent granularity of the storage unit is generally adopted.
  • the concurrent access granularity of the SCM is Bank, so the storage sub-unit in the SCM is a bank, and the number of banks that can be concurrently accessed in the SCM storage unit needs to be determined. Similarly, the number of banks that can be concurrently accessed in the DRAM memory unit needs to be determined. It can be understood that if the file requested by the file access request is located on a different bank that can be accessed concurrently, these file access requests can be executed concurrently.
  • the concurrent granularity of the Flash storage unit may be a storage queue or a Die, so the storage subunit in the Flash storage unit is a storage queue or a Die, for example, if the file requested by the file access request is located in a different storage queue that can be concurrently accessed. In the case, these file access requests can be executed concurrently.
  • the storage resource into at least two resource partitions according to the number of storage subunits and the number of processor cores that are respectively concurrently accessible by the SCM storage unit, the DRAM memory unit, and the Flash storage unit.
  • FIG. 7 is an exemplary schematic diagram of resource partitioning. The resource partitions are divided according to the number of banks that can be concurrently accessed in the SCM storage unit, the number of banks that can be concurrently accessed in the DRAM memory unit, and the number of concurrently accessible Dies in the Flash storage unit, as determined by the computer system in step 601, together with the number of processor cores in the storage resource determined in step 602.
  • At least one processor core needs to be allocated to each resource partition. According to the number of banks that can be concurrently accessed in the SCM storage unit, the concurrently accessible banks of the SCM storage unit are distributed to the resource partitions as evenly as possible; according to the number of banks that can be concurrently accessed in the DRAM memory unit, the concurrently accessible banks of the DRAM memory unit are distributed to the resource partitions as evenly as possible; and according to the number of Dies that the Flash storage unit can concurrently access, the concurrently accessible Dies of the Flash storage unit are distributed to the resource partitions as evenly as possible.
  • the resource partition includes at least one processor core, at least one DRAM subunit, at least one SCM subunit, and at least one Flash subunit, and the at least one processor core, DRAM subunit, SCM subunit, and Flash subunit in a resource partition are located in one NUMA node.
  • The left side of FIG. 7 shows the resource distribution before partitioning, and the right side shows the result of resource partitioning using the processor core as the standard.
  • NUMA node 0 is divided into N resource partitions: resource partition 1, resource partition 2, ..., resource partition N. A resource partition can contain one processor core or multiple processor cores.
  • Processor cores in different resource partitions can run in parallel, and storage subunits in different resource partitions can be accessed in parallel, so when file access requests for files in different resource partitions are received, these file access requests can be processed in parallel, which can improve the efficiency of file processing.
  • the number of storage subunits that can be concurrently accessed in a storage unit may not be divisible by the number of processor cores. In this case, the numbers of storage subunits in different resource partitions may be unbalanced, and a certain difference is allowed.
  • the manner of resource allocation may be sequential allocation: according to the numbers of the processor cores, the concurrently accessible DRAM subunits, and the concurrently accessible SCM subunits, the processor cores, DRAM subunits, and SCM subunits are assigned to different resource partitions in ascending order of their numbers.
  • processor core 0, DRAM bank 0, SCM bank 0, and flash die 0 are assigned to resource partition 0, and processor core 1, DRAM bank 1, SCM bank 1, and flash die 1 are assigned to resource partition 1.
  • the resources are allocated as follows: there are 4 resource partitions in total, each containing 1 processor core; because the number of SCM banks is not divisible by the number of processor cores, the first 3 resource partitions each contain 2 SCM banks, and the last resource partition contains only 1 SCM bank.
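  • the sequential allocation described above can be sketched as a round-robin deal of subunits over per-core partitions (an illustration; the function name and dict layout are assumptions). Dealing in numbering order reproduces the example: with 4 cores and 7 SCM banks, the first 3 partitions get 2 banks each and the last gets 1.

```python
def divide_into_partitions(num_cores, num_scm_banks, num_dram_banks, num_flash_dies):
    """One resource partition per processor core; concurrently accessible
    subunits are dealt out in ascending numbering order, so when a count is
    not divisible by the core count, the earlier partitions get one more."""
    partitions = [{"cores": [i], "scm": [], "dram": [], "flash": []}
                  for i in range(num_cores)]
    for kind, count in (("scm", num_scm_banks),
                        ("dram", num_dram_banks),
                        ("flash", num_flash_dies)):
        for unit in range(count):
            partitions[unit % num_cores][kind].append(unit)
    return partitions
```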
  • the resource partition for processing the file access request can be determined; based on this, another embodiment of the present invention is provided.
  • the foregoing step 502, determining the resource partition for processing the file access request according to the file information of files accessed by at least some processes in the computer system within the preset time period, may be implemented as steps 5021 to 5023.
  • the ratio of the large file to the file accessed by at least part of the process in the preset time period is the ratio of the number of files whose file size exceeds the preset value to the total number of files accessed by at least part of the process in the preset time period.
  • the basic idea of determining the resource partition for processing file access requests according to the directory to which the target file belongs is to allocate different resource partitions to files in different subdirectories under the same directory; if the number of subdirectories in the same directory is greater than the number of resource partitions, one resource partition is allowed to contain multiple subdirectories.
  • the resource partition for processing the file access request may be determined according to the hash value of the directory string of the target file.
  • the specific implementation process is: first determine the hash value of the directory string of the target file, and use the hash value as the resource partition number of the resource partition for processing the file access request, thereby determining the resource partition for processing the file access request as the resource partition corresponding to that resource partition number.
  • the hash function should be chosen so that the hash values it produces are distributed as uniformly as possible.
  • files accessed by at least some processes in the preset time period being stored in different subdirectories under the same directory indicates that files in different subdirectories under the same directory may be accessed concurrently, and that the proportion of large files in the different subdirectories under the same directory is high.
  • the resource partitions for processing file access requests are determined according to the directory to which the target file belongs, so that subsequent files in different subdirectories under the same directory can be concurrently accessed to improve file processing efficiency.
5022. When the proportion of large files among the accessed files exceeds the preset threshold and the accessed files are not under the same directory, the resource partition for processing the file access request is determined according to the file identifier of the target file carried in the file access request. The file identifier is the file's inode number, which indicates a file in the file system; the inode number is the unique label that represents distinct files in a file system.
Specifically, the resource partition for processing the file access request may be determined according to the hash value of the inode number of the target file. The implementation is: compute the hash value of the inode number of the target file, use that hash value as the resource partition number, and determine the resource partition for processing the file access request to be the partition corresponding to that number.
The inode hash value is calculated in the same way as the directory hash value in step 5021, and details are not repeated here.
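The two hash-based selection rules of steps 5021 and 5022 can be sketched as follows. This is an illustrative sketch only: the exact hash functions and the partition count `NUM_PARTITIONS` are assumptions, not formulas specified by the patent (which only requires a hash that maps a directory string or an inode number uniformly onto partition numbers).

```python
# Hypothetical sketch of hash-based partition selection (steps 5021 and 5022).
# The ASCII-sum hash and the partition count below are illustrative assumptions.

NUM_PARTITIONS = 4  # assumed number of resource partitions


def partition_by_directory(path: str) -> int:
    """Step 5021: hash the directory string of the target file."""
    directory = path.rsplit("/", 1)[0]          # strip the file name
    ascii_sum = sum(ord(c) for c in directory)  # e.g. sum of character codes
    return ascii_sum % NUM_PARTITIONS           # resource partition number


def partition_by_inode(inode: int) -> int:
    """Step 5022: hash the inode number of the target file."""
    return inode % NUM_PARTITIONS
```

Note the design consequence: all files within one subdirectory hash to the same partition, while different subdirectories tend to spread across partitions.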
If the accessed files are not located in different subdirectories under the same directory but the proportion of large files among them is high, the files are spread across different resource partitions as much as possible, so that subsequent large files can be processed in parallel. This avoids the situation where large files stored in the same resource partition force a single processor core to process them serially and consume too much time.
5023. When the proportion of large files among the accessed files does not exceed the preset threshold, the resource partition is determined according to the information of the process to which the file access request belongs. The specific method is: the resource partition containing the processor core that runs the process to which the file access request belongs is used as the resource partition for processing the file access request, so as to preserve the locality of process execution and avoid process migration.
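The locality rule of step 5023 can be sketched as a simple lookup from the core running the process to the partition that contains that core. The core-to-partition layout below is an assumed example (two cores per partition), not data from the patent.

```python
# Illustrative sketch of step 5023: when small files dominate, keep the
# request on the partition whose processor core already runs the process.

CORE_TO_PARTITION = {0: 0, 1: 0, 2: 1, 3: 1}  # assumed layout: 2 cores/partition


def partition_by_locality(current_core: int) -> int:
    """Return the partition containing the core that runs the process,
    so the process need not migrate."""
    return CORE_TO_PARTITION[current_core]
```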
In the embodiments of the present invention, allocating different resource partitions to target files that may be requested simultaneously allows access requests for those target files to be processed in parallel later, improving file-processing efficiency.
The above mainly describes the solutions provided by the embodiments of the present invention from the perspective of interaction between network elements. It can be understood that each network element, for example the NUMA node acting as the management node in the NUMA system, includes hardware structures and/or software modules corresponding to each function in order to implement the functions above. A person skilled in the art will readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the present invention can be implemented by hardware or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
In the embodiments of the present invention, the NUMA node acting as the management node in the NUMA system may be divided into function modules according to the foregoing method examples. For example, each function module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software function module. It should be noted that the division of modules in the embodiments of the present invention is schematic and is merely a logical function division; other division manners are possible in actual implementations.
When each function module corresponds to one function, FIG. 9 shows a possible schematic structure of the resource allocation apparatus involved in the foregoing embodiments. The resource allocation apparatus includes an obtaining module 901, a determining module 902, and an allocating module 903. The obtaining module 901 is configured to support the resource allocation apparatus in performing step 501 in FIG. 5; the determining module 902 is configured to support the apparatus in performing step 502 in FIG. 5 and steps 5021 to 5023 in FIG. 8; the allocating module 903 is configured to support the apparatus in performing step 503 in FIG. 5.
The steps of the methods or algorithms described in connection with the disclosure herein may be implemented in hardware or by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may be located in an ASIC, and the ASIC may be located in a core network interface device. Alternatively, the processor and the storage medium may exist as discrete components in the core network interface device.
A person skilled in the art should be aware that, in the one or more examples above, the functions described in the present invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium accessible to a general-purpose or special-purpose computer.
From the description of the foregoing implementations, a person skilled in the art can clearly understand that the present invention can be implemented by software plus necessary general-purpose hardware, or of course by hardware alone, but in many cases the former is the better implementation. Based on such an understanding, the part of the technical solutions of the present invention that is essential or that contributes to the prior art can be embodied in the form of a software product. The computer software product is stored in a readable storage medium, such as a floppy disk, hard disk, or optical disc of a computer, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention discloses a resource allocation method and apparatus and a NUMA system, relating to the field of communications technologies, which can solve the problem of low file-processing efficiency. In the embodiments of the present invention, the process to which a file access request belongs is obtained, the resource partition for processing the file access request is determined according to file information of files accessed by at least some processes within a preset time period in the computer system, and the process to which the file access request belongs is then allocated to the resource partition for processing the file access request. The solutions provided by the embodiments of the present invention are applicable to allocating resources to processes issuing file access requests.

Description

Resource allocation method and apparatus, and NUMA system
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a resource allocation method and apparatus and a NUMA system.
Background
With the vigorous development of applications such as mobile devices, social networks, the Internet, and big data, the data produced by human society is growing explosively, and the requirements on the performance of storage systems and the capacity of storage resources are becoming ever higher. Applications generally access files in a storage system through a file system, so data storage performance can be improved by improving the processing performance of the file system.
At present, when a file system runs, it not only needs to access the storage medium but also needs to make full use of dynamic random access memory (DRAM) to cache part of the data. However, on a multi-core system containing multiple non-uniform memory access (NUMA) nodes, DRAM may be accessed across NUMA nodes, which increases data-access latency and degrades file-system performance. The prior art can improve file-system performance by optimizing the storage locations of file data. Specifically, the processing resources and the storage resources managed by the file system can be partitioned according to the NUMA structure; for example, one NUMA node can serve as one resource partition. Each NUMA node includes processor cores, the storage media among the storage resources, and DRAM serving as a cache. The subdirectories in the file system are allocated to different resource partitions in turn, and operations on all files under one subdirectory can only use the resources in the resource partition to which that subdirectory belongs; that is, when processing a file mapped to a certain resource partition, only the DRAM and processor cores in that partition can be used, and cross-partition access is not allowed.
However, although this method reduces cross-NUMA DRAM access, operations on files under one subdirectory can only use the DRAM and processor cores of the resource partition to which that subdirectory belongs, so the files cannot be processed in parallel. For example, if access requests for five files under one subdirectory are received at the same time, only the processor cores of the resource partition to which the subdirectory belongs can process the five access requests, and only serially, resulting in low file-processing efficiency.
Summary
Embodiments of the present invention provide a resource allocation method and apparatus and a NUMA system, which can solve the problem of low file-processing efficiency.
In one aspect, an embodiment of the present invention provides a resource allocation method applied to a computer system having a non-uniform memory access (NUMA) architecture. The computer system includes multiple NUMA nodes connected by interconnect devices, and each NUMA node includes at least one processor core. The method includes: a first NUMA node among the multiple NUMA nodes obtains the process to which a file access request belongs, where the file access request is used to access a target file; determines, according to file information of files accessed by at least some processes within a preset time period in the computer system, the resource partition for processing the file access request; and allocates the process to which the file access request belongs to that resource partition for processing. The computer system includes at least two resource partitions, each resource partition includes at least one processor core and at least one storage unit, the processor cores and storage units of one resource partition are located on the same NUMA node, and one NUMA node includes at least one resource partition. Because the file information of files accessed by at least some processes within the preset time period reflects which files may be accessed simultaneously, determining the resource partition according to this information avoids allocating file access requests for files that may be accessed simultaneously to the same resource partition, so that different resource partitions can process, in parallel, file access requests that simultaneously access different files, thereby improving file-processing efficiency.
In a possible design, the file access request carries the file identifier of the target file. Before the resource partition for processing the file access request is determined according to the file information of files accessed by at least some processes within the preset time period, it is further determined, according to the file identifier and a resource allocation mapping, that the target file has not been accessed, where the resource allocation mapping includes the file identifiers of files that have been accessed and information about the resource partitions that process the file access requests for those files. In addition, if it is determined according to the file identifier and the resource allocation mapping that the target file has been accessed, the resource partition for the process handling the file access request can be determined directly from the resource allocation mapping.
In a possible design, when the process to which the file access request belongs has not accessed any file before accessing the target file, the at least some processes are at least some processes other than the process to which the file access request belongs; when the process to which the file access request belongs has accessed files before accessing the file to be processed, the at least some processes are the process to which the file access request belongs.
In a possible design, when the proportion of large files among the files accessed by at least some processes within the preset time period exceeds a preset threshold, and those files are stored in different subdirectories under the same directory, the resource partition for processing the file access request is determined according to the directory to which the target file belongs, allocating, as far as possible, different resource partitions to files in different subdirectories under the same directory; if the number of subdirectories under one directory exceeds the number of resource partitions, one resource partition is allowed to contain multiple subdirectories. When determining the resource partition, the resource partition number may be determined from the hash value of the directory string of the target file. The proportion of large files is the ratio of the number of files whose size exceeds a preset value to the total number of files accessed by at least some processes within the preset time period. It can be seen that when the accessed files are stored in different subdirectories under the same directory, files in those subdirectories may be accessed concurrently; determining the resource partition according to the directory to which the target file belongs allocates different resource partitions to file access requests for files in different subdirectories, so that those files can be accessed in parallel, improving file-processing efficiency.
In a possible design, when the proportion of large files among the files accessed by at least some processes within the preset time period exceeds the preset threshold, and the accessed files are not under the same directory, the resource partition for processing the file access request is determined according to the file identifier of the target file carried in the file access request, where the file identifier is the file inode number, which indicates a file in the file system; the inode number is the unique label representing distinct files in a file system. When determining the resource partition, the resource partition number may be determined from the hash value of the inode number of the target file. It can be seen that if the accessed files are not located in different subdirectories under the same directory but the proportion of large files among them is high, large files may be accessed simultaneously; determining the resource partition according to the file identifier carried in the file access request allocates, as far as possible, different resource partitions to the file access requests for large files, so that large files can be accessed in parallel, improving file-processing efficiency.
In a possible design, when the proportion of large files among the files accessed by at least some processes within the preset time period does not exceed the preset threshold, the resource partition for processing the file access request is determined according to the information of the process to which the file access request belongs. If the proportion does not exceed the threshold, most of the accessed files are small, and accessing small files takes little time, so there is no need to migrate the process to which the file access request belongs; the resource partition allocated to the process remains the one containing the processor core that is already running the process, which avoids process migration.
In a possible design, before a resource partition is allocated to the process of the file access request, the storage resources in the computer system must first be partitioned. The partitioning method is: determine the concurrent-access granularities of the SCM storage units, DRAM memory units, and flash storage units among the storage resources, and the number of storage sub-units that can be accessed concurrently at each granularity; then determine the number of processor cores; and divide the storage resources into at least two resource partitions according to the numbers of concurrently accessible storage sub-units of the SCM, DRAM, and flash units and the number of processor cores. One resource partition includes at least one processor core, at least one DRAM sub-unit, at least one SCM sub-unit, and at least one flash sub-unit, all located on one NUMA node. Because processor cores in different resource partitions can run in parallel and storage sub-units in different resource partitions can be accessed in parallel, file access requests for files in different resource partitions can be processed in parallel when they are received, improving file-processing efficiency.
In another aspect, an embodiment of the present invention provides a resource allocation apparatus that can implement the functions of the first NUMA node in the foregoing method embodiment. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions above.
In a possible design, the structure of the apparatus includes a processor and a transceiver. The processor is configured to support the apparatus in performing the corresponding functions in the foregoing method. The transceiver is used to support communication between the apparatus and other network elements. The apparatus may further include a memory, coupled to the processor, that stores the program instructions and data necessary for the apparatus.
In yet another aspect, an embodiment of the present invention provides a communication system that includes the resource allocation apparatus described in the foregoing aspects and an apparatus that can carry storage resources.
In still another aspect, an embodiment of the present invention provides a computer storage medium for storing computer software instructions used by the resource allocation apparatus above, including a program designed to perform the foregoing aspects.
Compared with the prior art, the embodiments of the present invention can determine the resource partition for processing a file access request according to file information of files accessed by at least some processes within a preset time period. Because this file information reflects which files may be accessed simultaneously, determining the resource partition according to it avoids allocating file access requests that may simultaneously access different files to the same resource partition, so that different resource partitions can process such requests in parallel, thereby improving file-processing efficiency.
Brief Description of Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Clearly, the drawings described below are merely drawings of some embodiments of the present invention.
FIG. 1 is a schematic structural diagram of a distributed storage system according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a storage node according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a NUMA node according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a processor in a NUMA node according to an embodiment of the present invention;
FIG. 5 is a flowchart of a resource allocation method according to an embodiment of the present invention;
FIG. 6 is a flowchart of another resource allocation method according to an embodiment of the present invention;
FIG. 7 is an exemplary schematic diagram of resource partitions according to an embodiment of the present invention;
FIG. 8 is a flowchart of another resource allocation method according to an embodiment of the present invention;
FIG. 9 is a schematic logical structural diagram of a resource allocation apparatus according to an embodiment of the present invention.
Description of Embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Clearly, the described embodiments are merely some rather than all of the embodiments of the present invention.
To facilitate the description of subsequent embodiments, the related technologies to which the embodiments of the present invention apply are first introduced. To improve the performance of a storage system, a single-machine storage system can be expanded into a distributed storage system. As shown in FIG. 1, the distributed storage system includes a storage client and multiple storage nodes interconnected by a high-speed network; the storage nodes together form a distributed storage system, and the storage client can access the storage resources of the distributed storage system through the network. Each storage node includes at least one NUMA node, and the storage space on the storage nodes is managed and allocated uniformly by a distributed file system. The distributed file system can concurrently access files stored on different storage nodes, and files stored in different NUMA nodes of the same storage node can also be accessed concurrently.
The architecture of a storage node in FIG. 1 is shown in FIG. 2. A storage node contains at least one processor, each processor corresponds to one NUMA node, and one NUMA node contains one processor together with the memory resources and SCM resources mounted on the memory bus; the flash in the figure replaces the traditional hard disk. Flash, memory (DRAM), and storage class memory (SCM) all contain many storage chips and are capable of concurrent processing. For example, the concurrency granularities of SCM storage units and DRAM memory units, from coarse to fine, are NUMA, memory channel, rank, and bank. That is, if the data corresponding to requests resides on different banks, those requests can be executed concurrently. The concurrency granularity of a flash storage unit can be the queue or the die; some flash storage units provide multiple storage queues, and requests in different queues can be executed concurrently without interfering with each other. For a processor, each processor core is its unit of concurrency: different processor cores can execute concurrently. Concurrent access means that requests accessing different storage chips can be processed by those chips at the same time rather than being executed serially.
The architecture of the NUMA node in FIG. 2 is shown in FIG. 3. The NUMA node includes a flash memory 301, a non-volatile memory SCM 302, a dynamic random access memory DRAM 303, a processor 304, an I/O bus 305, and a memory bus 306. The flash memory 301, the non-volatile memory SCM 302, and the DRAM 303 constitute the storage unit of the NUMA node. The flash memory 301, the non-volatile memory SCM 302, the DRAM 303, and the processor are connected through the I/O bus 305 and the memory bus 306.
The flash memory 301 is a type of storage chip that not only has the electrically erasable programmable (EEPROM) property but also offers fast data reads, and data is not lost on power failure.
The non-volatile memory SCM 302 is storage class memory, a new type of high-performance non-volatile memory that can write new data directly without first erasing old data.
The DRAM 303 can hold data only for a short time and is therefore used to cache data.
The processor 304 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is configured to execute applications. After receiving a file access request, the processor 304 can execute the process to which the file access request belongs and, according to the request, read and write files in the storage unit of the NUMA node through the memory bus and the I/O bus.
The embodiments of the present invention apply to a computer system with NUMA. The computer system includes multiple NUMA nodes connected by interconnect devices, and each NUMA node includes at least one processor core.
It should be noted that in a computer system with NUMA, NUMA nodes can be connected and exchange information through interconnect devices. Taking the processor in FIG. 3 as an example, the processor in each NUMA node can access the storage units of the entire computer system. The CPUs in the computer system can be interconnected through an interconnect bus. FIG. 4 takes a processor with four CPU cores per NUMA node as an example; it should be noted that FIG. 4 illustrates only the structure of the processors in the NUMA nodes. As shown in FIG. 4, the processor of each NUMA node contains four CPU cores, and different CPUs are interconnected through an interconnect bus, a common one being the Quick-Path Interconnect (QPI). The processors of different NUMA nodes can be interconnected through connecting devices such as network controllers so that they can communicate with each other.
In practical applications, in one case the NUMA system may have a dedicated node serving as a management node to implement the management functions for the whole system. For example, the NUMA node serving as the management node can distribute access requests to the other NUMA nodes that process services, so that data can be stored on a particular NUMA node. In another case, the NUMA system may have no dedicated management node, and each NUMA node can both process services and implement part of the management functions. For example, NUMA node 1 can process the access requests it receives, and NUMA node 1 can also act as a management node and distribute the access requests it receives to other NUMA nodes (for example, NUMA node 2) for processing. The embodiments of the present invention do not limit the functions of specific NUMA nodes.
It should be noted that a process is the basic unit of dynamic execution in an operating system and also the basic unit of resource allocation. A thread, also called a lightweight process, is an entity within a process; a thread owns no system resources of its own, only the resources essential for its execution, but it can share all the resources owned by the process with the other threads of the same process. When a process executes, the operating system allocates the processor cores on which the process/threads run according to a certain policy, and an application can specify the desired processor cores through function interfaces that set process affinity.
Based on the NUMA system described above, to improve file-processing efficiency, an embodiment of the present invention provides a resource allocation method. As shown in FIG. 5, the method includes:
501. Obtain the process to which a file access request belongs.
The file access request is used to access a target file and may be a file creation request, a file read/write request, or a file deletion request.
It should be noted that the NUMA node serving as the management node in the NUMA system may obtain the process to which the file access request belongs and allocate a resource partition to that process.
502. Determine, according to file information of files accessed by at least some processes within a preset time period in the computer system, the resource partition for processing the file access request.
The computer system includes at least two resource partitions, each resource partition includes at least one processor core and at least one storage unit, the processor cores and storage units of one resource partition are located on the same NUMA node, and one NUMA node includes at least one resource partition. The at least two resource partitions are used to process different processes.
It should be noted that when the process to which the file access request belongs has not accessed any file before accessing the target file, the at least some processes are at least some processes other than the process to which the file access request belongs;
when the process to which the file access request belongs has accessed files before accessing the file to be processed, the at least some processes are the process to which the file access request belongs.
It can be understood that when the process to which the file access request belongs has not accessed any file before accessing the target file, the process is a new process and its file-access information cannot be determined, so the resource partition is determined according to the file information of processes other than this process within the preset time period; whereas if the process has accessed other files before accessing the target file, the resource partition can be determined according to the file information of the files accessed by that process.
The file information of files accessed by at least some processes consists of the proportion of large files among the accessed files and whether the accessed files are located in different subdirectories under the same directory.
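The two pieces of file information used in step 502 can be sketched as follows. The record format (a list of path/size pairs observed in the preset window) and the 1 MiB "large file" threshold are assumptions for illustration; the patent leaves the preset value unspecified.

```python
# Minimal sketch (assumed record format) of the file information of step 502:
# the large-file ratio, and whether the recently accessed files all sit in
# subdirectories of one common directory.
import os

LARGE_FILE_BYTES = 1 << 20  # assumed "large file" threshold: 1 MiB


def large_file_ratio(accesses):
    """accesses: list of (path, size_in_bytes) seen in the preset window."""
    if not accesses:
        return 0.0
    big = sum(1 for _, size in accesses if size > LARGE_FILE_BYTES)
    return big / len(accesses)


def share_parent_directory(accesses):
    """True if every accessed file lies in a subdirectory of one directory."""
    parents = {os.path.dirname(os.path.dirname(p)) for p, _ in accesses}
    return len(parents) == 1
```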
503. Allocate the process to which the file access request belongs to the resource partition for processing the file access request.
It should be noted that if the resource partition includes multiple processor cores, any processor core in the partition may be instructed to process the file access request, or a processor core with relatively low utilization may be selected to execute it.
In the resource allocation method provided by this embodiment of the present invention, the process to which a file access request belongs is obtained, the resource partition for processing the file access request is determined according to file information of files accessed by at least some processes within a preset time period in the computer system, and the process is then allocated to that resource partition for processing. In the prior art, operations on files under one subdirectory can be processed only by the processor cores and storage resources of the resource partition to which that subdirectory belongs, resulting in low file-processing efficiency. In contrast, this embodiment determines the resource partition according to file information that reflects which files may be accessed simultaneously, and can thus avoid allocating the same resource partition to file access requests that may simultaneously access different files, so that different resource partitions can process such requests in parallel, thereby improving file-processing efficiency.
It should be noted that the file access request carries the file identifier of the target file. Before steps 502 and 503 are performed, it must first be determined whether the target file has been accessed. Only if it has not been accessed is a resource partition allocated to the process handling the file access request according to the resource allocation method of steps 502 and 503; if it has been accessed, the resource partition of the process handling the file access request can be determined directly from the resource allocation mapping.
Specifically, whether the target file has been accessed can be determined according to the file identifier and the resource allocation mapping.
The resource allocation mapping includes the file identifiers of files that have been accessed and information about the resource partitions that process the file access requests for those files. The resource partition that processes the file access requests for a file that has been accessed is the resource partition that stores the file, and is also the resource partition of the process that handles the file access requests for the file.
Therefore, the resource allocation mapping can be checked for the file identifier of the target file and the information of the resource partition that processes its file access request. If the mapping contains them, the process to which the file access request belongs is directly allocated to the resource partition found in the mapping; if it does not, the target file has not been accessed yet, and the resource partition for processing the file access request is determined according to the method of steps 502 and 503.
The resource partition information may be a resource partition number, the file identifier may be a file name, and the resource allocation mapping may be represented as a table, for example:
File name    Resource partition number
/mnt/nvfs/test    0
/mnt/nvfs/test2/test3    1
It should also be noted that the first file access request for a file is generally a file creation request; that is, each time a file is created, a mapping between the file and a resource partition is generated. Thus the file named "/mnt/nvfs/test" in the table above is stored in resource partition 0, and the resource allocation mapping can also represent the mapping between the names of created files and the resource partition numbers of the partitions to which the created files belong.
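The lookup described above can be sketched as a dictionary consult before falling back to the step-502 policy. This is a hedged sketch: `choose_partition_by_policy` stands in for steps 5021 to 5023, and the seeded entries simply mirror the example table.

```python
# Sketch of the mapping lookup: reuse the partition of an already-accessed
# file; run the step-502 policy only for files never accessed before, and
# record the decision so later requests for the same file stay put.

allocation_map = {            # file identifier -> resource partition number
    "/mnt/nvfs/test": 0,
    "/mnt/nvfs/test2/test3": 1,
}


def resolve_partition(path, choose_partition_by_policy):
    if path in allocation_map:                 # file has been accessed before
        return allocation_map[path]
    part = choose_partition_by_policy(path)    # steps 5021-5023 (placeholder)
    allocation_map[path] = part                # remember for later requests
    return part
```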
Before the method flow shown in FIG. 5 is executed, the storage resources in the computer system must first be partitioned. An implementation provided by this embodiment of the present invention describes the partitioning method, which is shown in FIG. 6.
601. Determine the concurrent-access granularities of the SCM storage units, DRAM memory units, and flash storage units among the storage resources, and the number of storage sub-units that can be accessed concurrently at each granularity.
In general, the concurrency granularities of SCM storage units and DRAM memory units mounted on the memory bus are, from coarse to fine, NUMA, memory channel, rank, and bank. The embodiments of the present invention generally use the finest concurrency granularity of a storage unit. The concurrent-access granularity of SCM is the bank, so the storage sub-unit of SCM is the bank, and the number of concurrently accessible banks in the SCM storage unit must be determined; similarly, the number of concurrently accessible banks in the DRAM memory unit must also be determined. It can be understood that if the files requested by file access requests reside on different concurrently accessible banks, those file access requests can be executed concurrently.
In addition, the concurrency granularity of a flash storage unit can be the storage queue or the die, so the storage sub-unit of a flash storage unit is the storage queue or the die. For example, if the files requested by file access requests reside in different concurrently accessible storage queues, those requests can be executed concurrently.
602. Determine the number of processor cores in the computer system.
603. Divide the storage resources into at least two resource partitions according to the numbers of concurrently accessible storage sub-units of the SCM storage units, DRAM memory units, and flash storage units, and the number of processor cores.
As shown in FIG. 7, which is an exemplary schematic diagram of resource partitions, the computer system divides the resource partitions according to the number of concurrently accessible banks in the SCM storage unit, the number of concurrently accessible banks in the DRAM memory unit, and the number of concurrently accessible dies in the flash storage unit determined in step 601, and the number of processor cores determined in step 602.
At least one processor core must be allocated to each resource partition. According to the number of concurrently accessible banks in the SCM storage unit, those banks are distributed as evenly as possible among the resource partitions; the concurrently accessible banks in the DRAM memory unit are likewise distributed among the resource partitions according to their number, as are the concurrently accessible dies in the flash storage unit. The final result is that one resource partition includes at least one processor core, at least one DRAM sub-unit, at least one SCM sub-unit, and at least one flash sub-unit. It should be noted that the at least one processor core, DRAM sub-unit, SCM sub-unit, and flash sub-unit of one resource partition are located in one NUMA node. See FIG. 7 for details: the left side shows the resource distribution before partitioning, and the right side shows the result of partitioning with processor cores as the criterion. As shown in FIG. 7, after partitioning is completed, NUMA node 0 is divided into N resource partitions: resource partition 1, resource partition 2, ..., resource partition N. In addition, one resource partition may contain not just one but multiple processor cores. Processor cores in different resource partitions can run in parallel, and storage sub-units in different resource partitions can be accessed in parallel, so when file access requests for files in different resource partitions are received, they can be processed in parallel, improving file-processing efficiency.
It should be noted that the number of concurrently accessible storage sub-units in a storage unit may not be evenly divisible by the number of processor cores; in that case the numbers of storage sub-units in different resource partitions may be unbalanced, and a certain difference is allowed. Exemplarily, resources may be allocated sequentially: according to the numbers of the processor cores, of the concurrently accessible DRAM sub-units, and of the concurrently accessible SCM sub-units, the processor cores, DRAM sub-units, and SCM sub-units are allocated to the resource partitions in ascending numerical order. For example, processor core 0, DRAM bank 0, SCM bank 0, and flash die 0 are allocated to resource partition 0, and processor core 1, DRAM bank 1, SCM bank 1, and flash die 1 are allocated to resource partition 1. If the resources cannot be divided evenly, for example the system has 4 processor cores and 7 SCM banks, the allocation is as follows: there are 4 resource partitions in total, each containing 1 processor core; since the number of SCM banks is not divisible by the number of processor cores, the first 3 resource partitions each contain 2 SCM banks and the last resource partition contains only 1 SCM bank.
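The uneven division described above (4 cores, 7 SCM banks, giving 2, 2, 2, 1) can be sketched as a small helper. The one-partition-per-core layout is the example from the text; other layouts are possible.

```python
# Illustrative division from steps 601-603: one partition per processor core,
# sub-units dealt out in ascending order so any remainder lands on the first
# partitions (e.g. 7 SCM banks over 4 cores -> 2, 2, 2, 1).

def divide(units: int, partitions: int):
    """Return how many storage sub-units each resource partition receives."""
    counts = [units // partitions] * partitions
    for i in range(units % partitions):  # spread the remainder
        counts[i] += 1
    return counts
```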
It can be understood that after the storage resources in the computer system have been partitioned, the resource partition for processing a file access request can be determined whenever such a request is received. Based on this, in another implementation provided by this embodiment of the present invention, as shown in FIG. 8, the foregoing step 502 of determining, according to file information of files accessed by at least some processes within a preset time period in the computer system, the resource partition for processing the file access request may be implemented as steps 5021 to 5023.
5021. When the proportion of large files among the files accessed by at least some processes within the preset time period exceeds the preset threshold, and those files are stored in different subdirectories under the same directory, determine the resource partition for processing the file access request according to the directory to which the target file belongs.
The proportion of large files is the ratio of the number of files whose size exceeds a preset value to the total number of files accessed by at least some processes within the preset time period.
The basic idea of determining the resource partition according to the directory to which the target file belongs is to allocate, as far as possible, different resource partitions to files in different subdirectories under the same directory; if the number of subdirectories under one directory exceeds the number of resource partitions, one resource partition is allowed to contain multiple subdirectories.
Specifically, the resource partition for processing the file access request can be determined from the hash value of the directory string of the target file. The implementation is: first compute the hash value of the directory string of the target file, use that hash value as the resource partition number of the partition that processes the file access request, and then determine that partition to be the one corresponding to the number. The hash value here refers to converting a string into a number by a hash function such as a modulo operation, for example hash(string) = (ASCII value of the string) mod (number of resource partitions). The values produced by the hash function should be as uniform as possible.
It should be noted that when the files accessed by at least some processes within the preset time period are stored in different subdirectories under the same directory, files in those subdirectories may be accessed concurrently, and the proportion of large files in different subdirectories under the same directory is high. Determining the resource partition according to the directory to which the target file belongs enables files in different subdirectories under the same directory to be accessed concurrently later, improving file-processing efficiency.
5022. When the proportion of large files among the files accessed by at least some processes within the preset time period exceeds the preset threshold, and the accessed files are not under the same directory, determine the resource partition for processing the file access request according to the file identifier of the target file carried in the file access request.
The file identifier is the file inode number, which indicates a file in the file system; the inode number is the unique label representing distinct files in a file system.
Specifically, the resource partition for processing the file access request can be determined from the hash value of the inode number of the target file. The implementation is: compute the hash value of the inode number of the target file, use that hash value as the resource partition number of the partition that processes the file access request, and then determine that partition to be the one corresponding to the number. The inode hash value is calculated similarly to the directory hash value in step 5021 and is not described again here.
It should be noted that if the accessed files are not located in different subdirectories under the same directory but the proportion of large files among them is high, spreading the files across different resource partitions as much as possible allows subsequent large files to be processed in parallel, and avoids the situation where large files stored in the same resource partition force one processor core to spend too much time processing them serially.
5023. When the proportion of large files among the files accessed by at least some processes within the preset time period does not exceed the preset threshold, determine the resource partition for processing the file access request according to the information of the process to which the file access request belongs.
The specific method is: use the resource partition containing the processor core that runs the process to which the file access request belongs as the resource partition for processing the file access request, so as to preserve the locality of process execution and avoid process migration.
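Steps 5021 to 5023 can be pulled together into a single dispatcher, sketched below. The threshold, the partition count, the ASCII-sum hash, and the helper parameters (`large_ratio`, `same_parent_dir`, `current_core`) are all assumptions for illustration, not interfaces specified by the patent.

```python
# Hedged sketch of the step 5021-5023 decision flow: pick the partition by
# directory hash, inode hash, or process locality depending on the observed
# file information. Thresholds and hash functions are illustrative.

THRESHOLD = 0.5      # assumed large-file proportion threshold
NUM_PARTITIONS = 4   # assumed number of resource partitions


def choose_partition(target_path, target_inode, current_core,
                     large_ratio, same_parent_dir):
    if large_ratio > THRESHOLD and same_parent_dir:
        # 5021: hash of the directory string of the target file
        directory = target_path.rsplit("/", 1)[0]
        return sum(ord(c) for c in directory) % NUM_PARTITIONS
    if large_ratio > THRESHOLD:
        # 5022: hash of the inode number of the target file
        return target_inode % NUM_PARTITIONS
    # 5023: stay on the partition of the core already running the process
    return current_core % NUM_PARTITIONS
```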
In this embodiment of the present invention, allocating different resource partitions to target files that may be requested simultaneously allows access requests for these target files to be processed in parallel later, improving file-processing efficiency.
The above mainly describes the solutions provided by the embodiments of the present invention from the perspective of interaction between network elements. It can be understood that each network element, for example the NUMA node acting as the management node in the NUMA system, includes hardware structures and/or software modules corresponding to each function in order to implement the functions above. A person skilled in the art will readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the present invention can be implemented by hardware or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
In the embodiments of the present invention, the NUMA node acting as the management node in the NUMA system may be divided into function modules according to the foregoing method examples. For example, each function module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software function module. It should be noted that the division of modules in the embodiments of the present invention is schematic and is merely a logical function division; other division manners are possible in actual implementations.
When each function module corresponds to one function, FIG. 9 shows a possible schematic structure of the resource allocation apparatus involved in the foregoing embodiments. The resource allocation apparatus includes an obtaining module 901, a determining module 902, and an allocating module 903. The obtaining module 901 is configured to support the resource allocation apparatus in performing step 501 in FIG. 5; the determining module 902 is configured to support the apparatus in performing step 502 in FIG. 5 and steps 5021 to 5023 in FIG. 8; the allocating module 903 is configured to support the apparatus in performing step 503 in FIG. 5.
All related content of the steps in the foregoing method embodiments can be cited in the function descriptions of the corresponding function modules, and details are not repeated here.
The steps of the methods or algorithms described in connection with the disclosure of the present invention may be implemented in hardware or by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may be located in an ASIC, and the ASIC may be located in a core network interface device. Alternatively, the processor and the storage medium may exist as discrete components in the core network interface device.
A person skilled in the art should be aware that, in the one or more examples above, the functions described in the present invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium accessible to a general-purpose or special-purpose computer.
From the description of the foregoing implementations, a person skilled in the art can clearly understand that the present invention can be implemented by software plus necessary general-purpose hardware, or of course by hardware alone, but in many cases the former is the better implementation. Based on such an understanding, the part of the technical solutions of the present invention that is essential or that contributes to the prior art can be embodied in the form of a software product. The computer software product is stored in a readable storage medium, such as a floppy disk, hard disk, or optical disc of a computer, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The foregoing descriptions are merely specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (18)

  1. A resource allocation method, wherein the method is applied to a computer system having a non-uniform memory access (NUMA) architecture, the computer system comprises multiple NUMA nodes, the multiple NUMA nodes are connected by interconnect devices, each NUMA node comprises at least one processor core, and the method comprises:
    obtaining the process to which a file access request belongs, wherein the file access request is used to access a target file;
    determining, according to file information of files accessed by at least some processes within a preset time period in the computer system, the resource partition for processing the file access request, wherein the computer system comprises at least two resource partitions, each resource partition comprises at least one processor core and at least one storage unit, the processor cores and storage units of one resource partition are located on the same NUMA node, and one NUMA node comprises at least one resource partition;
    allocating the process to which the file access request belongs to the resource partition for processing the file access request.
  2. The method according to claim 1, wherein the file access request carries the file identifier of the target file, and before the determining, according to file information of files accessed by at least some processes within the preset time period in the computer system, of the resource partition for processing the file access request, the method further comprises:
    determining, according to the file identifier and a resource allocation mapping, that the target file has not been accessed, wherein the resource allocation mapping comprises the file identifiers of files that have been accessed and information about the resource partitions that process the file access requests for the files that have been accessed.
  3. The method according to claim 1 or 2, wherein:
    when the process to which the file access request belongs has not accessed any file before accessing the target file, the at least some processes are at least some processes other than the process to which the file access request belongs;
    when the process to which the file access request belongs has accessed files before accessing the file to be processed, the at least some processes are the process to which the file access request belongs.
  4. The method according to claim 3, wherein the determining, according to file information of files accessed by at least some processes within the preset time period in the computer system, of the resource partition for processing the file access request comprises:
    when the proportion of large files among the files accessed by the at least some processes within the preset time period exceeds a preset threshold, and the files accessed by the at least some processes within the preset time period are stored in different subdirectories under the same directory, determining, according to the directory to which the target file belongs, the resource partition for processing the file access request.
  5. The method according to claim 3, wherein the determining, according to file information of files accessed by at least some processes within the preset time period in the computer system, of the resource partition for processing the file access request comprises:
    when the proportion of large files among the files accessed by the at least some processes within the preset time period exceeds a preset threshold, and the files accessed by the at least some processes within the preset time period are not under the same directory, determining, according to the file identifier of the target file carried in the file access request, the resource partition for processing the file access request.
  6. The method according to claim 3, wherein the determining, according to file information of files accessed by at least some processes within the preset time period in the computer system, of the resource partition for processing the file access request comprises:
    when the proportion of large files among the files accessed by the at least some processes within the preset time period does not exceed a preset threshold, determining, according to the information of the process to which the file access request belongs, the resource partition for processing the file access request.
  7. A resource allocation apparatus, wherein the apparatus is applied to a computer system having a non-uniform memory access (NUMA) architecture, the computer system comprises multiple NUMA nodes, the multiple NUMA nodes are connected by interconnect devices, each NUMA node comprises at least one processor core, and the apparatus comprises:
    an obtaining module, configured to obtain the process to which a file access request belongs, wherein the file access request is used to access a target file;
    a determining module, configured to determine, according to file information of files accessed by at least some processes within a preset time period in the computer system, the resource partition for processing the file access request, wherein the computer system comprises at least two resource partitions, each resource partition comprises at least one processor core and at least one storage module, the processor cores and storage modules of one resource partition are located on the same NUMA node, and one NUMA node comprises at least one resource partition;
    an allocating module, configured to allocate the process, obtained by the obtaining module, to which the file access request belongs to the resource partition, determined by the determining module, for processing the file access request.
  8. The apparatus according to claim 7, wherein the file access request carries the file identifier of the target file;
    the determining module is further configured to determine, according to the file identifier and a resource allocation mapping, that the target file has not been accessed, wherein the resource allocation mapping comprises the file identifiers of files that have been accessed and information about the resource partitions that process the file access requests for the files that have been accessed.
  9. The apparatus according to claim 7 or 8, wherein:
    when the process to which the file access request belongs has not accessed any file before accessing the target file, the at least some processes are at least some processes other than the process to which the file access request belongs;
    when the process to which the file access request belongs has accessed files before accessing the file to be processed, the at least some processes are the process to which the file access request belongs.
  10. The apparatus according to claim 9, wherein:
    the determining module is further configured to: when the proportion of large files among the files accessed by the at least some processes within the preset time period exceeds a preset threshold, and the files accessed by the at least some processes within the preset time period are stored in different subdirectories under the same directory, determine, according to the directory to which the target file belongs, the resource partition for processing the file access request.
  11. The apparatus according to claim 9, wherein:
    the determining module is further configured to: when the proportion of large files among the files accessed by the at least some processes within the preset time period exceeds a preset threshold, and the files accessed by the at least some processes within the preset time period are not under the same directory, determine, according to the file identifier of the target file carried in the file access request, the resource partition for processing the file access request.
  12. The apparatus according to claim 9, wherein:
    the determining module is further configured to: when the proportion of large files among the files accessed by the at least some processes within the preset time period does not exceed a preset threshold, determine, according to the information of the process to which the file access request belongs, the resource partition for processing the file access request.
  13. A non-uniform memory access (NUMA) system, wherein the NUMA system comprises multiple NUMA nodes, the multiple NUMA nodes are connected by interconnect devices, each NUMA node comprises at least one processor core, and a first NUMA node among the multiple NUMA nodes is configured to: obtain the process to which a file access request belongs, wherein the file access request is used to request access to a target file;
    determine, according to file information of files accessed by at least some processes within a preset time period in the NUMA system, the resource partition for processing the file access request, wherein the NUMA system comprises at least two resource partitions, each resource partition comprises at least one processor core and at least one storage unit, the processor cores and storage units of one resource partition are located on the same NUMA node, and one NUMA node comprises at least one resource partition;
    allocate the process to which the file access request belongs to the resource partition for processing the file access request.
  14. The system according to claim 13, wherein the file access request carries the file identifier of the target file;
    the first NUMA node is further configured to determine, according to the file identifier and a resource allocation mapping, that the target file has not been accessed, wherein the resource allocation mapping comprises the file identifiers of files that have been accessed and information about the resource partitions that process the file access requests for the files that have been accessed.
  15. The system according to claim 13 or 14, wherein:
    when the process to which the file access request belongs has not accessed any file before accessing the target file, the at least some processes are at least some processes other than the process to which the file access request belongs;
    when the process to which the file access request belongs has accessed files before accessing the file to be processed, the at least some processes are the process to which the file access request belongs.
  16. The system according to claim 15, wherein:
    the first NUMA node is further configured to: when the proportion of large files among the files accessed by the at least some processes within the preset time period exceeds a preset threshold, and the files accessed by the at least some processes within the preset time period are stored in different subdirectories under the same directory, determine, according to the directory to which the target file belongs, the resource partition for processing the file access request.
  17. The system according to claim 15, wherein:
    the first NUMA node is further configured to: when the proportion of large files among the files accessed by the at least some processes within the preset time period exceeds a preset threshold, and the files accessed by the at least some processes within the preset time period are not under the same directory, determine, according to the file identifier of the target file carried in the file access request, the resource partition for processing the file access request.
  18. The system according to claim 15, wherein:
    the first NUMA node is further configured to: when the proportion of large files among the files accessed by the at least some processes within the preset time period does not exceed a preset threshold, determine, according to the information of the process to which the file access request belongs, the resource partition for processing the file access request.
PCT/CN2016/096113 2016-08-19 2016-08-19 一种资源分配方法、装置及numa系统 WO2018032519A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2016/096113 WO2018032519A1 (zh) 2016-08-19 2016-08-19 一种资源分配方法、装置及numa系统
CN201680004180.8A CN107969153B (zh) 2016-08-19 2016-08-19 一种资源分配方法、装置及numa系统

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/096113 WO2018032519A1 (zh) 2016-08-19 2016-08-19 一种资源分配方法、装置及numa系统

Publications (1)

Publication Number Publication Date
WO2018032519A1 true WO2018032519A1 (zh) 2018-02-22

Family

ID=61196207

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/096113 WO2018032519A1 (zh) 2016-08-19 2016-08-19 一种资源分配方法、装置及numa系统

Country Status (2)

Country Link
CN (1) CN107969153B (zh)
WO (1) WO2018032519A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109522102A (zh) * 2018-09-11 2019-03-26 华中科技大学 一种基于i/o调度的多任务外存模式图处理方法
CN111445349A (zh) * 2020-03-13 2020-07-24 贵州电网有限责任公司 一种适用于能源互联网的混合式数据存储处理方法及系统
CN115996203A (zh) * 2023-03-22 2023-04-21 北京华耀科技有限公司 网络流量分域方法、装置、设备和存储介质

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112231099A (zh) * 2020-10-14 2021-01-15 北京中科网威信息技术有限公司 一种处理器的内存访问方法及装置
CN115705247A (zh) * 2021-08-16 2023-02-17 华为技术有限公司 一种运行进程的方法及相关设备
CN115996153A (zh) * 2021-10-19 2023-04-21 华为技术有限公司 一种数据处理的方法和相关装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1531302A (zh) * 2003-03-10 2004-09-22 �Ҵ���˾ 用于将节点分成多个分区的方法及多节点系统
WO2013163008A1 (en) * 2012-04-27 2013-10-31 Microsoft Corporation Systems and methods for partitioning of singly linked lists for allocation memory elements
CN103440173A (zh) * 2013-08-23 2013-12-11 华为技术有限公司 一种多核处理器的调度方法和相关装置
CN104375899A (zh) * 2014-11-21 2015-02-25 北京应用物理与计算数学研究所 高性能计算机numa感知的线程和内存资源优化方法与系统
US20160224388A1 (en) * 2015-02-03 2016-08-04 International Business Machines Corporation Autonomous dynamic optimization of platform resources

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7756898B2 (en) * 2006-03-31 2010-07-13 Isilon Systems, Inc. Systems and methods for notifying listeners of events
WO2012119369A1 (zh) * 2011-08-02 2012-09-13 华为技术有限公司 基于cc-numa的报文处理方法、装置和系统
CN102508638B (zh) * 2011-09-27 2014-09-17 华为技术有限公司 用于非一致性内存访问的数据预取方法和装置
JP2014123254A (ja) * 2012-12-21 2014-07-03 International Business Maschines Corporation メディア上のファイルをユーザ単位で分割管理する方法、プログラム、及びストレージ・システム
CN103150394B (zh) * 2013-03-25 2014-07-23 中国人民解放军国防科学技术大学 面向高性能计算的分布式文件系统元数据管理方法
CN104063487B (zh) * 2014-07-03 2017-02-15 浙江大学 基于关系型数据库及k‑d树索引的文件数据管理方法
CN104077084B (zh) * 2014-07-22 2017-07-21 中国科学院上海微系统与信息技术研究所 分布式随机访问文件系统及其访问控制方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1531302A (zh) * 2003-03-10 2004-09-22 �Ҵ���˾ 用于将节点分成多个分区的方法及多节点系统
WO2013163008A1 (en) * 2012-04-27 2013-10-31 Microsoft Corporation Systems and methods for partitioning of singly linked lists for allocation memory elements
CN103440173A (zh) * 2013-08-23 2013-12-11 华为技术有限公司 一种多核处理器的调度方法和相关装置
CN104375899A (zh) * 2014-11-21 2015-02-25 北京应用物理与计算数学研究所 高性能计算机numa感知的线程和内存资源优化方法与系统
US20160224388A1 (en) * 2015-02-03 2016-08-04 International Business Machines Corporation Autonomous dynamic optimization of platform resources

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109522102A (zh) * 2018-09-11 2019-03-26 华中科技大学 一种基于i/o调度的多任务外存模式图处理方法
CN109522102B (zh) * 2018-09-11 2022-12-02 华中科技大学 一种基于i/o调度的多任务外存模式图处理方法
CN111445349A (zh) * 2020-03-13 2020-07-24 贵州电网有限责任公司 一种适用于能源互联网的混合式数据存储处理方法及系统
CN111445349B (zh) * 2020-03-13 2023-09-05 贵州电网有限责任公司 一种适用于能源互联网的混合式数据存储处理方法及系统
CN115996203A (zh) * 2023-03-22 2023-04-21 北京华耀科技有限公司 网络流量分域方法、装置、设备和存储介质

Also Published As

Publication number Publication date
CN107969153A (zh) 2018-04-27
CN107969153B (zh) 2021-06-22

Similar Documents

Publication Publication Date Title
WO2018032519A1 (zh) 一种资源分配方法、装置及numa系统
KR102589155B1 (ko) 메모리 관리 방법 및 장치
US9778856B2 (en) Block-level access to parallel storage
KR102044023B1 (ko) 키 값 기반 데이터 스토리지 시스템 및 이의 운용 방법
US10248346B2 (en) Modular architecture for extreme-scale distributed processing applications
US9489409B2 (en) Rollover strategies in a N-bit dictionary compressed column store
WO2021008197A1 (zh) 资源分配方法、存储设备和存储系统
US20200364145A1 (en) Information processing apparatus and method for controlling storage device
US9760314B2 (en) Methods for sharing NVM SSD across a cluster group and devices thereof
JP2019133391A (ja) メモリシステムおよび制御方法
CN113515483A (zh) 一种数据传输方法及装置
US11755241B2 (en) Storage system and method for operating storage system based on buffer utilization
CN115904212A (zh) 数据处理的方法、装置、处理器和混合内存系统
CN106354428B (zh) 一种多物理层分区计算机体系结构的存储共享系统
CN115729849A (zh) 内存管理方法及计算设备
US9697048B2 (en) Non-uniform memory access (NUMA) database management system
US20150220430A1 (en) Granted memory providing system and method of registering and allocating granted memory
CN110178119B (zh) 处理业务请求的方法、装置与存储系统
CN110447019B (zh) 存储器分配管理器及由其执行的用于管理存储器分配的方法
US10824640B1 (en) Framework for scheduling concurrent replication cycles
US20140281343A1 (en) Information processing apparatus, program, and memory area allocation method
CN116483740B (zh) 内存数据的迁移方法、装置、存储介质及电子装置
US20230050808A1 (en) Systems, methods, and apparatus for memory access in storage devices
WO2024012153A1 (zh) 一种数据处理方法及装置
US20240176539A1 (en) Novel data cache scheme for high performance flash memories

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16913275

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16913275

Country of ref document: EP

Kind code of ref document: A1