CN116149539A - Data reading method and related equipment - Google Patents

Data reading method and related equipment

Info

Publication number
CN116149539A
Authority
CN
China
Prior art keywords
target file
data
file data
local nvm
nvm
Prior art date
Legal status
Pending
Application number
CN202111389761.7A
Other languages
Chinese (zh)
Inventor
黄林鹏
孙鹏昊
郑圣安
王晶钰
戚振林
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202111389761.7A priority Critical patent/CN116149539A/en
Publication of CN116149539A publication Critical patent/CN116149539A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/16Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/163Interprocessor communication
    • G06F15/173Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
    • G06F15/17306Intercommunication techniques
    • G06F15/17331Distributed shared memory [DSM], e.g. remote direct memory access [RDMA]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/0643Management of files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/0644Management of space entities, e.g. partitions, extents, pools
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Mathematical Physics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the present application disclose a data reading method and related devices for improving the bandwidth at which target file data is read. In this application, a user device first determines a first portion of the target file data in its local NVM and then, in parallel, reads that first portion from the local NVM and reads a second portion of the target file data from the server through RDMA, where the second portion is the remaining data other than the first portion. Because the bandwidth for reading from the local NVM and the bandwidth for reading from the server through RDMA are both utilized, the overall bandwidth for reading the target file data is improved.

Description

Data reading method and related equipment
Technical Field
The present disclosure relates to the field of storage technologies, and in particular, to a data reading method and related devices.
Background
Remote direct memory access (RDMA) is a network communication technology that allows one network node to directly access the memory of another network node over an RDMA network without involving the remote node's processor, which effectively reduces the performance impact of network communication and improves network throughput. Non-volatile memory (NVM) is a memory technology that combines the low latency, high bandwidth, and byte addressability of conventional memory with the persistence of storage devices such as disks, so data in NVM is not lost when the system powers off.
A distributed system is composed of multiple computer nodes, which are divided into servers and clients. Both the server side and the client side consist of clusters of computer nodes interconnected by an RDMA network, with file data and metadata stored across the cluster. Each computer node is equipped with a local NVM for storing file data, and each computer node can directly access the local NVM of other computer nodes through RDMA.
Compared with local NVM access, RDMA has higher latency and lower bandwidth. Therefore, to reduce network access overhead, clients are currently equipped with local NVM for caching file data that needs to be read. When the client reads required file data, it first searches the local NVM; on a hit, it reads directly from the local NVM, and otherwise it reads from the server through RDMA.
With current hardware, the bandwidth of reading from the local NVM and of reading from the server through RDMA can reach about 40 GB/s and 25 GB/s, respectively. When file data is read from the local NVM alone, the read rate is capped at the local NVM's read bandwidth, so the hardware bandwidth resources of the distributed system are not fully utilized.
Disclosure of Invention
The embodiment of the application provides a data reading method and related equipment, which are used for improving the bandwidth for reading target file data.
A first aspect of the present application provides a data reading method in which a user device first determines a first portion of target file data in a local NVM and then, in parallel, reads that first portion from the local NVM and reads a second portion of the target file data from a server through RDMA, where the second portion is the remaining data other than the first portion. Because the bandwidth of reading from the local NVM and the bandwidth of reading from the server through RDMA are both utilized, the bandwidth for reading the target file data is improved.
In some possible implementations, before determining the first portion of the target file data in the local NVM, the user device sends a read request for the target file data to the server and then receives the memory region addresses of the target file data that the server returns in response. To determine the first portion, the user device determines, according to these memory region addresses, all data of the target file that is cached in the local NVM, and selects the first portion from that cached data. This determines the amount of the target file data that is read from the local NVM, so the overall read rate can be tuned by adjusting that amount.
In some possible implementations, before selecting the first portion from the cached data, the user device determines a first read rate of reading data from the local NVM and a second read rate of reading data from the server through RDMA, and computes a target ratio from them: target ratio = first read rate / (first read rate + second read rate). The user device then selects the first portion according to the target ratio: the size of the first portion is the smaller of (a) the size of the target file data multiplied by the target ratio and (b) the amount of the target file data cached in the local NVM. In this way, reading the first portion from the local NVM and reading the second portion from the server through RDMA complete at the same time, achieving the highest read rate for the target file data.
In some possible implementations, the first portion of the target file data is all data of the target file cached in the local NVM. By preferentially acquiring the target file data from the local NVM, read efficiency is improved when communication quality is poor or the network is congested.
In some possible implementations, after reading the first portion from the local NVM and the second portion from the server via RDMA in parallel, the user device determines the data of the second portion that is not yet cached in the local NVM as a third portion of the target file data, and caches this third portion in the local NVM. More of the target file data is thereby stored in the local NVM, so subsequent reads of the target file data can preferentially use the more efficient local NVM path.
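The caching of this third portion can be sketched as follows. This is an illustrative Python fragment, not part of the patent; the names (local_nvm_cache, second_portion_blocks) and the dict-based cache are assumptions for illustration only.

```python
# Hypothetical sketch: after the parallel read, cache the blocks of the
# second portion that were fetched over RDMA but are not yet in the
# local NVM. A dict maps block addresses to block data.

def cache_third_portion(local_nvm_cache: dict, second_portion_blocks: dict) -> dict:
    """Return the 'third portion': RDMA-fetched blocks that were not cached."""
    third_portion = {addr: data
                     for addr, data in second_portion_blocks.items()
                     if addr not in local_nvm_cache}
    # Persist the newly fetched blocks into the local NVM cache.
    local_nvm_cache.update(third_portion)
    return third_portion

cache = {0x1000: b"a"}                   # one block already cached
fetched = {0x1000: b"a", 0x2000: b"b"}   # second portion read via RDMA
third = cache_third_portion(cache, fetched)
```

After this call, the block at 0x2000 is both returned as the third portion and added to the cache, so a later read of the same file finds more blocks locally.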
A second aspect of the present application provides a user equipment for performing the method of any one of the preceding first aspects.
In a third aspect, the present application provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of the first aspect or any of the possible implementation manners of the first aspect.
A fourth aspect of the present application provides a computer program product comprising computer-executable instructions stored in a computer-readable storage medium; at least one processor of a device may read the computer-executable instructions from the computer-readable storage medium, and executing them causes the device to implement the method provided by the first aspect or any one of its possible implementations.
A fifth aspect of the present application provides a communication device that may include at least one processor, a memory, and a communication interface. At least one processor is coupled with the memory and the communication interface. The memory is for storing instructions, the at least one processor is for executing the instructions, and the communication interface is for communicating with other communication devices under control of the at least one processor. The instructions, when executed by at least one processor, cause the at least one processor to perform the method of the first aspect or any possible implementation of the first aspect.
A seventh aspect of the present application provides a chip system comprising a processor for supporting a user equipment to implement the functionality referred to in the first aspect or any one of the possible implementations of the first aspect.
In one possible design, the chip system may further include a memory for storing the program instructions and data necessary for the user device. The chip system may be composed of chips alone, or may include chips together with other discrete devices.
For the technical effects of the fourth to seventh aspects or any of their possible implementations, refer to the technical effects of the first aspect or of its different possible implementations, which are not repeated here.
Drawings
FIG. 1 is a schematic diagram of a storage system provided in the present application;
FIG. 2 is a schematic diagram of an embodiment of a data reading method provided in the present application;
FIG. 3 is a schematic structural diagram of a user equipment provided in the present application;
FIG. 4 is a schematic structural diagram of a communication device provided in the present application.
Detailed Description
The embodiments of the present application provide a data reading method and related devices for reading target file data from a local NVM and a server in parallel.
Embodiments of the present application are described below with reference to the accompanying drawings.
The terms "first", "second", and the like in the description, claims, and drawings of the present application are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It should be understood that terms so used are interchangeable under appropriate circumstances and merely distinguish objects of the same nature in the described embodiments. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 1 is a schematic diagram of a storage system according to an embodiment of the present application. The embodiment provides a storage system 100 comprising a server 110 and a user equipment 120, where the server 110 acts as the server side and the user equipment 120 acts as the client.
In this application, the server 110 is used to store file data. In some possible implementations, the server 110 may be a separate physical server, or may be a server cluster or a distributed storage system formed by multiple physical servers. It should be noted that a distributed storage system is composed of multiple servers interconnected by an RDMA network, with file data stored in a distributed manner across them, and each server is equipped with a local NVM for storing the file data.
In some possible implementations, the server 110 may also be a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDNs, big data, and artificial intelligence platforms. A server is a device that provides computing services. Because a server must respond to and process service requests while providing reliable service, it generally needs strong processing capability, high stability, high reliability, high security, scalability, and manageability.
The user equipment 120 may be a terminal, a mobile station (MS), a mobile terminal (MT), or the like. The terminal may be a mobile phone, a tablet (Pad), a computer with wireless transceiving functions, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, and the like. The embodiments of the present application do not limit the specific technology or device configuration adopted by the user device 120. The user device 120 and the server 110 may be directly or indirectly connected through wired or wireless communication, which is not limited herein.
Currently, before accessing memory using RDMA, the user device 120 needs to register the memory region addresses to be accessed with its local RDMA network card. In the embodiments of the present application, both the server 110 and the user device 120 are equipped with local NVM for storing file data, and each server 110 or user device 120 may directly access remote local NVM through RDMA. When the user equipment reads required file data, it first searches the local NVM; on a hit, it reads the file data directly from the local NVM, and otherwise it reads the file data from the server through RDMA.
With existing technology, the server 110 and the user equipment 120 can each be equipped with a local NVM and an RDMA-capable network card, with read bandwidths of about 40 GB/s and 25 GB/s respectively, connected through an RDMA network. With this approach, when the user equipment 120 reads file data from the local NVM, it cannot simultaneously use RDMA to increase bandwidth, so the data read rate is capped at the local NVM's read bandwidth and the hardware bandwidth resources of the distributed system cannot be fully utilized.
Therefore, the present application proposes a data reading method in which a user device first determines a first portion of target file data in a local NVM and then, in parallel, reads that first portion from the local NVM and reads a second portion of the target file data from a server through RDMA, where the second portion is the remaining data other than the first portion. Because the bandwidth of reading from the local NVM and the bandwidth of reading from the server through RDMA are both utilized, the bandwidth for reading the target file data is improved.
The foregoing embodiments describe the storage system 100 provided in the present application, and next describe a data reading method performed based on the storage system 100, referring to fig. 2, the data reading method provided in the embodiment of the present application mainly includes the following steps:
201. The user device sends a read request for the target file data to the server.
In some possible implementations, the user device may send the read request for the target file data to the server through RDMA, using either a two-sided operation or a one-sided operation, which is not limited herein. It should be noted that a two-sided operation (i.e., a two-sided read/write operation) requires the counterpart (the server in this embodiment) to initiate a corresponding write/read operation, while a one-sided operation (i.e., a one-sided read/write operation) does not require the counterpart's participation.
For example, an application running on the user device requests to read 1 gigabyte (GB) of target file data. Assuming the target file data is stored at block granularity with a block size of 4 kilobytes (KB), the target file data comprises 1 GB / 4 KB = 262,144 file blocks in total. In some possible implementations, the user device may reserve a 1 GB temporary space in the local NVM for storing the 1 GB of target file data to be read, and then send a read request for the target file data to the server through RDMA (one-sided or two-sided operation). It should be noted that if the user device sends the read request through a one-sided RDMA operation, the target file data is acquired from the server faster.
202. The server sends the memory region addresses of the target file data to the user equipment according to the read request.
In this embodiment, the read request for the target file data may carry metadata corresponding to the target file data. After receiving the read request, the server may obtain this metadata and determine, according to it, the storage location of each file block of the target file data, thereby determining the memory region addresses corresponding to the target file data and obtaining a memory region list. It should be noted that each memory region address corresponds to one file block, and each file block stores data of one block granularity. For example, with a block granularity of 4 KB and target file data of 1 GB, the memory region list contains 1 GB / 4 KB = 262,144 memory region addresses. After the server determines the memory region list, it may send the list to the user equipment.
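The block arithmetic above can be sketched as follows (an illustrative Python fragment, not part of the patent; the constants mirror the 4 KB / 1 GB example):

```python
# Illustrative only: with 4 KB block granularity, a 1 GB file maps to
# 262,144 file blocks, so the server's memory region list holds one
# address per file block.
BLOCK_SIZE = 4 * 1024        # 4 KB block granularity
FILE_SIZE = 1 * 1024 ** 3    # 1 GB of target file data

num_blocks = FILE_SIZE // BLOCK_SIZE  # number of memory region addresses
print(num_blocks)  # 262144
```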
203. The user equipment determines all data of the target file data cached in the local NVM according to the memory region addresses of the target file data.
In this embodiment, after the user equipment receives the memory region list of the target file data sent by the server, the file blocks of the target file data present in the local NVM may be determined according to the list. In some possible implementations, the user device may look up a cache address search tree according to the memory region list to confirm whether each file block of the target file data is cached in the local NVM. To this end, the user equipment may locally maintain a cache address search tree that indicates, for a given memory region of file data, whether the corresponding file block is cached in the local NVM. For example, when the user device receives the memory region list, it may search the cache address search tree and find that 182,331 of the 262,144 file blocks of the 1 GB target file data (each file block storing 4 KB of data) are cached in the local NVM.
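A minimal sketch of this lookup, with a Python set standing in for the cache address search tree (the function and variable names are illustrative assumptions, not the patent's implementation):

```python
# Sketch of step 203: partition a file's block addresses into those
# cached in the local NVM and those that must be read via RDMA.

def split_by_cache(memory_region_list, cached_addresses):
    """Return (cached, uncached) block-address lists for the file."""
    cached, uncached = [], []
    for addr in memory_region_list:
        (cached if addr in cached_addresses else uncached).append(addr)
    return cached, uncached

regions = [0x0, 0x1000, 0x2000, 0x3000]  # memory region list from the server
tree = {0x1000, 0x3000}                  # blocks already in the local NVM
cached, uncached = split_by_cache(regions, tree)
```

A real implementation would use a search tree rather than a set so that lookups by address range are efficient; the partitioning logic is the same.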
204. The user device determines a first portion of the target file data in the local non-volatile memory (NVM).
In some possible implementations, the user device can determine the first portion of the target file data from all of the data in which the target file data is cached in the local NVM.
For example, the user device may determine the first portion of the target file data from all the cached data according to a target ratio, where the size of the first portion is the smaller of (a) the size of the target file data multiplied by the target ratio and (b) the amount of the target file data cached in the local NVM.
To obtain the target ratio, the user device determines a first read rate of reading data from the local NVM and a second read rate of reading data from the server via RDMA, and computes: target ratio = first read rate / (first read rate + second read rate).
For example, the user device may probe the read bandwidth of the local NVM and use it as the first read rate. In some possible implementations, the user device probes the local NVM's read bandwidth when the storage system starts, when the user device powers on, when the user device receives a relevant instruction, or after the user device and server establish the RDMA connection; this is not limited herein.
In some possible implementations, the user device probes the read bandwidth by reading a certain amount of data from the local NVM. In some possible implementations, the target file data is stored in the local NVM of the user device at block granularity. For example, file data may be stored in the local NVM at a block granularity of 4 kilobytes (KB); for 1 gigabyte (GB) of data, there are 1 GB / 4 KB = 262,144 file blocks. In some possible implementations, the user device maintains a least recently used (LRU) linked list for the cache in the local NVM, where each entry corresponds to one file block in the local NVM, and one file block stores file data of one block granularity, e.g., 4 KB. For example, when the storage system starts, the user equipment may allocate file blocks according to the LRU linked list to reserve a temporary space in the local NVM (corresponding to several entries in the LRU list), read file data from that temporary space, and calculate the read bandwidth of the local NVM from the amount of data read and the time taken. For example, after probing, the user device may determine that the read bandwidth of the local NVM is 40 gigabytes per second (GB/s). After the probe completes, the user equipment may record the local NVM's read bandwidth so that no re-probing is required later.
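The probe's calculation reduces to dividing the amount of data read by the elapsed time. The following Python sketch is illustrative only; the function name and the measurement numbers are assumptions, not values from the patent:

```python
# Sketch of the bandwidth probe calculation: read a known amount of
# data, time it, and derive the read rate in GB/s (decimal gigabytes,
# matching the 40 GB/s figure used in the text).

def read_bandwidth_gb_per_s(bytes_read: int, elapsed_s: float) -> float:
    """Read bandwidth in GB/s from a timed probe read."""
    return bytes_read / elapsed_s / 1e9

# e.g. a probe that read 1 GB from the local NVM in 0.025 s -> 40 GB/s
rate = read_bandwidth_gb_per_s(10**9, 0.025)
```

The same calculation applies to the RDMA probe described below, with the data read from the server's temporary memory region instead of the local NVM.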
In this embodiment, the user equipment may probe the read bandwidth to the server through RDMA and use it as the second read rate. In some possible implementations, the user device probes this bandwidth after establishing an RDMA connection with the server, for example when the storage system starts, when the user device powers on, or when the user device receives a relevant instruction.
In some possible implementations, the user device reads an amount of data from the server over RDMA to probe the read bandwidth to the server and obtain the second read rate. For example, when the storage system starts, the user equipment registers a temporary memory region on the server, reads data from it through RDMA, and calculates the read bandwidth to the server from the amount of data read and the time taken. In some possible implementations, the data is stored in the server at block granularity; for example, at a block granularity of 4 KB, 1 GB of data comprises 1 GB / 4 KB = 262,144 file blocks. For example, after probing, the user device may determine that the read bandwidth to the server through RDMA is 25 gigabytes per second (GB/s). In this embodiment, the block granularity of the target file data stored in the server is the same as that of the user device's local NVM. After the probe completes, the user equipment may record the read bandwidth to the server so that no re-probing is required later.
After the first read rate of reading data from the local NVM and the second read rate of reading data from the server by RDMA are obtained, a target ratio may be calculated according to the first read rate and the second read rate, where the target ratio is the first read rate/(the first read rate+the second read rate). For example, the first reading rate is 40GB/s, the second reading rate is 25GB/s, and then the target ratio is the first reading rate/(the first reading rate+the second reading rate) =40/65= 61.54%.
Then, the data amount of the first portion of the target file data is the smaller of the product of the data amount of the target file data and the target proportion, and the data amount of all data of the target file data cached in the local NVM. For example, the data amount of the target file data is 1 GB and the target proportion is 61.54%, so the product of the data amount of the target file data and the target proportion is 0.6154 GB. If the data amount of all data of the target file cached in the local NVM is 0.6 GB, then since 0.6154 GB cannot be read from the local NVM, at most 0.6 GB can be read there, and the first portion of the target file data is taken to be the smaller value of the two, i.e., 0.6 GB. If the data amount of all data of the target file cached in the local NVM is 0.7 GB, then since 0.6154 GB can be read from the local NVM, the first portion of the target file data is taken to be the smaller of the two, i.e., 0.6154 GB.
When 61.54% (i.e., 0.6154 GB) of the target file data is read from the local NVM at the first rate (e.g., 40 GB/s) and 1 - 61.54% = 38.46% (i.e., 0.3846 GB) of the target file data is read from the server by RDMA at the second rate (e.g., 25 GB/s), the two reads take 0.6154 GB/(40 GB/s) = 0.015385 s and 0.3846 GB/(25 GB/s) = 0.015384 s, which are approximately equal; the bandwidth of reading data from the local NVM and the bandwidth of reading data from the server by RDMA are thus fully utilized, and the read rate of the target file data is improved.
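A minimal sketch of this split (hypothetical helper name; sizes and rates in any consistent units): the target proportion balances the two paths so they finish at roughly the same time, and the local share is capped by how much of the target file data is actually cached in the local NVM.

```python
def split_read(total, cached, local_rate, rdma_rate):
    """Return (local_amount, rdma_amount) for the parallel read.

    total      -- data amount of the target file data
    cached     -- data amount of the target file data in the local NVM
    local_rate -- first read rate (local NVM)
    rdma_rate  -- second read rate (RDMA to the server)
    """
    ratio = local_rate / (local_rate + rdma_rate)  # target proportion
    local_amount = min(total * ratio, cached)      # smaller of the two
    return local_amount, total - local_amount
```

With total = 1 GB, cached = 0.7 GB and rates 40 and 25 GB/s, the local share is 1 × 40/65 ≈ 0.6154 GB; with only 0.6 GB cached, the cap applies and the split becomes 0.6 GB / 0.4 GB, as in the example above.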
In some possible implementations, the first portion of the target file data is all data of the target file data cached in the local NVM; by preferentially acquiring the target file data from the local NVM, the reading efficiency of the target file data is improved under conditions of poor communication quality or network congestion. This is not limited herein.
205. The user device reads in parallel a first portion of the target file data from the local NVM, and a second portion of the target file data from the server via remote direct memory access RDMA, the second portion of the target file data being other data than the first portion of the target file data.
For example, of the 262144 file blocks of 1 GB of target file data, 182331 file blocks have been cached in the local NVM (the data amount of each file block is 4 KB, i.e., the block granularity is 4 KB); these 182331 blocks constitute all data of the target file data cached in the local NVM. Assuming the first portion of the target file data is the data of 160000 of the 182331 file blocks, the user device can read the file data of those 160000 blocks from the local NVM, while the remaining 262144 - 160000 = 102144 file blocks are stored in the server; the user device can read the file data of those 102144 blocks from the server via RDMA. Assuming instead that the first portion of the target file data is the data of all 182331 file blocks, the user device can read the file data of those 182331 blocks from the local NVM, while the remaining 262144 - 182331 = 79813 file blocks are stored in the server; the user device can then read the file data of those 79813 blocks from the server via RDMA.
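The block-level selection described above can be sketched as follows (hypothetical names; any rule for choosing which cached blocks make up the local share works, here simply the lowest indices):

```python
def partition_blocks(total_blocks, cached, local_share):
    """Split block indices between the local NVM and RDMA reads.

    total_blocks -- number of file blocks in the target file data
    cached       -- set of block indices cached in the local NVM
    local_share  -- number of blocks assigned to the local NVM
                    (at most len(cached))
    """
    local = set(sorted(cached)[:local_share])
    remote = set(range(total_blocks)) - local  # read via RDMA
    return local, remote
```

With 262144 blocks, 182331 of them cached and a local share of 160000, the RDMA side reads 102144 blocks; with a local share of all 182331 cached blocks, it reads 79813, matching the two cases above.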
In the application, the user equipment first determines the first portion of the target file data in the local NVM, then reads the first portion of the target file data from the local NVM while reading the second portion of the target file data from the server through remote direct memory access (RDMA), where the second portion of the target file data is the data other than the first portion. The bandwidth of reading data from the local NVM and the bandwidth of reading data from the server through RDMA are thereby fully utilized, and the bandwidth of reading the target file data is improved.
For example, the first rate is 40 GB/s, the second rate is 25 GB/s, and the target file data amount is 1 GB, so the target proportion is 61.54%; that is, the user equipment can read 1 GB × 61.54% = 0.6154 GB from the local NVM and 1 GB - 0.6154 GB = 0.3846 GB from the server through RDMA. Assume the data amount of all data of the target file data cached in the local NVM is 0.7 GB; the smaller value of 0.6154 GB and 0.7 GB is 0.6154 GB. The user device can then read 0.6154 GB from the local NVM at 40 GB/s in parallel with 0.3846 GB from the server over RDMA at 25 GB/s, which takes Max(0.6154 GB/(40 GB/s), 0.3846 GB/(25 GB/s)) = 0.015385 s. The read bandwidth is therefore 1 GB/0.015385 s ≈ 65.0 GB/s.
With the current technology, the user device first reads 0.7 GB from the local NVM at the first rate of 40 GB/s and then reads 0.3 GB from the server via RDMA at the second rate of 25 GB/s, which takes 0.7 GB/(40 GB/s) + 0.3 GB/(25 GB/s) = 0.0295 s, for a read bandwidth of 1 GB/0.0295 s ≈ 33.9 GB/s.
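The two timing models being compared can be written out directly (hypothetical names; sizes in GB, rates in GB/s, times in seconds):

```python
def parallel_time(local_gb, rdma_gb, local_rate, rdma_rate):
    """This embodiment: both reads run concurrently, so the total
    time is that of the slower path."""
    return max(local_gb / local_rate, rdma_gb / rdma_rate)

def sequential_time(local_gb, rdma_gb, local_rate, rdma_rate):
    """Current technique: the local NVM read completes first, then
    the RDMA read follows."""
    return local_gb / local_rate + rdma_gb / rdma_rate
```

sequential_time(0.7, 0.3, 40, 25) gives 0.0295 s (≈33.9 GB/s for 1 GB), while parallel_time(0.6154, 0.3846, 40, 25) gives ≈0.0154 s (≈65.0 GB/s).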
A comparison between the current technology and this embodiment is shown in Table 1.
TABLE 1
Scheme                           Time to read 1 GB    Read bandwidth
Current technology (sequential)  0.0295 s             ≈33.9 GB/s
This embodiment (parallel)       0.015385 s           ≈65.0 GB/s
It can be seen that, in the current technical solution, the upper limit of the rate at which the user equipment reads the target file data is the read bandwidth of the local NVM (reached only when the target file data is entirely stored in the local NVM), and the actual read rate is lower than the read bandwidth of the local NVM due to cache misses (i.e., a portion of the target file data is not stored in the local NVM and needs to be read from the server by RDMA). With the present technical solution, the theoretical limit of the rate at which the user equipment reads the target file data is the sum of the read bandwidth of the local NVM and the RDMA read bandwidth, and the actual read rate can exceed the read bandwidth of the local NVM.
It should be noted that, before the user device sends a read request for the target file data to the server through RDMA, the address of the memory area of the data to be read, determined from the memory area list, may be registered with a local RDMA network card, and the server may then be accessed through the local RDMA network card in a one-sided or two-sided manner to read the corresponding file data.
206. The user device caches a third portion of the target file data in the local NVM.
In some possible implementations, the user device determines data in the second portion of the target file data that is not cached in the NVM as the third portion of the target file data, and then the user device caches the third portion of the target file data in the local NVM.
In the embodiment of the application, the user equipment may cache the read target file data in file blocks allocated from the LRU linked list. In some possible implementations, the user device need not cache again the portion of the target file data that is already cached in the local NVM; instead, it caches the third portion of the target file data, i.e., the data read via RDMA that is not yet cached in the local NVM.
For example, if 182331 of the 262144 file blocks of the 1 GB target file data are already cached in the local NVM (the data amount of each file block is 4 KB, i.e., the block granularity is 4 KB), the user device can read the file data of those 182331 blocks from the local NVM, while the remaining 262144 - 182331 = 79813 file blocks are stored in the server; the user device reads the file data of those 79813 blocks from the server by RDMA and can then cache the data of those 79813 blocks in the local NVM.
If 182331 of the 262144 file blocks of the 1 GB target file data are cached in the local NVM (the data amount of each file block is 4 KB, i.e., the block granularity is 4 KB), and the user device reads the data of 161320 of the 182331 cached blocks from the local NVM while the remaining 262144 - 161320 = 100824 file blocks are read from the server by RDMA, then of those 100824 blocks, 79813 blocks are not cached in the local NVM and the rest (182331 - 161320 = 21011 blocks) are already cached; the user device therefore caches only the data of the 79813 uncached blocks in the local NVM.
If 150000 of the 262144 file blocks of the 1 GB target file data are cached in the local NVM (the data amount of each file block is 4 KB, i.e., the block granularity is 4 KB), and the user device reads the data of those 150000 blocks from the local NVM while the remaining 262144 - 150000 = 112144 file blocks are read from the server by RDMA, then since none of the data of those 112144 blocks is cached in the local NVM, the user device caches the data of all 112144 blocks in the local NVM.
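The cache-update step in these three cases reduces to a set difference (hypothetical name): of the blocks fetched over RDMA, only those not already present in the local NVM form the third portion and are cached.

```python
def third_portion(rdma_blocks, cached):
    """Block indices read from the server over RDMA that are not yet
    cached in the local NVM; these are the blocks to cache (e.g. in
    file blocks allocated from the LRU linked list)."""
    return set(rdma_blocks) - set(cached)
```

In the second case above, 100824 blocks are read over RDMA, 21011 of which are already cached, leaving 79813 blocks to cache.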
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
In order to facilitate better implementation of the above-described aspects of the embodiments of the present application, the following further provides related devices for implementing the above-described aspects.
Referring to fig. 3, a user equipment 300 provided in an embodiment of the present application may include: a processing module 301, a reading module 302 and a transceiving module 303, wherein,
a processing module 301 is configured to determine a first portion of the target file data in the local NVM.
The reading module 302 is configured to read, in parallel, a first portion of the target file data from the local NVM and a second portion of the target file data from a server through RDMA, where the second portion of the target file data is other data than the first portion of the target file data.
In some possible implementations, the transceiver module 303 is configured to send a read request for the target file data to the server;
the transceiver module 303 is further configured to receive a memory area address related to the target file data sent by the server according to the read request;
the processing module 301 is specifically configured to: determining all data of the target file data cached in the local NVM according to the memory area address of the target file data; a first portion of the target file data is determined from all of the data in which the target file data is cached in the local NVM.
In some possible implementations, the processing module 301 is further configured to determine a first read rate at which data is read from the local NVM; the processing module 301 is further configured to determine a second read rate at which data is read from the server by the RDMA; the processing module 301 is further configured to obtain a target ratio according to the first reading rate and the second reading rate, where the target ratio is the first reading rate/(the first reading rate+the second reading rate); the processing module 301 is specifically configured to: and determining a first part of the target file data from all data cached in the local NVM according to the target proportion, wherein the data volume of the first part of the target file data is a smaller value of the product of the data volume of the target file data and the target proportion and the data volume of all data cached in the local NVM.
In some possible implementations, the first portion of the target file data is all data of the target file data cached in the local NVM.
In some possible implementations, the processing module 301 is further configured to determine, as the third portion of the target file data, data in the second portion of the target file data that is not cached in the NVM; the processing module 301 is further configured to cache a third portion of the target file data in the local NVM.
It should be noted that, because the content of information interaction and execution process between the modules/units of the above-mentioned device is based on the same concept as the method embodiment of the present application, the technical effects brought by the content are the same as the method embodiment of the present application, and specific content can be referred to the description in the method embodiment shown in the foregoing application, which is not repeated here.
The embodiment of the application also provides a computer storage medium, wherein the computer storage medium stores a program, and the program executes part or all of the steps described in the embodiment of the method.
Referring to fig. 4, an embodiment of the present application further provides a communication device 400, which includes: a receiver 401, a transmitter 402, a processor 403 and a memory 404. In some embodiments of the present application, the receiver 401, transmitter 402, processor 403, and memory 404 may be connected by a bus or other means, where a bus connection is illustrated in fig. 4.
Memory 404 may include read-only memory and random access memory, and provides instructions and data to processor 403. A portion of memory 404 may also include non-volatile random access memory (NVRAM). The memory 404 stores an operating system and operating instructions, executable modules or data structures, or a subset or extended set thereof, where the operating instructions may include various operating instructions for performing various operations. The operating system may include various system programs for implementing various underlying services and handling hardware-based tasks.
The processor 403 controls the operation of the communication device; the processor 403 may also be referred to as a central processing unit (CPU). In a specific application, the various components of the communication device are coupled together by a bus system, which may include a power bus, a control bus, a status signal bus, and the like in addition to a data bus. For clarity of illustration, however, the various buses are referred to in the figures as the bus system.
The method disclosed in the embodiments of the present application may be applied to the processor 403 or implemented by the processor 403. Processor 403 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 403 or by instructions in the form of software. The processor 403 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly as being performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium is located in the memory 404, and the processor 403 reads the information in the memory 404 and, in combination with its hardware, performs the steps of the method described above.
The receiver 401 may be used to receive input digital or character information and generate signal inputs related to relevant settings and function control of the communication device, the transmitter 402 may comprise a display device such as a display screen, and the transmitter 402 may be used to output digital or character information via an external interface.
In the embodiment of the present application, the processor 403 is configured to execute the data reading method executed by the foregoing communication device.
In another possible design, when the user equipment or the communication device is a chip, it includes: a processing unit, which may be, for example, a processor, and a communication unit, which may be, for example, an input/output interface, pins or circuitry. The processing unit may execute the computer-executable instructions stored in the storage unit to cause the chip in the terminal to perform the data reading method described in the foregoing embodiments. Alternatively, the storage unit is a storage unit in the chip, such as a register or a cache, and the storage unit may also be a storage unit located in the terminal outside the chip, such as a read-only memory (ROM) or other type of static storage device that may store static information and instructions, a random access memory (RAM), or the like.
The processor mentioned in any of the above may be a general-purpose central processing unit, a microprocessor, an ASIC, or one or more integrated circuits for controlling the execution of the programs of the above method.
It should be further noted that the above-described apparatus embodiments are merely illustrative; the units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, which may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, in the drawings of the device embodiments provided by the application, the connection relationship between modules indicates that they have a communication connection therebetween, which may be specifically implemented as one or more communication buses or signal lines.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented by means of software plus necessary general-purpose hardware, or of course by dedicated hardware including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components and the like. Generally, functions performed by computer programs can be easily implemented by corresponding hardware, and the specific hardware structures for implementing the same function can vary, such as analog circuits, digital circuits, or dedicated circuits. However, for the present application, a software program implementation is the preferred embodiment in many cases. Based on such understanding, the technical solution of the present application, or the part contributing to the prior art, may be embodied in the form of a software product stored in a readable storage medium, such as a floppy disk, a USB disk, a removable hard disk, a ROM, a RAM, a magnetic disk or an optical disk of a computer, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the method described in the embodiments of the present application.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), or the like.

Claims (14)

1. A data reading method, comprising:
the user equipment determines a first part of target file data in the local nonvolatile memory (NVM);
the user device reads a first portion of the target file data in parallel from the local NVM, and reads a second portion of the target file data from a server via remote direct memory access, RDMA, the second portion of the target file data being other than the first portion of the target file data.
2. The method of claim 1, wherein before the user device determines the first portion of the target file data in the local NVM, the method further comprises:
the user equipment sends a reading request of the target file data to the server;
the user equipment receives a memory area address of the target file data sent by the server according to the reading request;
the user device determining a first portion of the target file data in the local NVM includes:
the user equipment determines all data of the target file data cached in the local NVM according to the memory area address of the target file data;
the user device determines a first portion of the target file data from all data in which the target file data is cached in the local NVM.
3. The method of claim 2, wherein before the user device determines the first portion of the target file data from all data of the target file data cached in the local NVM, the method further comprises:
the user device determining a first read rate at which data is read from the local NVM;
the user equipment determining a second read rate at which data is read from a server over the RDMA;
the user equipment obtains a target proportion according to the first reading rate and the second reading rate, wherein the target proportion is the first reading rate/(the first reading rate+the second reading rate);
the user device determining a first portion of the target file data from all data in which the target file data is cached in the local NVM includes:
and the user equipment determines a first part of the target file data from all data cached in the local NVM according to the target proportion, wherein the data volume of the first part of the target file data is a smaller value of the product of the data volume of the target file data and the target proportion and the data volume of all data cached in the local NVM.
4. The method of claim 2, wherein the first portion of the target file data is all data of the target file data cached in the local NVM.
5. The method of any of claims 1-4, wherein after the user device reads in parallel the first portion of the target file data from the local NVM and reads the second portion of the target file data from the server via RDMA, the method further comprises:
the user equipment determining data in the second portion of the target file data that is not cached in the NVM as a third portion of the target file data;
the user device caches a third portion of the target file data in the local NVM.
6. A user device, comprising:
a processing module for determining a first portion of the target file data in the local NVM;
and the reading module is used for reading the first part of the target file data from the local NVM in parallel and reading the second part of the target file data from the server through RDMA, wherein the second part of the target file data is other data except the first part of the target file data.
7. The user device of claim 6, further comprising:
the receiving and transmitting module is used for sending a reading request of the target file data to the server;
the receiving and transmitting module is further configured to receive a memory area address related to the target file data sent by the server according to the read request;
the processing module is specifically configured to:
determining all data of the target file data cached in the local NVM according to the memory area address of the target file data;
a first portion of the target file data is determined from all of the data in which the target file data is cached in the local NVM.
8. The user equipment of claim 7, further comprising:
the processing module is further configured to determine a first read rate at which data is read from the local NVM;
the processing module is further configured to determine a second read rate at which data is read from the server over the RDMA;
the processing module is further configured to obtain a target ratio according to the first reading rate and the second reading rate, where the target ratio is the first reading rate/(the first reading rate+the second reading rate);
The processing module is specifically configured to:
and determining a first part of the target file data from all data cached in the local NVM according to the target proportion, wherein the data volume of the first part of the target file data is a smaller value of the product of the data volume of the target file data and the target proportion and the data volume of all data cached in the local NVM.
9. The user device of claim 7, wherein the first portion of the target file data is all data of the target file data cached in the local NVM.
10. The user equipment according to any of claims 6-9, further comprising:
the processing module is further configured to determine, as a third portion of the target file data, data in the second portion of the target file data that is not cached in the NVM;
the processing module is further configured to cache a third portion of the target file data in the local NVM.
11. A computer readable storage medium, characterized in that the computer readable storage medium stores a program that causes a computer device to execute the method according to any one of claims 1-5.
12. A computer program product, the computer program product comprising computer-executable instructions stored on a computer-readable storage medium; at least one processor of a device reads the computer-executable instructions from the computer-readable storage medium, the at least one processor executing the computer-executable instructions causing the device to perform the method of any one of claims 1-5.
13. A communication device comprising at least one processor, a memory, and a communication interface;
the at least one processor is coupled with the memory and the communication interface;
the memory is used for storing instructions, the processor is used for executing the instructions, and the communication interface is used for communicating with other communication devices under the control of the at least one processor;
the instructions, when executed by the at least one processor, cause the at least one processor to perform the method of any of claims 1-5.
14. A chip system, characterized in that it comprises a processor and a memory, said memory and said processor being interconnected by wires, said memory having instructions stored therein, said processor being adapted to perform the method according to any of claims 1-5.
CN202111389761.7A 2021-11-22 2021-11-22 Data reading method and related equipment Pending CN116149539A (en)
Publications (1)

CN116149539A, published 2023-05-23