WO2023179742A1 - Data access method and system, hardware offloading device, electronic device and medium (数据访问方法及系统、硬件卸载设备、电子设备及介质) - Google Patents

Data access method and system, hardware offloading device, electronic device and medium

Info

Publication number
WO2023179742A1
WO2023179742A1 (PCT/CN2023/083533)
Authority
WO
WIPO (PCT)
Prior art keywords
file
written
storage system
object file
hardware
Prior art date
Application number
PCT/CN2023/083533
Other languages
English (en)
French (fr)
Inventor
朴君
Original Assignee
阿里云计算有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里云计算有限公司
Publication of WO2023179742A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/10 Program control for peripheral devices
    • G06F 13/102 Program control for peripheral devices where the programme performs an interfacing function, e.g. device driver
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • G06F 13/42 Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F 13/4282 Bus transfer protocol, e.g. handshake; Synchronisation on a serial bus, e.g. I2C bus, SPI bus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/17 Details of further file system functions
    • G06F 16/172 Caching, prefetching or hoarding of files
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/18 File system types
    • G06F 16/188 Virtual file systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2213/00 Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 2213/0026 PCI express

Definitions

  • the present application relates to the field of computer technology, and in particular to a data access method and system, a hardware offloading device, an electronic device and a medium.
  • Object-Based Storage System is a Key-Value (key-value pair) storage system that can provide high-durability, high-availability, and high-performance object storage services.
  • the object storage system uses object files as the basic unit for storing data.
  • users not only need to pay the storage fee for data persistence, but also need to pay the request fee for object file access.
  • the request fee is generally calculated based on the number of times the object file is accessed. The more times the object file is accessed, the more request fees are incurred.
  • Various small files include but are not limited to: text files, image files, audio files, video files, etc.
  • each small file is currently written into the object storage system as a separate object file. In this way, when a massive number of small files are written into the object storage system, a large number of object file accesses are generated, more request fees are incurred, and data access costs are high.
  • Various aspects of the present application provide a data access method and system, hardware offloading equipment, electronic equipment and media to reduce request fees generated by accessing object files in an object storage system and reduce data access costs.
  • Embodiments of the present application provide a data access method, which is applied to a client running on a hardware offloading device.
  • the hardware offloading device communicates with an electronic device through a bus.
  • the method includes: obtaining a file to be written sent by an application program on the electronic device, and writing the file to be written into a cache on the hardware offloading device; if the files to be written already cached in the cache meet the file merging conditions, merging the cached files to be written to obtain a first object file to be written; and writing the first object file into the object storage system to which the client has access rights.
  • Embodiments of the present application also provide a hardware offloading device.
  • the hardware offloading device is communicatively connected to the electronic device through a bus.
  • the hardware offloading device includes a main processor and a cache.
  • the main processor runs a client program to: obtain the file to be written sent by an application program on the electronic device, and write the file to be written into the cache on the hardware offloading device; if the files to be written already cached in the cache meet the file merging conditions, merge the cached files to be written to obtain a first object file to be written; and write the first object file into the object storage system to which the client has access rights.
  • An embodiment of the present application also provides an electronic device, including: a processor and the above-mentioned hardware offloading device, at least one application program is running on the processor, and the processor is communicatively connected with the hardware offloading device through a bus.
  • Embodiments of the present application also provide a data access system, including: an electronic device, a hardware offloading device, and an object storage system; the electronic device and the hardware offloading device are communicatively connected through a bus, and the hardware offloading device is communicatively connected with the object storage system; at least one application program runs on the electronic device and is used to send files to be written to the hardware offloading device; the hardware offloading device is used to obtain the files to be written and write them into the cache on the hardware offloading device, to merge the cached files to be written into a first object file to be written if the cached files meet the file merging conditions, and to send a write request including the first object file to the object storage system; the object storage system is used to store the first object file in response to the write request.
  • Embodiments of the present application also provide a computer storage medium storing a computer program which, when executed by a processor, causes the processor to implement the steps in the data access method.
  • on the one hand, transferring the data access tasks for the object storage system from the electronic device to the hardware offloading device can reduce the processing pressure on the electronic device, improve the processing performance of the electronic device, and enhance data access performance.
  • on the other hand, each time the hardware offloading device receives a file to be written from an application, it does not write the file directly into the object storage system; instead it first caches the file locally, and only after multiple files to be written have been cached does it merge them into a new object file and write that object file into the object storage system to which the client has access rights.
  • in this way, the number of object file accesses to the object storage system can be greatly reduced, the request fees generated by those accesses are reduced, and the cost of data access is lowered; this is especially effective for scenarios with massive numbers of small files.
  • Figure 1 is a schematic structural diagram of a data access system provided by an embodiment of the present application.
  • Figure 2 is a flow chart of a data access method provided by an embodiment of the present application.
  • Figure 3 is a schematic structural diagram of another data access system provided by an embodiment of the present application.
  • Figure 4 is a flow chart of another data access method provided by an embodiment of the present application.
  • Figure 5 is a schematic structural diagram of a hardware offloading device provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • A virtual file system (Virtual Filesystem Switch) is a kernel software layer that provides POSIX (Portable Operating System Interface of UNIX) interfaces to upper-layer applications, so that upper-layer applications can use the POSIX interfaces to access different file systems.
  • Filesystem in Userspace is a software interface for Unix-like computer operating systems that enables unprivileged users to create their own file systems without editing kernel code.
  • the user space file system provides the kernel module and the user space library (libfuse) module.
  • the kernel module is responsible for encapsulating file operation commands into FUSE-protocol file operation requests and sending them to the user space library module through a transmission channel; the user space library module receives and parses the FUSE-protocol file operation requests and, according to the FUSE-protocol data command type, calls the corresponding file operation function to process them.
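  • For illustration only: a minimal user-space file system in Python, using the third-party fusepy package (not named in the application), shows how a user space library module might receive the write requests forwarded by the kernel module. The stub client, mount point and helper names are assumptions, not part of the application.

```python
# Minimal user-space file system sketch built on the third-party "fusepy"
# package (pip install fusepy). The application does not name a library;
# this only illustrates how a user-space library module receives the write
# requests that the kernel FUSE module forwards. A real file system would
# also implement getattr, readdir, open, etc.
from fuse import FUSE, Operations


class StubClient:
    """Stand-in for the offload client; just logs what it would cache."""

    def append(self, path, offset, data):
        print(f"cache {len(data)} bytes of {path} at offset {offset}")


class OffloadClientFS(Operations):
    """Forwards writes to a (hypothetical) client on the offload device."""

    def __init__(self, client):
        self.client = client

    def create(self, path, mode, fi=None):
        return 0                       # nothing to allocate in this sketch

    def write(self, path, data, offset, fh):
        # Hand the written bytes to the client, which caches them on the
        # hardware offloading device instead of touching object storage.
        self.client.append(path, offset, data)
        return len(data)


if __name__ == "__main__":
    FUSE(OffloadClientFS(StubClient()), "/mnt/offload", foreground=True)
```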
  • Hardware offload device refers to a hardware device with hardware offload function.
  • the hardware offloading device can take over the data access tasks to the object storage system that would otherwise be performed by the electronic device running the application (App), thereby reducing the processing pressure on the electronic device and improving the processing performance of the electronic device.
  • Various small files include but are not limited to: text files, image files, audio files, video files, etc.
  • at present, each small file is written into the object storage system as a separate object file; as a result, when a massive number of small files are written into the object storage system, a large number of object file accesses are generated, more request fees are incurred, and data access costs are high.
  • in view of this technical problem, embodiments of the present application provide a data access method and system, a hardware offloading device, an electronic device, and a medium.
  • on the one hand, transferring the data access tasks for the object storage system from the electronic device to the hardware offloading device can reduce the processing pressure on the electronic device, improve the processing performance of the electronic device, and enhance data access performance.
  • on the other hand, each time the hardware offloading device receives a file to be written from an application, it does not write the file directly into the object storage system; instead it first caches the file locally, and only after multiple files to be written have been cached does it merge them into a new object file and write that object file into the object storage system to which the client has access rights.
  • in this way, the number of object file accesses to the object storage system can be greatly reduced, the request fees generated by those accesses are reduced, and the cost of data access is lowered; this is especially effective for scenarios with massive numbers of small files.
  • Figure 1 is a schematic structural diagram of a data access system provided by an embodiment of the present application.
  • the system may include: an electronic device 10 , a hardware offloading device 20 and an object storage system 40 .
  • the electronic device 10 and the hardware offloading device 20 are communicatively connected through the bus 30 , and the hardware offloading device 20 can be connected to the object storage system 40 via wired or wireless communication.
  • one or more Apps 11 can be run on the electronic device 10.
  • any App 11, when it has a data access requirement, can call the POSIX interface provided by the virtual file system 12 to send a data access request, such as a write request or a read request, to the user space file system 13.
  • the user space file system 13 receives the data access request sent by the virtual file system 12 through the kernel module it has, and sends the data access request to the hardware offload device 20 through the bus.
  • the hardware offloading device 20 receives the data access request sent by the electronic device 10 through the kernel module through the bus interface 21, and sends the data access request to the main processor 22.
  • the main processor 22 provides the data access request to the client, and the client exchanges data with the object storage system 40 in response to the data access request.
  • the client writes an object file to the object storage system 40 or reads an object file in the object storage system 40 .
  • the kernel module can also convert a data access request conforming to the POSIX file protocol into a data access request conforming to the FUSE protocol adapted to the user space file system 13, and send the FUSE-protocol data access request to the hardware offloading device 20 through the bus.
  • the bus interface 21 on the hardware offloading device 20 converts the received data access request of the FUSE protocol, obtains a data access request of the POSIX file protocol, and sends the data access request of the POSIX file protocol to the main processor 22 .
  • the bus interface 21 can be a PCIE (peripheral component interconnect express, high-speed serial computer expansion bus standard) interface, SPI (serial peripheral interface, serial peripheral interface) or AXI (Advanced eXtensible Interface, advanced scalable interface).
  • the main processor 22 includes, for example, but is not limited to: DSP (Digital Signal Processing, digital signal processor), NPU (Neural-network Processing Unit, embedded neural network processor), CPU (central processing unit, central processing unit) and GPU (Graphic Processing Unit, graphics processor).
  • in the object file writing phase, the electronic device is used to send the file to be written to the hardware offloading device through the application program; the hardware offloading device is used to obtain the file to be written and write it into the cache on the hardware offloading device, to merge the cached files to be written into a first object file to be written if the files already cached in the cache meet the file merging conditions, and to send a write request including the first object file to the object storage system; the object storage system is configured to store the first object file in response to the write request.
  • in the object file reading phase, the electronic device is used to send a first read request to the hardware offloading device through an application program, the first read request including the file name of the target file to which the target data to be read belongs and the first location information of the target data within the target file; the hardware offloading device receives the first read request sent by the application program, sends a second read request including the file name of the target file to the object storage system, receives the second object file returned by the object storage system, obtains the target file from the second object file, reads the target data from the target file according to the first location information, and sends the target data to the application program; the object storage system is used to obtain, in response to the second read request, the second object file including the target file from the stored object files.
  • FIG. 2 is a flow chart of a data access method provided by an embodiment of the present application. This method is applied to the client running on the hardware offloading device 20.
  • the hardware offloading device 20 is communicatively connected with the electronic device 10 through the bus 30. As shown in Figure 2, the method may include the following steps:
  • any application program on the electronic device 10 can, when triggered by a file writing requirement, send a file to be written to the hardware offloading device 20; the client on the hardware offloading device 20 obtains the file to be written from the application program and writes it into the cache 24 on the hardware offloading device 20.
  • when the client obtains the file to be written sent by the application program on the electronic device 10, it is specifically used to: receive, through the bus interface 21, the write request sent by the application program, the write request being sent by the application program by calling the kernel module provided by the user space file system 13; and call the user space library module provided by the user space file system 13 to process the write request to obtain the file to be written.
  • by processing the write request with the user space library module, the client can not only obtain the file to be written originating from the application program on the electronic device 10, but can also determine the SDK (Software Development Kit) used to interact with the object storage system 40, although this is not limited thereto.
  • after writing the file to be written into the cache 24, the client detects whether the files to be written already cached in the cache 24 satisfy the file merging condition. If the file merging conditions are met, the client merges several cached files to be written to obtain a new object file; in order to facilitate understanding and distinction, the new object file obtained by merging is called the first object file. After the client obtains the first object file, it writes the first object file into the object storage system 40 to which the client has access rights. If the file merging conditions are not met, the client does not, for the time being, perform the file merging operation on the files to be written cached in the cache 24. The client can call the SDK that interacts with the object storage system 40 to write the first object file into the object storage system 40.
  • the file merging conditions include but are not limited to: the remaining cache space in the cache 24 is less than a preset cache space, or the number of files to be written in the cache 24 is greater than or equal to a preset number of files, or the caching duration reaches the caching period.
  • the caching period is, for example, one hour, one day, or one month; when the cache 24 has held files to be written for one hour, one day, or one month, the file merging condition is determined to be met.
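  • The three alternative triggers above can be pictured as a simple predicate; the threshold values and the cache interface in this sketch are illustrative placeholders, not values taken from the application.

```python
import time


def merge_condition_met(cache, min_free_bytes=64 * 1024 * 1024,
                        max_files=1024, max_age_seconds=3600):
    """Return True when the cached files to be written should be merged.

    Mirrors the three alternative triggers described above: remaining
    cache space below a preset size, number of cached files reaching a
    preset count, or caching duration reaching the caching period. The
    threshold values and the `cache` interface are illustrative only.
    """
    return (cache.free_bytes() < min_free_bytes
            or cache.file_count() >= max_files
            or time.time() - cache.oldest_write_time() >= max_age_seconds)
```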
  • a description file related to the first object file can also be generated.
  • the description file can, for example, record the file name and file size of each merged file and the location information of the merged file within the first object file; based on the location information of a merged file within the first object file, the file data of that merged file can be read from the first object file.
  • as shown in Figure 2, the object file includes the n merged files and one description file, where n is a positive integer.
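  • The merged layout described above (n files followed by a description file recording each file's name, size and position) might look like the following sketch; the JSON encoding and 8-byte footer are illustrative assumptions, not the application's actual format.

```python
import json


def merge_files(cached_files):
    """Merge cached small files into one object file plus a description file.

    cached_files: list of (file_name, data_bytes) tuples taken from the
    cache once the file merging condition is met. Returns the bytes of the
    first object file to be written to object storage. The layout
    (payloads first, a JSON description appended at the end, followed by an
    8-byte offset footer) is an illustrative assumption, not the
    application's actual format.
    """
    payload = bytearray()
    description = []                       # one entry per merged file
    for name, data in cached_files:
        description.append({
            "file_name": name,
            "size": len(data),
            "offset": len(payload),        # position inside the object file
        })
        payload += data
    desc_bytes = json.dumps(description).encode("utf-8")
    footer = len(payload).to_bytes(8, "big")   # where the description starts
    return bytes(payload) + desc_bytes + footer
```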
  • for example, the storage fee of the object storage system 40 is billed at $0.023/GB/month, that is, 1 GB of data costs $0.023 per month; the request fee of the object storage system 40 is billed at $0.0005c/PUT, that is, each access to an object file costs 0.0005 cents. Since each object file is charged according to the number of accesses, the more accesses there are, the higher the request fee.
  • in the existing scheme, each small file is written to the object storage system 40 as a separate object file. For example, a data capacity of 1 TB can store 268,435,456 small files of 4 KB each; the resulting monthly storage fee is roughly 0.023 × 1024 = $23, while the resulting request fee is 0.0005c × 268,435,456 / 100 = $1342, so the request fee far exceeds the storage cost and accounts for roughly 98% of the total. In the improved scheme, in which multiple small files are merged into 5 GB object files before writing, the same 1 TB incurs only about 205 PUT requests and a request fee of roughly $0.001.
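  • A back-of-the-envelope check of those figures (the rates are the illustrative ones quoted above, not any particular provider's price list):

```python
# Back-of-the-envelope check of the example rates quoted above:
# storage at $0.023/GB/month and a request fee of 0.0005 cents per PUT.
GB = 1024 ** 3
TB = 1024 * GB
PUT_FEE = 0.0005 / 100                      # dollars per PUT

storage_fee = 0.023 * (TB / GB)             # ≈ $23.55 per month for 1 TB

# Existing scheme: every 4 KB small file is its own object (one PUT each).
small_files = TB // (4 * 1024)              # 268,435,456 files
naive_request_fee = small_files * PUT_FEE   # ≈ $1,342

# Improved scheme: small files are merged into 5 GB objects before writing.
merged_objects = -(-TB // (5 * GB))         # 205 objects (rounded up)
merged_request_fee = merged_objects * PUT_FEE   # ≈ $0.001

print(f"storage ${storage_fee:.2f}, per-file PUTs ${naive_request_fee:.0f}, "
      f"merged PUTs ${merged_request_fee:.4f}")
```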
  • the data access method provided by the embodiments of the present application, on the one hand, transfers the data access task for the object storage system 40 from the electronic device 10 to the hardware offloading device 20 for execution, which can reduce the processing pressure on the electronic device 10, improve the processing performance of the electronic device 10, and enhance data access performance.
  • on the other hand, each time the hardware offloading device 20 receives a file to be written from an application program, it does not write the file directly into the object storage system 40; instead it first caches the file locally, and only after multiple files to be written have been cached does it merge them into a new object file and write that object file into the object storage system 40 to which the client has access rights.
  • in this way, the number of object file accesses to the object storage system can be greatly reduced, the request fees generated by those accesses are reduced, and the cost of data access is lowered; this is especially effective for scenarios with massive numbers of small files.
  • in addition to writing new object files to the object storage system 40 for storage, the client can also write new object files to the memory 23 of the hardware offloading device 20 for storage.
  • storing the new object file in both the object storage system 40 and the memory 23 of the hardware offloading device 20 not only increases the storage safety of the object file, but also allows the object file to be accessed in the memory 23 preferentially; only when the memory 23 does not currently store the object file is the object file accessed in the object storage system 40, which further reduces the request fees incurred when accessing object files in the object storage system 40.
  • a memory 23 can be set on the hardware offload device 20.
  • the memory 23 includes, for example but is not limited to, various hard drives such as a mechanical hard disk (Hard Disk Drive, HDD) and a solid state disk (Solid State Disk, SSD).
  • the client can also write the index information of the first object file into an index file; the index information of the first object file includes the object identifier of the first object file, the storage status, and the file name of at least one merged file to be written, where the storage status indicates whether the first object file is stored in the memory 23 or in the object storage system 40.
  • the index file can be saved to the memory 23 on the hardware offload device 20 .
  • compared with the object storage system 40, which can store massive amounts of data, the memory 23 provided on the hardware offloading device 20 has a smaller data capacity. As time goes by, some of the object files written into the memory 23 during the historical period may have been cleared from the memory 23, and some may still remain in the memory 23.
  • through the index information of the first object file recorded in the index file, the client can accurately know whether the first object file is stored in the memory 23 or in the object storage system 40, and quickly determine whether to perform the data access against the memory 23 or the object storage system 40, thereby increasing data access efficiency and reducing the access frequency to the object storage system 40.
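  • One way to picture the index file is as a list of entries mapping merged file names to the object that contains them, together with a storage status that tells the client whether the local memory 23 can be tried first; the field names below are illustrative assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class IndexEntry:
    """Illustrative index information for one merged object file."""
    object_id: str                    # identifier of the first object file
    in_local_memory: bool             # held in the offload device's memory 23?
    in_object_storage: bool           # held in the object storage system?
    merged_file_names: list = field(default_factory=list)


def locate_object(index, file_name):
    """Return (entry, use_local) for the object containing file_name.

    use_local is True when the object can be read from the offload
    device's own memory, so the object storage system (and its request
    fee) is only touched when strictly necessary.
    """
    for entry in index:
        if file_name in entry.merged_file_names:
            return entry, entry.in_local_memory
    return None, False
```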
  • the client can directly merge the cached files to be written in the cache to obtain the first object file to be written.
  • a compression module with a data compression function can also be provided on the hardware offloading device 20. In that case, when the client merges the cached files to be written to obtain the first object file to be written, it is specifically used to: send the cached files to be written to the compression module, so that the compression module compresses the cached files to be written to obtain the first object file; and receive the first object file returned by the compression module.
  • Figure 4 is a flow chart of a data access method provided by an embodiment of the present application. This method is applied to the client running on the hardware offloading device 20.
  • the hardware offloading device 20 is communicatively connected with the electronic device 10 through the bus. As shown in Figure 4, the method may include the following steps:
  • the first read request includes the file name of the target file to which the target data to be read belongs and its first position information in the target file.
  • a second read request is sent to the object storage system 40; the second read request includes the file name of the target file, so that the object storage system 40 obtains the second object file including the target file from the stored object files.
  • any application program on the electronic device 10 can send a first read request to the hardware offloading device 20 when triggered by a file reading requirement.
  • the first read request includes the file name of the target file to which the target data to be read belongs and its first position information within the target file.
  • the first position information is the writing position of the target data in the target file, and the target data can be read from the target file according to the first position information.
  • the client calls the SDK that interacts with the object storage system 40 to send a second read request to the object storage system 40 .
  • the object storage system 40 queries the pre-saved metadata based on the file name of the target file in the second read request, determines the object name of the second object file that includes the target file, obtains the second object file from the stored object files according to that object name, and returns it to the client.
  • the second object file may not have undergone data compression.
  • the location information of the target file in the second object file can be obtained directly from the description file corresponding to the second object file.
  • the location information of the target file in the second object file is called second location information.
  • the second object file may have undergone data compression. In this case, it is necessary to decompress the second object file and obtain the target file from the decompressed second object file.
  • a decompression module with a data decompression function can be provided on the hardware offloading device 20.
  • in this case, when the client obtains the target file from the second object file, it is specifically used to: send the second object file to the decompression module, so that the decompression module decompresses the second object file to obtain the decompressed second object file; receive the decompressed second object file returned by the decompression module, and query, according to the file name of the target file, the description file corresponding to the decompressed second object file to obtain the second location information of the target file within the decompressed second object file; and obtain the target file from the decompressed second object file based on the second location information.
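  • Putting the read path together under the same illustrative layout as the merge sketch above (zlib standing in for the decompression module, which the application does not specify):

```python
import json
import zlib


def read_target_data(second_object, target_file_name, first_offset, length,
                     compressed=False):
    """Extract the target data from a merged (second) object file.

    second_object: bytes returned by the object storage system.
    first_offset/length: the first position information from the read
    request, i.e. where the target data sits inside the target file.
    The zlib compression and the JSON description layout mirror the
    illustrative merge sketch above, not the application's actual formats.
    """
    if compressed:
        second_object = zlib.decompress(second_object)   # "decompression module"
    # The description file was appended at the end of the object; the last
    # 8 bytes record where it starts.
    desc_offset = int.from_bytes(second_object[-8:], "big")
    description = json.loads(second_object[desc_offset:-8])
    # Second location information: where the target file sits in the object.
    entry = next(e for e in description if e["file_name"] == target_file_name)
    target_file = second_object[entry["offset"]:entry["offset"] + entry["size"]]
    # First location information: where the target data sits in the target file.
    return target_file[first_offset:first_offset + length]
```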
  • the client after receiving the first read request sent by the application program, the client can directly access the object storage system 40 to obtain the second object file including the target file.
  • the client may also first access the memory 23 on the hardware offload device 20, and then access the object storage system 40 after the second object file is not obtained from the memory 23.
  • before sending the second read request to the object storage system 40, the client can also query the index file according to the file name of the target file to obtain the storage status of the second object file; if the storage status of the second object file indicates that only the object storage system 40 stores the second object file, the step of sending the second read request to the object storage system 40 is performed; if the storage status of the second object file indicates that the memory 23 stores the second object file, the second object file is obtained from the memory 23.
  • accessing the object file in the memory 23 first, and accessing it in the object storage system 40 only when the memory 23 does not currently store the object file, can further reduce the request fees incurred by accessing object files in the object storage system 40.
  • the data access method provided by the embodiment of the present application transfers the data access task for the object storage system from the electronic device to the hardware offloading device for execution, which can reduce the processing pressure of the electronic device and enhance the data access performance.
  • on the other hand, when the data to be read is file data belonging to multiple files included in the same object file, reading data from those multiple files requires only a single access to that one object file in the object storage system; there is no need to access multiple object files in the object storage system multiple times.
  • this can greatly reduce the number of object file accesses to the object storage system, reduce the request fees incurred by object file accesses to the object storage system, and reduce data access costs, especially for scenarios with massive numbers of small files.
  • the hardware offloading device 20 can run the client, compression module and decompression module on one processor, or can run the client, compression module and decompression module on multiple processors.
  • the hardware offloading device 20 is provided with a main processor 22 and a coprocessor 25 .
  • the main processor 22 and the co-processor 25 include, for example, but are not limited to: DSP (Digital Signal Processing, digital signal processor), NPU (Neural-network Processing Unit, embedded neural network processor), CPU (central processing unit) , central processing unit) and GPU (Graphic Processing Unit, graphics processor).
  • the main processor 22 runs the client, and the co-processor 25 provides a compression module and a decompression module.
  • the main processor 22 and the co-processor 25 work together to improve the overall data access performance.
  • the execution subject of each step of the method provided in the above embodiments may be the same device, or the method may also be executed by different devices.
  • the execution subject of steps 201 to 203 may be device A; for another example, the execution subject of steps 201 and 202 may be device A, the execution subject of step 203 may be device B; and so on.
  • FIG. 5 is a schematic structural diagram of a hardware offloading device 20 provided by an embodiment of the present application.
  • the hardware offloading device 20 is communicatively connected to the electronic device 10 through a bus.
  • the hardware offloading device 20 includes a main processor 22 and a cache; the main processor 22 runs a client program to: obtain the file to be written sent by the application program on the electronic device 10, and write the file to be written into the cache 24 on the hardware offloading device 20; if the files to be written already cached in the cache 24 meet the file merging conditions, merge the cached files to be written to obtain a first object file to be written; and write the first object file into the object storage system 40 to which the client has access rights.
  • the hardware offloading device 20 also includes: a memory 23; the main processor 22 is also used to: write the first object file into the memory 23, and write the index information of the first object file into the index file.
  • the index information of the first object file includes the object identifier of the first object file, the storage status and the file name of at least one merged file to be written.
  • the storage status indicates whether the first object file is stored in the memory 23 or the object storage system 40 .
  • the hardware offloading device 20 also includes a coprocessor 25; when the main processor 22 performs file merging, it is specifically used to: send the cached files to be written to the coprocessor 25 and receive the first object file sent by the coprocessor 25; the coprocessor 25 is used to compress the cached files to be written to obtain the first object file, and to send the first object file to the main processor 22.
  • the main processor 22 is also configured to: receive a first read request sent by the application program, where the first read request includes the file name of the target file to which the target data to be read belongs and its first position information within the target file; send a second read request, including the file name of the target file, to the object storage system 40, so that the object storage system 40 can obtain the second object file including the target file from the stored object files; receive the second object file returned by the object storage system 40 and obtain the target file from the second object file; and read the target data from the target file according to the first location information and send the target data to the application program.
  • when the main processor 22 obtains the target file, it is specifically used to: send the second object file to the coprocessor 25, receive the decompressed second object file returned by the coprocessor 25, query, according to the file name of the target file, the description file corresponding to the decompressed second object file to obtain the second location information of the target file within the decompressed second object file, and obtain the target file from the decompressed second object file according to the second location information.
  • the coprocessor 25 is also used to decompress the second object file to obtain a decompressed second object file, and return the decompressed second object file to the main processor 22.
  • FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device includes a processor 62 and the hardware offloading device 61 described in any of the previous embodiments; at least one application program runs on the processor 62, and the processor 62 is communicatively connected with the hardware offloading device through a bus. Notably, the applications on the processor 62 may interact with the hardware offloading device to access the object storage system.
  • the hardware offloading device 61 receives the data access request sent from the application program on the processor 62, and executes the data access method provided by the embodiment of the present application to respond to the data access request.
  • the processor 62 includes, for example, but is not limited to: DSP (Digital Signal Processing, digital signal processor), NPU (Neural-network Processing Unit, embedded neural network processor), CPU (central processing unit, central processing unit) and GPU (Graphic Processing Unit, graphics processor).
  • the electronic device also includes: a communication component 63 , a display 64 , a power supply component 65 , an audio component 66 and other components. Only some components are schematically shown in FIG. 6 , which does not mean that the electronic device only includes the components shown in FIG. 6 . In addition, the components within the dotted box in Figure 6 are optional components, not required components, and may depend on the product form of the electronic device.
  • the electronic device in this embodiment can be implemented as a terminal device such as a desktop computer, a notebook computer, a smartphone, or an IOT device, or as a server device such as a conventional server, a cloud server, or a server array.
  • if the electronic device of this embodiment is implemented as a terminal device such as a desktop computer, laptop computer or smartphone, it may include the components in the dotted box in Figure 6; if the electronic device of this embodiment is implemented as a server-side device such as a conventional server, cloud server or server array, it does not need to include the components in the dotted box in Figure 6.
  • embodiments of the present application also provide a computer-readable storage medium storing a computer program which, when executed, can implement the steps in the above data access method.
  • embodiments of the present application also provide a computer program product, which includes a computer program/instructions which, when executed by a processor, cause the processor to implement each step in the above data access method.
  • the above communication component is configured to facilitate wired or wireless communication between the device where the communication component is located and other devices.
  • the device where the communication component is located can access wireless networks based on communication standards, such as WiFi, 2G, 3G, 4G/LTE, 5G and other mobile communication networks, or a combination thereof.
  • the communication component receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component further includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • the above-mentioned display includes a screen, and the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. A touch sensor can not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • a power supply component provides power to various components of the device where the power supply component is located.
  • a power component may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to the device in which the power component resides.
  • the above audio components may be configured to output and/or input audio signals.
  • the audio component includes a microphone (MIC), and when the device where the audio component is located is in an operating mode, such as call mode, recording mode, and voice recognition mode, the microphone is configured to receive an external audio signal.
  • the received audio signals may be further stored in memory 23 or sent via communication components.
  • the audio component further includes a speaker for outputting audio signals.
  • embodiments of the present application may be provided as methods, systems, or computer program products. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment that combines software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operating steps to be performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • the electronic device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • Memory may include non-permanent storage in computer-readable media, random access memory (RAM), and/or non-volatile memory in the form of read-only memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, in which information storage can be implemented by any method or technology.
  • Information may be computer-readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by electronic devices.
  • computer-readable media does not include transitory media, such as modulated data signals and carrier waves.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Bioethics (AREA)
  • Software Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present application provide a data access method and system, a hardware offloading device, an electronic device, and a medium. In the embodiments of the present application, on the one hand, the data access tasks directed at the object storage system are transferred from the electronic device to the hardware offloading device for execution, which relieves the processing pressure on the electronic device, improves the processing performance of the electronic device, and enhances data access performance. On the other hand, each time the hardware offloading device receives a file to be written from an application, it does not write the file directly into the object storage system; instead it first caches the file locally, and only after multiple files to be written have been cached does it merge them into a single new object file and write that object file into the object storage system. In this way, the number of object file accesses to the object storage system, and hence the request fees incurred by those accesses, can be greatly reduced, lowering the cost of data access.

Description

Data access method and system, hardware offloading device, electronic device, and medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on March 25, 2022 under application number 202210307679.3 and entitled "Data access method and system, hardware offloading device, electronic device, and medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of computer technology, and in particular to a data access method and system, a hardware offloading device, an electronic device, and a medium.
Background
An object-based storage system is a key-value storage system that can provide highly durable, highly available, and high-performance object storage services. The object storage system uses object files as the basic unit for storing data. When using the object storage system, users must pay not only a storage fee for data persistence but also a request fee for object file accesses. The request fee is generally billed according to the number of object file accesses: the more times object files are accessed, the higher the request fee.
In some application scenarios, a massive number of small files of various kinds are produced, including but not limited to text files, image files, audio files, and video files. At present, each small file is written into the object storage system as a separate object file; as a result, writing a massive number of small files into the object storage system generates a large number of object file accesses, incurs high request fees, and makes data access costly.
Summary of the Invention
Various aspects of the present application provide a data access method and system, a hardware offloading device, an electronic device, and a medium, so as to reduce the request fees incurred by accessing object files in an object storage system and lower the cost of data access.
An embodiment of the present application provides a data access method applied to a client running on a hardware offloading device, where the hardware offloading device is communicatively connected to an electronic device through a bus. The method includes: obtaining a file to be written sent by an application on the electronic device, and writing the file to be written into a cache on the hardware offloading device; if the files to be written already cached in the cache satisfy a file merging condition, merging the cached files to be written to obtain a first object file to be written; and writing the first object file into an object storage system to which the client has access rights.
An embodiment of the present application further provides a hardware offloading device communicatively connected to an electronic device through a bus. The hardware offloading device includes a main processor and a cache. The main processor runs a client program configured to: obtain a file to be written sent by an application on the electronic device, and write the file to be written into the cache on the hardware offloading device; if the files to be written already cached in the cache satisfy a file merging condition, merge the cached files to be written to obtain a first object file to be written; and write the first object file into an object storage system to which the client has access rights.
An embodiment of the present application further provides an electronic device, including a processor and the above hardware offloading device. At least one application runs on the processor, and the processor is communicatively connected to the hardware offloading device through a bus.
An embodiment of the present application further provides a data access system, including an electronic device, a hardware offloading device, and an object storage system. The electronic device and the hardware offloading device are communicatively connected through a bus, and the hardware offloading device is communicatively connected to the object storage system. At least one application runs on the electronic device, and the electronic device is configured to send, through the application, a file to be written to the hardware offloading device. The hardware offloading device is configured to obtain the file to be written and write it into a cache on the hardware offloading device; if the files to be written already cached in the cache satisfy a file merging condition, merge the cached files to be written to obtain a first object file to be written; and send a write request including the first object file to the object storage system. The object storage system is configured to store the first object file in response to the write request.
An embodiment of the present application further provides a computer storage medium storing a computer program which, when executed by a processor, causes the processor to implement the steps of the data access method.
In the embodiments of the present application, on the one hand, the data access tasks directed at the object storage system are transferred from the electronic device to the hardware offloading device for execution, which relieves the processing pressure on the electronic device, improves its processing performance, and enhances data access performance. On the other hand, each time the hardware offloading device receives a file to be written from an application, it does not write the file directly into the object storage system; instead it first caches the file locally, and only after multiple files to be written have been cached does it merge them into a single new object file and write that object file into the object storage system to which the client has access rights. In this way, the number of object file accesses to the object storage system, and hence the request fees incurred by those accesses, can be greatly reduced, lowering data access costs; this is especially effective for scenarios with massive numbers of small files. In addition, the situations in which a large number of data access requests trigger traffic throttling in the object storage system are greatly reduced, which enhances the storage and access performance of the object storage system.
Brief Description of the Drawings
The drawings described here are provided to offer a further understanding of the present application and constitute a part of it. The illustrative embodiments of the present application and their descriptions are used to explain the present application and do not unduly limit it. In the drawings:
Figure 1 is a schematic structural diagram of a data access system provided by an embodiment of the present application;
Figure 2 is a flowchart of a data access method provided by an embodiment of the present application;
Figure 3 is a schematic structural diagram of another data access system provided by an embodiment of the present application;
Figure 4 is a flowchart of another data access method provided by an embodiment of the present application;
Figure 5 is a schematic structural diagram of a hardware offloading device provided by an embodiment of the present application;
Figure 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description of Embodiments
To make the objectives, technical solutions, and advantages of the present application clearer, the technical solutions of the present application will be described clearly and completely below with reference to specific embodiments of the present application and the corresponding drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In addition, to describe the technical solutions of the embodiments of the present application clearly, terms such as "first" and "second" are used in the embodiments of the present application to distinguish identical or similar items having substantially the same functions and effects. Those skilled in the art will understand that "first", "second", and the like do not limit quantity or execution order, and that items labeled "first" and "second" are not necessarily different.
First, the terms involved in the embodiments of the present application are explained:
A virtual file system (Virtual Filesystem Switch, VFS) is a kernel software layer that provides POSIX (Portable Operating System Interface of UNIX) interfaces to upper-layer applications, so that upper-layer applications can use the POSIX interfaces to access different file systems.
A user space file system (Filesystem in Userspace, FUSE) is a software interface for Unix-like computer operating systems that enables unprivileged users to create their own file systems without editing kernel code. The user space file system provides a kernel module and a user space library (libfuse) module. The kernel module is responsible for encapsulating file operation commands into FUSE-protocol file operation requests and sending them to the user space library module through a transmission channel; the user space library module receives and parses the FUSE-protocol file operation requests and, according to the FUSE-protocol data command type, calls the corresponding file operation function to process them. For further information on the kernel module and user space library module of the user space file system, see the related art.
A hardware offloading device is a hardware device with a hardware offload function. In the embodiments of the present application, the hardware offloading device can take over the data access tasks to the object storage system that would otherwise be performed by the electronic device running the application (App), thereby relieving the processing pressure on the electronic device and improving its processing performance.
In some application scenarios, a massive number of small files of various kinds are produced, including but not limited to text files, image files, audio files, and video files. At present, each small file is written into the object storage system as a separate object file; as a result, writing a massive number of small files into the object storage system generates a large number of object file accesses, incurs high request fees, and makes data access costly. In view of this technical problem, embodiments of the present application provide a data access method and system, a hardware offloading device, an electronic device, and a medium. In the embodiments of the present application, on the one hand, the data access tasks directed at the object storage system are transferred from the electronic device to the hardware offloading device for execution, which relieves the processing pressure on the electronic device, improves its processing performance, and enhances data access performance. On the other hand, each time the hardware offloading device receives a file to be written from an application, it does not write the file directly into the object storage system; instead it first caches the file locally, and only after multiple files to be written have been cached does it merge them into a single new object file and write that object file into the object storage system to which the client has access rights. In this way, the number of object file accesses to the object storage system, and hence the request fees incurred by those accesses, can be greatly reduced, lowering data access costs; this is especially effective for scenarios with massive numbers of small files. In addition, the situations in which a large number of data access requests trigger traffic throttling in the object storage system are greatly reduced, which enhances the storage and access performance of the object storage system.
Figure 1 is a schematic structural diagram of a data access system provided by an embodiment of the present application. Referring to Figure 1, the system may include an electronic device 10, a hardware offloading device 20, and an object storage system 40. The electronic device 10 and the hardware offloading device 20 are communicatively connected through a bus 30, and the hardware offloading device 20 may be connected to the object storage system 40 through a wired or wireless communication connection.
One or more Apps 11 may run on the electronic device 10. When any App 11 has a data access requirement, it may call the POSIX interface provided by the virtual file system 12 to send a data access request, such as a write request or a read request, to the user space file system 13. The user space file system 13 receives, through its kernel module, the data access request sent by the virtual file system 12 and sends the data access request to the hardware offloading device 20 through the bus.
The hardware offloading device 20 receives, through the bus interface 21, the data access request sent by the electronic device 10 via the kernel module, and sends the data access request to the main processor 22. The main processor 22 provides the data access request to the client, and the client exchanges data with the object storage system 40 in response to the data access request; for example, the client writes an object file into the object storage system 40 or reads an object file in the object storage system 40.
Further optionally, the kernel module may also convert a data access request conforming to the POSIX file protocol into a data access request conforming to the FUSE protocol adapted to the user space file system 13, and send the FUSE-protocol data access request to the hardware offloading device 20 through the bus. Correspondingly, the bus interface 21 on the hardware offloading device 20 converts the received FUSE-protocol data access request to obtain a POSIX-file-protocol data access request, and sends the POSIX-file-protocol data access request to the main processor 22.
The bus interface 21 may be a PCIE (peripheral component interconnect express) interface, an SPI (serial peripheral interface), or an AXI (Advanced eXtensible Interface).
The main processor 22 includes, for example but without limitation, a DSP (digital signal processor), an NPU (neural-network processing unit), a CPU (central processing unit), or a GPU (graphics processing unit).
In this embodiment, in the object file writing phase, the electronic device is configured to send a file to be written to the hardware offloading device through the application; the hardware offloading device is configured to obtain the file to be written and write it into the cache on the hardware offloading device, to merge the cached files to be written into a first object file to be written if the files already cached satisfy the file merging condition, and to send a write request including the first object file to the object storage system; and the object storage system is configured to store the first object file in response to the write request. The interaction of the data access system in the object file writing phase is described in more detail below.
In this embodiment, in the object file reading phase, the electronic device is configured to send a first read request to the hardware offloading device through the application, the first read request including the file name of the target file to which the target data to be read belongs and the first position information of the target data within the target file. The hardware offloading device receives the first read request sent by the application; sends a second read request, including the file name of the target file, to the object storage system; receives the second object file returned by the object storage system and obtains the target file from the second object file; reads the target data from the target file according to the first position information; and sends the target data to the application. The object storage system is configured to obtain, in response to the second read request, the second object file that includes the target file from the stored object files. The interaction of the data access system in the object file reading phase is described in more detail below.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the drawings.
Figure 2 is a flowchart of a data access method provided by an embodiment of the present application. The method is applied to the client running on the hardware offloading device 20, and the hardware offloading device 20 is communicatively connected to the electronic device 10 through the bus 30. As shown in Figure 2, the method may include the following steps:
201. Obtain a file to be written sent by an application on the electronic device 10, and write the file to be written into the cache on the hardware offloading device 20.
202. If the files to be written already cached in the cache satisfy the file merging condition, merge the cached files to be written to obtain a first object file to be written.
203. Write the first object file into the object storage system 40 to which the client has access rights.
Specifically, any application on the electronic device 10 may, triggered by a file writing requirement, send a file to be written to the hardware offloading device 20; the client on the hardware offloading device 20 obtains the file to be written originating from the application and writes it into the cache 24 on the hardware offloading device 20.
In an optional implementation, when obtaining the file to be written sent by the application on the electronic device 10, the client is specifically configured to: receive, through the bus interface 21, a write request sent by the application, the write request being sent by the application by calling the kernel module provided by the user space file system 13; and call the user space library module provided by the user space file system 13 to process the write request to obtain the file to be written.
By processing the write request with the user space library module, the client can not only obtain the file to be written originating from the application on the electronic device 10, but also determine the SDK (Software Development Kit) used to interact with the object storage system 40, although this is not limiting.
After writing the file to be written into the cache 24, the client detects whether the files to be written already cached in the cache 24 satisfy the file merging condition. If the file merging condition is satisfied, the client merges several cached files to be written to obtain a new object file; for ease of understanding and distinction, the new object file obtained by merging is called the first object file. After obtaining the first object file, the client writes the first object file into the object storage system 40 to which the client has access rights. If the file merging condition is not satisfied, the client does not, for the time being, merge the files to be written already cached in the cache 24. The client may call the SDK for interacting with the object storage system 40 to write the first object file into the object storage system 40.
This embodiment does not restrict the file merging condition. The file merging condition includes, for example but without limitation: the remaining cache space in the cache 24 is smaller than a preset cache space; or the number of files to be written in the cache 24 is greater than or equal to a preset number of files; or the caching duration reaches a caching period. The caching period is, for example, one hour, one day, or one month; when the cache 24 has cached the files to be written for one hour, one day, or one month, the file merging condition is determined to be satisfied.
Further optionally, when the files are merged, a description file related to the first object file may also be generated. The description file may, for example, record the file name and file size of each merged file and the position information of the merged file within the first object file. Based on the position information of a merged file within the first object file, the file data of that merged file can be read from the first object file. Referring to Figure 2, the object file includes the n merged files and one description file, where n is a positive integer.
For ease of understanding, an example is given below in conjunction with Table 1 and a massive-small-file scenario.
Table 1
For example, the storage fee of the object storage system 40 is billed at $0.023/GB/month, that is, 1 GB of data costs $0.023 per month. The request fee of the object storage system 40 is billed at $0.0005c/PUT, that is, each access to an object file costs 0.0005 cents. Since each object file is billed by the number of accesses, the more accesses there are, the higher the request fee.
In the existing scheme, each small file is written into the object storage system 40 as a separate object file. For example, a data capacity of 1 TB can store 268,435,456 small files of 4 KB each; calculated at a write speed of 100 files per second, the resulting monthly storage fee is roughly 0.023 × 1024 = $23, while the resulting request fee is 0.0005c × 268,435,456 / 100 = $1342. It can thus be seen that the request fee is far greater than the data storage cost, accounting for roughly 98% of the total cost.
In the improved scheme, multiple small files are merged into one object file and written into the storage system. For example, a data capacity of 1 TB can store 205 files of 5 GB each; the resulting monthly storage fee is roughly 0.023 × 1024 = $23, while the resulting request fee is 0.0005c × 205 / 100 = $0.001.
With the data access method provided by the embodiments of the present application, on the one hand, the data access tasks directed at the object storage system 40 are transferred from the electronic device 10 to the hardware offloading device 20 for execution, which relieves the processing pressure on the electronic device 10, improves the processing performance of the electronic device 10, and enhances data access performance. On the other hand, each time the hardware offloading device 20 receives a file to be written from an application, it does not write the file directly into the object storage system 40; instead it first caches the file locally, and only after multiple files to be written have been cached does it merge them into a single new object file and write that object file into the object storage system 40 to which the client has access rights. In this way, the number of object file accesses to the object storage system, and hence the request fees incurred by those accesses, can be greatly reduced, lowering data access costs; this is especially effective for scenarios with massive numbers of small files. In addition, the situations in which a large number of data access requests trigger traffic throttling in the object storage system are greatly reduced, which enhances the storage and access performance of the object storage system.
In some optional embodiments, besides writing the new object file into the object storage system 40 for storage, the client may also write the new object file into the memory 23 of the hardware offloading device 20 for storage. Storing the new object file in both the object storage system 40 and the memory 23 of the hardware offloading device 20 not only increases the storage safety of the object file, but also allows the object file to be accessed in the memory 23 first and in the object storage system 40 only when the memory 23 does not currently store the object file, which further reduces the request fees incurred by accessing object files in the object storage system 40.
Thus, further optionally, referring to Figure 3, a memory 23 may be provided on the hardware offloading device 20; the memory 23 includes, for example but without limitation, various hard drives such as a mechanical hard disk (Hard Disk Drive, HDD) or a solid state disk (Solid State Disk, SSD). On this basis, in addition to writing the first object file into the object storage system 40, the client may also write the first object file into the memory 23.
Further optionally, the client may also write the index information of the first object file into an index file. The index information of the first object file includes the object identifier of the first object file, the storage status, and the file name of at least one merged file to be written, where the storage status indicates whether the first object file is stored in the memory 23 or in the object storage system 40. Further optionally, the index file may be saved in the memory 23 on the hardware offloading device 20.
It is worth noting that, compared with the object storage system 40, which can store massive amounts of data, the memory 23 provided on the hardware offloading device 20 has a smaller data capacity. As time passes, some object files written into the memory 23 during historical periods may have been cleared from the memory 23, while others may still remain in it.
Therefore, through the index information of the first object file recorded in the index file, the client can accurately learn whether the memory 23 or the object storage system 40 holds the first object file, and quickly decide whether to perform the data access against the memory 23 or the object storage system 40, which increases data access efficiency and reduces the access frequency to the object storage system 40.
In practical applications, the client may directly merge the files to be written already cached in the cache to obtain the first object file to be written. Further optionally, in order to reduce the resources consumed by data transmission and enhance data access performance, a compression module with a data compression function may also be provided on the hardware offloading device 20. In that case, when merging the cached files to be written to obtain the first object file to be written, the client is specifically configured to: send the cached files to be written to the compression module so that the compression module compresses the cached files to be written to obtain the first object file; and receive the first object file returned by the compression module.
The data access method provided by the embodiments of the present application is described below from the data reading perspective with reference to Figure 4. Figure 4 is a flowchart of a data access method provided by an embodiment of the present application. The method is applied to the client running on the hardware offloading device 20, and the hardware offloading device 20 is communicatively connected to the electronic device 10 through the bus. As shown in Figure 4, the method may include the following steps:
401. Receive a first read request sent by an application, the first read request including the file name of the target file to which the target data to be read belongs and the first position information of the target data within the target file.
402. Send a second read request, including the file name of the target file, to the object storage system 40, so that the object storage system 40 obtains the second object file including the target file from the stored object files.
403. Receive the second object file returned by the object storage system 40, and obtain the target file from the second object file.
404. Read the target data from the target file according to the first position information, and send the target data to the application.
In this embodiment, any application on the electronic device 10 may, triggered by a file reading requirement, send a first read request to the hardware offloading device 20, the first read request including the file name of the target file to which the target data to be read belongs and the first position information of the target data within the target file. The first position information is the writing position of the target data within the target file; the target data can be read from the target file according to the first position information.
The client calls the SDK for interacting with the object storage system 40 to send a second read request to the object storage system 40. In response to the second read request, the object storage system 40 queries the pre-saved metadata according to the file name of the target file in the second read request, determines the object name of the second object file that includes the target file, obtains the second object file from the stored object files according to the object name of the second object file, and returns it to the client.
In practical applications, the second object file may not have undergone data compression. In this case, the position information of the target file within the second object file can be obtained directly from the description file corresponding to the second object file; for ease of understanding and distinction, the position information of the target file within the second object file is called the second position information, and the target file is obtained from the second object file according to the second position information. The second object file may also have undergone data compression. In this case, the second object file needs to be decompressed, and the target file is obtained from the decompressed second object file. In an optional implementation, a decompression module with a data decompression function may be provided on the hardware offloading device 20. In that case, when obtaining the target file from the second object file, the client is specifically configured to: send the second object file to the decompression module so that the decompression module decompresses the second object file to obtain the decompressed second object file; receive the decompressed second object file returned by the decompression module, and query, according to the file name of the target file, the description file corresponding to the decompressed second object file to obtain the second position information of the target file within the decompressed second object file; and obtain the target file from the decompressed second object file according to the second position information.
In practical applications, after receiving the first read request sent by the application, the client may directly access the object storage system 40 to obtain the second object file including the target file. The client may also first access the memory 23 on the hardware offloading device 20 and access the object storage system 40 only after failing to obtain the second object file from the memory 23. Further optionally, in order to increase data access efficiency and reduce the number of accesses to the object storage system 40, before sending the second read request to the object storage system 40 the client may also query the index file according to the file name of the target file to obtain the storage status of the second object file; if the storage status of the second object file indicates that only the object storage system 40 stores the second object file, the step of sending the second read request to the object storage system 40 is performed; if the storage status of the second object file indicates that the memory 23 stores the second object file, the second object file is obtained from the memory 23.
It is worth noting that accessing the object file in the memory 23 first, and accessing it in the object storage system 40 only when the memory 23 does not currently store the object file, can further reduce the request fees incurred by accessing object files in the object storage system 40.
With the data access method provided by the embodiments of the present application, on the one hand, the data access tasks directed at the object storage system are transferred from the electronic device to the hardware offloading device for execution, which relieves the processing pressure on the electronic device and enhances data access performance. On the other hand, when the data to be read is file data belonging to multiple files included in the same object file, reading data from those multiple files requires only a single access to that one object file in the object storage system, with no need to access multiple object files in the object storage system multiple times. As a result, the number of object file accesses to the object storage system, and hence the request fees incurred by those accesses, can be greatly reduced, lowering data access costs; this is especially effective for scenarios with massive numbers of small files. In addition, the situations in which a large number of data access requests trigger traffic throttling in the object storage system are greatly reduced, which enhances the storage and access performance of the object storage system.
It is worth noting that the hardware offloading device 20 may run the client, the compression module, and the decompression module on a single processor, or may run them on multiple processors. As shown in Figure 3, the hardware offloading device 20 is provided with a main processor 22 and a coprocessor 25. The main processor 22 and the coprocessor 25 include, for example but without limitation, a DSP (digital signal processor), an NPU (neural-network processing unit), a CPU (central processing unit), or a GPU (graphics processing unit). The client runs on the main processor 22, and the compression module and decompression module are provided on the coprocessor 25; the main processor 22 and the coprocessor 25 working in cooperation can improve the overall data access performance.
It should be noted that the execution subject of each step of the methods provided in the above embodiments may be the same device, or the methods may be executed with different devices as execution subjects. For example, the execution subject of steps 201 to 203 may be device A; alternatively, the execution subject of steps 201 and 202 may be device A and the execution subject of step 203 may be device B; and so on.
In addition, some of the flows described in the above embodiments and drawings contain multiple operations that appear in a specific order; however, it should be clearly understood that these operations may be executed out of the order in which they appear herein or in parallel. Operation numbers such as 201 and 202 are merely used to distinguish the different operations; the numbers themselves do not represent any execution order. Moreover, these flows may include more or fewer operations, and these operations may be executed sequentially or in parallel. It should be noted that descriptions such as "first" and "second" herein are used to distinguish different messages, devices, modules, and the like; they do not represent a sequence, nor do they require "first" and "second" to be of different types.
FIG. 5 is a schematic structural diagram of a hardware offload device 20 provided by an embodiment of the present application. The hardware offload device 20 is communicatively connected to the electronic device 10 via a bus. As shown in FIG. 5, the hardware offload device 20 includes a main processor 22 and a cache, and the main processor 22 runs a client program to: obtain a file to be written sent by an application on the electronic device 10, and write the file to be written into the cache 24 on the hardware offload device 20; if the files to be written already cached in the cache 24 satisfy a file merge condition, merge the files to be written already cached in the cache 24 to obtain a first object file to be written; and write the first object file to the object storage system 40 that the client is authorized to access.
Further optionally, the hardware offload device 20 further includes a memory 23. The main processor 22 is further configured to: write the first object file to the memory 23, and write index information of the first object file into an index file, where the index information of the first object file includes the object identifier of the first object file, a storage state, and the file name of each of the at least one file to be written that was merged, and the storage state indicates whether the first object file is stored in the memory 23 or in the object storage system 40.
Further optionally, the hardware offload device 20 further includes a coprocessor 25. When performing file merging, the main processor 22 is specifically configured to: send the cached files to be written to the coprocessor 25, and receive the first object file sent by the coprocessor 25. The coprocessor 25 is configured to compress the cached files to be written to obtain the first object file, and send the first object file to the main processor 22.
Further optionally, the main processor 22 is further configured to: receive a first read request sent by the application, where the first read request includes the file name of the target file to which the target data to be read belongs and first position information of the target data within the target file; send a second read request to the object storage system 40, where the second read request includes the file name of the target file, so that the object storage system 40 obtains, from the object files it has stored, a second object file that includes the target file; receive the second object file returned by the object storage system 40, and obtain the target file from the second object file; and read the target data from the target file according to the first position information, and send the target data to the application.
Further optionally, when obtaining the target file, the main processor 22 is specifically configured to: send the second object file to the coprocessor 25, receive the decompressed second object file returned by the coprocessor 25, query the description file corresponding to the decompressed second object file according to the file name of the target file to obtain second position information of the target file within the decompressed second object file, and obtain the target file from the decompressed second object file according to the second position information. The coprocessor 25 is further configured to decompress the second object file to obtain the decompressed second object file and return the decompressed second object file to the main processor 22.
The specific manner in which the modules and units of the hardware offload device 20 shown in FIG. 5 perform their operations has been described in detail in the embodiments relating to the data access method, and will not be elaborated here.
FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present application. Referring to FIG. 6, the electronic device includes a processor 62 and the hardware offload device 61 described in any of the preceding embodiments. At least one application runs on the processor 62, and the processor 62 is communicatively connected to the hardware offload device via a bus. It is worth noting that the application on the processor 62 can interact with the hardware offload device to access the object storage system. The hardware offload device 61 receives a data access request sent by the application on the processor 62, and executes the data access method provided by the embodiments of the present application in response to the data access request.
The processor 62 includes, for example but not limited to, a DSP (Digital Signal Processor), an NPU (Neural-network Processing Unit), a CPU (Central Processing Unit), and a GPU (Graphics Processing Unit).
Further, as shown in FIG. 6, the electronic device also includes other components such as a communication component 63, a display 64, a power supply component 65, and an audio component 66. FIG. 6 schematically shows only some of the components, which does not mean that the electronic device includes only the components shown in FIG. 6. In addition, the components within the dashed box in FIG. 6 are optional rather than mandatory, depending on the product form of the electronic device. The electronic device of this embodiment may be implemented as a terminal device such as a desktop computer, a laptop computer, a smartphone, or an IoT device, or as a server-side device such as a conventional server, a cloud server, or a server array. If the electronic device of this embodiment is implemented as a terminal device such as a desktop computer, a laptop computer, or a smartphone, it may include the components within the dashed box in FIG. 6; if it is implemented as a server-side device such as a conventional server, a cloud server, or a server array, it may omit the components within the dashed box in FIG. 6.
Correspondingly, an embodiment of the present application further provides a computer-readable storage medium storing a computer program; when the computer program is executed, the steps of the above data access method can be implemented.
Correspondingly, an embodiment of the present application further provides a computer program product, including a computer program/instructions, which, when executed by a processor, cause the processor to implement the steps of the above data access method.
The above communication component is configured to facilitate wired or wireless communication between the device in which it is located and other devices. The device in which the communication component is located may access a wireless network based on a communication standard, such as WiFi, a mobile communication network such as 2G, 3G, 4G/LTE, or 5G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The above display includes a screen, and the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation.
The above power supply component provides power for the various components of the device in which it is located. The power supply component may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which it is located.
The above audio component may be configured to output and/or input audio signals. For example, the audio component includes a microphone (MIC), which is configured to receive external audio signals when the device in which the audio component is located is in an operating mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may be further stored in the memory 23 or sent via the communication component. In some embodiments, the audio component further includes a speaker for outputting audio signals.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, the electronic device includes one or more processors (CPUs), an input/output interface, a network interface, and a memory.
The memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in a computer-readable medium, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible to the electronic device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprise", "include", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The above are merely embodiments of the present application and are not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.

Claims (14)

  1. A data access method, applied to a client running on a hardware offload device, wherein the hardware offload device is communicatively connected to an electronic device via a bus, the method comprising:
    obtaining a file to be written sent by an application on the electronic device, and writing the file to be written into a cache on the hardware offload device;
    if files to be written already cached in the cache satisfy a file merge condition, merging the files to be written already cached in the cache to obtain a first object file to be written; and
    writing the first object file to an object storage system that the client is authorized to access.
  2. The method according to claim 1, wherein the hardware offload device further comprises a memory, and the method further comprises:
    writing the first object file into the memory, and writing index information of the first object file into an index file, wherein the index information of the first object file comprises an object identifier of the first object file, a storage state, and a file name of the at least one file to be written that was merged, and the storage state indicates whether the first object file is stored in the memory or in the object storage system.
  3. The method according to claim 1, wherein the hardware offload device further comprises a compression module, and the merging the files to be written already cached in the cache to obtain the first object file to be written comprises:
    sending the cached files to be written to the compression module, so that the compression module compresses the cached files to be written to obtain the first object file; and
    receiving the first object file returned by the compression module.
  4. The method according to claim 1, wherein obtaining the file to be written sent by the application on the electronic device comprises:
    receiving, via a bus interface, a write request sent by the application, wherein the write request is sent by the application by calling a kernel module provided by a user-space file system; and
    calling a user-space library module provided by the user-space file system to process the write request to obtain the file to be written.
  5. The method according to claim 2, further comprising:
    receiving a first read request sent by the application, wherein the first read request comprises a file name of a target file to which target data to be read belongs and first position information of the target data within the target file;
    sending a second read request to the object storage system, wherein the second read request comprises the file name of the target file, so that the object storage system obtains, from stored object files, a second object file comprising the target file;
    receiving the second object file returned by the object storage system, and obtaining the target file from the second object file; and
    reading the target data from the target file according to the first position information, and sending the target data to the application.
  6. The method according to claim 5, wherein before sending the second read request to the object storage system, the method further comprises:
    querying the index file according to the file name of the target file to obtain a storage state of the second object file; and
    if the storage state of the second object file indicates that the second object file is stored only in the object storage system, performing the step of sending the second read request to the object storage system.
  7. The method according to claim 6, further comprising:
    if the storage state of the second object file indicates that the second object file is stored in the memory, obtaining the second object file from the memory.
  8. The method according to claim 5, wherein the hardware offload device further comprises a decompression module, and the obtaining the target file from the second object file comprises:
    sending the second object file to the decompression module, so that the decompression module decompresses the second object file to obtain a decompressed second object file;
    receiving the decompressed second object file returned by the decompression module, and querying a description file corresponding to the decompressed second object file according to the file name of the target file to obtain second position information of the target file within the decompressed second object file; and
    obtaining the target file from the decompressed second object file according to the second position information.
  9. A hardware offload device, wherein the hardware offload device is communicatively connected to an electronic device via a bus, the hardware offload device comprises a main processor and a cache, and the main processor runs a client program to:
    obtain a file to be written sent by an application on the electronic device, and write the file to be written into the cache on the hardware offload device;
    if files to be written already cached in the cache satisfy a file merge condition, merge the files to be written already cached in the cache to obtain a first object file to be written; and
    write the first object file to an object storage system that the client is authorized to access.
  10. The hardware offload device according to claim 9, further comprising: a memory;
    wherein the main processor is further configured to: write the first object file into the memory, and write index information of the first object file into an index file, wherein the index information of the first object file comprises an object identifier of the first object file, a storage state, and a file name of the at least one file to be written that was merged, and the storage state indicates whether the first object file is stored in the memory or in the object storage system.
  11. The hardware offload device according to claim 9, further comprising: a coprocessor;
    wherein, when performing file merging, the main processor is specifically configured to: send the cached files to be written to the coprocessor, and receive the first object file sent by the coprocessor; and
    the coprocessor is configured to compress the cached files to be written to obtain the first object file, and send the first object file to the main processor.
  12. An electronic device, comprising: a processor and the hardware offload device according to any one of claims 9 to 11, wherein at least one application runs on the processor, and the processor is communicatively connected to the hardware offload device via a bus.
  13. A data access system, comprising: an electronic device, a hardware offload device, and an object storage system; wherein the electronic device is communicatively connected to the hardware offload device via a bus, and the hardware offload device is communicatively connected to the object storage system;
    the electronic device runs at least one application and is configured to send a file to be written to the hardware offload device through the application;
    the hardware offload device is configured to: obtain the file to be written, and write the file to be written into a cache on the hardware offload device; if files to be written already cached in the cache satisfy a file merge condition, merge the files to be written already cached in the cache to obtain a first object file to be written; and send a write request comprising the first object file to the object storage system; and
    the object storage system is configured to store the first object file in response to the write request.
  14. A computer storage medium storing a computer program, wherein, when the computer program is executed by a processor, the computer program causes the processor to implement the steps of the method according to any one of claims 1 to 8.
PCT/CN2023/083533 2022-03-25 2023-03-24 Data access method and system, hardware offload device, electronic device, and medium WO2023179742A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210307679.3 2022-03-25
CN202210307679.3A CN114817978A (zh) 2022-03-25 2022-03-25 Data access method and system, hardware offload device, electronic device, and medium

Publications (1)

Publication Number Publication Date
WO2023179742A1 (zh)

Family

ID=82530327

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/083533 WO2023179742A1 (zh) 2022-03-25 2023-03-24 Data access method and system, hardware offload device, electronic device, and medium

Country Status (2)

Country Link
CN (1) CN114817978A (zh)
WO (1) WO2023179742A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114817978A (zh) * 2022-03-25 2022-07-29 阿里云计算有限公司 数据访问方法及系统、硬件卸载设备、电子设备及介质
CN115421854A (zh) * 2022-08-24 2022-12-02 阿里巴巴(中国)有限公司 存储系统、方法以及硬件卸载卡

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8738669B1 (en) * 2007-10-08 2014-05-27 Emc Corporation Method and apparatus for providing access to data objects within another data object
WO2017107948A1 (zh) * 2015-12-23 2017-06-29 中兴通讯股份有限公司 文件的写聚合、读聚合方法及系统和客户端
CN111104063A (zh) * 2019-12-06 2020-05-05 浪潮电子信息产业股份有限公司 一种数据存储方法、装置及电子设备和存储介质
US20200174893A1 (en) * 2018-12-03 2020-06-04 Acronis International Gmbh System and method for data packing into blobs for efficient storage
CN111625191A (zh) * 2020-05-21 2020-09-04 苏州浪潮智能科技有限公司 一种数据读写方法、装置及电子设备和存储介质
CN114817978A (zh) * 2022-03-25 2022-07-29 阿里云计算有限公司 数据访问方法及系统、硬件卸载设备、电子设备及介质


Also Published As

Publication number Publication date
CN114817978A (zh) 2022-07-29

Similar Documents

Publication Publication Date Title
WO2023179742A1 (zh) Data access method and system, hardware offload device, electronic device, and medium
US11165667B2 (en) Dynamic scaling of storage volumes for storage client file systems
US9792060B2 (en) Optimized write performance at block-based storage during volume snapshot operations
CN113287286B (zh) 通过rdma进行分布式存储节点中的输入/输出处理
WO2021036370A1 (zh) 预读取文件页的方法、装置和终端设备
WO2017117919A1 (zh) 数据存储方法和装置
WO2019080531A1 (zh) 一种信息采集及内存释放的方法及装置
US11438010B2 (en) System and method for increasing logical space for native backup appliance
CN114579055B (zh) 磁盘存储方法、装置、设备及介质
WO2019057000A1 (zh) 日志写入方法、装置及系统
CN103645873A (zh) 一种在趋势曲线系统中实现高效数据缓存的方法
US11093176B2 (en) FaaS-based global object compression
JP2016515258A (ja) 最適化ファイル動作のためのファイル集合化
WO2018233216A1 (zh) 一种数据处理方法和电子设备
US10848179B1 (en) Performance optimization and support compatibility of data compression with hardware accelerator
US11288096B2 (en) System and method of balancing mixed workload performance
TW202230140A (zh) 管理記憶體的方法及非暫時性電腦可讀媒體
CN108718329B (zh) 支持多种方式访问的云存储移动路由设备的方法及设备
WO2019091322A1 (zh) 虚拟机快照处理方法、装置及系统
US9933944B2 (en) Information processing system and control method of information processing system
US10840943B1 (en) System and method of data compression between backup server and storage
US11734121B2 (en) Systems and methods to achieve effective streaming of data blocks in data backups
EP4120060A1 (en) Method and apparatus of storing data, and method and apparatus of reading data
WO2024022119A1 (zh) Data synchronization method, electronic device, and system
CN111831655B (zh) Data processing method and apparatus, medium, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23773987

Country of ref document: EP

Kind code of ref document: A1