CN117631958A - Storage space expansion method, device and system of DPU - Google Patents

Storage space expansion method, device and system of DPU

Info

Publication number
CN117631958A
CN117631958A
Authority
CN
China
Prior art keywords
host
data
dpu
memory
storage space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210998035.3A
Other languages
Chinese (zh)
Inventor
覃国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Huawei Technology Co Ltd
Original Assignee
Chengdu Huawei Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Huawei Technology Co Ltd filed Critical Chengdu Huawei Technology Co Ltd
Priority to CN202210998035.3A priority Critical patent/CN117631958A/en
Priority to PCT/CN2023/101068 priority patent/WO2024037172A1/en
Publication of CN117631958A publication Critical patent/CN117631958A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • G06F 13/40 Bus structure
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

In this application, when a DPU needs to expand its storage space, or its storage space is insufficient, it initiates a request message over a bus to the host it is connected to; the request message applies for storage space in the host. According to the request message, the host configures a storage space for the DPU in the host's memory and sends the DPU a response message carrying the address of that storage space. The DPU then maps the host storage space into its own storage space for use. Because the DPU applies for host storage space according to its own needs, the storage space the host configures for the DPU is no longer fixed, which avoids wasting storage space.

Description

Storage space expansion method, device and system of DPU
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a method, an apparatus, and a system for expanding a storage space of a DPU.
Background
A data processing unit (DPU) is a special-purpose processor that is connected to a host and assists the host with data processing tasks. Because of its superior data processing performance, a DPU can take over basic functions such as virtualization, storage, networking, and security. That is, these basic functions, which the host would otherwise provide itself, can be offloaded to the DPU, which implements them and thereby accelerates processing.
To support its data processing, the DPU is configured with a dedicated memory that provides it with storage space. However, for cost and hardware reasons this memory cannot be very large, which limits the DPU's data processing capability; the DPU's storage space can therefore be extended with host memory. The conventional approach reserves a fixed-size storage space for the DPU in the host's memory and maps it into the DPU's storage space. This reserved storage space is pre-allocated to the DPU; only the DPU can use it, and the host cannot. When the DPU does not need much storage space, the reservation easily wastes memory.
Disclosure of Invention
The present application provides a storage space expansion method, apparatus, and system for a DPU, which allow the DPU to expand its storage space on demand and avoid wasting host memory.
In a first aspect, an embodiment of this application provides a method for expanding the storage space of a DPU, where the DPU is connected to a host through a system bus. The DPU may be located outside the host as an external device, or it may be deployed inside the host, for example on the host's motherboard or backplane, connected via a PCIe bus or another type of bus. When the DPU needs to expand its storage space, or its storage space is insufficient, it initiates a request message to the host; the request message applies for storage space in the host. After obtaining the request message, the host configures a host storage space for the DPU in the host's memory according to the request message and sends the DPU a response message carrying the address of the host storage space. After receiving the response message, the DPU maps the host storage space into its own storage space and reads and writes data in the host storage space.
With this method, the DPU can apply to the host for storage space according to its own needs, so the storage space the host configures for the DPU is no longer fixed, which avoids wasting storage space.
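The request/response exchange of this aspect can be sketched as follows. This is an illustrative model only: the names `Host`, `DPU`, `handle_request`, and `expand`, and the 4 KiB page size, are assumptions rather than details taken from the patent.

```python
from dataclasses import dataclass, field

PAGE_SIZE = 4096  # assumed page size for this sketch


@dataclass
class Host:
    free_pages: list                              # free physical pages in the host's memory
    lent: list = field(default_factory=list)

    def handle_request(self, num_pages: int):
        """Configure a host storage space for the DPU and answer with its addresses."""
        if len(self.free_pages) < num_pages:
            return None                           # not enough free memory to lend
        pages = [self.free_pages.pop() for _ in range(num_pages)]
        self.lent.extend(pages)
        return pages                              # the response message carries these addresses


@dataclass
class DPU:
    mapping: dict = field(default_factory=dict)   # DPU logical address -> host physical address
    next_logical: int = 0

    def expand(self, host: Host, num_pages: int) -> bool:
        pages = host.handle_request(num_pages)    # the request message applies for host space
        if pages is None:
            return False
        for phys in pages:                        # map the host space into the DPU's own space
            self.mapping[self.next_logical] = phys
            self.next_logical += PAGE_SIZE
        return True
```

Once `expand` succeeds, the DPU's accesses go through `mapping`; a release would return the borrowed pages to `free_pages`.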
In one possible implementation, after applying for host storage space, the DPU may release it when the space is no longer needed. When releasing the host storage space, the DPU sends the host a notification message indicating the release; after receiving the notification message, the host reclaims the host storage space and can use it again.
With this method, the DPU promptly notifies the host to release the host storage space, so the host can continue to use that space, improving the resource utilization of the host storage space.
In one possible implementation, the request message sent by the DPU also indicates the size of the requested host storage space, expressed as a number of pages.
With this method, the DPU determines the size of the host storage space to apply for according to its own needs before initiating the request message, making the application of host storage space more flexible and avoiding waste.
In one possible implementation, the DPU is provided with a memory connected to it through a system bus, and the memory includes a host page buffer. After the DPU applies for host storage space, it writes data from the host storage space into the host page buffer by direct memory access (DMA), or writes data from the host page buffer back into the host storage space by DMA.
For example, when the DPU needs to write data into the host storage space, it writes the data into the host page buffer and then transfers it from the host page buffer into the host storage space by DMA.
As another example, when the DPU needs to read data from the host storage space, it transfers the data from the host storage space into the host page buffer by DMA and then reads it from the host page buffer.
With this method, the DPU uses its own memory and DMA to read and write the host storage space without involving the host (for example, a processor in the host), making these accesses simpler and faster.
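The two access paths above can be modelled with a small sketch. Both memories are modelled as dictionaries keyed by page address, and `dma_copy` stands in for the DPU's real DMA engine; all names are hypothetical.

```python
def dma_copy(src: dict, dst: dict, addr: int) -> None:
    """Stand-in for a DMA transfer of one page between host memory and the page buffer."""
    dst[addr] = src[addr]


def dpu_write(page_buffer: dict, host_space: dict, addr: int, data: bytes) -> None:
    page_buffer[addr] = data                 # the DPU first writes into its host page buffer ...
    dma_copy(page_buffer, host_space, addr)  # ... then DMAs the page into the host storage space


def dpu_read(page_buffer: dict, host_space: dict, addr: int) -> bytes:
    if addr not in page_buffer:              # on a miss, DMA the page from host storage space ...
        dma_copy(host_space, page_buffer, addr)
    return page_buffer[addr]                 # ... and serve the read locally, without the host CPU
```

Note that neither path involves the host processor: only the DMA engine touches the host's memory.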
In one possible implementation, when the DPU releases the host storage space, if the host page buffer holds data belonging to that host storage space, the DPU marks the cached data as invalid or deletes it.
With this method, when the DPU releases the host storage space, the corresponding data in the host page buffer is promptly marked invalid or deleted, reducing the occupancy of the host page buffer.
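The cleanup on release might look like the following sketch; the sentinel object and the function name are illustrative only, not taken from the patent.

```python
INVALID = object()  # sentinel marking a stale cached page


def clean_page_buffer(page_buffer: dict, released: set, delete: bool = False) -> None:
    """Mark as invalid, or delete, cached pages that belong to released host storage space."""
    for addr in list(page_buffer):
        if addr in released:
            if delete:
                del page_buffer[addr]        # drop the stale copy outright
            else:
                page_buffer[addr] = INVALID  # or merely mark it invalid
```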
In one possible implementation, the interaction between the DPU and the host (for example, a processor in the host) may be implemented based on the virtualized input/output (virtual I/O) protocol, or based on other protocols.
With this method, the DPU and the host interact through the mature virtual I/O protocol, which keeps the interaction simple.
In one possible implementation, when mapping the host storage space for its own use, the DPU maps the address of the host storage space, which may be a physical address, to a logical address of the DPU's storage space, establishing a mapping relationship between the two.
With this method, upper-layer software on the DPU perceives only logical addresses of the DPU's storage space and need not know whether the storage space behind a given logical address is located in the host's memory or in the DPU's memory.
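A minimal sketch of such a mapping follows, under the assumption of 4 KiB pages (names are hypothetical). Upper-layer software receives only the returned logical addresses and cannot tell borrowed host pages from local ones.

```python
PAGE = 4096  # assumed page size


class LogicalSpace:
    """Toy model of the DPU's logical storage space."""

    def __init__(self):
        self.table = {}        # logical page address -> (origin, physical address)
        self.next_logical = 0

    def map_pages(self, phys_addrs, origin):
        """Map physical pages (host or local) to fresh logical addresses."""
        logicals = []
        for phys in phys_addrs:
            self.table[self.next_logical] = (origin, phys)
            logicals.append(self.next_logical)
            self.next_logical += PAGE
        return logicals
```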
In one possible implementation, after applying for the host storage space, the DPU performs data processing operations using the data in it. For example, the DPU acquires data stored in the host storage space, encapsulates it, and sends the encapsulated data to a device outside the DPU; this applies to the scenario where the DPU supports a network function. As another example, the DPU encrypts the data and sends the encrypted data to a device outside the DPU; this applies to the scenario where the DPU has a security function. As another example, the DPU stores the data in a local storage device of the host, or in a remote storage device connected to the DPU through a network; this applies to the scenario where the DPU has a storage function. A local storage device of the host is a device the host uses for persistent storage, such as a hard disk; a remote storage device is a remotely deployed device used for persistent storage, such as a device in a remote storage system.
With this method, by using the host storage space it applied for, the DPU takes over part of the host's data processing work and can implement some basic functions.
In one possible implementation, after applying for the host storage space, the DPU stores data in it. For example, the DPU decapsulates a data packet from a device outside the DPU and writes the decapsulated data into the host storage space; this applies to the scenario where the DPU supports a network function. As another example, the DPU decrypts data from the host or from a device outside the DPU and writes the decrypted data into the host storage space; this applies to the scenario where the DPU has a security function. As another example, the DPU accesses a storage device outside itself, reads data from a local storage device of the host or from a remote storage device, and stores the read data in the host storage space; this applies to the scenario where the DPU has a storage function.
With this method, by using the storage space it applied for, the DPU takes over part of the host's data processing work and can implement some basic functions.
In a second aspect, embodiments of this application further provide a data processing apparatus that has the functionality to implement the behavior in the method examples of the first aspect; for the beneficial effects, refer to the description of the first aspect, which is not repeated here. The functions may be implemented by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the functions above. In one possible design, the data processing apparatus includes a transmission module and a mapping module, and optionally a writing module, a reading module, a first processing module, and a second processing module. These modules perform the corresponding functions in the method examples of the first aspect; refer to the detailed description of the method examples, which is not repeated here.
In a third aspect, embodiments of this application further provide a data processing apparatus that has the functionality to implement the behavior in the method examples of the first aspect; for the beneficial effects, refer to the description of the first aspect, which is not repeated here. The apparatus comprises a DPU and a power supply circuit, where the power supply circuit supplies power to the DPU, and optionally also a memory and a communication interface. The DPU is connected to the host through a system bus and is configured to support the data processing apparatus in performing the corresponding functions of the method of the first aspect. The memory is connected to the DPU through a system bus and stores the computer program instructions and data (for example, data in the storage space) necessary for the apparatus; it may be one or a combination of RAM, ROM, DRAM, flash memory media, hard disks, and the like. The connection between the DPU and the host is not limited; for example, the DPU may be connected to the backplane or motherboard of the host through a PCIe bus or another type of bus.
In this structure, the data processing apparatus further comprises a communication interface for communicating with other devices or with the host, for example to send request messages, receive response messages, and send notification messages.
In a fourth aspect, embodiments of this application further provide a computing system that includes a data processing apparatus and a host, where the data processing apparatus includes a DPU connected to the host through a system bus, for example through a PCIe bus to the host's backplane or motherboard. The DPU has the functionality to implement the behavior in the method examples of the first aspect; for the beneficial effects, refer to the description of the first aspect, which is not repeated here.
In particular implementations, the data processing apparatus may be deployed as part of a host within the host. For example, the DPU is connected to the back plane or motherboard of the host via a PCIe bus.
In a fifth aspect, the present application also provides a computer readable storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the method of the first aspect and each possible implementation of the first aspect.
In a sixth aspect, the present application also provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method as described in the first aspect and in the various possible implementations of the first aspect.
In a seventh aspect, the present application further provides a computer chip, where the chip is connected to a memory, and the chip is configured to read and execute a software program stored in the memory, and perform the method described in the first aspect and each possible implementation manner of the first aspect.
Drawings
FIG. 1 is a schematic diagram of a computing system architecture provided in this application;
FIG. 2 is a schematic diagram of a storage space expansion method provided in this application;
FIG. 3 is a schematic diagram of sending a request message provided in this application;
FIG. 4 is a schematic diagram of sending a response message provided in this application;
FIG. 5 is a schematic structural diagram of a data processing apparatus provided in this application.
Detailed Description
As shown in FIG. 1, an embodiment of this application provides a schematic system architecture. The system 10 includes a host 20 and a data processing apparatus 30. Host 20 is a conventional computer device, including but not limited to a personal computer, a server, a cell phone, a tablet computer, or a smart car.
The host 20 can communicate with devices outside the system and receive data those devices send to it with a request to process the data. In this embodiment, both the host 20 and the data processing apparatus 30 have data processing functions: the host 20 may process the data itself, or it may hand the data to the data processing apparatus 30 for processing. After the host 20 or the data processing apparatus 30 completes the processing, the result may be fed back to the other device.
The data processing apparatus 30 is connected to the host 20. It may be an external device of the host 20, or it may be deployed inside the host 20, located on the host's motherboard or backplane. The data processing apparatus 30 (for example, the DPU 301 in it) exchanges data with the host 20 (for example, the processor 201 in it) through a bus 300, which may be a peripheral component interconnect express (PCIe) bus, a compute express link (CXL) bus, a universal serial bus (USB), or a bus of another protocol.
The data processing apparatus 30 can serve as a module with data processing functions attached to the host 20, taking over part of the host's functionality. That is, part of the functions of the host 20 are offloaded to the data processing apparatus 30, which processes data and performs tasks in place of the host 20, relieving the pressure on the host 20, in particular on the processor 201, and freeing up the processor 201's computing power. The embodiments of this application do not limit which functions the data processing apparatus 30 can implement in place of the host 20.
For example, the data processing apparatus 30 can be plugged into the host 20 as the host's network card. As a network card it can process data packets according to network protocols and implement the encapsulation and transmission of data. The data processing apparatus 30 can also support data security and can encrypt and decrypt data. It can further serve as a storage portal: distributed storage of data and remote access to data are implemented through the data processing apparatus 30, which can access local storage devices of the host 20 (such as hard disks and other devices used for persistent storage) and can also access remote storage devices (such as devices in a remote storage system) through a network.
The data processing apparatus 30 and the host 20 each contain a memory. The memory 302 in the data processing apparatus 30 provides storage space to support the data processing operations of the data processing apparatus 30; similarly, the memory 202 in the host 20 provides storage space to support the data processing operations of the host 20.
In the embodiment of this application, the data processing apparatus 30 is allowed to "borrow" storage space in the host 20 (also simply called host storage space) according to its own needs; that is, it can use storage space of the memory 202 in the host 20 to support its own data processing operations. The data processing apparatus 30 applies to the host 20 for a host storage space, which is a storage space in the memory 202; the host 20 sends the address of that storage space to the data processing apparatus 30, which maps the host storage space into its own storage space and uses it.
When the data processing apparatus 30 no longer needs the storage space in the memory 202, it releases the host storage space and notifies the host 20 of the release. Upon notification, the host 20 reclaims the host storage space.
The data processing apparatus 30 can thus apply for or release host storage space as its needs change. The host 20 no longer needs to reserve a fixed-size storage space for it and allocates storage space only when the data processing apparatus 30 applies for it. Because the data processing apparatus 30 takes storage space from the memory 202 on demand, excessive occupation of the memory 202 is avoided and waste of storage space is reduced.
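The contrast with a fixed reservation can be shown with a small accounting model (all names are illustrative): the host's free pool shrinks only while pages are actually borrowed and is restored as soon as they are released.

```python
class HostMemory:
    """Toy page accounting for the memory 202: lend on demand, reclaim on notification."""

    def __init__(self, total_pages: int):
        self.free = total_pages
        self.borrowed = {}                 # borrower -> number of pages currently lent

    def allocate(self, borrower: str, pages: int) -> bool:
        if pages > self.free:
            return False                   # nothing is reserved up front
        self.free -= pages
        self.borrowed[borrower] = self.borrowed.get(borrower, 0) + pages
        return True

    def reclaim(self, borrower: str) -> None:
        """On the release notification, the host takes the space back for its own use."""
        self.free += self.borrowed.pop(borrower, 0)
```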
The internal structure of the host 20 and the data processing apparatus 30 will be described below.
Host 20 includes an I/O interface 203, a processor 201, and a memory 202. The I/O interface 203 is used to communicate with devices outside the host 20: an external device may send data to the host 20 through the I/O interface 203, and after the host 20 processes the input data, the output result is sent back to the external device through the I/O interface 203.
The processor 201 is the computation and control core of the host 20 and may be a central processing unit (CPU) or another integrated circuit. It may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
The memory 202 typically stores the computer program instructions running in the operating system of the host 20, data to be processed, data processing results, and the like. To keep the processor 201's accesses fast, the memory 202 must itself be fast; dynamic random access memory (DRAM) is therefore typically used. Besides DRAM, the memory 202 may be another random access memory, such as static random access memory (SRAM), or a read-only memory (ROM), such as programmable read-only memory (PROM) or erasable programmable read-only memory (EPROM). The memory 202 may also be a flash memory medium, a hard disk drive (HDD), a solid state drive (SSD), or the like.
This embodiment does not limit the number of memories 202. The processor 201 is connected to the memory 202 through a double data rate (DDR) bus or another type of bus. The memory 202 can be understood as the internal memory of the host 20, also called main memory. By invoking computer program instructions in the memory 202, the processor 201 performs the parts of the method that the host 20 executes in the embodiment shown in FIG. 2 below.
The data processing apparatus 30 comprises a data processing unit (DPU) 301 and a memory 302, which are connected by a system bus; the bus may be a PCIe bus or a bus of the CXL, USB, or another protocol.
The DPU 301 is the main arithmetic unit and core of the data processing apparatus 30 and carries its main functions. For example, some functions of the host 20 may be offloaded to the DPU 301, which processes the data and performs the tasks the host 20 hands over to the data processing apparatus 30. By invoking computer program instructions in the memory 302, the DPU 301 performs the parts of the method that it executes in the embodiment shown in FIG. 2 below.
In this embodiment, the DPU301 may also apply for the host storage space according to its own requirements, or release the host storage space.
Similar to the memory 202, the memory 302 supports the data processing operations of the DPU 301 and provides it with data storage space; it stores the computer program instructions the DPU 301 needs to invoke, data to be processed, data processing results, and the like. The types available for the memory 302 are the same as for the memory 202 and are described above, so they are not repeated here. The memory 302 can be understood as the internal memory of the data processing apparatus 30.
In the memory 302 and the memory 202, data is organized at the granularity of pages; that is, the physical storage space of each memory is divided into pages. The DPU 301 accesses the memory 302 at page granularity, and the processor 201 accesses the memory 202 at page granularity.
As shown in FIG. 1, the data processing apparatus 30 may be inserted directly into a card slot on the motherboard of the host 20 and exchange data with the processor 201 through the PCIe bus 204. Note that the PCIe bus 204 in FIG. 1 can be replaced with a compute express link (CXL) bus, a universal serial bus (USB), or a bus of another protocol to carry the data transfers of the data processing apparatus 30.
In the data processing apparatus 30, to facilitate management of the memory 302, its physical storage space is mapped into a virtual storage space. An address in the physical storage space is called a physical address, and an address in the virtual storage space is called a logical address. The DPU 301 is deployed with a management unit for the memory 302. The management unit manages the memory 302, including but not limited to: managing the mapping relationship between physical and logical addresses, applying to the host 20 for storage space (at page granularity), allocating storage space (at page granularity) for software on the DPU 301, and reading and writing data.
The management operation of the management unit is described below:
(1) Management of the mapping relationship between physical addresses and logical addresses.
The management unit implements the mapping between the physical storage space and the virtual storage space: it establishes mapping relationships between physical and logical addresses and deletes established mappings.
Thanks to the management unit, a software program (also called software) on the DPU 301 only needs to perceive logical addresses; the conversion between logical and physical addresses is performed by the management unit. For example, when a software program running on the DPU 301 applies for storage space, the management unit provides the program with the logical address of the allocated space. As another example, when the program needs to write data into a storage space, it provides the management unit with a logical address and the data to be written; the management unit converts the logical address into a physical address based on the established mapping relationship and writes the data at that physical address. Likewise, when the program needs to read data from a storage space, it provides a logical address to the management unit, which converts it into a physical address, reads the data there, and feeds the data back to the program.
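The translation role of the management unit can be sketched like this; the class and method names are hypothetical, and the memory is modelled as a dictionary keyed by physical address.

```python
class ManagementUnit:
    """Toy model: software sees only logical addresses; the unit translates and accesses memory."""

    def __init__(self, memory: dict):
        self.memory = memory
        self.l2p = {}                        # logical address -> physical address

    def map(self, logical: int, physical: int) -> None:
        self.l2p[logical] = physical         # establish a mapping relationship

    def unmap(self, logical: int) -> None:
        del self.l2p[logical]                # delete an established mapping

    def write(self, logical: int, data) -> None:
        self.memory[self.l2p[logical]] = data

    def read(self, logical: int):
        return self.memory[self.l2p[logical]]
```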
(2) Allocating storage space for software on the DPU 301.
A software program on the DPU 301 may need storage space during operation for data processing operations accompanied by data reads or writes, for example reading data to be processed or writing processed results. The program applies to the management unit for storage space according to its own needs.
Note that software programs on the DPU 301 apply for storage space at page granularity; that is, the size of the requested storage space is an integer multiple of the page size.
The management unit determines the free physical storage space in the memory 302, allocates it to the requesting software, maps its physical address to a logical address, saves the mapping relationship, and provides the logical address to the software running on the DPU 301.
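Page-granularity allocation with the implied rounding can be sketched as follows; the 4 KiB page size and the function names are assumptions for illustration.

```python
PAGE = 4096  # assumed page size


def pages_needed(nbytes: int) -> int:
    """Round a request up to a whole number of pages."""
    return -(-nbytes // PAGE)   # ceiling division


def allocate_local(free_phys: list, nbytes: int):
    """Hand out free physical pages from the memory 302, or None if there are too few."""
    n = pages_needed(nbytes)
    if n > len(free_phys):
        return None             # the caller would then apply to the host instead
    return [free_phys.pop() for _ in range(n)]
```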
(3) Applying to the host 20 for storage space.
When the software program on the DPU301 applies to the management unit for storage space, the free physical storage space in the memory 302 may be insufficient to support the application, that is, the size of the storage space applied for by the software program on the DPU301 is larger than the size of the free physical storage space in the memory 302. In this case, the management unit may apply to the host 20 for host storage space; the host 20 allocates storage space in the memory 202 to the management unit, and sends the address of that storage space in the memory 202 (the address may be a physical address of the storage space) to the management unit.
After acquiring the physical address of the storage space in the memory 202, the management unit maps the physical address to a logical address, saves the mapping relationship between the physical address and the logical address, and provides the logical address to the software program running on the DPU 301.
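The allocation logic of steps (2) and (3) can be sketched together: applications are rounded up to whole pages, and when the free space of the memory 302 is insufficient the management unit applies to the host 20 instead. This is a hedged sketch; PAGE_SIZE, the free-list layout, and the `request_host_pages` callback are illustrative assumptions rather than anything defined in the patent:

```python
# Hedged sketch of page-granularity allocation with fallback to the host.
PAGE_SIZE = 4096  # assumed page size; memory 302 and memory 202 use the same size

class PageAllocator:
    def __init__(self, free_dpu_pages, request_host_pages):
        self.free_dpu = list(free_dpu_pages)          # free physical pages in memory 302
        self.request_host_pages = request_host_pages  # callable: npages -> host page addresses
        self.mapping = {}                             # logical -> (location, physical)
        self.next_logical = 0

    def apply(self, size_bytes):
        # Applications are rounded up to an integer multiple of the page size.
        npages = -(-size_bytes // PAGE_SIZE)  # ceiling division
        if npages <= len(self.free_dpu):
            phys, where = [self.free_dpu.pop() for _ in range(npages)], "dpu"
        else:
            # Free space in memory 302 is insufficient: apply to host 20,
            # which returns physical addresses of pages in memory 202.
            phys, where = self.request_host_pages(npages), "host"
        logicals = []
        for p in phys:
            self.mapping[self.next_logical] = (where, p)
            logicals.append(self.next_logical)
            self.next_logical += 1
        return logicals
```

A 5000-byte application, for example, would be rounded up to two pages before the free space is checked.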
(4) And data reading and writing.
At the request of a software program on the DPU301, the management unit writes data into, or reads data from, the storage space applied for by the software on the DPU301.
When a software program running on the DPU301 needs to write data into the storage space, the software running on the DPU301 initiates a write instruction to the management unit, where the write instruction carries a logical address and the data to be written. The management unit converts the logical address into a physical address based on a mapping relationship between the physical address and the logical address, and writes the data to the physical address.
When a software program running on the DPU301 needs to read data from the storage space, the software running on the DPU301 initiates a read instruction to the management unit, wherein the read instruction carries a logical address, the management unit converts the logical address into a physical address based on a mapping relationship between the physical address and the logical address, reads the data from the physical address, and feeds the data back to the software.
The management unit may be understood as a software program running on the DPU301, or as part of the hardware modules of the DPU301. The embodiments of the present application do not limit the specific form of the management unit.
The following describes a method for applying for storage space provided in the embodiment of the present application with reference to fig. 2. The embodiment shown in fig. 2 includes two parts: in one part, the DPU301 applies to the host 20 for storage space, see steps 201 to 204; in the other part, the DPU301 requests the host 20 to release storage space, see steps 205 to 206.
Step 201: the DPU301 initiates a request message to the host 20 to apply for host storage space; the request message also indicates the size of the host storage space that needs to be applied for.
When the minimum unit of data storage in both memory 302 and memory 202 is a page, the size of the page in memory 302 is the same as the size of the page in memory 202, and the size of the storage space can be represented by the number of pages.
One or more software programs may run on the DPU301. The running of these software programs requires storage space, into which the software programs write data or from which they read data. The DPU301 may perform step 201 when it determines that the memory 302 does not have sufficient free storage space.
For example, within DPU301, when a software program on DPU301 begins running, the software program may apply for memory to the management unit and inform the management unit of the size of the memory that the management unit needs to apply for, and the management unit initiates a request message to host 20 if it is determined that there is no free memory in memory 302.
For another example, within DPU301, when a software program running on DPU301 needs to expand the memory space during the running process, the software program applies for the memory space to the management unit and informs the management unit of the size of the memory space that needs to be expanded, and the management unit may initiate a request message to host 20 if it determines that there is no free memory space in memory 302.
The manner in which DPU301 interacts with host 20 is not limited in embodiments of the present application. The interaction between DPU301 and host 20 may be based on existing protocols.
For example, DPU301 and host 20 may interact based on a virtual input/output (virtual I/O) protocol. The virtual I/O protocol defines a generic IO virtualization model. virtual I/O is an abstraction of a set of generic simulation devices in a paravirtualized hypervisor (hypervisor). Through this IO virtualization model, devices (such as the data processing device 30 of the embodiment of the present application) connected to the host 20 are unified so as to implement unified management, maintenance, expansion, and the like. Thus, these devices implement the functions provided by the virtual I/O protocol definitions, avoiding updating the hypervisor of the host 20.
As shown in fig. 3, the DPU301 deploys a virtual I/O back-end driver (virtual I/O backend driver) based on the virtual I/O protocol, and the host 20 deploys a virtual I/O front-end driver (virtual I/O frontend driver). Interaction between the DPU301 and the host 20 may be achieved through virtual I/O queues maintained jointly by the virtual I/O back-end driver and the virtual I/O front-end driver. The virtual I/O queues include a request queue and a reply queue: the request queue carries request instructions added by the virtual I/O front-end driver, and the reply queue carries reply instructions added by the virtual I/O back-end driver for those requests.
When the host 20 needs to initiate a request instruction to the DPU301, the host 20 adds the request instruction to the request queue of the virtual I/O queue through the virtual I/O front-end driver, so that the virtual I/O back-end driver deployed in the DPU301 obtains the request instruction from the virtual I/O queue; the DPU301 processes the request instruction and places the processing result in the reply queue of the virtual I/O queue as a reply instruction.
Based on the virtual I/O protocol, the virtual I/O back-end driver can return a reply instruction to the virtual I/O front-end driver through the virtual I/O queue only after the virtual I/O front-end driver has issued a request instruction to the virtual I/O back-end driver through the virtual I/O queue.
In this embodiment of the present application, to enable the DPU301 and the host 20 to continue to use the interaction manner shown in fig. 3 for sending the request message, the virtual I/O front-end driver deployed on the host 20 adds a request instruction to the virtual I/O queue in advance (the request instruction may not indicate any actual content). In this way, when the DPU301 subsequently needs to apply for storage space in the memory 202 of the host 20, the virtual I/O back-end driver deployed in the DPU301 can add the application message for the storage space to the virtual I/O queue as a reply instruction.
That is, when the management unit determines that there is no free storage space in the memory 302, the management unit initiates the request message to the virtual I/O back-end driver, which adds the request message to the virtual I/O queue as a reply instruction.
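The pre-posted placeholder mechanism described above can be modeled with a toy queue; the VirtIOQueue class and message fields below are illustrative assumptions, not the actual virtual I/O data structures:

```python
from collections import deque

class VirtIOQueue:
    """Toy model of the jointly maintained queues: the front end (host) adds
    request instructions, and the back end (DPU) may only add a reply after a
    request has been posted."""
    def __init__(self):
        self.requests = deque()
        self.replies = deque()

def frontend_prepost(queue):
    # The host front-end driver posts a request in advance; it carries no
    # real content, but it entitles the back end to send one reply later.
    queue.requests.append({"placeholder": True})

def backend_apply_storage(queue, npages):
    # The DPU back-end driver consumes the pre-posted request and replies
    # with its application for host storage space.
    if not queue.requests:
        raise RuntimeError("no pre-posted request: back end cannot reply")
    queue.requests.popleft()
    queue.replies.append({"type": "apply_storage", "pages": npages})
```

The point of the trick is visible in the model: without a pre-posted request, the back end has no request to answer and cannot initiate the application on its own.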
The manner in which DPU301 sends the request message to host 20 is merely an example, and in practical applications, DPU301 and host 20 may interact based on other protocols. Alternatively, a new protocol may be redefined between DPU301 and host 20, based on which interactions are performed.
It should be understood that the interaction between DPU301 and host 20 is essentially the interaction between DPU301 and processor 201 in host 20, and for convenience of description, the interaction between DPU301 and processor 201 in host 20 is collectively referred to herein as the interaction between DPU301 and host 20.
Step 202: after receiving the request message, the host 20 requests host memory space for the DPU301 from the memory 202 of the host 20. If the request message carries the number of pages to be applied, the host 20 applies for a corresponding number of pages from the memory 202.
Taking the case where the host 20 and the DPU301 interact based on the virtual I/O protocol as an example, after the virtual I/O front-end driver deployed in the host 20 obtains the request message from the virtual I/O queue, the virtual I/O front-end driver applies for the storage space in the memory 202. In this case, the memory 202 regards the host storage space as being used by the virtual I/O front-end driver and is not explicitly aware that it is used by the DPU301. Storage space is configured in the memory 202 for the virtual I/O front-end driver; the size of the host storage space is consistent with the size indicated in the request message, and the physical address of the host storage space (e.g., the physical address of a page) is fed back to the virtual I/O front-end driver.
Step 203: host 20 sends a response message to DPU301 that carries the address of the host memory space. Such as the address of a page in memory 202 may be carried in the response message.
Taking the host 20 and the DPU301 as an example based on the virtual I/O protocol, as shown in fig. 4, the virtual I/O front-end driver obtains the address of the host memory space from the memory 202, and then adds the address of the host memory space as a request instruction to the virtual I/O queue.
In addition, to facilitate subsequent applications by the DPU301 for storage space in the memory 202 of the host 20, the virtual I/O front-end driver may also add another request instruction (which may not indicate any actual content) to the virtual I/O queue.
Step 204: upon receiving the response message, DPU301 obtains the address of the host memory space, maps the host memory space to the memory space of DPU301, and uses the host memory space. For example, DPU301 maps the address of the host memory space to a logical address, and feeds back the logical address to the software program in DPU301 that applied for the memory space.
After acquiring the response message, the management unit in the DPU301 acquires the physical address of the host memory space from the response message, maps the physical address to a logical address, and stores the mapping relationship between the physical address and the logical address.
In the embodiment of the present application, the memory 302 includes a host page cache, which is used to store data of the storage space of the host 20; the host storage space is the storage space in the memory 202 that the DPU301 applied for from the host 20. In the case where the memory 202 and the memory 302 both store data at page granularity, the host page cache may be used to store data of the pages in the memory 202 that the DPU301 applied for from the host 20. The memory 302 accesses the host storage space by direct memory access (direct memory access, DMA), for example, reading data from the host storage space and storing the read data in the host page cache, or writing the data stored in the host page cache into the host storage space.
After the software program running on the DPU301 acquires the logical address, data reading and writing are performed on the logical address.
In the foregoing description it was mentioned that the data processing device 30 is capable of assuming part of the functionality of the host 20. While assuming this part of the functionality, the DPU301 is able to use this host storage space to perform some data processing operations of the data processing device 30.
The DPU301 can obtain the data stored in the host storage space and process the data. For example, the DPU301 may encapsulate the acquired data based on a network protocol and send the encapsulated data to a device outside the DPU301; the DPU301 may also encrypt the data and store the encrypted data in the host storage space, or send the encrypted data to a device outside the DPU. The DPU301 may also access external storage devices to persist data; for example, the DPU301 may store data from the host storage space on a local storage device of the host or on a remote storage device connected to the DPU301 via a network.
When the DPU301 needs to acquire data stored in the host storage space, the DPU301 instructs the memory 302 to store the data from the host storage space in the host page cache by way of DMA. Inside the DPU301, a software program that needs to use the data initiates a read instruction to the management unit, where the read instruction carries the logical address of the host storage space. The management unit determines the physical address from the logical address and instructs the memory 302 to read data from the host page cache based on the physical address. If the data currently stored in the host page cache is the data of the host storage space, the memory 302 reads the data directly from the host page cache. If it is not (for example, the data has already been written back to the memory 202 of the host 20 by DMA), the memory 302 reads the data from the host storage space by DMA according to the address of the host storage space and writes it to the host page cache. The memory 302 then reads the data from the host page cache and feeds it back to the management unit, which feeds it back to the software program.
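A hedged sketch of this read path: `dma_read` below stands in for the DMA transfer from the memory 202 of the host 20, and all identifiers are assumptions since the patent names no such helpers:

```python
# Sketch of the read path through the host page cache. A read is served from
# the cache when the page is present; otherwise the page is first fetched
# from host memory by (simulated) DMA and cached.

class HostPageCacheReader:
    def __init__(self, dma_read):
        self.pages = {}          # host physical address -> cached page data
        self.dma_read = dma_read # callable: host physical address -> page data

    def read(self, host_phys):
        if host_phys not in self.pages:
            # Cache does not hold this host page: fetch it by DMA first.
            self.pages[host_phys] = self.dma_read(host_phys)
        # Subsequent reads are served from the host page cache directly.
        return self.pages[host_phys]
```

The second read of the same host page hits the cache and triggers no further DMA transfer.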
The DPU301 is capable of storing processed data in the host storage space. For example, the DPU301 decapsulates acquired data based on a network protocol and stores the decapsulated data in the host storage space; for another example, the DPU301 decrypts acquired encrypted data and stores the decrypted data in the host storage space. For another example, the DPU301 accesses an external storage device, such as a local storage device of the host or a remote storage device connected to the DPU301 via a network, reads data from the external storage device, and stores the read data in the host storage space.
When DPU301 needs to store data in the host memory space, DPU301 instructs memory 302 to store the data in the host page cache of memory 302, after which memory 302 stores the data in the host page cache in the host memory space by way of DMA. Inside the DPU301, a software program that needs to store the data initiates a write instruction to the management unit indicating the logical address and the data that needs to be written. After receiving the write command, the management unit determines a physical address according to the logical address, and if the physical address is the address of the storage space of the host 20, the management unit instructs the memory 302 to write the data into the host page buffer. If the current host page buffer is not full, i.e. there is free memory space in the host page buffer, the memory 302 directly writes the data into the host page buffer.
If the host page cache is full, the memory 302 writes the data in the host page cache back to the storage space of the host 20 by DMA. After the write-back, the memory 302 writes the new data into the host page cache.
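The write path, including the write-back when the host page cache is full, can be sketched as follows; the capacity and the eviction order are illustrative assumptions:

```python
# Sketch of the write path: new data goes into the host page cache, and when
# the cache is full a cached page is first written back to host memory 202 by
# (simulated) DMA to free a slot.

class HostPageCacheWriter:
    def __init__(self, capacity, dma_write):
        self.capacity = capacity
        self.pages = {}            # host physical address -> page data
        self.dma_write = dma_write # callable(host_phys, data): DMA write-back

    def write(self, host_phys, data):
        if host_phys not in self.pages and len(self.pages) >= self.capacity:
            # Cache full: write one cached page back to the host storage
            # space by DMA, then reuse its slot for the new data.
            victim, victim_data = next(iter(self.pages.items()))
            self.dma_write(victim, victim_data)
            del self.pages[victim]
        self.pages[host_phys] = data
```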
Step 205: DPU301 initiates a notification message to host 20 that informs host 20 to free up memory space, the notification message also carrying the address of the memory space. When the minimum unit of data storage in both memory 302 and memory 202 is a page, the size of the page in memory 302 is the same as the size of the page in memory 202, and the size of the storage space can be represented by the number of pages. The physical address of the page that needs to be freed serves as the physical address of the memory space.
When a software program on the DPU301 finishes running or stops running, the software program may apply to release the storage space that it previously applied for. The storage space that the software program on the DPU301 applies to release may be all of the storage space it previously applied for, or only a part of it.
The software program on the DPU301 may apply to the management unit to release the storage space and inform the management unit of the logical addresses of the storage space that needs to be released.
After obtaining the logical address of the host storage space, the management unit deletes the mapping relationship between the logical address and the physical address of the host storage space. If data of the host storage space is still cached in the host page cache of the memory 302, the management unit marks that data in the host page cache as invalid data.
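The release bookkeeping on the DPU side can be sketched as follows (all names are assumptions): the mapping is deleted, any cached copy is marked invalid, and the host is notified with the physical address:

```python
# Illustrative sketch of step 205's bookkeeping on the DPU side: delete the
# logical-to-physical mapping, invalidate any cached copy in the host page
# cache, then notify the host with the address being released.

INVALID = object()  # sentinel marking invalid data in the host page cache

def release_host_space(mapping, page_cache, logical, notify_host):
    physical = mapping.pop(logical)      # delete logical -> physical mapping
    if physical in page_cache:
        page_cache[physical] = INVALID   # mark cached data as invalid
    notify_host(physical)                # notification carries the address
    return physical
```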
Taking the host 20 and the DPU301 as examples based on the virtual I/O protocol, the management unit initiates the notification message to the virtual I/O back-end driver, which adds the notification message as a reply instruction to the virtual I/O queue.
Step 206: after receiving the notification message, the host 20 releases the host memory.
Taking the host 20 and the DPU301 as examples based on the virtual I/O protocol, after the virtual I/O front end driver deployed in the host 20 obtains the notification message from the virtual I/O queue, the virtual I/O front end driver instructs the memory 202 to release the storage space. The memory 202 frees up the host memory space according to the virtual input output front end drive indication.
Based on the same inventive concept as the method embodiment, the present application further provides a data processing apparatus, which is configured to perform the method performed by the DPU301 in the method embodiment shown in fig. 2; for relevant features, reference may be made to the method embodiment, which is not repeated here. The data processing apparatus may be understood as a software form of the management unit mentioned in the foregoing description. As shown in fig. 5, the data processing apparatus 500 includes a transmission module 501 and a mapping module 502.
A transmission module 501, configured to initiate a request message to a host, where the request message is used to apply for a storage space of the host; and acquiring a response message fed back by the host, wherein the response message carries an address of a host storage space, and the host storage space is positioned in a memory of the host.
The mapping module 502 is configured to map the host storage space for use as storage space of the DPU, where the DPU is connected to the host through a system bus.
In one possible implementation, when mapping the physical address of the host storage space to the storage space of the DPU, the mapping module 502 may map the address of the host storage space to the logical address of the storage space of the DPU, and establish a mapping relationship between the address of the host storage space and the logical address of the storage space of the DPU.
In one possible implementation, when the data processing apparatus 500 releases the host storage space, the mapping module 502 deletes the mapping relationship between the physical address and the logical address; the transmission module 501 sends a notification message to the host, notifying the host that the host storage space is released.
In one possible implementation, the request message also indicates the size of the host storage space.
In a possible implementation, the DPU is connected to the memory via a system bus, the memory comprises a host page buffer, and the data processing apparatus 500 further comprises a writing module 503, the writing module 503 being capable of writing data in a host memory space. When writing data into the host memory space, the writing module 503 writes data into the host page buffer, and writes data in the host page buffer into the host memory space by DMA.
In one possible implementation, the data processing apparatus 500 includes a reading module 504, where the reading module 504 is capable of reading data from the host memory space, and the reading module 504 writes the data from the host memory space to the host page buffer by DMA when reading the data.
In one possible implementation, upon freeing the host memory space, the mapping module 502 marks the data in the host memory space stored in the host page cache as invalid data.
In one possible implementation, the transmission module 501 interacts with the host based on the virtual I/O protocol.
In a possible implementation, the data processing apparatus 500 further includes a first processing module 505, where the first processing module 505 is capable of performing some data processing operations on data in the host storage space. For example, the first processing module 505 acquires data stored in the host storage space, and performs part or all of the following:
encapsulating the data and sending the encapsulated data to a device outside the DPU;
encrypting the data and sending the encrypted data to a device outside the DPU;
storing the data in a local storage device of the host or in a remote storage device connected to the DPU via a network.
In a possible implementation, the data processing apparatus 500 further includes a second processing module 506, where the second processing module 506 may store the processed data in the storage space. For example, the second processing module 506 decapsulates the data packet from the external device, and writes the data obtained after decapsulation into the host storage space. For another example, the second processing module 506 decrypts data from the external device and writes the decrypted data into the host storage space. For another example, the second processing module 506 reads data in a local storage device of the host or a remote storage device of the host, and stores the read data in the host storage space.
The data processing apparatus 500 according to the embodiments of the present application may correspond to the apparatus that performs the methods described in the embodiments of the present application, and the foregoing and other operations and/or functions of each module in the apparatus 500 are respectively for implementing the corresponding flows of the method in fig. 2, which are not repeated here for brevity.
It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation. The functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any other combination. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer program instructions. When loaded or executed on a computer, produces a flow or function in accordance with embodiments of the present invention, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer program instructions may be stored in or transmitted from one computer readable storage medium to another, for example, by wired (e.g., coaxial cable, optical fiber, digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.) means from one website, computer, server, or data center. The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains one or more sets of available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium. The semiconductor medium may be a solid state disk (solid state drive, SSD).
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as an apparatus, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of apparatus, devices (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (24)

1. A method for expanding a memory space of a DPU, the method being used for expanding a memory space of a data processing unit DPU, the DPU being connected to a host via a system bus, comprising:
initiating a request message to the host, wherein the request message is used for applying for a host storage space;
acquiring a response message fed back by the host, wherein the response message carries an address of a storage space of the host, and the storage space of the host is positioned in a memory of the host;
mapping the host storage space for use as storage space of the DPU.
2. The method of claim 1, wherein the method further comprises:
and sending a notification message to the host, wherein the notification message is used for notifying the release of the host storage space.
3. The method of claim 1 or 2, wherein the request message further indicates a size of the host storage space.
4. The method as claimed in any one of claims 1 to 3, wherein the DPU is connected to a memory via a system bus, the memory comprising a host page cache, and after the obtaining the response message fed back by the host, the method further comprises:
writing data in the host page cache, and writing the data in the host page cache into the host storage space through Direct Memory Access (DMA).
5. The method as claimed in any one of claims 1 to 3, wherein the DPU is connected to a memory via a system bus, the memory comprising a host page cache, and after the obtaining the response message fed back by the host, the method further comprises:
and writing the data of the host memory space into the host page cache through a direct memory access DMA, and reading the data from the host page cache.
6. The method of claim 4 or 5, wherein the method further comprises:
and releasing the host storage space, and marking the data in the host storage space stored in the host page cache as invalid data.
7. The method of any of claims 1-6, wherein the DPU and the host interact based on a virtual input/output (virtual I/O) protocol.
8. The method of any of claims 1-7, wherein the mapping the host storage space for use as storage space of the DPU specifically comprises:
mapping the address of the host memory space to a logical address of the memory space of the DPU.
9. The method of any one of claims 1-8, wherein the method further comprises:
acquiring data stored in the host storage space, and executing part or all of the following:
encapsulating the data and sending the encapsulated data to equipment outside the DPU;
encrypting the data and sending the encrypted data to equipment outside the DPU;
storing the data in a local storage device of the host or a remote storage device connected with the DPU through a network.
10. The method of any one of claims 1-9, wherein the method further comprises:
performing part or all of the following:
unpacking a data packet from equipment outside the DPU, and writing the unpacked data into the host memory space;
decrypting data from equipment outside the DPU, and writing the decrypted data into the host storage space;
and reading data in the local storage equipment of the host or the remote storage equipment of the host, and storing the read data in the storage space of the host.
11. A data processing apparatus, characterized in that the data processing apparatus comprises:
the transmission module is used for initiating a request message to the host, wherein the request message is used for applying for the storage space of the host; acquiring a response message fed back by the host, wherein the response message carries an address of a storage space of the host, and the storage space of the host is positioned in a memory of the host;
the mapping module is configured to map the host storage space for use as storage space of the DPU, where the DPU and the host are connected through a system bus.
12. The apparatus of claim 11, wherein:
the mapping module is further configured to: releasing the host memory space;
the transmission module is further configured to: and sending a notification message to the host, wherein the notification message is used for notifying the release of the host storage space.
13. The apparatus of claim 11 or 12, wherein the request message further indicates a size of the host storage space.
14. The apparatus of any one of claims 11-13, wherein the DPU is coupled to a memory via a system bus, the memory comprising a host page cache, and the data processing apparatus further comprises a write module configured to:
write data into the host page cache, and write the data from the host page cache into the host storage space through direct memory access (DMA).
15. The apparatus of any one of claims 11-13, wherein the DPU is coupled to a memory via a system bus, the memory comprising a host page cache, and the data processing apparatus further comprises a read module configured to:
write the data of the host storage space into the host page cache through direct memory access (DMA), and read the data from the host page cache.
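The write and read paths of claims 14 and 15 both stage data in a host page cache held in the DPU's memory, with DMA moving pages between that cache and the host storage space. The sketch below models the cache as a dict and DMA as slice copies; real hardware would use descriptor rings, and the 4 KiB page size is an assumption.

```python
# Toy model of the claim-14/15 page-cache data paths. The PageCache dict and
# slice-copy "DMA" are illustrative stand-ins for real DPU hardware.
class PageCache:
    def __init__(self):
        self.pages = {}  # page_addr -> page contents (bytes)

def dma_write(cache: PageCache, host_mem: bytearray, addr: int, size: int) -> None:
    """Claim-14 write path: flush a cached page into the host storage space."""
    host_mem[addr:addr + size] = cache.pages[addr]

def dma_read(cache: PageCache, host_mem: bytearray, addr: int, size: int) -> None:
    """Claim-15 read path: fill the cache from the host storage space."""
    cache.pages[addr] = bytes(host_mem[addr:addr + size])

host_mem = bytearray(8192)           # stand-in for the host storage space
cache = PageCache()
cache.pages[0] = b"x" * 4096         # DPU writes data into the page cache...
dma_write(cache, host_mem, 0, 4096)  # ...then DMA pushes it to host storage
dma_read(cache, host_mem, 0, 4096)   # read path: DMA refills the cache
```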
16. The apparatus of claim 14 or 15, wherein the mapping module is further configured to:
release the host storage space, and mark the data of the host storage space stored in the host page cache as invalid data.
17. The apparatus of any one of claims 11-16, wherein the transmission module and the host interact based on a virtualized input/output (I/O) protocol.
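Claim 17's virtualized I/O interaction can be pictured as a pair of shared queues in the virtio style: the DPU posts a request descriptor on one queue and the host answers on a companion queue. The descriptor fields and the fixed reply address below are illustrative only; this is not the virtio specification.

```python
# Toy virtio-style exchange for claim 17: shared request/response queues
# between the DPU's transmission module and the host. Fields are assumed.
from collections import deque

request_q: deque = deque()   # DPU -> host descriptors
response_q: deque = deque()  # host -> DPU descriptors

def dpu_send_request(size: int) -> None:
    """DPU posts an allocation request descriptor on the shared queue."""
    request_q.append({"op": "alloc", "size": size})

def host_poll() -> None:
    """Host drains the request queue and posts responses."""
    while request_q:
        req = request_q.popleft()
        if req["op"] == "alloc":
            # 0x2000 is an arbitrary illustrative host address
            response_q.append({"op": "alloc_ok", "addr": 0x2000,
                               "size": req["size"]})

dpu_send_request(4096)
host_poll()
resp = response_q.popleft()  # DPU consumes the host's response
```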
18. The apparatus of any one of claims 11-17, wherein, when mapping the storage space of the host to the storage space of the DPU, the mapping module is configured to:
map the address of the host storage space to a logical address of the storage space of the DPU.
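The mapping of claim 18 — a host address exposed as a logical address in the DPU's own storage space — can be modeled with a simple extent table that translates contiguous logical ranges back to host addresses. The extent structure and the sample addresses are assumptions for illustration.

```python
# Sketch of the claim-18 mapping module: a logical-to-host extent table.
# Offsets, sizes, and the linear-search lookup are illustrative only.
class MappingModule:
    def __init__(self):
        self.extents = []     # (logical_base, host_base, size) tuples
        self.next_logical = 0

    def map_host_space(self, host_addr: int, size: int) -> int:
        """Map a host region into the DPU's logical space; return its base."""
        logical = self.next_logical
        self.extents.append((logical, host_addr, size))
        self.next_logical += size
        return logical

    def translate(self, logical_addr: int) -> int:
        """Translate a DPU logical address back to the host address."""
        for lbase, hbase, size in self.extents:
            if lbase <= logical_addr < lbase + size:
                return hbase + (logical_addr - lbase)
        raise KeyError("logical address not mapped")

m = MappingModule()
base = m.map_host_space(host_addr=0x8000_0000, size=0x1000)
```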
19. The apparatus of any one of claims 11-18, wherein the data processing apparatus further comprises a first processing module configured to acquire data stored in the host storage space and to perform part or all of the following:
encapsulating the data and sending the encapsulated data to a device outside the DPU;
encrypting the data and sending the encrypted data to a device outside the DPU;
storing the data in a local storage device of the host or in a remote storage device connected to the DPU through a network.
20. The apparatus of any one of claims 11-18, wherein the data processing apparatus further comprises a second processing module configured to: decapsulate a data packet from a device outside the DPU, and write the decapsulated data into the host storage space; decrypt data from a device outside the DPU, and write the decrypted data into the host storage space; and read data from a local storage device of the host or a remote storage device of the host, and store the read data in the host storage space.
21. A data processing device, characterized in that the data processing device comprises a data processing unit (DPU) and a power supply circuit configured to supply power to the DPU, wherein the DPU is connected to a host and is configured to perform the method according to any one of claims 1-10.
22. The device of claim 21, wherein the data processing device further comprises a memory coupled to the DPU via a system bus, the memory comprising some or all of:
a random access memory (RAM), a read-only memory (ROM), a dynamic random access memory (DRAM), a flash medium, and a hard disk.
23. The device of claim 21, wherein the DPU is connected to a backplane or motherboard of the host via a peripheral component interconnect express (PCIe) bus.
24. A computing system, comprising a host and a DPU connected to the host via a system bus, wherein the DPU is configured to perform the method according to any one of claims 1-10.
CN202210998035.3A 2022-08-19 2022-08-19 Storage space expansion method, device and system of DPU Pending CN117631958A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210998035.3A CN117631958A (en) 2022-08-19 2022-08-19 Storage space expansion method, device and system of DPU
PCT/CN2023/101068 WO2024037172A1 (en) 2022-08-19 2023-06-19 Storage space expansion method, apparatus and system for dpu

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210998035.3A CN117631958A (en) 2022-08-19 2022-08-19 Storage space expansion method, device and system of DPU

Publications (1)

Publication Number Publication Date
CN117631958A (en) 2024-03-01

Family

ID=89940592

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210998035.3A Pending CN117631958A (en) 2022-08-19 2022-08-19 Storage space expansion method, device and system of DPU

Country Status (2)

Country Link
CN (1) CN117631958A (en)
WO (1) WO2024037172A1 (en)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103793330B (en) * 2012-10-31 2017-03-01 国际商业机器公司 The method and apparatus carrying out data exchange in a virtual machine environment
CN103677674B (en) * 2013-12-27 2017-01-04 华为技术有限公司 A kind of data processing method and device
US9294567B2 (en) * 2014-05-02 2016-03-22 Cavium, Inc. Systems and methods for enabling access to extensible storage devices over a network as local storage via NVME controller
CN105094997B (en) * 2015-09-10 2018-05-04 重庆邮电大学 Physical memory sharing method and system between a kind of cloud computing host node
CN107077426B (en) * 2016-12-05 2019-08-02 华为技术有限公司 Control method, equipment and the system of reading and writing data order in NVMe over Fabric framework
CN106959893B (en) * 2017-03-31 2020-11-20 联想(北京)有限公司 Accelerator, memory management method for accelerator and data processing system
CN114089926B (en) * 2022-01-20 2022-07-05 阿里云计算有限公司 Management method of distributed storage space, computing equipment and storage medium
CN114490433A (en) * 2022-01-20 2022-05-13 哲库科技(上海)有限公司 Management method of storage space, data processing chip, device and storage medium

Also Published As

Publication number Publication date
WO2024037172A1 (en) 2024-02-22

Similar Documents

Publication Publication Date Title
CN110647480B (en) Data processing method, remote direct access network card and equipment
US10534552B2 (en) SR-IOV-supported storage resource access method and storage controller and storage device
CN108984465B (en) Message transmission method and device
US10496427B2 (en) Method for managing memory of virtual machine, physical host, PCIE device and configuration method thereof, and migration management device
WO2020247042A1 (en) Network interface for data transport in heterogeneous computing environments
US9973335B2 (en) Shared buffers for processing elements on a network device
US9164804B2 (en) Virtual memory module
US10951741B2 (en) Computer device and method for reading or writing data by computer device
CN105786589A (en) Cloud rendering system, server and method
WO2015180598A1 (en) Method, apparatus and system for processing access information of storage device
CN113742269B (en) Data transmission method, processing device and medium for EPA device
WO2019141157A1 (en) Inter-core data transmission apparatus and method
EP3036648B1 (en) Enhanced data transfer in multi-cpu systems
WO2023103704A1 (en) Data processing method, storage medium, and processor
WO2014100954A1 (en) Method and system for data controlling
WO2015055117A1 (en) Method, device, and system for accessing memory
CN109582592B (en) Resource management method and device
US11249934B2 (en) Data access method and apparatus
CN117631958A (en) Storage space expansion method, device and system of DPU
EP4105771A1 (en) Storage controller, computational storage device, and operational method of computational storage device
TW202230140A (en) Method to manage memory and non-transitory computer-readable medium
CN116418848A (en) Method and device for processing configuration and access requests of network nodes
CN108228496B (en) Direct memory access memory management method and device and master control equipment
CN113722110B (en) Computer system, memory access method and device
JP7197212B2 (en) Information processing device, information processing method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination