CN113971158A - Network card-based memory access method, device and system - Google Patents


Info

Publication number
CN113971158A
Authority
CN
China
Prior art keywords
memory
access
access request
attribute information
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111152532.3A
Other languages
Chinese (zh)
Inventor
宋东洋
付斌章
王利虎
刘振军
Current Assignee
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba China Co Ltd
Priority to CN202111152532.3A
Publication of CN113971158A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 15/00 — Digital computers in general; Data processing equipment in general
    • G06F 15/16 — Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F 15/163 — Interprocessor communication
    • G06F 15/173 — Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
    • G06F 15/17306 — Intercommunication techniques
    • G06F 15/17331 — Distributed shared memory [DSM], e.g. remote direct memory access [RDMA]
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 — Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 — Information transfer, e.g. on bus
    • G06F 13/42 — Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F 13/4282 — Bus transfer protocol on a serial bus, e.g. I2C bus, SPI bus
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2213/00 — Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 2213/0026 — PCI express

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the present application provide a network card-based memory access method, device, and system. In these embodiments, the memory attribute information of the memory to be accessed can be determined from the access request, and the memory is then accessed over the access path adapted to that attribute information, thereby separating the access paths of the volatile memory and the persistent memory. The access path adapted to the volatile memory passes through the processor cache, so the low latency of the processor cache reduces volatile-memory access latency; conversely, the access path adapted to the persistent memory bypasses the processor cache, removing intermediate steps and improving persistent-memory access efficiency. Overall, the embodiments of the present application help improve memory access efficiency.

Description

Network card-based memory access method, device and system
Technical Field
The present application relates to the field of computer technology, and in particular to a network card-based memory access method, device, and system.
Background
With the development of non-volatile memory and Remote Direct Memory Access (RDMA) technologies, data centers with high storage performance and low-latency network access have become a trend. Persistent memory offers low latency, among other characteristics, which matches the high bandwidth and low latency of high-performance networks. Persistent memory is therefore widely used for data storage in data centers.
In the prior art, writes to persistent memory may pass through the processor cache. However, under high bandwidth load, software writes to persistent memory (running on the processor) and peripheral writes to persistent memory contend for the processor cache, which ultimately degrades the overall external performance of the software system and prevents the persistent memory from delivering its full device performance. Alternatively, the processor cache can be disabled entirely so that peripherals write directly to persistent memory without passing through it. However, disabling the processor cache removes cache acceleration from all memory accesses, degrading the system's overall external service capability.
Disclosure of Invention
Various aspects of the present application provide a network card-based memory access method, device, and system for separating the access paths of the volatile memory and the persistent memory, which helps improve memory access efficiency.
An embodiment of the present application provides a memory access method, including:
acquiring a first access request;
determining memory attribute information of a memory to be accessed according to the first access request;
accessing the memory to be accessed according to the access path adapted to the memory attribute information and the first access request;
wherein the access path adapted to the volatile memory includes a processor cache, and the access path adapted to the persistent memory does not include a processor cache.
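The core selection step of the method above can be sketched as follows. This is an illustrative Python model only; the names (`MemoryType`, `select_access_path`) and the path labels are assumptions, not part of the patent.

```python
from enum import Enum

class MemoryType(Enum):
    VOLATILE = "volatile"
    PERSISTENT = "persistent"

def select_access_path(request: dict) -> list:
    """Return the ordered components the access traverses, based on the
    memory attribute information carried in the access request."""
    mem_type = MemoryType(request["memory_attribute"])
    if mem_type is MemoryType.VOLATILE:
        # The path adapted to volatile memory includes the processor cache.
        return ["processor_cache", "volatile_memory"]
    # The path adapted to persistent memory bypasses the processor cache.
    return ["memory_controller", "persistent_memory"]
```

For example, a request whose attribute is "persistent" would be routed around the processor cache and straight to the persistent memory.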
The embodiment of the present application further provides a memory access method based on a network card, including:
the network card acquires a first access request; determining memory attribute information of a memory to be accessed according to the first access request; performing protocol conversion on the first access request according to the memory attribute information to obtain a second access request following a communication protocol between the network card and the processor; providing the second access request to the processor;
the processor accesses the memory to be accessed over the access path adapted to the memory attribute information carried by the second access request;
wherein the access path adapted to the volatile memory includes a processor cache, and the access path adapted to the persistent memory does not include a processor cache.
An embodiment of the present application further provides a memory access method, including:
acquiring memory attribute information of a memory to be accessed;
generating an access request according to the memory attribute information;
and providing the access request to other computer equipment so that the other computer equipment can access the memory to be accessed according to the access path adapted to the memory attribute information.
An embodiment of the present application further provides a data processing system, including: client equipment and server equipment;
wherein the client device is configured to: acquiring memory attribute information of a memory to be accessed; generating an access request according to the memory attribute information; providing the access request to the server-side equipment;
the server device is configured to: determine the memory attribute information of the memory to be accessed according to the access request; and access the memory to be accessed according to the access path adapted to the memory attribute information and the access request; wherein the access path adapted to the volatile memory includes a processor cache, and the access path adapted to the persistent memory does not include a processor cache.
An embodiment of the present application further provides a computer device, including: the system comprises a processor, a network card, a volatile memory and a persistent memory;
the processor is coupled to the network card, the volatile memory, and the persistent memory to: acquire a first access request through the network card; determine memory attribute information of a memory to be accessed according to the first access request; and access the memory to be accessed according to the access path adapted to the memory attribute information and the first access request; wherein the access path adapted to the volatile memory includes a processor cache, and the access path adapted to the persistent memory does not include a processor cache.
An embodiment of the present application further provides a computer device, including: a memory, a processor and a network card;
wherein the memory is used for storing a computer program;
the processor is coupled to the memory and the network card for executing the computer program for: acquiring memory attribute information of a memory to be accessed; generating an access request according to the memory attribute information; and providing the access request to other computer equipment through the network card so that the other computer equipment can access the memory to be accessed according to the access path adapted to the memory attribute information.
In the embodiments of the present application, the memory attribute information of the memory to be accessed can be determined from the access request, and the memory is then accessed over the access path adapted to that attribute information, thereby separating the access paths of the volatile memory and the persistent memory. The access path adapted to the volatile memory passes through the processor cache, so the low latency of the processor cache reduces volatile-memory access latency; conversely, the access path adapted to the persistent memory bypasses the processor cache, removing intermediate steps and improving persistent-memory access efficiency. Overall, the embodiments of the present application help improve memory access efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a block diagram of a data processing system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an access path provided by an embodiment of the present application;
fig. 3 and fig. 4a are schematic flow charts of a memory access method according to an embodiment of the present application;
fig. 4b is a schematic flowchart of a memory access method based on a network card according to an embodiment of the present application;
fig. 5 and fig. 6 are schematic structural diagrams of a computer device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In order to improve memory access efficiency, a scheme is provided that separates the access paths of the volatile memory and the persistent memory, wherein the access path adapted to the volatile memory includes a processor cache, and the access path adapted to the persistent memory does not include a processor cache. In the embodiments of the present application, the memory attribute information of the memory to be accessed can be determined from the access request, and the memory is then accessed over the access path adapted to that attribute information, thereby separating the two access paths. On the volatile path, the low latency of the processor cache reduces volatile-memory access latency; on the persistent path, bypassing the processor cache removes intermediate steps and improves persistent-memory access efficiency. Overall, the embodiments of the present application help improve memory access efficiency.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
It should be noted that: like reference numerals refer to like objects in the following figures and embodiments, and thus, once an object is defined in one figure or embodiment, further discussion thereof is not required in subsequent figures and embodiments.
Fig. 1 is a schematic structural diagram of a data processing system according to an embodiment of the present application. As shown in fig. 1, the data processing system includes: client device 10 and server device 20.
The client device 10 and the server device 20 may be connected wirelessly or by wire. Optionally, the server device 20 may be communicatively connected to the client device 10 through a mobile network; accordingly, the network standard of the mobile network may be any of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), 5G, WiMAX, and the like. Alternatively, the client device 10 may be communicatively connected to the server device 20 through Bluetooth, WiFi, infrared, and so on. Of course, the client device 10 and the server device 20 may also be connected through a high-speed Virtual Private Network (VPN), and the like.
In the present embodiment, the client device 10 and the server device 20 are logical clients and servers. The client device 10 may be a single server device having a client function in a data center, a cloud server array, or a Virtual Machine (VM) running in the cloud server array.
In this embodiment, the server device 20 is a computer device capable of performing data management, responding to a service request from the client device 10, and providing a service corresponding to the service request to a user, and generally has the capability of undertaking and securing the service. The server device 20 may be a single server device, a cloud server array, or a Virtual Machine (VM) running in the cloud server array. The server device 20 may also refer to other computing devices having corresponding service capabilities, such as a terminal device (running a service program) such as a computer.
In the embodiment of the present application, as shown in fig. 1, the client device 10 may include a memory 101 and a processor 102. The memory 101 may be implemented by any form of storage medium. It may include volatile memory, such as Random Access Memory (RAM); of course, it may also include persistent memory, and so on.
In this embodiment, the client device 10 may access the memory of the server device 20. Alternatively, the client device 10 may access the memory of the server device 20 via RDMA techniques. Accordingly, the client device 10 may be configured with a network card 103. The network card 103 may be an RDMA network card.
In the embodiment of the present application, as shown in fig. 1, the server device 20 may include: a network card 201, a processor 202, volatile memory 203, and persistent memory 204. The network card 201, the volatile memory 203 and the persistent memory 204 are respectively in communication connection with the processor 202. Optionally, the network card 201, the volatile memory 203, and the persistent memory 204 may be communicatively connected to the processor 202 through PCIe interfaces, respectively. The network card 201 may be a Remote Direct Memory Access (RDMA) network card.
In the embodiment of the present application, volatile memory, one of the important components of a computer, is also called internal memory or main memory; it temporarily stores operational data for the processor and data exchanged with external storage such as a hard disk. It is the bridge between external storage media and the processor (such as a central processing unit, CPU). All programs in a computer device run in memory, so memory performance strongly affects the computer's overall performance. When the computer device runs, software loads the data to be operated on from memory into the processor; when the operation completes, the processor writes the result back. After the computer device is powered off, the data in the volatile memory is lost.
Persistent memory behaves differently: after the computer device is powered off, data written into the persistent memory is not lost and can be read again after the computer device restarts.
The processor may be the Central Processing Unit (CPU) of the computer device; it is the core of the device's computation and control and the final execution unit for information processing and program operation. Software runs on the processor, and through the processor the software can access the volatile memory and the persistent memory. As shown in FIG. 2, the processor 202 has a cache area, referred to as the processor cache for short. The processor cache has lower access latency than both volatile memory and persistent memory, so caching data in it reduces access latency and improves access efficiency. In particular, the Data Direct I/O (DDIO) module in the Last Level Cache (LLC) of the processor cache can accelerate peripheral access to memory data: the network card can access memory data through DDIO, and if the processor cache is hit, the memory itself need not be accessed, which improves data access speed. Here, a peripheral refers to an external facility other than the memory, such as a network card, a hard disk, or the like.
In this embodiment, the network card has information forwarding and logic processing capabilities. The network card may include a processing unit. In the embodiment of the present application, the implementation form of the processing unit of the network card is not limited. In some embodiments, the processing unit of the network card may be an ASIC chip, FPGA, or the like, but is not limited thereto.
In general, the access latency of the processor cache is about 10 ns; the access latency of volatile memory is about 80 ns to 100 ns; and the access latency of persistent memory is about 350 ns. Accordingly, the access latency of volatile memory is about 8 to 10 times that of the processor cache, while the access latency of persistent memory is about 35 to 40 times that of the processor cache. Since the processor cache is itself a volatile storage medium, when data is destined for volatile memory, writing it into the processor cache may be regarded as writing it into the volatile memory.
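The ratios quoted above follow directly from the stated figures, which are approximate and used here purely for illustration:

```python
# Approximate latency figures from the description above.
CACHE_NS = 10          # processor cache access latency, ns
DRAM_NS = (80, 100)    # volatile memory access latency range, ns
PMEM_NS = 350          # persistent memory access latency, ns

# Volatile memory is about 8-10x slower than the processor cache;
# persistent memory is about 35x slower (the text says 35-40x).
dram_ratio = (DRAM_NS[0] / CACHE_NS, DRAM_NS[1] / CACHE_NS)
pmem_ratio = PMEM_NS / CACHE_NS
```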
However, in the scenario of writing to persistent memory, software writes (running on the processor) and peripheral writes to persistent memory contend for the processor cache, which ultimately degrades the overall external performance of the software system and prevents the persistent memory from delivering its full device performance.
In other schemes, the processor cache may be disabled outright, so that no peripheral write to persistent memory passes through it. Disabling the processor cache this way leaves all peripheral accesses under the corresponding processor socket without cache acceleration, degrading the device's overall external service capability. Moreover, accesses to volatile memory can then no longer be accelerated by the processor cache, so access efficiency is low.
Furthermore, because the minimum unit of the processor cache is 64 bytes while the minimum unit of persistent memory is 256 bytes, small persistent data blocks held in the processor cache may be overwritten by volatile memory data written into the cache; the persistent data in the processor cache is then replaced by volatile memory data, and data intended for persistent memory is lost due to this update. To recover, data newly written to persistent memory must be read back into the processor cache, modified there, and then flushed back to persistent memory. This read-modify-write update increases the write latency of the persistent memory.
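The cost of that read-modify-write cycle can be estimated from the two unit sizes above. The following sketch is illustrative only; the function name and traffic model are assumptions, not from the patent.

```python
import math

CACHE_LINE = 64   # bytes: minimum unit of the processor cache
PMEM_BLOCK = 256  # bytes: minimum unit of the persistent memory

def rmw_media_traffic(dirty_bytes: int) -> int:
    """Bytes moved on the persistent-memory media to persist `dirty_bytes`
    via the cached path: each touched 256-byte block is read back into
    the cache, modified, and then written out again (one read plus one
    write per block)."""
    if dirty_bytes <= 0:
        return 0
    blocks = math.ceil(dirty_bytes / PMEM_BLOCK)
    return 2 * blocks * PMEM_BLOCK
```

Under this model, persisting a single 64-byte cache line moves 512 bytes on the media, an 8x amplification over the dirty data itself.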
In the embodiments of the present application, in order to improve memory access efficiency, a scheme of separating the access paths of the volatile memory and the persistent memory is provided, wherein the access path adapted to the volatile memory includes a processor cache and the access path adapted to the persistent memory does not. In this way, when the memory to be accessed is volatile memory, it can be accessed through the processor cache; when the memory to be accessed is persistent memory, it can be accessed directly. On the volatile path, the low latency of the processor cache reduces volatile-memory access latency; on the persistent path, bypassing the processor cache removes intermediate steps and improves persistent-memory access efficiency. Overall, the embodiments of the present application help improve memory access efficiency. The memory access method provided by the embodiments of the present application is described by way of example below.
In this embodiment, when the client device 10 accesses the memory of the server device 20, it may first obtain the memory attribute information of the memory to be accessed. The memory attribute information describes how the memory stores data and may indicate either volatile memory or persistent memory. The memory to be accessed is a memory of the server device 20, which may be the volatile memory 203 or the persistent memory 204, as determined by the processor 102 of the client device 10.
In the embodiment of the present application, in order to implement access path separation between a volatile memory and a persistent memory, a Queue Pair (QP) memory attribute is added to an RDMA communication library to identify memory attribute information, that is, to identify whether the memory is a volatile memory or a persistent memory. Based on this, the client device 10 may generate an access request according to the memory attribute information.
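The added QP memory attribute can be pictured as an extra field on the queue pair object. This is a minimal sketch under stated assumptions: `QueuePair` and `create_qp` are illustrative names, not a real RDMA verbs API.

```python
from dataclasses import dataclass

@dataclass
class QueuePair:
    qp_num: int
    memory_attribute: str  # "volatile" or "persistent"

def create_qp(qp_num: int, memory_attribute: str) -> QueuePair:
    """Create a QP carrying the memory-attribute field added to the
    RDMA communication library in this scheme."""
    if memory_attribute not in ("volatile", "persistent"):
        raise ValueError("unknown memory attribute")
    return QueuePair(qp_num, memory_attribute)
```

An access request generated on such a QP then inherits the attribute, identifying which kind of memory it targets.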
Optionally, the processor 102 may generate a first access request following a communication protocol between the processor 102 and the network card 103 according to the memory attribute information; and provides the first access request to the network card 103. In some embodiments, the network card 103 is an RDMA network card and follows the PCIe protocol with the processor 102. Based on this, the processor 102 may generate a first access request complying with the PCIe protocol according to the memory attribute information of the memory to be accessed. Specifically, the processor 102 may encapsulate the memory attribute information of the memory to be accessed into a header field of a PCIe Transaction Layer Packet (TLP), and encapsulate the access content into a data portion of the PCIe TLP, so as to obtain the PCIe TLP, i.e., the first access request. Further, the processor 102 may provide the first access request to the network card 103.
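The encapsulation step above (attribute into a header field, access content into the data portion) can be sketched as follows. The one-byte flag layout is an assumption for illustration and is not the actual PCIe TLP format.

```python
# Illustrative flag values for the memory-attribute header field.
ATTR_FLAGS = {"volatile": 0x00, "persistent": 0x01}

def encapsulate_request(memory_attribute: str, content: bytes) -> bytes:
    """Pack the memory attribute into a one-byte header field, followed
    by the access content as the data portion (TLP-style, simplified)."""
    return bytes([ATTR_FLAGS[memory_attribute]]) + content

def parse_request(packet: bytes):
    """Recover the memory attribute and the access content."""
    attribute = next(k for k, v in ATTR_FLAGS.items() if v == packet[0])
    return attribute, packet[1:]
```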
For the network card 103, the memory attribute information of the memory to be accessed can be analyzed from the first access request; further, the network card 103 may perform protocol conversion on the first access request according to the memory attribute information to obtain a second access request conforming to a network protocol. Specifically, the network card 103 may parse a packet body of the first access request from the first access request; and packaging the message body of the first access request according to the network protocol by taking the memory attribute information of the memory to be accessed as a message header field to obtain a second access request following the network protocol.
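The network card's protocol conversion (parse the body out of the first request, re-wrap it under the network protocol with the attribute as a header field) might look like this sketch; the byte layout and field names are assumptions matching the illustrative packing above, not a real network protocol.

```python
def convert_to_network_request(first_request: bytes) -> dict:
    """Network-card side: parse the message body out of the first
    (PCIe-style) request, whose first byte is an illustrative
    memory-attribute flag, and re-encapsulate it under the network
    protocol with the attribute promoted into the header."""
    attribute = "persistent" if first_request[0] == 0x01 else "volatile"
    body = first_request[1:]
    return {"header": {"memory_attribute": attribute}, "body": body}
```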
Further, the client device 10 may provide the access request to the server device 20. Specifically, the client device 10 may provide the second access request to the server device 20 through the network card 103.
Regarding RDMA network cards: RDMA builds on host-based Direct Memory Access (DMA) and, during multi-host communication, lets a requester host remotely access the memory of a responder host using direct memory access techniques. The RDMA-enabled network card is responsible for managing the reliable connection between source and target. The messaging service is established on a Channel-IO connection created between the local and remote applications of the communicating parties. When an application needs to communicate, a channel connection is created, and the two endpoints of the channel are a pair of QPs, one at each end. Each QP consists of a Send Queue (SQ) and a Receive Queue (RQ), in which various types of messages are managed. The QP is mapped into the application's virtual address space, so the application can access the network card directly through it.
Based on the above analysis, in the embodiment of the present application, for the RDMA network card, the client device 10 and the server device 20 may establish different QP links in advance to transmit access requests of different memory attribute information. In the embodiment of the present application, a QP link corresponding to a volatile memory and a QP link corresponding to a persistent memory may be pre-established between the client device 10 and the server device 20. Based on the QP link, the client device 10 may provide the second access request to the server device 20 through the QP link corresponding to the memory attribute information of the memory to be accessed.
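Routing a request onto the pre-established QP link that matches its memory attribute can be sketched as below; the link table and names are illustrative assumptions.

```python
# Pre-established QP links, one per memory attribute (names illustrative).
QP_LINKS = {"volatile": "qp_volatile", "persistent": "qp_persistent"}

def pick_qp_link(request: dict) -> str:
    """Select the QP link matching the memory attribute carried in the
    second access request's header."""
    return QP_LINKS[request["header"]["memory_attribute"]]
```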
Accordingly, the server device 20 may receive the second access request and determine the memory attribute information of the memory to be accessed according to it. Optionally, the server device 20 may parse the memory attribute information of the memory to be accessed out of the second access request. The server device 20 may then access the memory to be accessed according to the access path adapted to the memory attribute information and the second access request. As shown in fig. 1, the access path adapted to the volatile memory includes a processor cache, and the access path adapted to the persistent memory does not. In this way, when the memory to be accessed is volatile memory, the server device 20 may access the volatile memory 203 through the processor cache according to the second access request; when the memory to be accessed is persistent memory, the server device 20 may access the persistent memory directly according to the second access request.
In the embodiments of the present application, the specific form of the access request is not limited. In some embodiments, if the access request is a memory read request, the server device 20 may read the requested data from the memory to be accessed over the access path adapted to that memory's attribute information. The access request may of course also be a memory write request, in which case the server device 20 writes the data to be written carried in the second access request into the memory to be accessed over the access path adapted to the memory attribute information.
For the case that the access request is a memory write request, on the server device 20 the network card 201 may receive the second access request provided by the client device 10 and parse the memory attribute information of the memory to be accessed out of it; then, according to that memory attribute information, it may perform protocol conversion on the second access request to obtain a third access request that follows the communication protocol between the network card 201 and the processor 202. This embodiment does not limit the specific communication protocol between the network card 201 and the processor 202. In some embodiments it is the PCIe protocol, in which case the network card 201 may convert the second access request according to the PCIe protocol standard and the memory attribute information to obtain a PCIe TLP, and so on.
Specifically, the network card 201 may further parse, from the second access request, data to be written included in the second access request; and then, using the memory attribute information as a message header field, and encapsulating the data to be written according to a communication protocol between the network card 201 and the processor 202 to obtain a third access request.
The network card 201 may then provide a third access request to the processor 202. Alternatively, the network card 201 may provide the third access request to the processor 202 through a connection channel (e.g., PCIe high speed channel) with the processor 202.
Accordingly, the processor 202 may parse the data to be written and the memory attribute information from the third access request; and then, writing the data to be written into the memory to be accessed according to the access path adapted to the memory attribute information.
Specifically, as shown in fig. 2, when the memory attribute information of the memory to be accessed indicates a volatile memory, the processor 202 may write the data to be written into the volatile memory 203 through a Data Direct I/O (DDIO) module. The processor 202 first writes the data into the DDIO module; since both the volatile memory and the DDIO module are volatile storage media, writing the data into the DDIO module may already be regarded as a successful write. Further, the processor 202 may control the DDIO module to write the data into the volatile memory 203.
When the memory attribute information of the memory to be accessed indicates a persistent memory, the processor 202 may write the data to be written directly into the persistent memory 204. Optionally, as shown in fig. 2, the processor 202 may write the data to be written directly into the persistent memory 204 through an Integrated Memory Controller (IMC). Because the data is written into the persistent memory 204 without passing through a processor cache (such as the DDIO module), the intermediate links of persistent memory access are reduced and the access efficiency of the persistent memory is improved.
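The two access paths can be modeled with a small dispatch function. This is a sketch with hypothetical names: the real path selection is performed by the processor, the DDIO module, and the IMC in hardware, represented here by plain dictionaries.

```python
# Hypothetical encodings of the memory attribute information.
VOLATILE, PERSISTENT = 0, 1

def write_memory(mem_attr, addr, data, ddio_cache, volatile_mem, persistent_mem):
    """Dispatch a write along the path adapted to the memory attribute.
    Returns the list of stations the write passes through."""
    path = []
    if mem_attr == VOLATILE:
        ddio_cache[addr] = data                # write into the processor cache (DDIO);
        path.append("ddio")                    # at this point the write counts as done
        volatile_mem[addr] = ddio_cache[addr]  # DDIO writes back to volatile memory
        path.append("volatile_mem")
    else:
        persistent_mem[addr] = data            # direct write via the IMC,
        path.append("persistent_mem")          # bypassing the processor cache
    return path
```

Note that the persistent path never touches `ddio_cache`, which is exactly the separation the embodiment relies on to avoid cache contention.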
On the other hand, the access delay of the persistent memory 204 is far longer than that of the DDIO module, roughly 35-40 times as long. If data were written into the persistent memory 204 through the DDIO module, the data to be written would stay in the DDIO module for a long time, causing contention for processor cache resources and degrading the performance of the volatile memory. In this embodiment, the access paths of the volatile memory and the persistent memory are separated, and the access path of the persistent memory does not pass through the processor cache, so the persistent memory does not compete with the volatile memory for processor cache resources, which helps improve the performance of the volatile memory.
In the embodiment of the present application, since the access path of the volatile memory passes through the processor cache, data in the processor cache may be lost if the server device 20 suffers a power failure before that data is preserved. To solve this problem, an energy storage device, such as a capacitor battery, may be added to the motherboard of the server device 20, so that when the server device 20 is powered down, the energy storage device can continue to supply power to the motherboard, and during this period the data in the processor cache is saved to the persistent memory 204. Based on this, the processor 202 may write data in the processor cache to the persistent memory 204 in response to a power-down event. Specifically, in some embodiments, the processor 202 may use Asynchronous DRAM Refresh (ADR) technology to write the data in the Write Pending Queue (WPQ) of the processor's IMC to the persistent memory 204. In other embodiments, the processor 202 may use Enhanced ADR (eADR) technology to write the data in the processor cache (CPU cache) to the persistent memory 204. The whole process can be completed within 100 μs. In this way, when the server device 20 is powered down, writing the data in the processor cache into the persistent memory 204 prevents that data from being lost due to the power failure, which helps improve data security.
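The power-failure protection can be sketched as a flush routine. This is a hypothetical software model of the behavior, not the ADR/eADR hardware mechanism itself; the function name and dictionary representation are illustrative assumptions.

```python
def on_power_down(processor_cache, persistent_mem):
    """Drain cached data (e.g. WPQ entries or cache lines) to persistent
    memory while the energy storage device keeps the motherboard powered.
    Returns the number of entries flushed."""
    flushed = 0
    for addr, data in list(processor_cache.items()):
        persistent_mem[addr] = data  # each cached entry becomes durable
        flushed += 1
    processor_cache.clear()          # cache contents are now safe in PMEM
    return flushed
```

The energy storage device only needs to power the board for the duration of this flush (under 100 μs in the embodiments above), which is why a small capacitor battery suffices.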
It should be noted that the data processing system above takes the client device and the server device only as examples to illustrate the memory access process. The data processing logic of the client device is applicable to any request sending end, and the data processing logic of the server device is applicable to any request receiving end. In the embodiment of the present application, the memory of the request receiving end includes volatile memory and persistent memory.
In addition to the foregoing data processing system embodiment, an embodiment of the present application further provides a memory access method. The memory access method provided in the embodiment of the present application is described below in an exemplary manner with reference to specific embodiments, from the perspective of the request sending end and the perspective of the request receiving end, respectively.
Fig. 3 is a schematic flowchart of a memory access method according to an embodiment of the present application. The memory access method applies to the request sending end. As shown in fig. 3, the method includes:
301. Acquire the memory attribute information of the memory to be accessed.
302. Generate an access request according to the memory attribute information.
303. Provide the access request to another computer device, so that the other computer device can access the memory to be accessed according to the access path adapted to the memory attribute information.
In this embodiment, the request sending end may be implemented as any form of computer device, such as a terminal device or a server device, and may access the memory of another computer device, optionally via RDMA technology. Correspondingly, the request sending end may be provided with a network card, which may be an RDMA network card.
In this embodiment, when the request sending end accesses the memory of another computer device, in step 301, the memory attribute information of the memory to be accessed may be obtained. The memory attribute information describes the storage characteristics of the memory and may indicate a volatile memory or a persistent memory.
In the embodiment of the present application, in order to separate the access paths of the volatile memory and the persistent memory, a QP (queue pair) memory attribute is added to the RDMA communication library to identify the memory attribute information, that is, to identify whether the memory is a volatile memory or a persistent memory. Based on this, in step 302, an access request may be generated according to the memory attribute information.
Optionally, at the request sending end, the processor may generate, according to the memory attribute information, a first access request that follows the communication protocol between the processor and the network card, and provide the first access request to the network card. In some embodiments, the network card is an RDMA network card and the PCIe protocol is used between the network card and the processor. Based on this, the processor may generate a first access request that complies with the PCIe protocol according to the memory attribute information of the memory to be accessed. Specifically, the processor may encapsulate the memory attribute information of the memory to be accessed into a header field of a PCIe TLP and encapsulate the access content into the data portion of the PCIe TLP; the resulting PCIe TLP is the first access request. Further, the processor may provide the first access request to the network card.
The network card may parse the memory attribute information of the memory to be accessed from the first access request, and then perform protocol conversion on the first access request according to the memory attribute information to obtain a second access request that follows the network protocol. Specifically, the network card may parse the packet body of the first access request, and, using the memory attribute information of the memory to be accessed as a header field, encapsulate the packet body according to the network protocol to obtain the second access request.
Further, in step 303, the access request may be provided to other computer devices. Specifically, the second access request may be provided to the other computer device through the network card.
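The sender-side flow above can be sketched as follows. The `QueuePair` class and its method names are hypothetical illustrations of the QP memory attribute added to the RDMA communication library; the real library API is not specified in this document.

```python
# Hypothetical encodings of the memory attribute information.
VOLATILE, PERSISTENT = 0, 1

class QueuePair:
    """Sketch of a QP tagged with the memory attribute: every request
    generated on the QP inherits the tag, so the receiving end can pick
    the adapted access path."""

    def __init__(self, mem_attr):
        self.mem_attr = mem_attr  # QP memory attribute (volatile/persistent)

    def build_first_request(self, address, payload):
        # Header carries the memory attribute information; the body carries
        # the access content, mirroring the PCIe TLP layout described above.
        return {"header": {"mem_attr": self.mem_attr, "address": address},
                "body": payload}

qp = QueuePair(PERSISTENT)
first_request = qp.build_first_request(0x2000, b"hello")
```

Tagging the QP once, rather than every request, means applications select the access path at connection setup and the per-request fast path stays unchanged.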
For the request receiving end, the memory may be accessed according to the access path adapted to the memory attribute information carried by the received access request. The memory access logic of the request receiving end is illustratively described below.
Fig. 4a is a schematic flowchart of another memory access method according to an embodiment of the present application. The method applies to the request receiving end. The method includes the following steps:
401. Acquire the second access request.
402. Determine the memory attribute information of the memory to be accessed according to the second access request.
403. Access the memory to be accessed according to the second access request and the access path adapted to the memory attribute information, where the access path adapted to the volatile memory includes the processor cache and the access path adapted to the persistent memory does not include the processor cache.
In the embodiment of the present application, the request receiving end may also be implemented as any form of computer device, such as a terminal device or a server device. Accordingly, in step 401, the second access request may be received, and in step 402, the memory attribute information of the memory to be accessed may be determined according to the second access request; optionally, the memory attribute information may be parsed from the second access request. Further, in step 403, the memory to be accessed may be accessed according to the second access request and the access path adapted to the memory attribute information. The access path adapted to the volatile memory includes the processor cache, while the access path adapted to the persistent memory does not. In this way, when the memory to be accessed is a volatile memory, the volatile memory is accessed through the processor cache according to the second access request; when the memory to be accessed is a persistent memory, the persistent memory is accessed directly according to the second access request.
In this embodiment, when the memory to be accessed is a volatile memory, the volatile memory may be accessed through the processor cache, and when it is a persistent memory, the persistent memory may be accessed directly. For the access path adapted to the volatile memory, the low-latency characteristic of the processor cache reduces the access delay of the volatile memory; on the other hand, since the access path of the persistent memory does not pass through the processor cache, the intermediate links of persistent memory access are reduced and the access efficiency of the persistent memory is improved.
The embodiment of the present application does not limit the specific form of the access request. In some embodiments, the access request is a memory read request, and the data requested by the second access request may be read from the memory to be accessed according to the access path adapted to the memory attribute information of the memory to be accessed. The access request may also be a memory write request, in which case the data to be written contained in the second access request may be written into the memory to be accessed according to the access path adapted to the memory attribute information of the memory to be accessed.
When the access request is a memory write request, at the request receiving end the network card may receive the second access request, parse the memory attribute information of the memory to be accessed from it, and then perform protocol conversion on the second access request according to that memory attribute information to obtain a third access request that follows the communication protocol between the network card and the processor.
Specifically, the network card may further parse, from the second access request, the data to be written contained therein, and then, using the memory attribute information as a header field, encapsulate the data to be written according to the communication protocol between the network card and the processor to obtain the third access request.
The network card may then provide the third access request to the processor of the request receiving end, optionally through a connection channel with the processor (e.g., a PCIe high-speed channel).
Correspondingly, the processor may parse the data to be written and the memory attribute information from the third access request, and then write the data to be written into the memory to be accessed according to the access path adapted to the memory attribute information.
Specifically, when the memory attribute information of the memory to be accessed indicates a volatile memory, the processor may write the data to be written into the volatile memory through a Data Direct I/O (DDIO) module. The processor first writes the data into the DDIO module; since both the volatile memory and the DDIO module are volatile storage media, writing the data into the DDIO module may already be regarded as a successful write, and because the access delay of the DDIO module is low, this improves the write efficiency of volatile data compared with a memory write operation not accelerated by the DDIO module. Further, the processor may control the DDIO module to write the data into the volatile memory.
When the memory attribute information of the memory to be accessed indicates a persistent memory, the processor may write the data to be written directly into the persistent memory, optionally through an Integrated Memory Controller (IMC). Because the data is written into the persistent memory without passing through a processor cache (such as the DDIO module), the intermediate links of persistent memory access are reduced and the access efficiency of the persistent memory is improved.
On the other hand, the access delay of the persistent memory is far longer than that of the DDIO module, roughly 35-40 times as long. If data were written into the persistent memory through the DDIO module, the data to be written would stay in the DDIO module for a long time, causing contention for processor cache resources and degrading the performance of the volatile memory. In this embodiment, the access paths of the volatile memory and the persistent memory are separated, and the access path of the persistent memory does not pass through the processor cache, so the persistent memory does not compete with the volatile memory for processor cache resources, which helps improve the performance of the volatile memory.
In the embodiment of the present application, since the access path of the volatile memory passes through the processor cache, data in the processor cache may be lost if the request receiving end suffers a power failure before that data is preserved. To solve this problem, an energy storage device, such as a capacitor battery, may be added to the motherboard of the request receiving end, so that when the request receiving end is powered down, the energy storage device can continue to supply power to the motherboard, and during this period the data in the processor cache is saved to the persistent memory. In this way, when the device is powered down, writing the data in the processor cache into the persistent memory prevents that data from being lost due to the power failure of the request receiving end, which helps improve data security. For a specific implementation of writing the data in the processor cache into the persistent memory in response to a power-down event, reference may be made to the relevant content of the foregoing embodiments, which is not repeated here.
The memory access method provided by the above embodiments is implemented based on a network card, in particular an RDMA network card, as described below with reference to a specific embodiment.
Fig. 4b is a schematic flowchart of a memory access method based on a network card according to an embodiment of the present application. As shown in fig. 4b, the method comprises:
S1. The network card acquires the second access request.
S2. Determine the memory attribute information of the memory to be accessed according to the second access request.
S3. Perform protocol conversion on the second access request according to the memory attribute information to obtain a third access request following the communication protocol between the network card and the processor.
S4. Provide the third access request to the processor.
S5. The processor accesses the memory to be accessed according to the access path adapted to the memory attribute information carried by the third access request, where the access path adapted to the volatile memory includes the processor cache and the access path adapted to the persistent memory does not include the processor cache.
The network card-based memory access method provided by this embodiment applies to the request receiving end; the network card and the processor are those of the request receiving end. In step S1, the network card may receive the second access request, and in step S2, determine the memory attribute information of the memory to be accessed according to the second access request; optionally, the memory attribute information may be parsed from the second access request.
Then, in step S3, the network card may perform protocol conversion on the second access request according to the memory attribute information of the memory to be accessed, so as to obtain a third access request that follows the communication protocol between the network card and the processor. For a specific implementation of step S3, reference may be made to relevant contents of the foregoing embodiments, and details are not described herein.
Thereafter, in step S4, the network card may provide the third access request to the processor of the request receiving end, optionally through a connection channel with the processor (e.g., a PCIe high-speed channel).
Correspondingly, in step S5, the to-be-accessed memory is accessed according to the access path adapted to the memory attribute information carried in the third access request. For a specific implementation of step S5, reference may be made to relevant contents of the foregoing embodiments, and details are not described herein.
In this embodiment, when the memory to be accessed is a volatile memory, the volatile memory may be accessed through the processor cache, and when it is a persistent memory, the persistent memory may be accessed directly. For the access path adapted to the volatile memory, the low-latency characteristic of the processor cache reduces the access delay of the volatile memory; on the other hand, since the access path of the persistent memory does not pass through the processor cache, the intermediate links of persistent memory access are reduced and the access efficiency of the persistent memory is improved.
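The flow of steps S1-S5 can be sketched end to end as follows. This is a minimal illustrative model with hypothetical names; the hardware components (network card, DDIO, IMC) are represented by plain functions and dictionaries.

```python
# Hypothetical encodings of the memory attribute information.
VOLATILE, PERSISTENT = 0, 1

def network_card_steps(second_request):
    # S1-S4: acquire the second access request, determine the memory
    # attribute, convert the protocol, and hand the resulting third
    # access request to the processor.
    return {"mem_attr": second_request["mem_attr"],
            "addr": second_request["addr"],
            "data": second_request["data"]}

def processor_step(third_request, ddio, volatile_mem, persistent_mem):
    # S5: the volatile path passes through the processor cache (DDIO);
    # the persistent path writes directly, bypassing the cache.
    if third_request["mem_attr"] == VOLATILE:
        ddio[third_request["addr"]] = third_request["data"]
        volatile_mem[third_request["addr"]] = third_request["data"]
        return "via_processor_cache"
    persistent_mem[third_request["addr"]] = third_request["data"]
    return "direct_to_persistent"
```

The division of labor matters: the network card only parses and re-encapsulates, while the path decision is made by the processor from the attribute carried in the third access request.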
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subject of steps 401 and 402 may be device a; for another example, the execution subject of step 401 may be device a, and the execution subject of step 402 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 401, 402, etc., are merely used to distinguish various operations, and the sequence numbers themselves do not represent any execution order. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
Fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 5, the computer device includes: a memory 50a, a processor 50b, and a network card 50c. In this embodiment, the computer device may be implemented as the request sending end. Optionally, the network card 50c may be an RDMA network card or the like.
The memory 50a is used for storing computer programs.
The processor 50b is coupled to the memory 50a and the network card 50c, and is configured to execute the computer program to: acquire the memory attribute information of the memory to be accessed; generate an access request according to the memory attribute information; and provide the access request to another computer device through the network card 50c, so that the other computer device can access the memory to be accessed according to the access path adapted to the memory attribute information.
In some embodiments, when generating the access request, the processor 50b is specifically configured to: generate, according to the memory attribute information, a first access request following the communication protocol between the processor 50b and the network card 50c, and provide the first access request to the network card 50c.
Correspondingly, the network card 50c is configured to: parse the memory attribute information of the memory to be accessed from the first access request; perform protocol conversion on the first access request according to the memory attribute information to obtain a second access request following the network protocol; and provide the second access request to the other computer device.
In some optional embodiments, as shown in fig. 5, the computer device may further include other components such as a communication component 50d and a power supply component 50e in addition to the network card 50c. Only some components are schematically shown in fig. 5, which does not mean that the computer device must include all the components shown in fig. 5, nor that it includes only those components.
The computer device provided in this embodiment can distinguish access requests for volatile memory from those for persistent memory, so that another computer device that subsequently receives the access request can access the memory to be accessed through different access paths according to the memory attribute information of the memory to be accessed, thereby separating the access paths of the volatile memory and the persistent memory. Since the access path adapted to the volatile memory passes through the processor cache, the low-latency characteristic of the processor cache reduces the access delay of the volatile memory; on the other hand, since the access path of the persistent memory does not pass through the processor cache, the intermediate links of persistent memory access are reduced and the access efficiency of the persistent memory is improved. In summary, the embodiment of the present application helps improve memory access efficiency.
Fig. 6 is a schematic structural diagram of another computer device according to an embodiment of the present application. As shown in fig. 6, the computer device includes: a processor 60a, a network card 60b, a volatile memory 60c, and a persistent memory 60d. The processor 60a and the network card 60b are communicatively connected. The network card 60b may be an RDMA network card.
In this embodiment, the processor 60a is coupled to the network card 60b, the volatile memory 60c, and the persistent memory 60d, and is configured to: acquire a first access request through the network card 60b; determine the memory attribute information of the memory to be accessed according to the first access request; and access the memory to be accessed according to the first access request and the access path adapted to the memory attribute information, where the access path adapted to the volatile memory 60c includes the processor cache and the access path adapted to the persistent memory 60d does not include the processor cache.
Optionally, when accessing the memory to be accessed, the processor 60a is specifically configured to: write the data to be written contained in the first access request into the memory to be accessed according to the access path adapted to the memory attribute information.
Optionally, the memory attribute information indicates a volatile memory. Accordingly, when writing the data to be written contained in the first access request into the memory to be accessed, the processor 60a is specifically configured to: write the data to be written into the volatile memory 60c through the Data Direct I/O (DDIO) module of the processor 60a.
Optionally, the memory attribute information indicates a persistent memory. Accordingly, when writing the data to be written contained in the first access request into the memory to be accessed, the processor 60a is specifically configured to: write the data to be written directly into the persistent memory 60d.
In this embodiment, the network card 60b performs protocol conversion on the first access request according to the memory attribute information to obtain a second access request conforming to the communication protocol between the network card and the processor, and provides the second access request to the processor 60a.
Accordingly, when writing the data to be written into the memory to be accessed, the processor 60a is specifically configured to: parse the data to be written and the memory attribute information from the second access request, and write the data to be written into the memory to be accessed according to the access path adapted to the memory attribute information.
When performing the protocol conversion on the first access request, the network card 60b is specifically configured to: parse the data to be written contained in the first access request, and, using the memory attribute information as a header field, encapsulate the data to be written according to the communication protocol between the network card and the processor to obtain the second access request.
In some embodiments of the present application, the computer device further includes a motherboard 60e on which the processor 60a may be disposed. Optionally, the motherboard 60e is further provided with an energy storage device 60f, which can supply power to the motherboard 60e in the event of a power failure of the computer device. Optionally, the energy storage device 60f may include a capacitor battery.
Accordingly, the processor 60a is further configured to: write data in the processor cache to the persistent memory 60d in response to a power-down event.
In some optional embodiments, as shown in fig. 6, the computer device may further include other components such as a power supply component 60g. Only some components are schematically shown in fig. 6, which does not mean that the computer device must include all the components shown in fig. 6, nor that it includes only those components.
The computer device provided by this embodiment can determine the memory attribute information of the memory to be accessed according to the access request, and access the memory to be accessed according to the access path adapted to the memory attribute information, thereby separating the access paths of the volatile memory and the persistent memory. Since the access path adapted to the volatile memory passes through the processor cache, the low-latency characteristic of the processor cache reduces the access delay of the volatile memory; on the other hand, since the access path of the persistent memory does not pass through the processor cache, the intermediate links of persistent memory access are reduced and the access efficiency of the persistent memory is improved. In summary, the embodiment of the present application helps improve memory access efficiency.
In embodiments of the present application, the memory is used to store computer programs and may be configured to store other various data to support operations on the device on which it is located. Wherein the processor may execute a computer program stored in the memory to implement the corresponding control logic. The memory may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
In the embodiments of the present application, the processor may be any hardware processing device that can execute the above method logic. Optionally, the processor may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or a Micro Controller Unit (MCU); it may also be a programmable device such as a Field-Programmable Gate Array (FPGA), a Programmable Array Logic (PAL) device, a Generic Array Logic (GAL) device, or a Complex Programmable Logic Device (CPLD); it may also be an Advanced RISC Machine (ARM) processor, a System on Chip (SoC), or the like, but is not limited thereto.
In embodiments of the present application, the communication component is configured to facilitate wired or wireless communication between the device in which it is located and other devices. The device in which the communication component is located can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, 4G, 5G or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component may also be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
In embodiments of the present application, a power supply component is configured to provide power to various components of the device in which it is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
It should be noted that the descriptions of "first", "second", and the like herein are used to distinguish different messages, devices, modules, and so on; they neither indicate a sequential order nor require the "first" and "second" items to be of different types.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, a computer-readable medium does not include a transitory computer-readable medium such as a modulated data signal or a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (13)

1. A network card-based memory access method, comprising:
the network card acquires a first access request; determining memory attribute information of a memory to be accessed according to the first access request; performing protocol conversion on the first access request according to the memory attribute information to obtain a second access request following a communication protocol between the network card and the processor; providing the second access request to the processor;
the processor accesses the memory to be accessed according to the access path adapted to the memory attribute information carried in the second access request;
wherein the access path adapted to volatile memory includes a processor cache, and the access path adapted to persistent memory does not include the processor cache.
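As a purely illustrative sketch (not part of the claims), the path selection described in claim 1 can be expressed as follows. The names `mem_attr_t`, `write_via_cache`, `write_bypass_cache`, and `access_memory` are hypothetical, and the cache-bypassing path is stubbed with an ordinary copy:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical memory attribute carried in the second access request. */
typedef enum { MEM_VOLATILE, MEM_PERSISTENT } mem_attr_t;

/* Path including the processor cache (an ordinary cached store). */
static void write_via_cache(char *dst, const char *src, size_t n) {
    memcpy(dst, src, n); /* data may linger in the cache hierarchy */
}

/* Path bypassing the processor cache (stand-in; a real path would use
 * streaming stores or a direct write to persistent memory). */
static void write_bypass_cache(char *dst, const char *src, size_t n) {
    memcpy(dst, src, n);
}

/* The processor selects the access path adapted to the memory attribute. */
static void access_memory(mem_attr_t attr, char *dst,
                          const char *src, size_t n) {
    if (attr == MEM_VOLATILE)
        write_via_cache(dst, src, n);
    else
        write_bypass_cache(dst, src, n);
}
```

The point of the sketch is only the dispatch: the memory attribute carried in the request, not the address, decides which path the write takes.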
2. A memory access method, comprising:
acquiring a first access request;
determining memory attribute information of a memory to be accessed according to the first access request;
accessing the memory to be accessed according to the access path adapted to the memory attribute information and the first access request;
wherein the access path adapted to volatile memory includes a processor cache, and the access path adapted to persistent memory does not include the processor cache.
3. The method according to claim 2, wherein the accessing the memory to be accessed according to the access path adapted to the memory attribute information and the first access request comprises:
and writing the data to be written in the first access request into the memory to be accessed according to the access path adapted to the memory attribute information.
4. The method according to claim 3, wherein the memory attribute information indicates a volatile memory, and writing the data to be written contained in the first access request into the memory to be accessed according to the access path adapted to the memory attribute information comprises:
writing the data to be written into the volatile memory through a direct input/output module of the processor.
5. The method according to claim 3, wherein the memory attribute information indicates a persistent memory, and writing the data to be written contained in the first access request into the memory to be accessed according to the access path adapted to the memory attribute information comprises:
and directly writing the data to be written into the persistent memory.
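One common way to write data while bypassing the processor cache, as claim 5 describes for persistent memory, is a non-temporal (streaming) store. The sketch below assumes an x86 processor with SSE2 and is illustrative only; the function name `direct_write` is hypothetical:

```c
#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

/* Write 4-byte words with non-temporal stores so the data does not
 * pollute the cache hierarchy on its way to (persistent) memory. */
static void direct_write(int32_t *dst, const int32_t *src, size_t words) {
    for (size_t i = 0; i < words; i++)
        _mm_stream_si32((int *)(dst + i), src[i]);
    _mm_sfence(); /* order the streaming stores before later accesses */
}
```

Non-temporal stores remain cache-coherent, so a subsequent read returns the written data; they simply avoid installing it in the cache on the way out.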
6. The method according to any one of claims 3 to 5, wherein before writing the data to be written into the memory to be accessed according to the access path adapted to the memory attribute information, the method further comprises:
the network card performs protocol conversion on the first access request according to the memory attribute information to obtain a second access request following a communication protocol between the network card and the processor; providing the second access request to the processor;
writing the data to be written into the memory to be accessed according to the access path adapted to the memory attribute information, including:
the processor analyzes the data to be written and the memory attribute information from the second access request;
and writing the data to be written into the memory to be accessed according to the access path adapted to the memory attribute information.
7. The method of claim 6, wherein the protocol conversion of the first access request by the network card according to the memory attribute information comprises:
parsing the data to be written from the first access request;
and packaging the data to be written according to a communication protocol between the network card and the processor by taking the memory attribute information as a message header field to obtain the second access request.
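The encapsulation step in claim 7, carrying the memory attribute information as a message header field in front of the payload, might look like the following sketch. The wire format, field names, and attribute coding here are assumptions for illustration, not the actual protocol between the network card and the processor:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical wire format for the second access request: the memory
 * attribute information rides as a header field ahead of the payload. */
#pragma pack(push, 1)
typedef struct {
    uint8_t  mem_attr;    /* 0 = volatile, 1 = persistent (assumed coding) */
    uint32_t payload_len; /* length of the data to be written */
} req_header_t;
#pragma pack(pop)

/* Encapsulate the parsed payload behind the header; returns total bytes. */
static size_t encapsulate(uint8_t attr, const void *payload, uint32_t len,
                          uint8_t *out) {
    req_header_t h = { attr, len };
    memcpy(out, &h, sizeof h);
    memcpy(out + sizeof h, payload, len);
    return sizeof h + len;
}
```

Because the attribute travels in the header rather than the payload, the receiving processor can choose the access path before touching the data itself.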
8. The method of any of claims 2-5, further comprising:
in response to a power failure event, writing the data in the processor cache into the persistent memory.
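On x86, writing cached data back to memory on a power-failure event, as in claim 8, is typically done with cache-line flush instructions. A minimal sketch, assuming 64-byte cache lines and the `CLFLUSH` instruction (a real implementation would prefer `CLWB` where supported; the name `flush_to_persistent` is hypothetical):

```c
#include <immintrin.h>
#include <stddef.h>

/* Flush every cache line covering [addr, addr + len) so that dirty data
 * held in the processor cache reaches (persistent) memory. */
static void flush_to_persistent(const char *addr, size_t len) {
    const size_t line = 64; /* assumed cache-line size */
    for (size_t off = 0; off < len; off += line)
        _mm_clflush(addr + off);
    _mm_sfence(); /* ensure the flushes complete before continuing */
}
```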
9. A memory access method, comprising:
acquiring memory attribute information of a memory to be accessed;
generating an access request according to the memory attribute information;
providing the access request to another computer device, so that the other computer device accesses the memory to be accessed according to the access path adapted to the memory attribute information.
10. The method according to claim 9, wherein the generating an access request according to the memory attribute information comprises:
the processor generates a first access request following a communication protocol between the processor and a network card according to the memory attribute information; and providing the first access request to the network card;
the network card analyzes the memory attribute information of the memory to be accessed from the first access request; and performing protocol conversion on the first access request according to the memory attribute information to obtain a second access request following a network protocol.
11. A data processing system, comprising: a client device and a server device;
wherein the client device is configured to: acquire memory attribute information of a memory to be accessed; generate an access request according to the memory attribute information; and provide the access request to the server device;
the server device is configured to: determine the memory attribute information of the memory to be accessed according to the access request; and access the memory to be accessed according to the access path adapted to the memory attribute information and the access request; wherein the access path adapted to volatile memory includes a processor cache, and the access path adapted to persistent memory does not include the processor cache.
12. A computer device, comprising: the system comprises a processor, a network card, a volatile memory and a persistent memory;
wherein the processor is coupled to the network card, the volatile memory, and the persistent memory, and is configured to: acquire a first access request through the network card; determine memory attribute information of a memory to be accessed according to the first access request; and access the memory to be accessed according to the access path adapted to the memory attribute information and the first access request; wherein the access path adapted to volatile memory includes a processor cache, and the access path adapted to persistent memory does not include the processor cache.
13. A computer device, comprising: a memory, a processor and a network card;
wherein the memory is used for storing a computer program;
the processor is coupled to the memory and the network card, and is configured to execute the computer program to: acquire memory attribute information of a memory to be accessed; generate an access request according to the memory attribute information; and provide the access request to another computer device through the network card, so that the other computer device accesses the memory to be accessed according to the access path adapted to the memory attribute information.
CN202111152532.3A 2021-09-29 2021-09-29 Network card-based memory access method, device and system Pending CN113971158A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111152532.3A CN113971158A (en) 2021-09-29 2021-09-29 Network card-based memory access method, device and system

Publications (1)

Publication Number Publication Date
CN113971158A true CN113971158A (en) 2022-01-25

Family

ID=79587342

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111152532.3A Pending CN113971158A (en) 2021-09-29 2021-09-29 Network card-based memory access method, device and system

Country Status (1)

Country Link
CN (1) CN113971158A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116204455A (en) * 2023-04-28 2023-06-02 阿里巴巴达摩院(杭州)科技有限公司 Cache management system, method, private network cache management system and equipment
CN116204455B (en) * 2023-04-28 2023-09-22 阿里巴巴达摩院(杭州)科技有限公司 Cache management system, method, private network cache management system and equipment

Similar Documents

Publication Publication Date Title
US9934065B1 (en) Servicing I/O requests in an I/O adapter device
US10572309B2 (en) Computer system, and method for processing multiple application programs
US9864538B1 (en) Data size reduction
US20180027074A1 (en) System and method for storage access input/output operations in a virtualized environment
US9489328B2 (en) System on chip and method for accessing device on bus
CN107526620B (en) User mode input and output equipment configuration method and device
CN112650558B (en) Data processing method and device, readable medium and electronic equipment
US20160004548A1 (en) Notification conversion program and notification conversion method
WO2021238702A1 (en) Task scheduling method, computing device and storage medium
CN114201421A (en) Data stream processing method, storage control node and readable storage medium
CN110888602A (en) Method and device for improving reading performance based on solid state disk and computer equipment
CN113971158A (en) Network card-based memory access method, device and system
CN107181802B (en) Intelligent hardware control method and device, server and storage medium
CN114363185A (en) Virtual resource processing method and device
CN109478171A (en) Improve the handling capacity in OPENFABRICS environment
CN113296691B (en) Data processing system, method and device and electronic equipment
US20230342087A1 (en) Data Access Method and Related Device
US10339091B2 (en) Packet data processing method, apparatus, and system
CN112965788A (en) Task execution method, system and equipment in hybrid virtualization mode
CN115904259B (en) Processing method and related device of nonvolatile memory standard NVMe instruction
US20230281113A1 (en) Adaptive memory metadata allocation
CN112433812A (en) Method, system, equipment and computer medium for virtual machine cross-cluster migration
CN116489177A (en) IO access method and device based on block storage, electronic equipment and medium
CN108628550B (en) Method, device and system for reading disk mapping file
US7055152B1 (en) Method and system for maintaining buffer registrations in a system area network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40067017

Country of ref document: HK