CN110727632A - Data processing method and device - Google Patents

Data processing method and device

Info

Publication number
CN110727632A
Authority
CN
China
Prior art keywords
data
data processing
operation request
log
address
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810779536.6A
Other languages
Chinese (zh)
Other versions
CN110727632B (en)
Inventor
孙贝磊
周超
李涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
XFusion Digital Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202211292680.XA (CN115658593A)
Priority to CN201810779536.6A (CN110727632B)
Publication of CN110727632A
Application granted
Publication of CN110727632B
Legal status: Active (current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/16Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/163Interprocessor communication
    • G06F15/173Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
    • G06F15/17306Intercommunication techniques
    • G06F15/17331Distributed shared memory [DSM], e.g. remote direct memory access [RDMA]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/20Handling requests for interconnection or transfer for access to input/output bus
    • G06F13/28Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Debugging And Monitoring (AREA)
  • Retry When Errors Occur (AREA)

Abstract

The embodiments of the present application provide a data processing method and device, relate to the field of communication technologies, and aim to reduce the number of times a memory persistence daemon is scheduled to execute. The scheme includes the following steps: a data processing device receives a first data operation request, where the first data operation request includes data and a first address corresponding to the data, and the first address is a storage address of the data in the data processing device; the data processing device records, according to the first data operation request, a log corresponding to the first data operation request in a log list, where the log corresponding to the first data operation request includes the data, the first address, and the operation type of the first data operation request, and the log list is stored in a non-volatile fast storage medium; and the data processing device stores the data. The scheme is applicable to storage systems.

Description

Data processing method and device
Technical Field
The embodiment of the application relates to the technical field of communication, in particular to a data processing method and device.
Background
Storage Class Memory (SCM) refers to byte-level memory in which data does not disappear after power is removed; it may also be referred to as non-volatile memory. Compared with Dynamic Random Access Memory (DRAM), SCM retains data across power loss and offers higher storage density. As SCM technology matures, existing computer system architectures face significant change.
Remote Direct Memory Access (RDMA) can bypass the kernel software stack and directly read and write data in remote memory by Direct Memory Access (DMA) through a hardware RDMA Network Interface Card (RNIC). In a distributed storage system, combining RDMA with SCM is therefore the inevitable choice for fully exploiting the characteristics of the SCM medium.
Although SCM is non-volatile, current processor caches are volatile, i.e., data in the cache is lost after power is removed. Therefore, to ensure high data reliability, data in the cache must be flushed back to the SCM in time to guarantee persistence. If data stored in local memory is accessed, the corresponding cache line can be flushed back via the CLFLUSH instruction. If data stored at a remote end is accessed through RDMA, then after the remote data is accessed, the Central Processing Unit (CPU) needs to be informed to call the CLFLUSH instruction to flush the corresponding cache line back to the SCM. This makes the data interaction flow for remote access complex, introduces very high latency, and prevents the characteristics of SCM and RDMA from being fully utilized.
As shown in fig. 1, if the CPU's Data Direct I/O (DDIO) is enabled, a client first writes data to a server through an RDMA Write operation request. After the client receives the response fed back by the server indicating that the requested data has been written to the server, the client initiates a persistence operation request to the server through an RDMA Send, requesting the server to persist, in memory, the data that the client requested to write.
However, when the client writes data to the server in fig. 1, six network interface calls, three network transmissions, and one CPU cache-line flush operation are required. Taking intermediate overhead such as operating-system scheduling into account, the latency of the flow in fig. 1 reaches several tens of microseconds. In contrast, one RDMA Write takes only 1 microsecond (us) to 2 us, and one SCM write takes at most 500 nanoseconds (ns). It can be seen that the benefit of SCM is completely masked by the above flow.
Disclosure of Invention
The embodiments of the present application provide a data processing method and device, which are used to reduce the number of times a memory persistence daemon is scheduled to execute.
In order to solve the above technical problem, an embodiment of the present application provides the following technical solutions:
In a first aspect, an embodiment of the present application provides a data processing method, applied to a data processing apparatus having a non-volatile fast storage medium, and the method includes: the data processing apparatus receives a first data operation request including data and a first address corresponding to the data, where the first address is a storage address of the data in the data processing apparatus; the data processing apparatus records, according to the first data operation request, a log corresponding to the first data operation request in a log list, where the log corresponding to the first data operation request includes the data, the first address, and the operation type of the first data operation request, and the log list is stored in the non-volatile fast storage medium; and the data processing apparatus stores the data.
The embodiment of the present application provides a data processing method in which the log corresponding to a received first data operation request is recorded in a log list stored in the non-volatile fast storage medium of a data processing apparatus. Because the fast storage medium is non-volatile, when the data processing apparatus fails, the first data operation request can be recovered from the records in the log list, or the request initiator can be asked to reinitiate the corresponding memory operation. Because the embodiment of the present application needs only two network interface calls (for example, one to initiate the data transmission request and one to obtain the transmission-completion flag) and one network transmission (for example, sending the data from the client to the data processing apparatus), the number of times the memory persistence daemon is scheduled to execute is reduced compared with the prior art.
In a possible implementation manner, the data processing apparatus includes N cache lines and the log list includes N entries, where the N entries are mapped one-to-one with the N cache lines, one entry is used to record a log corresponding to a data operation request sent to the data processing apparatus, and N is an integer greater than or equal to 1. That the data processing apparatus records the log corresponding to the first data operation request in the log list according to the first data operation request includes: the data processing apparatus determines, according to the first address, a target entry corresponding to the first address from the N entries; and the data processing apparatus records the log corresponding to the first data operation request in the target entry according to the first data operation request. Because the log corresponding to the first data operation request is recorded in the target entry determined by the first address, data operation requests that target the same cache line continuously replace one another, so that only the most recent operation on that cache line is kept; and because the log list is stored in the non-volatile fast storage medium, when the storage system suddenly loses power, only the cache-line operations recorded in the log list need to be recovered.
In one possible implementation manner, if the length of the Payload requested by the first data operation request is greater than the preset byte, a portion of the Payload requested by the first data operation request, which is equal to the preset byte, is recorded in the target entry, and a portion of the Payload requested by the first data operation request, which is other than the preset byte, is recorded in an entry determined by the first address and the preset byte.
In one possible implementation manner, the determining, by the data processing apparatus, a target entry corresponding to the first address from the N entries according to the first address includes: the data processing device determines a cache line corresponding to the first address; and the data processing device determines a target entry from the N entries according to the mapping relation between the N cache lines and the N entries and the cache lines corresponding to the first addresses.
In a possible implementation manner, the log list includes multiple levels of sub-log lists, the N entries included in each level of sub-log list are mapped one-to-one, and the levels of sub-log lists have different priorities. The method provided in the embodiment of the present application further includes: the data processing apparatus receives a second data operation request, where the cache line corresponding to the second data operation request is the same as the cache line corresponding to the first data operation request; when the data processing apparatus determines that a log corresponding to the first data operation request is recorded in the first entry corresponding to the second address, the data processing apparatus sequentially migrates, in a preset priority order, the log recorded in the first entry of a higher-level sub-log list of the multiple levels to the first entry of the adjacent lower-level sub-log list; and the data processing apparatus records the log corresponding to the second data operation request in the first entry of the sub-log list with the highest priority.
In one possible implementation, a data processing apparatus stores data, including: and the data processing device stores the data in the cache line corresponding to the first address in the data processing device in a Direct Memory Access (DMA) mode.
In a possible implementation manner, the method provided in the embodiment of the present application further includes: if the data processing apparatus determines that the packet sequence number (PSN) of the first data operation request is not equal to the expected PSN (ePSN), the data processing apparatus sends a response operation request to the data sending apparatus, where the response operation request instructs the data sending apparatus to retransmit the data or indicates a data transmission error. In this way, when the written data is erroneous, the data sending apparatus can retransmit the data to be written in time.
In a possible implementation manner, the method provided in the embodiment of the present application further includes: when the data processing device fails, the data processing device requests the data sending device to reinitiate corresponding operation according to each entry in the log list.
In a second aspect, the present application provides a data processing apparatus, which may implement the method of the first aspect or any possible implementation manner of the first aspect, and thus may also achieve the beneficial effects of the first aspect or any possible implementation manner of the first aspect. The data processing apparatus may be a server, or may be an apparatus that can support the server to implement the method in the first aspect or any possible implementation manner of the first aspect, for example, a chip applied in the server. The server can implement the method through software, hardware or corresponding software executed by hardware.
A data processing apparatus having a non-volatile fast storage medium, the data processing apparatus including: a receiving unit, configured to receive a first data operation request including data and a first address corresponding to the data, where the first address is a storage address of the data in the data processing apparatus; a processing unit, configured to record, according to the first data operation request, a log corresponding to the first data operation request in a log list, where the log corresponding to the first data operation request includes the data, the first address, and the operation type of the first data operation request, and the log list is stored in the non-volatile fast storage medium; and a storage unit, configured to store the data.
In a possible implementation manner, the data processing apparatus includes N cache lines, the log list includes N entries, the N entries are mapped with the N cache lines one by one, one entry is used to record a log corresponding to a data operation request sent to the data processing apparatus, N is an integer greater than or equal to 1, and the data processing apparatus further includes: the determining unit is used for determining a target entry corresponding to the first address from the N entries according to the first address; and the processing unit is specifically used for recording a log corresponding to the first data operation request in the target entry according to the first data operation request.
In a possible implementation manner, the processing unit is further configured to, when the determining unit determines that the length of the Payload requested by the first data operation request is greater than the preset byte, record a portion of the Payload requested by the first data operation request, which is equal to the preset byte, into the target entry, and record a portion of the Payload requested by the first data operation request, which is other than the preset byte, in an entry determined by the first address and the preset byte.
In a possible implementation manner, the determining unit is specifically configured to: determining a cache line corresponding to the first address; and determining a target entry from the N entries according to the mapping relation between the N cache lines and the N entries and the cache lines corresponding to the first addresses.
In a possible implementation manner, the log list includes a multi-level sub-log list, N entries included in the multi-level sub-log list are mapped one by one, the multi-level sub-log list has different priorities, the receiving unit is further configured to receive a second data operation request, and a cache line corresponding to the second operation request is the same as a cache line corresponding to the first operation request;
the processing unit is specifically configured to, when the determining unit determines that the log corresponding to the first operation request is recorded in the first entry corresponding to the second address, sequentially migrate, according to a preset priority order, the log recorded in the first entry in a previous-stage sub-log list in the multi-stage sub-log list to the first entry in a next-stage sub-log list adjacent to the previous-stage sub-log list; and the log corresponding to the second data operation request is recorded on a first entry in a sub-log list with the highest priority.
In a possible implementation manner, the storage unit is specifically configured to store the data in the cache line corresponding to the first address in the data processing apparatus in a direct memory access DMA manner.
In one possible implementation manner, the data processing apparatus further includes: a sending unit, configured to send, if the determining unit determines that the packet sequence number PSN of the first data operation request is not equal to the expected ePSN, a response operation request to the data sending apparatus, where the response operation request is used to instruct the data sending apparatus to retransmit the data or is used to instruct a data transmission error.
In a possible implementation manner, the sending unit is further configured to, when the data processing apparatus fails, request the data sending apparatus to reinitiate the corresponding operation according to each entry in the log list.
In one possible implementation manner, an embodiment of the present application further provides a data processing apparatus, where the data processing apparatus may be a server or a chip applied in the server, and the data processing apparatus includes: a processor and a communication interface, wherein the communication interface is configured to support the data processing apparatus to perform the steps of receiving and sending messages/data on the data processing apparatus side described in any one of the possible implementations of the first aspect to the first aspect. The processor is configured to support the data processing apparatus to perform the steps of message/data processing on the data processing apparatus side as described in any one of the possible implementations of the first aspect to the first aspect. For specific corresponding steps, reference may be made to descriptions in any one of possible implementation manners of the first aspect to the first aspect, which are not described herein again.
Optionally, the communication interface of the data processing apparatus and the processor are coupled to each other.
Optionally, the data processing apparatus may further comprise a memory for storing code and data, the processor, the communication interface and the memory being coupled to each other.
In a third aspect, embodiments of the present application provide a computer-readable storage medium, in which a computer program or instructions are stored, and when the computer program or instructions are run on a computer, the computer is caused to execute a data processing method as described in the first aspect or various possible implementation manners of the first aspect.
In a fourth aspect, the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform one or more of the first aspect and various possible implementations of the first aspect.
In a fifth aspect, embodiments of the present application provide a chip comprising a processor and an interface circuit, the interface circuit being coupled to the processor, the processor being configured to execute a computer program or instructions to implement a data processing method as described in the first aspect or in various possible implementations of the first aspect, and the interface circuit being configured to communicate with other modules than the chip.
In a sixth aspect, an embodiment of the present application provides a storage system, including: data sending means for sending a first data operation request to the data processing apparatus as described in various possible implementations of the second aspect to the second aspect.
Drawings
FIG. 1 is a schematic diagram of data persistence according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a storage system according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a server according to an embodiment of the present application;
FIG. 4 is a first schematic flowchart of a data processing method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a log queue according to an embodiment of the present application;
FIG. 6 is a schematic diagram of the relationship between a log list and cache lines according to an embodiment of the present application;
FIG. 7 is a second schematic flowchart of a data processing method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of the relationship between a multi-level sub-log list and cache lines according to an embodiment of the present application;
FIG. 9 is a third schematic flowchart of a data processing method according to an embodiment of the present application;
FIG. 10 is a schematic diagram of multi-level sub-log list elimination according to an embodiment of the present application;
FIG. 11 is a fourth schematic flowchart of a data processing method according to an embodiment of the present application;
FIG. 12 is a first schematic structural diagram of a data processing apparatus according to an embodiment of the present application;
FIG. 13 is a second schematic structural diagram of a data processing apparatus according to an embodiment of the present application.
Detailed Description
In the embodiments of the present application, terms such as "first" and "second" are used to distinguish the same or similar items having substantially the same function and action. For example, the first data operation request and the second data operation request are only for distinguishing different data operation requests, and the sequence order thereof is not limited. Those skilled in the art will appreciate that the terms "first," "second," etc. do not denote any order or quantity, nor do the terms "first," "second," etc. denote any order or importance.
It is noted that, in the present application, words such as "exemplary" or "for example" are used to mean exemplary, illustrative, or descriptive. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
The network architecture and the service scenarios described in the embodiments of the present application are intended to illustrate the technical solutions of the embodiments more clearly, and do not constitute a limitation on the technical solutions provided in the embodiments of the present application. A person of ordinary skill in the art will know that, as the network architecture evolves and new service scenarios emerge, the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems.
In the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
As shown in fig. 2, fig. 2 shows a storage system applied to a data processing method provided in an embodiment of the present application, where the storage system includes: at least one client (client) and at least one server (server), wherein one server 101 and one client 102 are shown in fig. 2, respectively.
The at least one client can write data to the at least one server through Remote Direct Memory Access (RDMA).
Each of the at least one client and the at least one server includes a CPU and an RDMA network interface card (RNIC, referred to as an RNIC network card for short).
The RNIC network card in the server has, for example, the following queues: a receive queue (RQ), a send queue (SQ), and a completion queue (CQ). The receive queue is used to temporarily store messages received by the RNIC; the send queue is used to temporarily store request messages that the RNIC needs to send out; and the completion queue is used to generate a notification event when an RDMA request is completed.
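For illustration only, the three queues could be modeled roughly as below. This is a minimal sketch in C; the ring depth, field names, and struct layout are assumptions made for this example and are not taken from the patent or from any real RNIC driver API.

```c
#include <stdint.h>

#define RING_DEPTH 128   /* assumed queue depth */

/* One request buffered in the receive or send queue. */
struct work_request {
    uint64_t wr_id;        /* caller-chosen identifier        */
    uint64_t remote_addr;  /* target address of an RDMA Write */
    uint32_t length;       /* payload length in bytes         */
};

/* One completion event generated when an RDMA request finishes. */
struct completion {
    uint64_t wr_id;
    int      status;       /* 0 = success */
};

/* Simple head/tail rings standing in for RQ, SQ and CQ. */
struct rnic_queues {
    struct work_request rq[RING_DEPTH]; /* messages received by the RNIC       */
    struct work_request sq[RING_DEPTH]; /* requests the RNIC still has to send */
    struct completion   cq[RING_DEPTH]; /* completion notifications            */
    uint32_t rq_head, rq_tail;
    uint32_t sq_head, sq_tail;
    uint32_t cq_head, cq_tail;
};
```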
Any one of the servers described in fig. 3 may also be referred to as a node, and the at least one server is a network device that can provide services for clients; for example, the server 101 may be a computer with a server function. Referring to fig. 3, the server 101 includes a memory 1011, a processor 1012, a system bus 1013, a power supply component 1014, an input/output interface 1015, a communication component 1016, one or more network cards 1017, and the like. The memory 1011 may be used to store data, software programs, and modules, and mainly includes a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store the data that a client requests to write. The processor 1012 performs the various functions of the server 101 and processes data by running or executing the software programs and/or modules stored in the memory 1011 and calling the data stored in the memory 1011. The system bus 1013 includes an address bus, a data bus, and a control bus, and is used to transmit data and instructions. The power supply component 1014 supplies power to the components of the server 101. The input/output interface 1015 provides an interface between the processor 1012 and peripheral interface modules. The communication component 1016 is configured for wired or wireless communication between the server 101 and other devices. The one or more network cards 1017 are used to establish a session channel between the network cards of the server 101 and the client 102 to transmit traffic. In a communication system, the server 101 is an indispensable component of networks based on the Client/Server (C/S) mode or the Browser/Server (B/S) mode, and undertakes key tasks such as data storage, forwarding, and distribution.
As shown in fig. 4, fig. 4 illustrates a data processing method provided by an embodiment of the present application. The method is applied to a data processing apparatus having a non-volatile fast storage medium, and the method includes:
S101, a data processing device receives a first data operation request, wherein the first data operation request comprises data and a first address corresponding to the data, and the first address is a storage address of the data in the data processing device.
For example, the data processing apparatus in the embodiment of the present application may be a server as shown in fig. 2, or a chip in the server as shown in fig. 2. Specifically, the data processing device may be an RNIC network card in the server shown in fig. 2.
Illustratively, the first Address may be a Physical Address (PA) corresponding to data, where PA is also called a real Address or an absolute Address.
Specifically, the data processing apparatus may receive a first data operation request sent by the data sending apparatus in an RDMA manner.
S102, the data processing device records a Log (Log) corresponding to the first data operation request in a Log List (Log List) according to the first data operation request.
The log corresponding to the first data operation request includes the data, the first address, and the operation type of the first data operation request; the log list is stored in the non-volatile fast storage medium. For example, the non-volatile fast storage medium may be a Magnetic Random Access Memory (MRAM).
Illustratively, the operation type may be a write operation. Illustratively, the log corresponding to the first data operation request includes: < opcode, PA, data … >.
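A log of the form < opcode, PA, data … > could be represented roughly as follows. This is a minimal sketch; the field widths, the 64-byte data size, and the names are illustrative assumptions rather than definitions from the embodiment.

```c
#include <stdint.h>

#define CACHE_LINE_BYTES 64           /* assumed size of one cache line / "preset byte" */

enum log_opcode { LOG_OP_WRITE = 1 }; /* operation type of the data operation request   */

/* One Entry of the Log List: < opcode, PA, data ... >, kept in the
 * non-volatile fast storage medium (e.g. MRAM) of the data processing apparatus. */
struct log_entry {
    uint8_t  opcode;                  /* operation type, e.g. write       */
    uint8_t  valid;                   /* whether this Entry holds a log   */
    uint64_t pa;                      /* first address (physical address) */
    uint8_t  data[CACHE_LINE_BYTES];  /* payload covering one cache line  */
};
```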
For example, the log list in the embodiment of the present application includes N entries (Entry), where one Entry is used to record the log corresponding to one data operation request. Any one of the N entries corresponds to one cache line, that is, the data processing apparatus has N cache lines, and the N cache lines are mapped one-to-one with the N entries.
The Log list in the embodiment of the present application may also be referred to as a Log Queue, and when the Log list is the Log Queue, only enqueue and dequeue operations may be performed. Unlike a Log Queue, any Entry of the Log List can be replaced, rather than just enqueue and dequeue operations.
Specifically, when the Log list is the Log Queue, in this embodiment of the present application, when a Log corresponding to a first data operation request is recorded in the Log Queue, one Entry may be selected from entries in an idle state in the Log Queue according to an order of enqueuing and dequeuing for recording the Log corresponding to the first data operation request, or one Entry may be selected according to a Queue order of the Log Queue for recording the Log corresponding to the first data operation request, which is not limited in this embodiment of the present application. For example, as shown in fig. 5, the Log Queue in fig. 5 includes 5 entries (e.g., Entry 1 to Entry 5), where entries 1, 2, and 3 all record corresponding logs, and the data processing apparatus may record the Log corresponding to the first data operation request on Entry 4.
By recording the first data operation request into the Log Queue in the embodiment of the application, when the storage system fails, the RDMA Write operation request (a data operation request for writing data, which is sent by the data sending device to the data processing device in an RDMA manner) may be recovered or cancelled through the Log recorded in the Log Queue, and the data sending device is required to restart the corresponding memory operation.
S103, the data processing device stores the data.
For example, step S103 in the embodiment of the present application may be specifically implemented as follows: the data processing apparatus stores the data, in a Direct Memory Access (DMA) manner, in the cache line corresponding to the first address among the plurality of cache lines included in the last level cache (LLC) of the data processing apparatus.
As a possible implementation manner, before step S103, the method provided in the embodiment of the present application further includes: the data processing apparatus feeds back, to the data transmission apparatus, a response message (e.g., an Ack response) indicating that the data storage is successful.
In the above scheme, when the Log Queue is filled up (that is, logs corresponding to data operation requests are recorded in all N entries), the cache lines of the memory operations corresponding to all entries in the Log Queue need to be flushed back to the SCM before the Log Queue can continue to be filled. Otherwise, some Entry might be replaced before being persisted, creating a risk of data loss. Therefore, the above scheme merely reduces the number of times the memory persistence daemon is scheduled to execute: for N data operation requests, the memory persistence daemon is scheduled once, rather than once for every data operation request.
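As an illustration of the flush-back that has to happen when the Log Queue is full, a rough sketch follows. It assumes the log_entry layout sketched above, the x86 CLFLUSH/SFENCE intrinsics, and a placeholder physical-to-virtual translation; none of these details are prescribed by the embodiment.

```c
#include <stdint.h>
#include <emmintrin.h>   /* _mm_clflush, _mm_sfence */

#define CACHE_LINE_BYTES 64

struct log_entry {
    uint8_t  opcode, valid;
    uint64_t pa;
    uint8_t  data[CACHE_LINE_BYTES];
};

/* Placeholder translation from the recorded physical address to a mapped
 * virtual address; an identity mapping is assumed here for illustration. */
static void *pa_to_va(uint64_t pa)
{
    return (void *)(uintptr_t)pa;
}

/* Flush every cache line recorded in a full Log Queue back to the SCM and
 * mark the entries free so the queue can continue to be filled. */
static void flush_full_log_queue(struct log_entry *entries, int n)
{
    for (int e = 0; e < n; e++) {
        if (!entries[e].valid)
            continue;
        _mm_clflush(pa_to_va(entries[e].pa)); /* push the cache line to the SCM */
        entries[e].valid = 0;                 /* the Entry can now be reused    */
    }
    _mm_sfence();                             /* order the flushes              */
}
```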
Meanwhile, if the bandwidth utilization of the RNIC network card is high, a large Log Queue (i.e., a larger MRAM) is needed for the buffering and merging to play a significant role. However, when the Log Queue is full, flushing the cache lines corresponding to all of its entries back to the SCM takes a long time, which directly results in long RDMA operation latency and may even cause RDMA operations to be retransmitted or to fail. Only the data in the LLC needs to be persisted: for example, if the memory addresses of two consecutive memory operations correspond to the same cache line, the earlier memory operation does not need to be persisted separately, because the cache-coherence mechanism of the CPU ensures that the later memory operation automatically flushes the result of the earlier one back to memory, without the CPU explicitly calling a cache-line flush instruction. In summary, the RNIC does not need to record operations on the SCM, only operations on the LLC. If the LLC can be backed up in the RNIC, crash consistency of the data can be guaranteed.
Based on the above analysis, the embodiment of the present application may eliminate Entry in the log list in the following manner. As shown in FIG. 6, each Entry in the Log List records an RDMA Write operation request.
As shown in fig. 6, assuming that the Last Level Cache (LLC) of the CPU has N cache lines in total, the Log List has N corresponding entries. Assuming that the target physical address of the RDMA Write operation request is X, the LLC replacement algorithm of the CPU is: E = F(X), where 0 ≤ E ≤ N.
Given a target physical address X, X corresponds to the E-th cache line of the LLC, and the mapping relation from X to E is F. If the target physical address of an RDMA Write operation request is X, it maps the RDMA Write operation request into entry E of the Log List with the same mapping relationship F.
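The mapping F itself is not specified; for a set-indexed LLC it would typically be derived from the cache-line index bits of X. A hedged sketch of one possible F (the modulo form and the constant N are assumptions for illustration):

```c
#include <stdint.h>

#define CACHE_LINE_BYTES 64ULL
#define N_ENTRIES        4096ULL  /* N: number of LLC cache lines and Log List entries (assumed) */

/* E = F(X): map a target physical address X to the index of the Log List
 * entry that shadows the LLC cache line the address falls into. Real LLCs
 * often use hashed set-index functions; a plain modulo is used here only
 * to keep the sketch simple. */
static inline uint64_t log_entry_index(uint64_t x)
{
    return (x / CACHE_LINE_BYTES) % N_ENTRIES;
}
```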
Therefore, considering that the addresses requested to be written by different data operation requests may correspond to the same cache line, each entry in the log list is designed to be replaceable. As a possible implementation manner, as shown in fig. 7, step S102 provided in this embodiment of the present application may be implemented in the following manner:
S1021, the data processing device determines a target entry corresponding to the first address from the N entries according to the first address.
S1022, the data processing apparatus records, in the target entry, a log corresponding to the first data operation request according to the first data operation request.
Specifically, if the data address requested to be written corresponds to the same cache line before the first address and the first data operation request, the data processing apparatus deletes the log in the target entry corresponding to the data address requested to be written before the first data operation request, and records the log corresponding to the first data operation request in the target entry.
Illustratively, the size of one cache line is 64 bytes, and the size of one entry is also 64 bytes. Therefore, if the length (Length) of the Payload requested by the first data operation request is greater than the preset byte (e.g., 64 bytes), the portion of the Payload equal to the preset byte is recorded in the target entry, and the portion of the Payload beyond the preset byte is recorded in the entry determined by the first address and the preset byte, until the entire Payload requested by the first data operation request is recorded in the log list.
For example, if the Length of the Payload of the first data operation request is greater than 64 bytes, let X' = X + 64, repeat the determination of the target entry with X', and store the remaining Payload of the first data operation request in the target entry determined by X', until all of the Payload of the first data operation request has been processed. For example, suppose the Length of the Payload of the first data operation request is 192 bytes and X = 256 corresponds to entry 1; then bytes 0 to 63 of the Payload may be stored in entry 1. The entry determined by X' = X + 64 = 256 + 64 = 320 is entry 5, so bytes 64 to 127 are stored in entry 5. The entry determined by X' = X' + 64 = 320 + 64 = 384 is entry 6, so bytes 128 to 191 are stored in entry 6.
Specifically, S1021 in the embodiment of the present application may be implemented as follows: and the data processing device determines a cache line corresponding to the first address. And the data processing device determines the target entry from the N entries according to the mapping relation between the N cache lines and the N entries and the cache lines corresponding to the first addresses.
It is to be understood that S1021 and S1022 in the embodiment of the present application are also applicable to the case when the log list is a log queue.
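Combining S1021, S1022 and the splitting rule above, the recording path could look roughly like the sketch below. It reuses the log_entry and log_entry_index sketches from earlier; the in-place overwrite of an Entry that already holds a log for the same cache line is the single-level elimination described above, and all names and constants remain assumptions.

```c
#include <stdint.h>
#include <string.h>

#define CACHE_LINE_BYTES 64
#define N_ENTRIES        4096

struct log_entry {
    uint8_t  opcode, valid;
    uint64_t pa;
    uint8_t  data[CACHE_LINE_BYTES];
};

static struct log_entry log_list[N_ENTRIES];   /* resides in MRAM in practice */

static inline uint64_t log_entry_index(uint64_t x)
{
    return (x / CACHE_LINE_BYTES) % N_ENTRIES; /* E = F(X), assumed form */
}

/* Record the log for a first data operation request <opcode, first_addr,
 * payload[len]>. A Payload longer than one cache line is spread over the
 * entries determined by first_addr, first_addr + 64, first_addr + 128, ...
 * An Entry that already holds a log for the same cache line is simply
 * overwritten, so only the latest operation on that cache line is kept. */
static void record_log(uint8_t opcode, uint64_t first_addr,
                       const uint8_t *payload, uint32_t len)
{
    uint64_t x = first_addr;
    uint32_t done = 0;

    while (done < len) {
        uint32_t chunk = len - done;
        if (chunk > CACHE_LINE_BYTES)
            chunk = CACHE_LINE_BYTES;

        struct log_entry *e = &log_list[log_entry_index(x)];
        e->opcode = opcode;
        e->pa     = x;                      /* X' for this chunk            */
        memcpy(e->data, payload + done, chunk);
        e->valid  = 1;

        done += chunk;
        x    += CACHE_LINE_BYTES;           /* X' = X + 64 for the next one */
    }
}
```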
Assume the first RDMA Write operation request corresponds to the E-th Entry of the Log List. According to the foregoing scheme, if a second RDMA Write operation request corresponds to the same cache line as the first RDMA Write operation request, the log corresponding to the first RDMA Write operation request recorded in the E-th Entry must be deleted, and the log corresponding to the second RDMA Write operation request is recorded in the E-th Entry. If the storage system crashes suddenly at this time, then because the second RDMA Write operation request has only been recorded in the RNIC and has not yet been written into the LLC, the log corresponding to the first RDMA Write operation request that was deleted from the E-th Entry will have no chance to be redone or undone, and the SCM operation corresponding to it is not persisted.
To this end, the Log List in the embodiment of the present application may include multiple levels of sub-log lists, as shown in fig. 8, where the N entries included in each level of sub-log list are mapped one-to-one, and the levels of sub-log lists have different priorities (that is, each level of sub-log list has a different priority). Assume there are P levels of sub-log lists in total, where the sub-log list numbered 0 has the highest priority and the sub-log list numbered P-1 has the lowest priority. After receiving the first data operation request, according to the elimination mechanism, and assuming the first data operation request corresponds to the E-th Log List Entry, the data processing apparatus first migrates the log recorded in the E-th Entry of the (P-2)-th sub-log list to the E-th Entry of the (P-1)-th sub-log list, and so on, until it migrates the log recorded in the E-th Entry of the 0-th sub-log list to the E-th Entry of the 1st sub-log list, and then records the log corresponding to the first data operation request in the E-th Entry of the 0-th sub-log list.
It should be noted that, if the corresponding Entry at some level holds no log during migration, the logs in that Entry position at all lower-priority levels do not need to be migrated.
Specifically, as shown in fig. 9, the method provided in the embodiment of the present application further includes:
S104, the data processing device receives a second data operation request, wherein the cache line corresponding to the second data operation request is the same as the cache line corresponding to the first data operation request.
S105, when the data processing device determines that the first entry corresponding to the second address records the log corresponding to the first operation request, the data processing device sequentially migrates the log recorded on the first entry in the previous-level sub-log list in the multi-level sub-log list to the first entry in the next-level sub-log list adjacent to the previous-level sub-log list according to a preset priority order.
Wherein, the preset priority order is the order of the priority from high to low.
S106, the data processing device records the log corresponding to the second data operation request on the first entry in the sub-log list with the highest priority.
It is understood that, when the preset priority order is the order of priority from low to high, the data processing apparatus records the log corresponding to the second data operation request on the first entry in the sub-log list with the lowest priority.
For example, as shown in fig. 10, before the second data operation request is received, the logs recorded in the 2nd Entry of each level of sub-log list are as shown in fig. 10. After the second data operation request is received, because the second data operation request and the first data operation request correspond to the same cache line (cache line 3), the log recorded in the 2nd Entry of each level of sub-log list is migrated backward in turn until Log 4 is eliminated. In this case, the 2nd Entry of the level-0 sub-log list becomes empty, so Log 5, which corresponds to the second data operation request, can be written into the 2nd Entry of the level-0 sub-log list.
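A hedged sketch of the cascade in S104–S106 follows: if level 0 already holds a log for the entry the new request maps to, the logs in that entry position are pushed one level down (the one at the lowest level being eliminated, like Log 4 above), and the new log is written at level 0. The number of levels P, the entry layout, and all names are illustrative assumptions.

```c
#include <stdint.h>
#include <string.h>

#define CACHE_LINE_BYTES 64
#define N_ENTRIES        4096
#define P_LEVELS         4              /* number of sub-log lists (assumed) */

struct log_entry {
    uint8_t  opcode, valid;
    uint64_t pa;
    uint8_t  data[CACHE_LINE_BYTES];
};

/* levels[0] has the highest priority, levels[P_LEVELS-1] the lowest. */
static struct log_entry levels[P_LEVELS][N_ENTRIES];

static inline uint64_t log_entry_index(uint64_t x)
{
    return (x / CACHE_LINE_BYTES) % N_ENTRIES;
}

/* Record a log whose first address maps to entry e. If level 0 already
 * holds a log for this entry, migrate level P-2 -> P-1, ..., level 0 ->
 * level 1 (S105), then write the new log into level 0 (S106). Whatever was
 * in the lowest level for this entry is eliminated. */
static void record_log_multilevel(uint8_t opcode, uint64_t pa,
                                  const uint8_t *data, uint32_t chunk)
{
    uint64_t e = log_entry_index(pa);

    if (levels[0][e].valid) {
        for (int lvl = P_LEVELS - 2; lvl >= 0; lvl--) {
            if (!levels[lvl][e].valid)
                continue;                        /* nothing to migrate at this level */
            levels[lvl + 1][e] = levels[lvl][e]; /* push the log one level down      */
            levels[lvl][e].valid = 0;
        }
    }

    if (chunk > CACHE_LINE_BYTES)
        chunk = CACHE_LINE_BYTES;
    levels[0][e].opcode = opcode;
    levels[0][e].pa     = pa;
    memcpy(levels[0][e].data, data, chunk);
    levels[0][e].valid  = 1;
}
```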
As a possible implementation manner, before step S102, the embodiment of the present application further includes: if the data processing apparatus determines that the Packet Sequence Number (PSN) of the first data operation request is not equal to the expected PSN (ePSN), the data processing apparatus sends a response operation request to the data sending apparatus, where the response operation request is used to instruct the data sending apparatus to retransmit the data or to indicate a data transmission error. In this way, when the written data is erroneous, the data sending apparatus can retransmit the data to be written in time.
Illustratively, the data transmission device may be the client in fig. 2.
It is understood that in the embodiment of the present application, when the data processing apparatus determines that the packet sequence number PSN of the first data operation request is equal to the expected ePSN, S102 is executed.
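A minimal sketch of the PSN check that gates S102 is given below. The 24-bit wrap follows common RDMA transports; the response path is only indicated by the return value, and everything here is an assumption rather than the patent's definition.

```c
#include <stdint.h>
#include <stdbool.h>

/* Compare the packet sequence number of the incoming first data operation
 * request with the expected ePSN. Only on a match is the log recorded
 * (S102); on a mismatch the caller sends the response operation request
 * asking the data sending apparatus to retransmit or reporting an error. */
static bool psn_matches(uint32_t psn, uint32_t *epsn)
{
    if (psn != *epsn)
        return false;                /* caller issues the response operation request */
    *epsn = (*epsn + 1) & 0xFFFFFFu; /* advance the 24-bit expected PSN              */
    return true;
}
```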
As shown in fig. 11, after step S103, the embodiment of the present application further includes:
S107, when the data processing apparatus fails, the data processing apparatus requests, according to each entry in the log list, the data sending apparatus to reinitiate the corresponding operation, so as to recover the corresponding cache-line operations according to the content in the Log List.
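Recovery then amounts to walking the log list that survived in the non-volatile medium and asking the data sending apparatus to reinitiate each recorded operation. A hedged sketch, reusing the log_entry layout assumed earlier; request_reissue() stands in for whatever message the apparatus would actually send.

```c
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE_BYTES 64

struct log_entry {
    uint8_t  opcode, valid;
    uint64_t pa;
    uint8_t  data[CACHE_LINE_BYTES];
};

/* Placeholder: ask the data sending apparatus to reinitiate the operation
 * described by one surviving log entry. */
static void request_reissue(const struct log_entry *e)
{
    printf("reissue opcode %u at PA 0x%llx\n",
           (unsigned)e->opcode, (unsigned long long)e->pa);
}

/* S107: after a failure, replay every valid entry that survived in the
 * non-volatile log list towards the data sending apparatus. */
static void recover_from_log(struct log_entry *log_list, int n)
{
    for (int i = 0; i < n; i++)
        if (log_list[i].valid)
            request_reissue(&log_list[i]);
}
```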
The above-mentioned scheme of the embodiment of the present application is introduced mainly from the perspective of interaction between network elements. It is to be understood that each network element, such as a data processing device, etc., includes corresponding hardware structures and/or software modules for performing each function in order to realize the functions. Those of skill in the art would readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the data processing apparatus may perform the division of the functional units according to the method, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
The following description will be given by taking the division of each function module corresponding to each function as an example:
in the case of an integrated unit, fig. 12 shows a schematic diagram of a possible structure of a data processing apparatus, which may be a server or a chip applied to a server, according to the above embodiments. The data processing apparatus includes: a receiving unit 201, a processing unit 202 and a storage unit 203.
Wherein, the receiving unit 201 is used to support the data processing apparatus to execute steps S101 and S104 in the above embodiments. The processing unit 202 is used to support the data processing apparatus to execute steps S102, S1022, S105, and S106 in the above-described embodiments.
The storage unit 203 is used to support the data processing apparatus to execute step 103 in the above-described embodiment.
Optionally, the data processing apparatus further includes: a determining unit 204 and a transmitting unit 205, wherein the determining unit 204 is used for supporting the data processing apparatus to execute the step S1021 in the above embodiment, and the transmitting unit 205 is used for supporting the data processing apparatus to execute the step S107 in the above embodiment. All relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
In the case of an integrated unit, fig. 13 shows a schematic diagram of a possible logical structure of the data processing apparatus in the above embodiments, which may be the server in the above embodiments or a chip applied in the server. The data processing apparatus includes: a processing module 212 and a communication module 213. The processing module 212 is used to control and manage the operation of the data processing apparatus; for example, the processing module 212 is used to execute the steps of message or data processing on the data processing apparatus side, and the communication module 213 is used to execute the steps of receiving and sending messages or data on the data processing apparatus side.
For example, as a possible implementation manner, the processing module 212 is used to support the data processing apparatus to execute S102, S1021, S1022, S105 and S106 in the above embodiments. The communication module 213 is used to support the data processing apparatus to execute S101, S104 and S107 in the above embodiments. And/or other processes performed by data processing apparatus for use with the techniques described herein.
Optionally, the data processing apparatus may further comprise a storage module 211 for storing program codes and data of the data processing apparatus. For example, for performing S103.
The processing module 212 may be a processor or controller, such as a central processing unit, a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. A processor may also be a combination of computing functions, e.g., comprising one or more microprocessors, a digital signal processor, a combination of microprocessors, and the like. The communication module 213 may be a transceiver, a transceiving circuit or a communication interface, etc. The storage module 211 may be a memory.
When the processing module 212 is the processor 1012, the communication module 213 is the communication component 1016, and the storage module 211 is the memory 1011, the data processing apparatus according to the present application may be the device shown in fig. 3.
In one aspect, a computer-readable storage medium is provided, in which instructions are stored, and when executed, cause a server or a chip applied in the server to perform S101, S102, S1021, S1022, S103, S104, S105, S106, and S107 in the embodiments. And/or other processes performed by the server or chips applied in the server for the techniques described herein.
The aforementioned readable storage medium may include: a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, an optical disc, or other media that can store program code.
In one aspect, a computer program product is provided, which includes instructions stored therein, and when executed, causes a server or a chip applied in the server to perform S101, S102, S1021, S1022, S103, S104, S105, S106, and S107 in the embodiment. And/or other processes performed by the server or chips applied in the server for the techniques described herein.
In one aspect, a chip is provided, where the chip is applied in a server, and the chip includes one or more (including two) processors and an interface circuit, where the interface circuit and the one or more (including two) processors are interconnected by a line, and the processors are configured to execute instructions to perform S101, S102, S1021, S1022, S103, S104, S105, S106, and S107 in the embodiment. And/or other server-executed processes for the techniques described herein.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When software is used, the implementation may be wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are wholly or partially generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (e.g., coaxial cable, optical fiber, or Digital Subscriber Line (DSL)) or a wireless manner (e.g., infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a server or a data center, integrating one or more available media. The available media may be magnetic media (e.g., a floppy disk, a hard disk, or a magnetic tape), optical media (e.g., a DVD), or semiconductor media (e.g., a Solid State Disk (SSD)), among others.
While the present application has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary of the present application as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the present application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to include such modifications and variations.

Claims (18)

1. A data processing method applied to a data processing apparatus having a non-volatile fast storage medium, the method comprising:
the data processing device receives a first data operation request, wherein the first data operation request comprises data and a first address corresponding to the data, and the first address is a storage address of the data in the data processing device;
the data processing device records a log corresponding to the first data operation request in a log list according to the first data operation request; the log corresponding to the first data operation request comprises the data, the first address and the operation type of the first data operation request; the log list is stored in the non-volatile fast storage medium;
the data processing apparatus stores the data.
2. The data processing method according to claim 1, wherein the data processing apparatus includes N cache lines, the log list includes N entries, the N entries are mapped to the N cache lines one by one, one entry is used to record a log corresponding to a data operation request sent to the data processing apparatus, N is an integer greater than or equal to 1, the data processing apparatus records the log corresponding to the first data operation request in the log list according to the first data operation request, and the method includes:
the data processing device determines a target entry corresponding to the first address from the N entries according to the first address;
and the data processing device records a log corresponding to the first data operation request on the target entry according to the first data operation request.
3. A data processing method according to claim 2, wherein if the Payload requested by the first data operation request has a length greater than a preset byte, a part of the Payload requested by the first data operation request equal to the preset byte is recorded in the target entry, and a part of the Payload requested by the first data operation request other than the preset byte is recorded in other entries determined by the first address and the preset byte.
4. A data processing method according to claim 2 or 3, wherein the determining, by the data processing apparatus according to the first address, of a target entry corresponding to the first address from the N entries includes:
the data processing device determines a cache line corresponding to the first address;
and the data processing device determines the target entry from the N entries according to the mapping relation between the N cache lines and the N entries and the cache line corresponding to the first address.
5. A data processing method according to any of claims 1-4, wherein the log list comprises a multi-level sub-log list, the N entries included in the multi-level sub-log list are mapped one to one, and the levels of the multi-level sub-log list have different priorities, the method further comprising:
the data processing device receives a second data operation request, wherein the second data operation request comprises a second address, and the cache line corresponding to the second address is the same as the cache line corresponding to the first data operation request;
when the data processing device determines that a log corresponding to the first data operation request is recorded on a first entry corresponding to the second address, the data processing device sequentially migrates, according to a preset priority order, the logs recorded on the first entry in a previous-level sub-log list in the multi-level sub-log list to the first entry in the next-level sub-log list adjacent to the previous-level sub-log list;
and the data processing device records the log corresponding to the second data operation request on the first entry in the sub-log list with the highest priority.
6. A data processing method according to any one of claims 1 to 5, wherein the data processing apparatus stores the data, comprising:
and the data processing device stores, in a direct memory access (DMA) manner, the data in a cache line corresponding to the first address in the data processing device.
7. A data processing method according to any of claims 1-6, characterized in that the method further comprises:
and if the data processing device determines that the packet sequence number (PSN) of the first data operation request is not equal to an expected packet sequence number (ePSN), the data processing device sends a response operation request to a data sending device, wherein the response operation request is used to instruct the data sending device to retransmit the data or to indicate a data transmission error.
8. A data processing method according to any one of claims 1 to 7, characterized in that the method further comprises:
and when a failure occurs in the data processing device, the data processing device requests, according to each entry in the log list, the data sending device to reinitiate the corresponding operation.
9. A data processing apparatus, wherein the data processing apparatus is a server or a chip applied in the server, the data processing apparatus has a non-volatile flash storage medium therein, and the data processing apparatus comprises a receiving unit, a processing unit and a storage unit, wherein:
the receiving unit is configured to receive a first data operation request, wherein the first data operation request comprises data and a first address corresponding to the data, and the first address is a storage address of the data in the data processing apparatus;
the processing unit is configured to record a log corresponding to the first data operation request in a log list according to the first data operation request, wherein the log corresponding to the first data operation request comprises the data, the first address and the operation type of the first data operation request, and the log list is stored in the non-volatile flash storage medium;
and the storage unit is configured to store the data.
10. The data processing apparatus according to claim 9, wherein the data processing apparatus includes N cache lines, the log list includes N entries, the N entries are mapped to the N cache lines one by one, one entry is used to record a log corresponding to a data operation request sent to the data processing apparatus, N is an integer greater than or equal to 1, and the data processing apparatus further includes:
a determining unit, configured to determine, according to the first address, a target entry corresponding to the first address from the N entries;
the processing unit is specifically configured to record, according to the first data operation request, a log corresponding to the first data operation request in the target entry.
11. The data processing apparatus according to claim 10, wherein the processing unit is further configured to: if the determining unit determines that the length of the payload of the first data operation request is greater than a preset number of bytes, record the part of the payload equal to the preset number of bytes into the target entry, and record the part of the payload beyond the preset number of bytes into an entry determined by the first address and the preset number of bytes.
12. The data processing apparatus according to claim 10 or 11, wherein the determining unit is specifically configured to: determine a cache line corresponding to the first address; and determine the target entry from the N entries according to the mapping relation between the N cache lines and the N entries and the cache line corresponding to the first address.
13. The data processing apparatus according to any one of claims 9 to 12, wherein the log list includes a multi-level sub-log list, the N entries included in the multi-level sub-log list are mapped one to one, and the levels of the multi-level sub-log list have different priorities; the receiving unit is further configured to receive a second data operation request, wherein the second data operation request comprises a second address, and the cache line corresponding to the second address is the same as the cache line corresponding to the first data operation request;
the processing unit is specifically configured to: when the determining unit determines that the log corresponding to the first data operation request is recorded in the first entry corresponding to the second address, sequentially migrate, according to a preset priority order, the log recorded in the first entry in a previous-level sub-log list in the multi-level sub-log list to the first entry in the next-level sub-log list adjacent to the previous-level sub-log list; and record the log corresponding to the second data operation request in the first entry in the sub-log list with the highest priority.
14. The data processing apparatus according to any one of claims 9 to 13, wherein the storage unit is specifically configured to store, in a direct memory access (DMA) manner, the data in a cache line corresponding to the first address in the data processing apparatus.
15. A data processing apparatus according to any one of claims 9 to 14, characterized in that the data processing apparatus further comprises:
a sending unit, configured to send a response operation request to the data sending apparatus if the determining unit determines that the packet sequence number (PSN) of the first data operation request is not equal to an expected packet sequence number (ePSN), wherein the response operation request is used to instruct the data sending apparatus to retransmit the data or to indicate a data transmission error.
16. A data processing apparatus according to any of claims 9-15, wherein the sending unit is further configured to request the data sending apparatus to reinitiate the corresponding operation according to each entry in the log list when the data processing apparatus fails.
17. A computer-readable storage medium, in which a computer program or instructions are stored, which, when run on a computer, cause the computer to carry out a data processing method according to any one of claims 1 to 8.
18. A chip comprising a processor and interface circuitry, the interface circuitry being coupled to the processor, the processor being configured to execute a computer program or instructions to implement a data processing method as claimed in any one of claims 1 to 8, the interface circuitry being configured to communicate with other modules external to the chip.
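
For illustration only, the following C sketch shows one way the log list recited in claims 1 to 4 and 6 could be organized: N entries mapped one-to-one to N cache lines, each entry holding the operation type, the first address and the data, with the log recorded before the data itself is stored. The names CACHE_LINE_SIZE, NUM_CACHE_LINES, struct log_entry and handle_write_request, and the use of memcpy in place of a DMA transfer, are assumptions introduced for this sketch and are not taken from the patent.

/*
 * Illustrative sketch only, not the patented implementation: a log list of
 * N entries mapped one-to-one to N cache lines. The log for a request is
 * recorded (operation type, first address, data) before the data is stored.
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CACHE_LINE_SIZE 64u             /* assumed cache-line size in bytes   */
#define NUM_CACHE_LINES 1024u           /* N: cache lines, and log entries    */

enum op_type { OP_WRITE = 1, OP_READ = 2 };

/* One entry of the log list: operation type, target address and the data. */
struct log_entry {
    uint8_t  valid;
    uint8_t  op;
    uint64_t addr;                      /* first address carried by request   */
    uint8_t  data[CACHE_LINE_SIZE];     /* payload, at most one cache line    */
};

/* Log list kept in the non-volatile flash medium: one entry per cache line. */
static struct log_entry log_list[NUM_CACHE_LINES];

/* Map the first address to its cache line, then to the entry index. */
static size_t entry_index_for(uint64_t addr)
{
    uint64_t cache_line = addr / CACHE_LINE_SIZE;
    return (size_t)(cache_line % NUM_CACHE_LINES);
}

/* Record the log first, then store the data; memcpy stands in for the DMA
 * transfer into the cache line selected by the first address. */
void handle_write_request(uint64_t first_addr, const uint8_t *data, size_t len,
                          uint8_t *memory /* backing store of the device */)
{
    struct log_entry *e = &log_list[entry_index_for(first_addr)];

    e->op   = OP_WRITE;
    e->addr = first_addr;
    memcpy(e->data, data, len < CACHE_LINE_SIZE ? len : CACHE_LINE_SIZE);
    e->valid = 1;                       /* log is in place before the store   */

    memcpy(memory + first_addr, data, len);  /* stand-in for the DMA write    */
}

Under this mapping, a payload longer than the preset number of bytes (claim 3) would continue into the entries that follow the one selected by the first address.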
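
Also for illustration only, the following sketch outlines the multi-level sub-log lists with different priorities recited in claims 5 and 13: when the highest-priority entry for a cache line already holds a log, the existing logs are migrated level by level toward the lower-priority lists before the new log is recorded at the top level. NUM_LEVELS, NUM_ENTRIES, LINE_SIZE, struct mini_log and record_second_request are assumptions of this sketch.

/*
 * Illustrative sketch only: multi-level sub-log lists with different
 * priorities. If the highest-priority entry for the cache line already holds
 * a log, the logs are shifted level by level toward the lower-priority lists
 * before the new log is recorded at the top level.
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define NUM_LEVELS  3u       /* assumed number of sub-log lists (priorities)  */
#define NUM_ENTRIES 1024u    /* N entries per level, one per cache line       */
#define LINE_SIZE   64u      /* assumed cache-line size in bytes              */

struct mini_log {
    uint8_t  valid;
    uint8_t  op;
    uint64_t addr;
    uint8_t  data[LINE_SIZE];
};

/* sub_log[0] has the highest priority, sub_log[NUM_LEVELS - 1] the lowest. */
static struct mini_log sub_log[NUM_LEVELS][NUM_ENTRIES];

void record_second_request(size_t entry_idx, uint8_t op, uint64_t addr,
                           const uint8_t *payload, size_t len)
{
    /* A log for an earlier request already occupies this entry: migrate each
     * level's log to the adjacent lower-priority level, bottom level first. */
    if (sub_log[0][entry_idx].valid) {
        for (size_t lvl = NUM_LEVELS - 1; lvl > 0; lvl--)
            sub_log[lvl][entry_idx] = sub_log[lvl - 1][entry_idx];
    }

    /* Record the new log in the entry of the highest-priority sub-log list. */
    struct mini_log *e = &sub_log[0][entry_idx];
    e->op   = op;
    e->addr = addr;
    memcpy(e->data, payload, len < LINE_SIZE ? len : LINE_SIZE);
    e->valid = 1;
}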
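
Finally, under the same caveat, the following sketch illustrates the packet-sequence-number check of claims 7 and 15 and the failure-recovery replay of claims 8 and 16. The request and response structures and the send_to_peer stub are assumptions; a real device would carry these response operation requests over its transport to the data sending device.

/*
 * Illustrative sketch only: PSN check and failure-recovery replay.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct data_op_request {
    uint32_t psn;            /* packet sequence number carried by the request */
    uint64_t addr;
    uint8_t  op;
};

enum resp_kind { RESP_RETRANSMIT = 1, RESP_ERROR = 2, RESP_REPLAY = 3 };

struct response_op {
    int      kind;           /* one of resp_kind                              */
    uint64_t addr;
    uint8_t  op;
};

/* Stub standing in for the transport back to the data sending device. */
static void send_to_peer(const struct response_op *resp)
{
    printf("response: kind=%d addr=0x%llx op=%u\n",
           resp->kind, (unsigned long long)resp->addr, (unsigned)resp->op);
}

/* Claim 7: if the PSN differs from the expected PSN (ePSN), ask the sender
 * to retransmit instead of logging and storing the data. */
bool check_psn(const struct data_op_request *req, uint32_t *expected_psn)
{
    if (req->psn != *expected_psn) {
        struct response_op resp = { RESP_RETRANSMIT, req->addr, req->op };
        send_to_peer(&resp);
        return false;        /* the request is not processed further          */
    }
    (*expected_psn)++;       /* advance the ePSN for the next request         */
    return true;
}

/* Claim 8: after a failure, walk every valid entry of the log list and ask
 * the sender to reinitiate the corresponding operation. */
struct replay_entry { uint8_t valid; uint8_t op; uint64_t addr; };

void replay_log(const struct replay_entry *entries, size_t n_entries)
{
    for (size_t i = 0; i < n_entries; i++) {
        if (!entries[i].valid)
            continue;
        struct response_op resp = { RESP_REPLAY, entries[i].addr, entries[i].op };
        send_to_peer(&resp);
    }
}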
CN201810779536.6A 2018-07-16 2018-07-16 Data processing method and device Active CN110727632B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211292680.XA CN115658593A (en) 2018-07-16 2018-07-16 Data processing method and device
CN201810779536.6A CN110727632B (en) 2018-07-16 2018-07-16 Data processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810779536.6A CN110727632B (en) 2018-07-16 2018-07-16 Data processing method and device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202211292680.XA Division CN115658593A (en) 2018-07-16 2018-07-16 Data processing method and device

Publications (2)

Publication Number Publication Date
CN110727632A true CN110727632A (en) 2020-01-24
CN110727632B CN110727632B (en) 2022-11-11

Family

ID=69216892

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202211292680.XA Pending CN115658593A (en) 2018-07-16 2018-07-16 Data processing method and device
CN201810779536.6A Active CN110727632B (en) 2018-07-16 2018-07-16 Data processing method and device

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202211292680.XA Pending CN115658593A (en) 2018-07-16 2018-07-16 Data processing method and device

Country Status (1)

Country Link
CN (2) CN115658593A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101515246A (en) * 2008-12-29 2009-08-26 卡斯柯信号有限公司 Method for processing multi-stage log messages of ITS automatic train monitoring system
EP2270665A2 (en) * 2009-06-22 2011-01-05 Citrix Systems, Inc. Systems and methods for web logging of trace data in a multi-core system
CN103270499A (en) * 2011-12-21 2013-08-28 华为技术有限公司 Log storage method and system
CN103546579A (en) * 2013-11-07 2014-01-29 陈靓 Method for improving availability of distributed storage system through data logs
CN107346290A (en) * 2016-05-05 2017-11-14 西部数据科技股份有限公司 Zoned logic is reset to physical data address conversion table using parallelization log list
CN107544913A (en) * 2016-06-29 2018-01-05 北京忆恒创源科技有限公司 A kind of FTL tables fast reconstructing method and device
CN107329696A (en) * 2017-06-23 2017-11-07 华中科技大学 A kind of method and system for ensureing data corruption uniformity

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XU ZHEN: "Research on dependency of flushing of pending commit transaction log in kernelized multilevel secure database system", 《CHINESE JOURNAL OF COMPUTERS》 *
曾思望: "基于固态盘的分布式块存储系统缓存技术研究", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *

Also Published As

Publication number Publication date
CN110727632B (en) 2022-11-11
CN115658593A (en) 2023-01-31

Similar Documents

Publication Publication Date Title
US11916781B2 (en) System and method for facilitating efficient utilization of an output buffer in a network interface controller (NIC)
US20180375782A1 (en) Data buffering
CN107995129B (en) NFV message forwarding method and device
KR101941416B1 (en) Networking Technologies
US5193149A (en) Dual-path computer interconnect system with four-ported packet memory control
US5187780A (en) Dual-path computer interconnect system with zone manager for packet memory
US5020020A (en) Computer interconnect system with transmit-abort function
US10175891B1 (en) Minimizing read latency for solid state drives
US8248945B1 (en) System and method for Ethernet per priority pause packet flow control buffering
CN113918101B (en) Method, system, equipment and storage medium for writing data cache
US9747233B2 (en) Facilitating routing by selectively aggregating contiguous data units
EP3077914B1 (en) System and method for managing and supporting virtual host bus adaptor (vhba) over infiniband (ib) and for supporting efficient buffer usage with a single external memory interface
EP1554644A2 (en) Method and system for tcp/ip using generic buffers for non-posting tcp applications
US11226778B2 (en) Method, apparatus and computer program product for managing metadata migration
CN111641566A (en) Data processing method, network card and server
US20150074316A1 (en) Reflective memory bridge for external computing nodes
US11231964B2 (en) Computing device shared resource lock allocation
US20110116511A1 (en) Directly Providing Data Messages To A Protocol Layer
CN110727632B (en) Data processing method and device
US9665519B2 (en) Using a credits available value in determining whether to issue a PPI allocation request to a packet engine
US9804959B2 (en) In-flight packet processing
WO2022151766A1 (en) Io request pipeline processing device, method and system, and storage medium
US20110258282A1 (en) Optimized utilization of dma buffers for incoming data packets in a network protocol
KR20010095103A (en) An intelligent bus interconnect unit
CN115576660A (en) Data receiving method, device, equipment and storage medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
TA01: Transfer of patent application right
  Effective date of registration: 20211227
  Address after: 450046 Floor 9, building 1, Zhengshang Boya Plaza, Longzihu wisdom Island, Zhengdong New Area, Zhengzhou City, Henan Province
  Applicant after: Super fusion Digital Technology Co.,Ltd.
  Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen
  Applicant before: HUAWEI TECHNOLOGIES Co.,Ltd.
GR01: Patent grant