WO2020118650A1 - Method for quickly sending a write data preparation complete message, and device and system for quickly sending a write data preparation complete message - Google Patents


Info

Publication number
WO2020118650A1
WO2020118650A1 (PCT/CN2018/121054)
Authority
WO
WIPO (PCT)
Prior art keywords
data
page
data page
information table
information
Prior art date
Application number
PCT/CN2018/121054
Other languages
English (en)
Chinese (zh)
Inventor
胡泉波
徐启明
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to PCT/CN2018/121054 priority Critical patent/WO2020118650A1/fr
Priority to CN201880014873.4A priority patent/CN111642137A/zh
Publication of WO2020118650A1 publication Critical patent/WO2020118650A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • G06F 13/42 Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F 13/4204 Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus
    • G06F 13/4221 Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being an input/output bus, e.g. ISA bus, EISA bus, PCI bus, SCSI bus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0877 Cache access modes
    • G06F 12/0882 Page mode
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/10 Address translation
    • G06F 12/1009 Address translation using page tables, e.g. page table structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14 Handling requests for interconnection or transfer
    • G06F 13/16 Handling requests for interconnection or transfer for access to memory bus
    • G06F 13/1668 Details of memory controller
    • G06F 13/1673 Details of memory controller using buffers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

Definitions

  • This application relates to the field of information technology, and in particular to a method, device, and system for quickly sending a write data preparation complete message.
  • A storage area network (SAN) is a high-speed, easily expandable, and widely used data storage network that is independent of the computer local area network (LAN).
  • A SAN connects servers and storage devices together and can provide a dedicated communication channel between any server and any storage device on it.
  • A SAN separates the storage devices from the servers and realizes storage resource sharing at the server level.
  • A SAN mainly includes servers, network equipment, and storage equipment. According to the network type, SANs can be divided into Internet Protocol storage area networks (IP-SAN) and Fibre Channel storage area networks (FC-SAN).
  • IP-SAN uses IP channels to connect servers and storage devices.
  • FC-SAN uses Fibre Channel to connect servers and storage devices.
  • An FC-SAN needs dedicated hardware as the data channel connecting the server and the storage array; this hardware includes Fibre Channel host bus adapter (FC-HBA) cards, FC switches, fiber optic cables, optical modules, etc.
  • The servers in the SAN use the small computer system interface (SCSI) protocol to send data, which is transmitted to the storage array through the FC or IP channel.
  • The data is processed at the "block level".
  • When the server needs to store data, it sends a SCSI write data command to the storage array; when the server needs to read data, it sends a SCSI read data command to the storage array.
  • Take the server issuing a SCSI write data command as an example. After the storage array allocates the data storage pages, it sends a response message to the server that write data preparation is complete. Completing this step requires the cooperation of multiple modules inside the storage array. Because multiple modules participate in the processing, the storage array returns the response message with too much delay, which affects the efficiency of data storage.
  • Embodiments of the present application provide a method, device, and system for quickly sending a write data preparation complete message to reduce the delay when sending a write data preparation complete message and improve the efficiency of business processing.
  • an embodiment of the present application provides a storage device, including a processor, a network interface card, and a memory, where:
  • the processor is configured to create a data page information table in the memory by running a driver of the network interface card, and the data page information table is used to record information of free data pages in the memory;
  • the network interface card is used to receive a write data request and, according to the information of the data pages recorded in the data page information table, allocate a data page recorded in the table to cache the data to be stored by the write data request, and send a response message to the sender of the write data request that write data preparation is complete; the write data request is a request to write data to the storage device.
  • When the network interface card receives the write data request, it obtains the information of a free data page from the memory, and the free data page can be used to cache the data to be written by the write data request. In this way, the network interface card can directly obtain the information of available data pages from the memory, complete the write data preparation, and send a response message to the sender of the write data request that write data preparation is complete. Since multiple modules in the storage device no longer need to cooperate to determine the available cache space, the time from the network interface card receiving the write data request to the network interface card sending the response message is shortened, and the efficiency of processing the write data request service is improved.
  • The network interface card allocating a data page recorded in the data page information table to cache the data to be stored by the write data request means that the network interface card allocates a free data page for caching the data of the write data request.
  • the storage device is a storage array
  • the processor is a central processing unit (CPU)
  • the memory is a main memory, such as random access memory (RAM)
  • the network interface card can be an FC-HBA card or an Ethernet card.
  • the FC-HBA card is also called a Fibre Channel interface card.
  • the network interface card and the processor may be connected by a Peripheral Component Interconnect Express (PCIe) link.
  • the network interface card and the memory may also be connected through a PCIe link.
  • the processor creates a data page information table in the memory by running a driver of the network interface card during initialization of the network interface card.
  • The data page information table may be created in memory managed by the operating system (OS) of the storage device, or in the base address register (BAR) address space in the memory of the network interface card.
  • The data page information table is an information table directly accessible by the network interface card, and it is also directly accessed by the processor through running the driver of the network interface card.
  • the memory includes a cache, which is used to temporarily store data written to the storage device or data read from the storage device; the information of the idle data pages recorded in the data page information table is the information of the idle data pages in the cache.
  • the processor is further configured to apply for a free data page from the cache by running a driver of the network interface card, and configure the information of the applied free data page to In the data page information table.
  • By running the driver of the network interface card, the processor may determine the number of idle data pages to configure in the data page information table according to a preset ratio, or dynamically determine that number according to a preset algorithm.
  • The driver of the network interface card is a program in the operating system of the storage device; by running the driver in the operating system, the processor applies for idle data pages from the cache and configures the information of the applied idle data pages into the data page information table.
  • the data page information table includes multiple entries, and each entry is used to record information of an idle data page. That is, an entry in the data page information table records information of an idle data page, and each entry records information of a different idle data page.
  • the information of the idle data page includes physical address information and length information of the idle data page.
  • the data page information table is a page swap queue created in the memory, and the page swap queue contains multiple entries in the form of a queue.
  • the data page information table may also be a linked list or an array created in the memory.
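The structure described above, a table of entries each recording the physical address and length of one free data page, organized as a page swap (exchange) queue with the driver as producer and the network interface card as consumer, can be sketched as a small user-space simulation in C. All names, the queue depth, and the push/pop interface are illustrative assumptions, not details taken from the application:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One entry of the data page information table: the physical address
 * and length of a free data page (field names are illustrative). */
typedef struct {
    uint64_t phys_addr;  /* physical address of the free data page */
    uint32_t length;     /* length of the data page in bytes */
} page_entry_t;

/* The page swap queue: a fixed-size ring of entries.  The driver
 * (producer) pushes free pages in; the network interface card
 * (consumer) pops a page when a write data request arrives. */
#define QUEUE_DEPTH 8

typedef struct {
    page_entry_t entries[QUEUE_DEPTH];
    size_t head;   /* next entry the NIC will consume */
    size_t tail;   /* next slot the driver will fill  */
    size_t count;  /* number of free pages currently queued */
} page_swap_queue_t;

/* Driver side: configure a free page into the queue. */
int queue_push(page_swap_queue_t *q, uint64_t phys_addr, uint32_t length) {
    if (q->count == QUEUE_DEPTH)
        return -1;                       /* queue full */
    q->entries[q->tail] = (page_entry_t){ phys_addr, length };
    q->tail = (q->tail + 1) % QUEUE_DEPTH;
    q->count++;
    return 0;
}

/* NIC side: allocate (pop) a free page to cache incoming write data. */
int queue_pop(page_swap_queue_t *q, page_entry_t *out) {
    if (q->count == 0)
        return -1;                       /* no free page available */
    *out = q->entries[q->head];
    q->head = (q->head + 1) % QUEUE_DEPTH;
    q->count--;
    return 0;
}
```

The same interface could equally back a linked list or an array; the ring form matches the queue variant described above.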
  • the network interface card includes a controller and a network port; wherein,
  • the network port is used to receive the write data request
  • the controller is configured to allocate, based on the information of the data pages recorded in the data page information table, a data page recorded in the table to cache the data to be stored by the write data request, and to send a response message to the sender of the write data request that write data preparation is complete.
  • the processor is further configured to configure at least one of the following information into the controller, by running the driver of the network interface card according to the command format of the controller: the starting address of the page exchange queue, the number of entries, and the format of the entries.
  • the processor is configured to configure the idle data page requested from the cache into the page exchange queue by running the driver of the network interface card according to a command format preset by the controller .
  • the controller is further configured to migrate the data to be stored by the received write data request into the cache space corresponding to the allocated data page; the processor is further configured to, by running the driver of the network interface card, swap free data pages in the memory, such as free data pages in the cache, into the data page information table.
  • Swapping free data pages in the cache into the data page information table ensures that there are enough free data pages in the table for allocation, and avoids business interruption caused by a shortage of free data pages.
  • the processor is further configured to convert a virtual address of the data page information table in the memory to a physical address by running a driver of the network interface card;
  • the network interface card accesses the data page information table through the physical address of the data page information table.
  • an embodiment of the present application provides a network interface card, including a controller and a network port; wherein,
  • the network port is used to receive a write data request through a network, where the write data request is a request to write data to a storage device, and the storage device receives, through the network interface card, requests to write data and/or the data to be stored;
  • the controller is configured to allocate a data page from the data pages recorded in the data page information table to cache the data to be stored by the write data request, and to send a response message to the sender of the write data request that write data preparation is complete; the data page information table is an information table in the memory of the storage device, and the data pages are idle data pages in the memory.
  • When the network port receives a write data request, the controller obtains the information of an idle data page from the data page information table in the memory of the storage device, and the idle data page can be used to cache the data to be written by the write data request. In this way, the controller in the network interface card can directly obtain the information of available data pages from the memory, complete the write data preparation, and send a response message to the sender of the write data request that write data preparation is complete. Since multiple modules in the storage device no longer need to cooperate to determine the available cache space, the time from receiving the write data request to sending the response message is shortened, and the efficiency of processing the write data request service is improved.
  • The controller allocating a data page recorded in the data page information table to cache the data to be stored by the write data request means that the controller allocates an idle data page for caching the data of the write data request.
  • the network interface card is an FC-HBA card or an Ethernet card.
  • the memory includes a cache, which is used to temporarily store data written to the storage device or data read from the storage device; the information of the idle data pages recorded in the data page information table is the information of the idle data pages in the cache.
  • the controller is further configured to receive configuration information of the data page information table and to access the data page information table according to that configuration information; the configuration information includes the starting address of the data page information table and the number and format of its entries.
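The configuration information just listed (the table's starting address plus the number and format of its entries) is enough for the controller to locate any entry in the table. A minimal C sketch, with invented field names and sizes that are not taken from the application:

```c
#include <assert.h>
#include <stdint.h>

/* Configuration the driver hands to the NIC controller so it can
 * locate and parse the data page information table (names are
 * illustrative, not from the application). */
typedef struct {
    uint64_t table_base;    /* starting (physical) address of the table */
    uint32_t num_entries;   /* number of entries in the table           */
    uint32_t entry_size;    /* format: size in bytes of a single entry  */
} nic_table_config_t;

/* Given the configuration, compute the address of entry i, which is
 * how the controller would index the table when allocating a page. */
uint64_t entry_addr(const nic_table_config_t *cfg, uint32_t i) {
    assert(i < cfg->num_entries);
    return cfg->table_base + (uint64_t)i * cfg->entry_size;
}
```

With a base of 0x1000 and 12-byte entries, entry 2 would sit at 0x1000 + 24.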
  • the network port is also used to receive, through the network, the data to be written by the write data request;
  • the controller is also used to cache the data and migrate it into the memory corresponding to the allocated data page.
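The migration of received write data into the memory corresponding to the allocated data page can be mimicked in user space with a plain copy; in the real device the controller would move the data by DMA rather than through the CPU. The pool size, page size, and function name below are illustrative assumptions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* A user-space stand-in for the cache memory backing the allocated
 * data pages; in hardware this copy would be performed by the
 * controller via DMA instead of memcpy. */
#define PAGE_SIZE 4096

static uint8_t cache_pool[4 * PAGE_SIZE];  /* simulated cache space */

/* Migrate received write data into the cache region corresponding to
 * the allocated data page (identified here by a page index). */
void migrate_to_page(size_t page_index, const void *data, size_t len) {
    assert(len <= PAGE_SIZE);
    memcpy(&cache_pool[page_index * PAGE_SIZE], data, len);
}
```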
  • an embodiment of the present application provides a method for processing a write data request, including:
  • the data page information table is an information table in the memory of the storage device, and the data pages are free data pages in the memory;
  • when a write data request is received, the method obtains the information of an idle data page from the memory, and the idle data page can be used to cache the data to be written by the write data request.
  • The information of available data pages can thus be obtained directly from the memory, the write data preparation can be completed, and a response message can be sent to the sender of the write data request that write data preparation is complete.
  • The time from receiving the write data request to sending the response message is shortened, and the efficiency of processing the write data request service is improved.
  • The allocated data page is used to cache the data to be stored by the write data request, and is an idle data page allocated for caching that data.
  • the data page information table may be created in a memory managed by the operating system, or the data page information table may be created in other non-cache space of the memory.
  • the memory includes a cache, which is used to temporarily store data written to the storage device or data read from the storage device; the information of the idle data pages recorded in the data page information table is the information of the idle data pages in the cache.
  • the method further includes:
  • the data page information table includes multiple entries, and each entry is used to record information of an idle data page.
  • the information of the data page includes physical address information and length information of the idle data page.
  • the page information table is a page exchange queue created in the memory, and the page exchange queue contains multiple entries in the form of a queue.
  • the method further includes:
  • At least one of the following information is configured into the controller: the starting address of the data page information table, the number of entries, and the format of the entries.
  • an embodiment of the present application provides an information processing system, including a server and a storage device, where the server is used to write data to or read data from the storage device;
  • the storage device includes a processor, a network interface card, and a memory, wherein:
  • the processor is configured to create a data page information table in the memory by running a driver of the network interface card, and the data page information table is used to record information of free data pages in the memory;
  • the network interface card is used to receive a write data request sent by the server, allocate, according to the information of the data pages recorded in the data page information table, a data page recorded in the table to cache the data to be stored by the write data request, and send a response message to the server that write data preparation is complete.
  • When receiving the write data request sent by the server, the network interface card obtains the information of an idle data page from the memory, and the idle data page can be used to cache the data to be written by the write data request. In this way, the network interface card can directly obtain the information of available data pages from the memory, complete the write data preparation, and send a response message to the server that write data preparation is complete. Since multiple modules in the storage device no longer need to cooperate to determine the available cache space, the time from the network interface card receiving the write data request to sending the response message is shortened, and the efficiency of processing the write data request service is improved.
  • The network interface card allocating a data page recorded in the data page information table to cache the data to be stored by the write data request means that the network interface card allocates a free data page for caching the data of the write data request.
  • the memory includes a cache, which is used to temporarily store data written to the storage device or data read from the storage device; the information of the idle data pages recorded in the data page information table is the information of the idle data pages in the cache.
  • the processor is further configured to apply for a free data page from the cache by running a driver of the network interface card, and configure the information of the applied free data page to In the data page information table.
  • the data page information table includes multiple entries, and each entry is used to record information of an idle data page.
  • the information of the idle data page includes physical address information and length information of the idle data page.
  • the data page information table is a page swap queue created in the memory, and the page swap queue contains multiple entries in the form of a queue.
  • the network interface card includes a controller and a network port; wherein,
  • the network port is used to receive the write data request
  • the controller is configured to allocate, based on the information of the data pages recorded in the data page information table, a data page recorded in the table to cache the data to be stored by the write data request, and to send a response message to the sender of the write data request that write data preparation is complete.
  • the controller is further configured to migrate the data to be stored in the received write data request to the memory corresponding to the allocated data page;
  • the processor is also used to exchange a free data page in the memory, for example, a free data page in the cache, into the data page information table by running a driver of the network interface card.
  • the processor is further configured to convert a virtual address of the data page information table in the memory to a physical address by running a driver of the network interface card;
  • the network interface card accesses the data page information table through the physical address of the data page information table.
  • the present application provides a computer storage medium for storing computer software instructions for controlling a chip, which includes a program designed to execute the above-mentioned third aspect.
  • the present application provides a computer program.
  • When a control chip in a computer device or server runs the computer program, the control chip performs the function of the network interface card described in the second aspect.
  • FIG. 1A is a schematic structural diagram of an implementation manner of an FC-SAN system composed of a server and a storage array;
  • FIG. 1B is a schematic diagram of the specific structure of the storage array in FIG. 1A;
  • FIG. 1C is a schematic diagram of the specific structure of the server in FIG. 1A;
  • FIG. 2 is a schematic flowchart of an implementation manner when a server stores data in a storage array
  • FIG. 3 is a schematic structural diagram of an implementation manner of a storage array 300 provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a method for creating a page exchange queue process provided by an embodiment of this application;
  • FIG. 5 is a schematic flowchart of a processing method after the storage array 300 receives the write data request sent by the server 400;
  • FIG. 6 is a schematic flowchart of a method for a storage array 300 according to an embodiment of the present application to receive data and store data;
  • FIG. 7 is a schematic structural diagram of a storage device 700 provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a network interface card 800 provided by an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of a method for processing a write data request according to an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of an information processing system 100 provided by an embodiment of the present application.
  • A storage array is an array composed of multiple hard disks, used as a large resource pool. It stores data across different hard drives in a striping manner. When data is accessed, the related hard drives in the array work together, which not only guarantees reliability but also greatly reduces data access time, and offers better space utilization.
  • FIG. 1A is a schematic structural diagram of an implementation manner of an FC-SAN system.
  • the FC-SAN system includes a server, a storage array, and an optical module switch.
  • The FC-HBA card 1 in the server and the FC-HBA card 2 in the storage array are each connected to the optical module switch (also called an FC switch), and communicate through the FC switch.
  • FIG. 1A is just a schematic diagram of a simple structure.
  • the server may further include a CPU and a main memory (e.g., random access memory (RAM)).
  • the storage array may also include one or more of a disk controller, a hard disk drive (HDD), and a solid-state disk (SSD). It can be understood that, in specific implementation, other connection devices may also be included between the storage array and the server.
  • In some implementations, the optical module switch is not present between the storage array and the server; that is, the storage array and the server are directly connected and communicate directly.
  • The server in FIG. 1A needs to read or store data during operation; storing or reading data includes, but is not limited to, writing data to or reading data from the storage array.
  • The storage array receives the server's write data request through the network and writes the data to be stored to the disks of the storage array; or it transmits the data on the storage array's disks to the server according to the server's read data request.
  • the optical module switch in FIG. 1A is mainly used for forwarding messages or data between the server and the storage array.
  • FIG. 1B is a schematic diagram of a specific structure of the storage array in FIG. 1A.
  • the storage array includes a physical layer and a software layer.
  • the physical layer includes but is not limited to CPU, FC-HBA card 2, memory and disk.
  • the software layer includes but is not limited to a driver module and a cache module.
  • the driver module is mainly used for driving and initializing the FC-HBA card 2, and so on;
  • the cache module is used for managing cache resources in the memory. It can be understood that the driver module and the cache module are software modules in the operating system of the storage array, implemented by the CPU of the storage array executing the relevant programs.
  • FIG. 1C is a schematic diagram of a specific structure of the server in FIG. 1A.
  • the server includes a software layer and a physical layer.
  • the physical layer includes but is not limited to the CPU and the FC-HBA card 1
  • the software layer includes but is not limited to the application module.
  • the application module may be an application program in the server operating system, which completes and implements certain functions, and is usually implemented by the CPU of the server executing relevant programs.
  • When the application needs to write data to or read data from the storage array through the network, it sends the related commands or data to the FC-HBA card 1 through the CPU in the server, and receives related data or commands through the FC-HBA card 1.
  • For example, when video software needs to store a certain video, it needs to write the relevant video data into the storage array.
  • The server sends the write data request and the video data to be stored to the storage array through the optical module switch; the storage array receives, through the network, the write data request and the video data forwarded by the optical module switch, and stores the video data on the disks of the storage array.
  • the process of writing data by the server to the storage array includes:
  • Step S101 When the application module in the server needs to store data, the server sends a write data command to the storage array through the FC-HBA card 1;
  • Step S102 The FC-HBA card 1 in the server sends the write data command to the storage array through the network;
  • Step S103 The FC-HBA card 2 in the storage array receives the write data command, obtains information of the data to be stored according to the write data command and sends it to the drive module of the storage array;
  • Step S104 the driver module applies for an idle data page from the cache module of the storage array according to the information of the data to be stored;
  • Step S105 the cache module allocates an idle data page for storing the data to be cached by the write data command, and sends information such as the address of the allocated idle data page to the drive module;
  • Step S106 the driver module sends the received address and other information of the idle data page to the FC-HBA card 2;
  • Step S107 The FC-HBA card 2 records the address information of the allocated data pages and sends a message to the FC-HBA card 1 in the server that write data preparation is complete;
  • Step S108 After receiving, through the FC-HBA card 1, the message that write data preparation is complete, the server sends the data to be stored to the storage array;
  • Step S109 The FC-HBA card 2 receives the data to be stored through the network and, according to the recorded address information of the data pages allocated for storing the data, writes the data into the allocated data pages by direct memory access (DMA), and notifies the drive module that data reception is complete;
  • Step S110 The drive module notifies the cache module, and the cache module triggers the storing of the data to be stored in the disk according to the notification of the drive module.
  • Step S111 The cache module returns a write completion message and sends it to the application module in the server through the drive module, FC-HBA card 2, and FC-HBA card 1.
  • When the cache module confirms that all of the data to be stored in the cache space has been stored on the disk of the storage array or backed up to the backup storage devices, that is, after the data to be stored has been written to a persistent storage disk, the cache module sends a write completion message to the driver module, and the drive module sends the write completion message to the server through the FC-HBA card 2 to notify the application module in the server.
  • Because multiple components in the storage array, such as FC-HBA card 2, the drive module, and the cache module, need to cooperate with each other, the storage array returns the write data preparation completion message to the server with a long delay, usually more than 20 microseconds.
  • Moreover, when the storage array cannot allocate pages for caching because of the large number of read and write requests that need to be processed, the delay before the storage array returns an error code to the server is also long.
  • Embodiments of the present application provide a method, device, and system for quickly sending a write data preparation completion message, to reduce the delay from the storage array receiving a write data command to returning a write data preparation completion message, and improve the efficiency of data storage.
  • FIG. 3 is a schematic structural diagram of an implementation manner of a storage array 300 provided by an embodiment of the present application.
  • the storage array 300 includes an FC-HBA card 301, a CPU 302, and a memory 303.
  • the FC-HBA card 301 and the CPU 302 are connected by a PCIe link
  • the FC-HBA card 301 and the memory 303 are connected by a PCIe link
  • the CPU and the memory communicate by PCIe or other methods.
  • the CPU 302 controls the storage array 300 by running the operating system 306.
  • the memory 303 provides space for the operating system 306 to run and caches the data to be stored in or read from the storage array 300.
  • the operating system 306 includes but is not limited to the FC driver 304 and the cache module 305.
  • the FC driver 304 can communicate with the cache module 305 either directly or indirectly.
  • For example, the FC driver 304 can communicate with the cache module 305 through other modules.
  • the FC-HBA card 301 includes but is not limited to an FC chip 3011 and an FC port 3012.
  • the FC chip 3011 is the core of the FC-HBA card 301, and may be an FC protocol and data processing chip, mainly used to receive the data to be stored sent by the server or to send the data that the server needs to read to the server.
  • the FC port 3012 is a port on the FC-HBA card 301 to communicate with the server.
  • the FC-HBA card 301 can usually include 2 or 4 FC ports (FIG. 3 uses 4 as an example), and an optical module can be inserted into the FC port 3012.
  • the physical channel for input/output transmission includes but is not limited to a full-duplex input/output channel.
  • the operating system 306 is the operating basis of the software running in the storage array 300, and is mainly used to manage hardware resources in the storage array 300, such as the memory 303 and the bus, and to provide an operating platform for software modules such as the FC driver 304 and the cache module 305.
  • the FC driver 304 is mainly used to initialize the FC chip 3011 in the FC-HBA card 301, and to control the FC-HBA card 301 to receive the data that the server needs to store or to send the data that the server needs to read.
  • the cache module 305 is mainly used to manage the data pages in the cache resource pool 3032, write the data in the cache resource pool 3032 to the disk (not shown in FIG. 3) of the storage array 300, or cache the data in the disk of the storage array 300 into the cache resource pool 3032.
  • the cache module 305 may also include functions such as initializing the cache resource pool 3032 and executing data hit algorithms.
  • the memory 303 mainly includes an operating system (operating system, OS) managed memory 3031 and a cache resource pool 3032.
  • the memory 3031 managed by the OS is mainly used to support the operation of the operating system and the initialization of the hardware (including but not limited to the initialization of the FC-HBA card 301).
  • the cache resource pool 3032 is mainly used for caching data that the server needs to store or read.
  • FIG. 4 is a schematic flowchart of a method for creating a page exchange queue process provided by an embodiment of the present application. As shown in FIG. 4, the method includes:
  • Step S401 The FC driver 304 is run to initialize the FC chip 3011;
  • the CPU 302 may initialize the FC chip 3011 by running a program related to the FC driver 304.
  • For convenience of description, in the embodiments of the present application, the statement that the FC driver 304 implements a related method or performs a related function means that the CPU 302 implements the related method or performs the related function by running a program related to the FC driver 304.
  • Step S402 The FC driver 304 applies for a part of memory from the memory managed by the operating system 306, and creates a page exchange queue according to the preset configuration requirements of the FC chip 3011; the page exchange queue can be directly accessed by the FC chip 3011.
  • the page exchange queue records idle data page information in the form of a queue, and may contain multiple entries, each of which records the information of one idle data page. Taking a page exchange queue with 4 entries as an example, entry0 records the page information of page0 in the cache resource pool 3032, entry1 records the page information of page1 in the cache resource pool 3032, entry2 records the page information of page2 in the cache resource pool 3032, and entry3 records the page information of page3 in the cache resource pool 3032.
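  • The entry layout described above can be modeled as an illustrative sketch (not part of the claimed implementation). The field names `phys_addr`, `length`, and `allocated` are assumptions, since the text specifies only that an entry records the physical address and length of an idle data page, plus optional identification information:

```python
from dataclasses import dataclass

@dataclass
class Entry:
    # Field names are illustrative assumptions: an entry records the physical
    # address and length of an idle data page, plus an identification flag
    # showing whether the page has been allocated for buffering data.
    phys_addr: int
    length: int
    allocated: bool = False

PAGE_SIZE = 4096  # 4 KB data pages, one of the common sizes mentioned above

# A page exchange queue with 4 entries: entry0..entry3 record page0..page3
# from the cache resource pool (addresses are made-up example values).
pool_pages = [0x10000000 + i * PAGE_SIZE for i in range(4)]
page_exchange_queue = [Entry(addr, PAGE_SIZE) for addr in pool_pages]

for i, e in enumerate(page_exchange_queue):
    print(f"entry{i}: page @ {hex(e.phys_addr)}, {e.length} bytes")
```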
  • the creation of the page exchange queue in step S402 may be implemented during the initialization of the FC chip 3011 by the FC driver 304.
  • each chip is preset with some basic configuration requirements before shipment, including but not limited to: commands and interfaces for communication between the chip and the driver, and the commands include parameters of the queue and the like.
  • the parameters of the page exchange queue may include: upper and lower limits of elements (such as entries) that the page exchange queue can support, and the format of the entry.
  • the format of the entry in the page swap queue preset by the FC chip 3011 is:
  • the FC driver 304 may create a page exchange queue according to the format of the entry preset by the FC chip 3011.
  • the page exchange queue created by the FC driver 304 may contain N entries, where N is a positive integer greater than or equal to 2.
  • the entry is an entry in the queue for recording basic information of a data page, including but not limited to the physical address of the data page and the length of the data page.
  • the data page is the smallest unit of a common memory management method, and its size can be 4KB or 8KB.
  • the entry may also include identification information of each entry, etc. The identification information is used to indicate whether an idle data page recorded for the entry has been allocated for buffering data.
  • the FC driver 304 can apply for a part of memory other than the specific memory reserved by the OS from the memory managed by the OS, which is used to create a page exchange queue.
  • the specific memory includes memory necessary for the operation of the OS, or memory reserved in the OS by other external devices. These memories are either used exclusively for OS operation or for other external devices. The FC driver 304 cannot access these memories.
  • the address corresponding to the memory requested by the FC driver 304 from the memory managed by the OS is a virtual address.
  • Therefore, the FC driver 304 needs to convert the virtual address of the created page exchange queue into a physical address, and allocate the physical address to the FC chip 3011 when the FC chip 3011 is initialized. In this way, the FC chip 3011 can directly access the page exchange queue; that is, the FC chip 3011 can directly read the information of the idle data pages recorded in the page exchange queue without the participation of other software modules or hardware, for example, without obtaining the information of the idle data pages through the driver.
  • When the FC driver 304 converts the virtual address into a physical address, a general address conversion method can be used.
  • the operating system 306 will address hardware devices (including memory 303) uniformly and maintain a mapping relationship between a virtual address and a physical address.
  • the FC driver 304 may perform translation between virtual addresses and physical addresses based on the mapping relationship maintained by the operating system 306.
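  • The translation described above can be sketched as a page-table-style lookup. This is only an illustration of the general method; the mapping values and 4 KB granularity are assumed, and a real driver would use the OS-provided translation facility rather than a dictionary:

```python
PAGE_SIZE = 4096  # assume 4 KB granularity for the mapping

# Stand-in for the virtual-to-physical mapping maintained by the OS:
# virtual page number -> physical page frame number (example values).
os_page_map = {0x7f000: 0x10000, 0x7f001: 0x10234}

def virt_to_phys(vaddr: int) -> int:
    """Translate a virtual address into a physical address using the
    OS-maintained mapping, as the FC driver does for the queue address."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    return os_page_map[vpn] * PAGE_SIZE + offset

# The queue's virtual start address becomes a physical address the chip can use.
print(hex(virt_to_phys(0x7f000 * PAGE_SIZE + 0x40)))  # 0x10000040
```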
  • FC driver 304 can also directly access the page swap queue created by the FC driver 304, that is, the FC driver 304 and the FC chip 3011 share the created page swap queue.
  • Step S403 The FC driver 304 applies for M data pages from the cache resource pool to cache the data that the storage array 300 needs to store;
  • M is a positive integer greater than or equal to 1.
  • the M data pages requested by the FC driver 304 from the cache resource pool are idle data pages, and these data pages can be used to cache the data that the storage array 300 needs to store.
  • the number M of data pages requested by the FC driver 304 from the cache resource pool may be determined in different ways under different scenarios.
  • One way is to configure it according to the read-write ratio of typical scenarios of the storage service. For example, a common database service has a read-write ratio of 7:3, so the FC driver 304 can apply for 30% of the data pages in the cache resource pool 3032 and share them in the page exchange queue.
  • Another way is to flexibly configure the value of M to avoid insufficient data pages for caching or idle resources. For example, a configuration interface is provided, and the value configured by the user according to the requirements of different business scenarios is received through the configuration interface.
  • the FC driver 304 applies to the cache resource pool 3032 for the data page according to the received value of M.
  • In another way, specific software in the storage array 300 can also automatically adjust the number M of data pages in the page exchange queue according to the actual business situation of the storage array 300 through a preset algorithm.
  • the embodiments of the present application do not limit specific implementation manners.
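  • The read-write-ratio configuration described above reduces to simple proportional arithmetic. As a sketch (the pool size of 10000 pages is a made-up example; only the 7:3 ratio comes from the text):

```python
def pages_for_writes(pool_pages: int, read_ratio: int, write_ratio: int) -> int:
    """Number of cache-pool data pages to share into the page exchange
    queue, proportional to the write share of the workload."""
    return pool_pages * write_ratio // (read_ratio + write_ratio)

# Example: a 10000-page cache resource pool and the 7:3 read-write ratio
# of a typical database service -> 30% of the pool is shared in the queue.
m = pages_for_writes(10000, 7, 3)
print(m)  # 3000
```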
  • Step S404 The FC driver 304 configures the information of the applied data pages into the entries of the page exchange queue;
  • the information of the data page includes but is not limited to the physical address and length information of the data page.
  • the information of a data page can be configured into an entry of the page exchange queue.
  • Each entry separately records information on different data pages.
  • For example, the FC driver 304 records the physical address and length information of an applied data page into the variable corresponding to an entry.
  • Step S405 The FC driver 304 configures the information of the page exchange queue to the FC chip 3011.
  • the information of the page exchange queue includes one or more of the starting address, the number of entries, and the format of the entry of the page exchange queue.
  • Generally, before a chip leaves the factory, a preset command set is configured, and the driver of the chip performs related configuration according to the preset command set.
  • the FC driver 304 may configure the information of the page exchange queue into the FC chip 3011 according to the command set preset by the FC chip 3011.
  • the pre-configured command set of the FC chip 3011 is:
  • the FC driver 304 fills in the command set with the starting address, number of entries, and other queue information of the page exchange queue according to the format requirements of the command set, and then sends the filled command set to the FC chip 3011.
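  • Filling the command set can be sketched as building a small configuration record. The text does not give the actual command-set format, so the command name and field names below are purely hypothetical stand-ins:

```python
def build_queue_config(start_addr: int, num_entries: int, entry_format: str) -> dict:
    """Fill a chip's preset command set with the page exchange queue info,
    as the FC driver does before sending it to the FC chip. All field names
    here are assumed; the real format is chip-specific."""
    return {
        "cmd": "CONFIG_PAGE_QUEUE",      # hypothetical command name
        "queue_start_addr": start_addr,  # physical start address of the queue
        "entry_count": num_entries,      # number of entries in the queue
        "entry_format": entry_format,    # layout description of one entry
    }

cmd = build_queue_config(0x10000040, 4, "phys_addr:u64,length:u32,flags:u32")
print(cmd["queue_start_addr"], cmd["entry_count"])
```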
  • the FC chip 3011 can access the page exchange queue and obtain the information of the free data pages in the cache resource pool 3032 through the page exchange queue.
  • In this way, the FC chip 3011 can determine, from the page exchange queue, the information of the data pages that can be used to cache data according to the write data request sent by the server, and return a write data preparation completion response message to the server.
  • In other embodiments, after the page exchange queue is created in step S402 above, the FC driver 304 may also configure or share the information of the page exchange queue (including but not limited to the start address, the number of entries, and the format of the entries of the page exchange queue) with the FC chip 3011.
  • FIG. 5 is a schematic flowchart of a processing method after the initialized storage array 300 in FIG. 4 receives the write data request sent by the server 400.
  • the server 400 and the storage array 300 are connected through a network.
  • For example, the server 400 and the storage array 300 may be connected through optical modules and a switch.
  • When the application module in the server 400 needs to store data, it sends a write data command to the storage array 300 through the FC-HBA card 401.
  • the method includes:
  • Step S501 When an application module in the server 400 (usually a software module in the service layer in the server) needs to store data, the server 400 sends a write data request to the storage array 300 through the FC-HBA card 401;
  • the write data request sent by the server 400 may be implemented by sending a write data command.
  • the embodiment of the present application does not limit the specific implementation manner of the server 400 sending the write data request.
  • Step S502 The FC-HBA card 301 in the storage array 300 receives the write data request
  • the FC-HBA card 301 receives the write data request sent by the server 400 through the network or directly through the FC port 3012.
  • Step S503 The FC-HBA card 301 parses the write data request and allocates the data page;
  • the FC chip 3011 in the FC-HBA card 301 parses the write data request sent by the server 400, and allocates a data page for buffering the data to be stored in the write data request according to the data page recorded in the page exchange queue.
  • For example, the FC chip 3011 allocates the data page recorded in entry0 of the page exchange queue to cache the data to be stored in the write data request. If the data to be stored in the write data request requires multiple data pages for caching, the FC chip 3011 allocates the data pages recorded in multiple entries of the page exchange queue to cache the data to be stored in the write data request.
  • the FC-HBA card 301 can allocate data pages according to the information of the data pages recorded in the entry of the page exchange queue in the following manner:
  • each entry in the page exchange queue has an identification bit, which is used to identify whether the idle data page recorded by the entry has been occupied. For example, the identification bit of an entry is 0, indicating that the free data page in the entry is not allocated; the identification bit of an entry is 1, indicating that the free data page in the entry has been allocated.
  • the FC-HBA card 301 can select an entry to allocate an idle data page according to the identification bit of each entry.
  • In another manner, the FC-HBA card 301 allocates idle data pages starting from the entry at the head of the page exchange queue, and records the position of the last used entry after each allocation. The next time data pages are allocated, the corresponding idle data pages are allocated starting from the entry following the last used entry recorded previously.
  • Idle data pages exchanged in from the cache resource pool are appended in sequence at the tail of the queue, which ensures that the page exchange queue has sufficient idle data pages for the FC-HBA card 301 to allocate.
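  • The two allocation manners above (the per-entry identification bit, and resuming from the position after the last used entry) can be sketched together. The class and method names are illustrative; the chip implements this in hardware:

```python
class PageExchangeQueue:
    """Sketch of the allocation manners described above: each entry carries
    an identification flag (0 = free, 1 = allocated), and the queue records
    the position of the last used entry so that the next allocation resumes
    from the following entry."""

    def __init__(self, pages):
        # Each entry: [physical address, allocated flag].
        self.entries = [[addr, 0] for addr in pages]
        self.last_used = -1  # position of the last entry handed out

    def allocate(self):
        n = len(self.entries)
        for step in range(1, n + 1):
            i = (self.last_used + step) % n  # resume after the last used entry
            if self.entries[i][1] == 0:      # flag bit says the page is free
                self.entries[i][1] = 1
                self.last_used = i
                return self.entries[i][0]
        return None  # no idle data page currently recorded in the queue

q = PageExchangeQueue([0x1000, 0x2000, 0x3000])
print(hex(q.allocate()), hex(q.allocate()))  # 0x1000 0x2000
```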
  • Step S504 The FC-HBA card 301 sends a response message to the server 400 that the write data preparation is complete.
  • After the FC chip 3011 in the FC-HBA card 301 successfully allocates an idle data page for caching the data to be stored in the write data request, a write data preparation completion response message is sent to the server 400 through the FC port 3012 to notify the server 400 that the storage array 300 is ready to receive data.
  • Because the FC chip 3011 no longer requests the cache module 305, through the FC driver 304, to allocate data pages in the cache resource pool 3032, the time delay caused by applying for data pages through the FC driver 304 and the cache module 305 is avoided, and the speed and efficiency of returning the write data preparation completion message to the server are improved.
  • FIG. 6 is a schematic flowchart of a method for a storage array 300 according to an embodiment of the present application to receive data and store data.
  • the method flow is a process in which the storage array 300 receives the data sent by the server 400 and performs storage after sending a response message to the server 400 that the write data preparation is completed according to the method shown in FIG. 5.
  • the storage array in FIG. 6 further includes a hard disk management module 307 and a storage space 308.
  • the hard disk management module 307 is used to manage the storage space 308 in the storage array 300, including but not limited to writing the data in the cache resource pool 3032 into the storage space 308.
  • the storage space 308 is a physical space where the storage array 300 stores data, and may be an HDD, an SSD, or the like.
  • the hard disk management module 307 may also include multiple sub-modules, and the functions of the hard disk management module 307 may be jointly implemented by multiple different sub-modules, which is not specifically limited in the embodiments of the present application.
  • the storage array 300 receives and stores data as follows:
  • Step 1 The FC port 3012 of the FC-HBA card 301 receives the data delivered by the server, and the FC chip 3011 caches the data received by the FC port 3012 in the hardware cache space inside the FC chip 3011;
  • Step 2 The FC chip 3011 obtains the information of the allocated data page from the page exchange queue
  • FIG. 6 takes the data page recorded by entry0 as the allocated data page as an example for description.
  • The entry0 is an entry allocated when the FC chip 3011 receives the write data request sent by the server 400, and the information of the allocated data page is recorded in entry0, for example, the entry allocated by the FC chip 3011 in the above step S503 for recording the allocated data page information.
  • When the data pages allocated by the FC chip 3011 in the above step S503 occupy multiple entries, the FC chip 3011 correspondingly acquires the information of the data pages from the multiple entries.
  • Step 3 FC chip 3011 migrates the received data to the data page recorded in entry 0 through DMA;
  • the FC chip 3011 migrates the received data to the storage space of the data page (eg, page0) corresponding to the physical address by DMA according to the physical address of the data page (eg, page0) recorded in entry0.
  • the storage space of the data page is the storage space in the cache resource pool 3032.
  • Step 4 The FC chip 3011 sends a notification message to the FC driver 304 that the data reception is completed, and the notification message includes the physical address of the data page page 0;
  • Step 5 The FC driver 304 sends a notification to the cache module 305 to notify the cache module 305 to process the data in the data page page 0;
  • Step 6 The cache module 305 notifies the hard disk management module 307 to process the data in the data page page 0;
  • Step 7 The hard disk management module 307 stores the data in the data page page 0 into the corresponding storage space 308;
  • Step 8 The cache module 305 notifies the FC driver 304 of the physical address of an idle data page (for example, pagem);
  • Because the data page page0 in the page exchange queue has been occupied, an idle data page in the cache resource pool 3032 needs to be swapped into the page exchange queue. In this step, the cache module 305 notifies the FC driver 304 of the idle data page, which is used to replace the already occupied data page page0.
  • When idle data pages in the cache resource pool 3032 are exchanged into the page exchange queue, the same number of idle data pages can be determined from the cache resource pool 3032, and the determined idle data pages can be configured into the page exchange queue.
  • Here, the same number refers to the number of data pages recorded in the page exchange queue that have been allocated for caching data.
  • For example, if the data received in step 1 needs to occupy 3 data pages, after the FC chip 3011 migrates the received data to the cache space corresponding to the 3 data pages by DMA, 3 more idle data pages are determined from the cache resource pool 3032 and configured into the page exchange queue. In this way, it can be ensured that there are enough data pages in the page exchange queue for the FC chip 3011 to allocate for data caching.
  • The embodiments of the present application do not limit the execution order of step 7 and step 8; the two steps may be executed in parallel, or step 8 may be executed first and then step 7.
  • the number M of data pages requested by the FC driver 304 from the cache resource pool may also be greater than the number of data pages required by actual business requirements.
  • the storage array 300 needs 500 data pages to cache the data to be stored within a certain period of time, and the FC driver 304 can apply for 1000 data pages from the cache resource pool.
  • The 500 additional data pages among the data pages requested by the FC driver 304 can avoid caching failures caused by insufficient data pages when some idle data pages have not yet been exchanged into the page exchange queue.
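  • The consume-and-replenish cycle of steps 1-9 can be sketched with two queues. The pool size, queue size, and addresses are made-up example values; only the rule that the same number of idle pages is appended at the tail comes from the text:

```python
from collections import deque

# Illustrative pool of idle page addresses in the cache resource pool.
cache_resource_pool = deque(0x20000000 + i * 0x1000 for i in range(8))

# The page exchange queue initially holds M pages (here M = 4 for brevity).
page_exchange_queue = deque(cache_resource_pool.popleft() for _ in range(4))

def consume(n: int):
    """The FC chip allocates n pages from the queue to cache received data."""
    return [page_exchange_queue.popleft() for _ in range(n)]

def replenish(n: int):
    """The cache module hands the driver the same number of idle pages, which
    are appended at the tail of the page exchange queue (steps 8 and 9)."""
    for _ in range(n):
        page_exchange_queue.append(cache_resource_pool.popleft())

used = consume(3)       # e.g. the received data occupies 3 data pages
replenish(len(used))    # 3 more idle pages are swapped into the queue
print(len(page_exchange_queue))  # back to 4
```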
  • Step 9 The FC driver 304 configures the information of the data page pagem (including but not limited to the physical address and size of the page) into entry0.
  • FIGS. 3-6 are described by taking the FC-SAN scenario in which the storage array receives the write data request sent by the server through the FC-HBA as an example. It can be understood that, in other scenarios, the implementation manner when a device with storage capability receives a write data request sent by another device through a network interface card can be implemented by referring to the implementation manners shown in FIG. 3 to FIG. 6 described above.
  • the storage array can receive the write data request sent by the server through the Ethernet interface card and perform corresponding processing.
  • the Ethernet network interface card may be a network interface card that supports data forwarding and offloading.
  • a page exchange queue may also be created in other parts of the memory, for example, the BAR address space of the FC-HBA card 301 in the memory.
  • From the perspective of enabling the FC chip 3011 to quickly acquire idle data page information in the cache, the page exchange queue may also be created in the cache space of the FC chip 3011. In this way, when the FC chip 3011 receives a write data request, it can quickly obtain the information of an idle data page from its local cache.
  • However, because the information of the exchanged data pages needs to be transferred between the memory and the FC chip 3011 through a communication channel such as a PCIe link, the efficiency of page exchange will be lower than that of creating the page exchange queue in the memory.
  • The above embodiments take the page exchange queue recording the information of idle data pages in the cache as an example.
  • the free data pages in the cache can also be recorded in other forms, for example, in other ways such as arrays or linked lists. That is, the page exchange queue is only an implementation method for recording data page information, and other record tables capable of recording data page information, such as a data page information table, are within the scope disclosed in the embodiments of the present application.
  • These data page information tables record the information of idle data pages in the cache, and in combination with other technical features provided by the embodiments of the present application, can also solve the technical problems to be solved by the present application. Only in the specific implementation will the method differ slightly due to the characteristics of each form. For example, the addresses of the entries in a page exchange queue are usually contiguous, whereas a linked list determines the address of the next entry through a pointer, so its entry addresses are not necessarily contiguous.
  • FIG. 7 is a schematic structural diagram of a storage device 700 according to an embodiment of the present application.
  • the storage device 700 includes a processor 701, a network interface card 702, and a memory 703, where:
  • the processor 701 is configured to create a data page information table in the memory 703 by running a driver of the network interface card 702, and the data page information table is used to record information of free data pages in the memory ;
  • the network interface card 702 is configured to receive a write data request, allocate, according to the information of the data pages recorded in the data page information table, a data page recorded in the data page information table to cache the data to be stored in the write data request, and send a write data preparation completion response message to the sender of the write data request; wherein, the write data request is a request to write data to the storage device 700.
  • In this way, when the network interface card 702 receives the write data request, it obtains the information of an idle data page from the memory 703, and the idle data page can be used to cache the data to be written by the write data request. The network interface card 702 can thus directly obtain the information of the available cache space from the memory 703, complete the write data preparation, and send a write data preparation completion response message to the sender of the write data request. Since multiple modules in the storage device 700 do not need to cooperate with each other to determine the available cache space, the time from the network interface card 702 receiving the write data request to the network interface card 702 sending the response message is shortened, and the efficiency of processing the write data request service is improved.
  • The memory 703 includes a cache, which is used to temporarily store data written to the storage device 700 or data read from the storage device 700; the information of the idle data pages recorded in the data page information table is information of idle data pages in the cache.
  • the implementation manner of the foregoing storage device 700 may be implemented with reference to the implementation manner of the storage array 300 shown in FIG. 3 to FIG. 6 in the embodiment of the present application.
  • the implementation of the processor 701 can refer to the implementation of the CPU 302 in the storage array 300
  • the implementation of the network interface card 702 can refer to the implementation of the FC-HBA card 301
  • the implementation of the memory 703 can refer to the implementation of the memory 303
  • the implementation method of the data page information table can be implemented by referring to the implementation method of the page exchange queue, which will not be described in detail.
  • The functions or steps implemented by the FC driver 304 shown in FIGS. 3-6 are implemented by the CPU 302 running the driver program of the FC-HBA card 301; that is, the functions or steps implemented by the FC driver 304 may be implemented by the processor 701 running the driver of the network interface card 702.
  • FIG. 8 is a schematic structural diagram of a network interface card 800 according to an embodiment of the present application.
  • the network interface card 800 includes a controller 801 and a network port 802; wherein,
  • the network port 802 is used to receive a write data request through a network, the write data request is a request to write data to a storage device, and the storage device receives, through the network interface card 800, write data requests and/or data to be stored;
  • the controller 801 is configured to allocate a data page from the data pages recorded in the data page information table to cache the data to be stored in the write data request, and send a write data preparation completion response message to the sender of the write data request; wherein, the data page information table is an information table in the memory of the storage device, and the data page is an idle data page in the memory.
  • In this way, when the network port 802 receives the write data request, the controller 801 obtains the information of an idle data page from the data page information table in the memory of the storage device, and the idle data page can be used to cache the data to be written by the write data request. The controller 801 in the network interface card 800 can thus directly obtain the information of the available cache space from the memory, complete the write data preparation, and send a write data preparation completion response message to the sender of the write data request.
  • The time from the network interface card 800 receiving the write data request to sending the write data preparation completion response message is thus shortened, and the efficiency of processing the write data request service is improved.
  • the implementation manner of the network interface card 800 may be implemented by referring to the implementation manner of the FC-HBA card 301 shown in FIGS. 3 to 6 in the embodiment of the present application.
  • the implementation manner of the controller 801 may refer to the implementation manner of the FC chip 3011
  • the implementation manner of the network port 802 may refer to the implementation manner of the FC port 3012.
  • FIG. 9 is a schematic flowchart of a method for processing a write data request according to an embodiment of the present application, including:
  • Step 900 Receive a write data request, which is a request to write data to a storage device
  • Step 902 According to the information of the data page recorded in the data page information table, allocate a data page for buffering the data to be written by the write data request; wherein, the data page information table is in the memory of the storage device Information table, the data page is a free data page in the memory;
  • Step 904 Send a response message to the sender of the data write request to complete the data write preparation.
  • In this way, when a write data request is received, the information of an idle data page is obtained from the memory, and the idle data page can be used to cache the data to be written by the write data request.
  • the available buffer space information can be directly obtained from the memory, the preparation of the write data can be completed, and a response message can be sent to the sender of the write data request to complete the write data preparation.
  • the time from receiving the write data request to sending the write data request response message is shortened, and the efficiency in processing the write data request service is improved.
  • the implementation manner of the above method may be implemented by referring to the implementation manners shown in FIG. 3 to FIG. 6 in the embodiments of the present application.
  • For step 900, reference may be made to the implementation of step S502 in FIG. 5; for step 902, to the implementation of step S503 in FIG. 5; and for step 904, to the implementation of step S504 in FIG. 5; details are not described again.
  • the method flow shown in FIG. 9 can be further implemented by referring to the method flow shown in FIG. 4 and FIG. 6, and details are not described again.
  • FIG. 10 is a schematic structural diagram of an information processing system 100 according to an embodiment of the present application.
  • the information processing system 100 includes a server 101 and a storage device 102.
  • the server 101 is used to write data to or read data from the storage device 102;
  • the storage device 102 includes a processor 1021, a network interface card 1022, and a memory 1023, where:
  • the processor 1021 is configured to create a data page information table in the memory 1023 by running a driver of the network interface card 1022, where the data page information table is used to record the information of free data pages in the memory 1023;
  • the network interface card 1022 is configured to receive the write data request sent by the server 101, allocate, according to the information of the data pages recorded in the data page information table, a data page for caching the data to be written by the write data request, and send a write data preparation complete response message to the server 101.
  • In this way, when receiving the write data request sent by the server 101, the network interface card 1022 obtains the information of a free data page from the memory 1023, and the free data page can be used to cache the data to be written by the write data request. The network interface card 1022 can thus obtain the information of the available buffer space directly from the memory 1023, complete the write data preparation, and send a write data preparation complete response message to the server 101. Since multiple modules in the storage device 102 do not need to cooperate with each other to determine the available buffer space, the time from when the network interface card 1022 receives the write data request to when it sends the write data request response message is shortened, and the efficiency of processing the write data request service is improved.
  • the memory 1023 further includes a cache, which is used to temporarily store data written to the storage device 102 or data read from the storage device 102, and the information of the free data pages recorded in the data page information table is the information of the free data pages in the cache.
  • the implementation manner of the foregoing information processing system 100 may be implemented with reference to the implementation manners of the storage array 300 and the server 400 shown in FIGS. 3 to 6 in the embodiment of the present application.
  • the implementation of the server 101 can refer to the implementation of the server 400
  • the implementation of the processor 1021 can refer to the implementation of the CPU 302 in the storage array 300
  • the implementation of the network interface card 1022 can refer to the implementation of the FC-HBA card 301
  • the implementation of the memory 1023 can refer to the implementation of the memory 303
  • the implementation of the data page information table can refer to the implementation of the page exchange queue; details are not described again.
  • the functions or steps implemented by the FC driver 304 shown in FIGS. 3 to 6 are implemented by the CPU 302 by running the driver program of the FC-HBA card 301; that is, the functions or steps implemented by the FC driver 304 can be implemented by the processor 1021 by running the driver of the network interface card 1022.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the modules is only a division of logical functions; in actual implementation, there may be other division manners. For example, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual couplings or direct couplings or communication connections may be indirect couplings or communication connections through some interfaces, devices, or units, and may also be electrical, mechanical, or other forms of connection.
  • the units described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical units, that is, they may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present invention.
  • each functional module in each embodiment of the present invention may be integrated into one processing unit, or each module may exist alone physically, or two or more modules are integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or software functional unit.
  • If the integrated module is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • Based on such an understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention.
  • the foregoing storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Provided are a method for processing a write data request, a storage device, and a system for processing a write data request. The method comprises: receiving a write data request, the write data request being a request to write data to a storage device; allocating, according to the information of a data page recorded in a data page information table, a data page for caching the data to be written by the write data request, the data page information table being an information table in a memory of the storage device, and the data page being a free data page in the memory; and sending a write data preparation complete response message to the sender of the write data request. By directly acquiring from the memory the information of a data page used for caching, and sending a write data preparation complete response message, the time elapsed from receiving a write data request to sending a write data request response message is shortened, and the efficiency of processing a write data request service is improved.
PCT/CN2018/121054 2018-12-14 2018-12-14 Method for quickly sending a write data preparation complete message, and device and system for quickly sending a write data preparation complete message WO2020118650A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/121054 WO2020118650A1 (fr) 2018-12-14 2018-12-14 Method for quickly sending a write data preparation complete message, and device and system for quickly sending a write data preparation complete message
CN201880014873.4A CN111642137A (zh) 2018-12-14 2018-12-14 Method, device and system for quickly sending a write data preparation complete message

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/121054 WO2020118650A1 (fr) 2018-12-14 2018-12-14 Method for quickly sending a write data preparation complete message, and device and system for quickly sending a write data preparation complete message

Publications (1)

Publication Number Publication Date
WO2020118650A1 (fr)

Family

ID=71076693

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/121054 WO2020118650A1 (fr) 2018-12-14 2018-12-14 Method for quickly sending a write data preparation complete message, and device and system for quickly sending a write data preparation complete message

Country Status (2)

Country Link
CN (1) CN111642137A (fr)
WO (1) WO2020118650A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113918101B (zh) * 2021-12-09 2022-03-15 Suzhou Inspur Intelligent Technology Co., Ltd. Write data cache method, system, device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030079087A1 (en) * 2001-10-19 2003-04-24 Nec Corporation Cache memory control unit and method
US20100023676A1 (en) * 2008-07-25 2010-01-28 Moon Yang-Gi Solid state storage system for data merging and method of controlling the same according to both in-place method and out-of-place method
CN101827071A (zh) * 2008-06-09 2010-09-08 Fortinet Inc. Network protocol aggregation acceleration
US20160314042A1 (en) * 2015-04-27 2016-10-27 Invensas Corporation Preferred state encoding in non-volatile memories

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100349142C (zh) * 2004-05-25 2007-11-14 Institute of Computing Technology, Chinese Academy of Sciences Remote page fetching method and network interface card for a virtual shared storage system
CN103645969B (zh) * 2013-12-13 2017-06-20 Huawei Technologies Co., Ltd. Data replication method and data storage system
CN107077426B (zh) * 2016-12-05 2019-08-02 Huawei Technologies Co., Ltd. Control method, device and system for data read/write commands in an NVMe over Fabric architecture


Also Published As

Publication number Publication date
CN111642137A (zh) 2020-09-08

Similar Documents

Publication Publication Date Title
US9329783B2 (en) Data processing system and data processing method
US20200278880A1 (en) Method, apparatus, and system for accessing storage device
US10838665B2 (en) Method, device, and system for buffering data for read/write commands in NVME over fabric architecture
TWI732110B (zh) 對非揮發性快閃記憶體進行低延遲直接資料存取的系統及方法
US10372340B2 (en) Data distribution method in storage system, distribution apparatus, and storage system
US9395921B2 (en) Writing data using DMA by specifying a buffer address and a flash memory address
RU2640648C2 (ru) Управление ресурсами для доменов высокопроизводительного межсоединения периферийных компонентов
US9734085B2 (en) DMA transmission method and system thereof
US20190155548A1 (en) Computer system and storage access apparatus
US20140195634A1 (en) System and Method for Multiservice Input/Output
US9092426B1 (en) Zero-copy direct memory access (DMA) network-attached storage (NAS) file system block writing
US20070067432A1 (en) Computer system and I/O bridge
AU2015402888B2 (en) Computer device and method for reading/writing data by computer device
US20220222016A1 (en) Method for accessing solid state disk and storage device
WO2016065611A1 (fr) File access method, system and host
WO2023103704A1 (fr) Data processing method, storage device, and processor
US20110246600A1 (en) Memory sharing apparatus
US11604742B2 (en) Independent central processing unit (CPU) networking using an intermediate device
WO2020118650A1 (fr) Method for quickly sending a write data preparation complete message, and device and system for quickly sending a write data preparation complete message
US10430220B1 (en) Virtual devices as protocol neutral communications mediators
KR20200143922A (ko) 메모리 카드 및 이를 이용한 데이터 처리 방법
JP2866376B2 (ja) ディスクアレイ装置
JP2009070359A (ja) ハードディスクレス型コンピュータの起動効率を向上可能な広域通信網起動システム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18943312

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18943312

Country of ref document: EP

Kind code of ref document: A1