CN108897491B - Heterogeneous hybrid memory quick access optimization method and system - Google Patents
- Publication number: CN108897491B
- Application number: CN201810541232.6A
- Authority: CN (China)
- Legal status: Active
Classifications
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/061—Improving I/O performance
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
- G06F3/0685—Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays
Abstract
The invention relates to the technical field of hybrid memories and provides a method and system for optimizing fast access to a heterogeneous hybrid memory. The method comprises: optimizing the command-processing overhead of the NVMe protocol; after this optimization, polling the command queue from the Endpoint when a command is issued; and polling the contents of tracking bits in the receive buffer to test whether the DRAM payload has arrived, where a change in a tracking bit indicates completion of a read command. This resolves the speed mismatch between NVM access and DRAM access, enables fast access to NVM on a heterogeneous hybrid memory platform that extends system memory with NVM, and improves the overall access performance of the system and the performance of the heterogeneous hybrid memory platform.
Description
Technical Field
The invention belongs to the technical field of hybrid memories, and particularly relates to a method and a system for optimizing quick access of a heterogeneous hybrid memory.
Background
With the development of big data and in-memory computing applications, demand for memory capacity and performance keeps growing, but the high cost of Dynamic Random Access Memory (DRAM) is one of the main factors limiting memory-capacity expansion. Non-volatile memory (NVM) is a new storage medium: it offers faster access than a mechanical hard disk, and its non-volatility is attracting growing market attention. NVM Express (NVMe) is a scalable host controller interface standard developed for enterprise and client systems that use PCI Express SSDs. NVMe defines a complete and fairly complex set of data structures and transaction-processing procedures; commands are distributed and processed through multiple command submission queues, command completion queues, and so on. Although the mechanisms NVMe provides are sound, a hybrid memory system needs a simplified, purpose-built protocol so that the access interface can fully exploit the advantages of non-volatile memory and bring its bandwidth to bear to the greatest extent.
At present, NVM is deployed as a PCIe or SAS SSD whose main role is to replace the mechanical hard disk. When NVM is instead used for memory extension, it is typically attached over the PCIe bus and accessed through the NVMe protocol. However, the mechanisms NVMe provides are not entirely suitable for heterogeneous hybrid memory systems: they are complex, poorly matched to this use case, and cannot fully exploit the bandwidth advantage of non-volatile memory. Moreover, the time spent on PCIe packet exchanges can exceed the time needed to transfer a kilobyte of data, and many of these packet exchanges are unnecessary, so NVM access efficiency is low.
Disclosure of Invention
The invention aims to provide a fast access optimization method for a heterogeneous hybrid memory, to address the problems in the prior art that the mechanisms provided by NVMe are not entirely suitable for heterogeneous hybrid memory systems and that NVM (non-volatile memory) access efficiency is low.
The invention is realized as a method for optimizing fast access to a heterogeneous hybrid memory, comprising the following steps:
optimizing the command-processing overhead of the NVMe protocol, where this overhead comprises command acquisition, command-packet parsing, command-packet distribution, and completion-packet transmission;
after the command-processing overhead of the NVMe protocol has been optimized, polling the command queue from the Endpoint when a command is issued;
polling the contents of tracking bits in the receive buffer to test whether the DRAM payload has arrived, where a change in a tracking bit indicates completion of a read command.
As an improvement, the step of optimizing the command-processing overhead of the NVMe protocol specifically comprises:
generating a dedicated command processing unit that controls and executes command acquisition in the NVMe protocol and the transmission flow of completion command packets;
generating an arbiter that arbitrates the distribution of command packets in the NVMe protocol.
As an improvement, the method further comprises:
adding an erase command to the I/O command set, where the erase command controls erasure of the flash and the PCM and records the erase count at each erase location of the flash and the PCM.
As an improvement, the step of polling the command queue from the Endpoint when a command is issued, after the command-processing overhead of the NVMe protocol has been optimized, specifically comprises:
estimating the PCIe transmission time;
first acquiring a subset of the commands and performing the corresponding processing operations;
based on the estimated PCIe transmission time, proceeding to process the next command upon its arrival.
As an improvement, the step of polling the contents of the tracking bits in the receive buffer to test whether the DRAM payload has arrived specifically comprises:
presetting a plurality of flag bits in the receive buffer;
before the command is executed, writing the preset flag bits into the host receive buffer;
detecting the incomplete marks in the receive buffer to determine whether the command has finished executing.
Another object of the present invention is to provide a heterogeneous hybrid memory fast access optimization system, comprising:
a protocol optimization processing module, configured to optimize the command-processing overhead of the NVMe protocol, where this overhead comprises command acquisition, command-packet parsing, command-packet distribution, and completion-packet transmission;
an Endpoint polling module, configured to poll the command queue from the Endpoint when a command is issued, after the command-processing overhead of the NVMe protocol has been optimized;
a tracking bit polling module, configured to poll the contents of tracking bits in the receive buffer to test whether the DRAM payload has arrived, where a change in a tracking bit indicates completion of a read command.
As an improvement, the protocol optimization processing module specifically comprises:
a command processing unit generating module, configured to generate a dedicated command processing unit that controls and executes command acquisition in the NVMe protocol and the transmission flow of completion command packets;
an arbiter generating module, configured to generate an arbiter that arbitrates the distribution of command packets in the NVMe protocol.
As an improvement, the system further comprises:
an erase instruction adding module, configured to add an erase command to the I/O command set, where the erase command controls erasure of the flash and the PCM and records the erase count at each erase location of the flash and the PCM.
As an improvement, the Endpoint polling module specifically comprises:
a time estimation module, configured to estimate the PCIe transmission time;
a partial command processing module, configured to first acquire a subset of the commands and perform the corresponding processing operations;
a next-command processing control module, configured to proceed, based on the estimated PCIe transmission time, to process the next command upon its arrival.
As an improvement, the tracking bit polling module specifically comprises:
a flag bit setting module, configured to preset a plurality of flag bits in the receive buffer;
a flag bit writing module, configured to write the preset flag bits into the host receive buffer before the command is executed;
a detection and judgment module, configured to detect the incomplete marks in the receive buffer and determine whether the command has finished executing.
In the embodiment of the invention, the command-processing overhead of the NVMe protocol (command acquisition, command-packet parsing, command-packet distribution, and completion-packet transmission) is optimized; after this optimization, the command queue is polled from the Endpoint when a command is issued; and the contents of tracking bits in the receive buffer are polled to test whether the DRAM payload has arrived, a change in a tracking bit indicating completion of a read command. This resolves the speed mismatch between NVM access and DRAM access, enables fast access to NVM memory on a heterogeneous hybrid memory platform that extends system memory with NVM, and improves the overall access performance of the system and the performance of the heterogeneous hybrid memory platform.
Drawings
FIG. 1 is a flowchart of the method for optimizing fast access to a heterogeneous hybrid memory according to the present invention;
FIG. 2 is a flowchart of optimizing the command-processing overhead of the NVMe protocol according to the present invention;
FIG. 3 is a flowchart of polling the command queue from the Endpoint when a command is issued, after the command-processing overhead of the NVMe protocol has been optimized;
FIG. 4 is a flowchart of polling the contents of the tracking bits in the receive buffer to test whether the DRAM payload has arrived;
FIG. 5 is a block diagram of the heterogeneous hybrid memory fast access optimization system according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Fig. 1 shows the implementation flow of the method for optimizing fast access to a heterogeneous hybrid memory, which comprises the following steps:
In step S101, the command-processing overhead of the NVMe protocol is optimized, where this overhead comprises command acquisition, command-packet parsing, command-packet distribution, and completion-packet transmission.
In step S102, after the command-processing overhead of the NVMe protocol has been optimized, the command queue is polled from the Endpoint when a command is issued.
In step S103, the contents of tracking bits in the receive buffer are polled to test whether the DRAM payload has arrived; a change in a tracking bit indicates completion of a read command.
As shown in fig. 2, optimizing the command-processing overhead of the NVMe protocol specifically comprises the following steps:
In step S201, a dedicated command processing unit is generated; it controls and executes command acquisition in the NVMe protocol and the transmission flow of completion command packets.
In step S202, an arbiter is generated to arbitrate the distribution of command packets in the NVMe protocol.
The arbiter selects which of the acquired commands to execute first; this arbitration can assign different priorities to commands, ensuring that certain critical operations are executed preferentially.
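The priority-based arbitration described above can be sketched as follows. This is an illustrative model only, not the patent's hardware design; the priority scheme, class names, and command strings are all assumptions.

```python
import heapq

class CommandArbiter:
    """Toy model of the arbiter described above: among the fetched NVMe
    command packets, a command with a higher priority (lower number) is
    dispatched first, so critical operations run ahead of ordinary ones.
    Names and the numeric priority scheme are illustrative assumptions."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # FIFO tie-break among commands of equal priority

    def submit(self, command, priority):
        heapq.heappush(self._heap, (priority, self._seq, command))
        self._seq += 1

    def next_command(self):
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]

arb = CommandArbiter()
arb.submit("read block 7", priority=1)
arb.submit("erase region 3", priority=0)   # critical: dispatched first
arb.submit("write block 2", priority=1)

order = [arb.next_command() for _ in range(3)]
print(order)  # erase first, then the reads/writes in submission order
```

A heap keeps dispatch O(log n) per command while preserving submission order within a priority class, which matches the stated goal of letting critical operations preempt the queue without starving equal-priority traffic.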
In the embodiment of the invention, an erase command is added to the I/O command set. The erase command controls erasure of the flash and the PCM and records the erase count at each erase location of the flash and the PCM, so that the region to which data is written next can be selected according to how often the different regions have been erased.
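The erase-count bookkeeping above amounts to simple wear-leveling; a minimal sketch follows. The region granularity, class name, and least-erased selection policy are assumptions for illustration, since the patent only states that erase counts per location are recorded and consulted.

```python
class WearTracker:
    """Sketch of the erase-count bookkeeping described above: each erase
    command increments a counter for its target region, and the region
    with the fewest erases is chosen for the next write. Region-level
    granularity and the selection policy are illustrative assumptions."""

    def __init__(self, num_regions):
        self.erase_counts = [0] * num_regions

    def erase(self, region):
        # Invoked when the added erase command targets this region.
        self.erase_counts[region] += 1

    def pick_write_region(self):
        # Least-erased region receives the next write, spreading wear
        # evenly across the flash/PCM regions.
        return min(range(len(self.erase_counts)),
                   key=self.erase_counts.__getitem__)

wt = WearTracker(num_regions=4)
for region in (0, 0, 1, 2):
    wt.erase(region)
print(wt.pick_write_region())  # region 3 has never been erased
```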
In the embodiment of the present invention, as shown in fig. 3, polling the command queue from the Endpoint when a command is issued, after the command-processing overhead of the NVMe protocol has been optimized, specifically comprises the following steps:
In step S301, the PCIe transmission time is estimated.
In step S302, a subset of the commands is first acquired and the corresponding processing operations are performed.
In step S303, based on the estimated PCIe transmission time, processing proceeds to the next command upon its arrival.
In this embodiment, polling the command queue from the Endpoint when a command is issued replaces the original NVMe "doorbell" signal: exploiting PCIe's full-duplex nature, the device continuously fetches command requests from the command queue in DRAM without waiting for the host's doorbell. By predicting the PCIe transmission time, the device can prefetch commands so that, ideally, the next command arrives exactly when the device is ready to process it.
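The prefetch timing described above can be sketched as a small scheduling calculation: issue the next fetch early by the estimated PCIe transfer time so the next batch lands just as the current batch finishes. The function name and the uniform per-command processing time are illustrative assumptions, not the patent's implementation.

```python
def schedule_prefetch(process_time_per_cmd, pcie_transfer_time, batch):
    """Sketch of the doorbell-free polling above: the device fetches
    command batches on its own and starts each fetch early enough (by
    the estimated PCIe transfer time) that, ideally, the next batch
    arrives exactly when the current one finishes processing.
    All timing parameters are illustrative; units are arbitrary."""
    finish_current = process_time_per_cmd * batch
    # Issue the next fetch this long after the current batch starts:
    issue_next_fetch_at = max(0.0, finish_current - pcie_transfer_time)
    next_batch_arrives = issue_next_fetch_at + pcie_transfer_time
    idle_gap = next_batch_arrives - finish_current  # 0.0 in the ideal case
    return issue_next_fetch_at, idle_gap

issue_at, idle = schedule_prefetch(process_time_per_cmd=2.0,
                                   pcie_transfer_time=3.0, batch=4)
print(issue_at, idle)  # fetch issued at t=5.0; no idle gap
```

When the PCIe transfer takes longer than a whole batch's processing (the `max(0.0, …)` branch), the device cannot hide the latency fully and an idle gap remains, which is exactly the mismatch the estimation step tries to minimize.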
In the embodiment of the present invention, as shown in fig. 4, polling the contents of the tracking bits in the receive buffer to test whether the DRAM payload has arrived specifically comprises the following steps:
In step S401, a plurality of flag bits is preset in the receive buffer.
In step S402, before the command is executed, the preset flag bits are written into the host receive buffer.
In step S403, the incomplete marks in the receive buffer are detected to determine whether the command has finished executing.
In this embodiment, the contents of the tracking bits in the receive buffer are polled to test whether the DRAM payload has arrived; a change in a tracking bit indicates completion of the read command. The not-yet-filled part of the receive buffer is marked with a cache-based "incomplete mark": a known mark is written into the host receive buffer before the command executes, and command completion is determined by monitoring the incomplete marks in the buffer. This completion check runs over the fast CPU-to-DRAM path, avoiding any unnecessary data bits or packets on the comparatively slow PCIe link.
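The incomplete-mark scheme above can be sketched as follows. The sentinel value, the tracked offsets, and the function names are all illustrative assumptions; the patent specifies only that known marks are written before the command and polled afterward.

```python
SENTINEL = 0xA5        # assumed "incomplete mark" value; illustrative only
TRACK_OFFSETS = (0, 4)  # assumed positions of the tracking bits

def prepare_receive_buffer(nbytes):
    """Host side of the scheme above: write the known incomplete mark
    into the tracked positions of the receive buffer before the read
    command is issued."""
    buf = bytearray(nbytes)
    for off in TRACK_OFFSETS:
        buf[off] = SENTINEL
    return buf

def read_complete(buf):
    # Polled over the fast CPU-to-DRAM path only: the read is complete
    # once every tracked position no longer holds the mark, i.e. the
    # device's DMA has overwritten it with payload data.
    return all(buf[off] != SENTINEL for off in TRACK_OFFSETS)

buf = prepare_receive_buffer(8)
print(read_complete(buf))   # False: payload has not arrived yet
buf[:] = b"payload!"        # device DMA fills the buffer
print(read_complete(buf))   # True: marks overwritten, command done
```

One caveat this sketch ignores: a payload byte could legitimately equal the mark at a tracked position, so a real design must choose the mark (or the tracked bits) per transfer to rule out false "incomplete" readings.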
Fig. 5 shows the architecture of the heterogeneous hybrid memory fast access optimization system provided by the present invention; for convenience of description, only the parts related to the embodiment of the invention are shown.
The heterogeneous hybrid memory quick access optimization system comprises:
a protocol optimization processing module 11, configured to optimize the command-processing overhead of the NVMe protocol, where this overhead comprises command acquisition, command-packet parsing, command-packet distribution, and completion-packet transmission;
an Endpoint polling module 12, configured to poll the command queue from the Endpoint when a command is issued, after the command-processing overhead of the NVMe protocol has been optimized;
a tracking bit polling module 13, configured to poll the contents of tracking bits in the receive buffer to test whether the DRAM payload has arrived, where a change in a tracking bit indicates completion of a read command.
The protocol optimization processing module 11 specifically comprises:
a command processing unit generating module 14, configured to generate a dedicated command processing unit that controls and executes command acquisition in the NVMe protocol and the transmission flow of completion command packets;
an arbiter generating module 15, configured to generate an arbiter that arbitrates the distribution of command packets in the NVMe protocol.
In this embodiment, the system further comprises an erase instruction adding module 16, configured to add an erase command to the I/O command set, where the erase command controls erasure of the flash and the PCM and records the erase count at each erase location of the flash and the PCM.
In the embodiment of the present invention, the Endpoint polling module 12 specifically comprises:
a time estimation module 17, configured to estimate the PCIe transmission time;
a partial command processing module 18, configured to first acquire a subset of the commands and perform the corresponding processing operations;
a next-command processing control module 19, configured to proceed, based on the estimated PCIe transmission time, to process the next command upon its arrival.
In the embodiment of the present invention, the tracking bit polling module 13 specifically comprises:
a flag bit setting module 20, configured to preset a plurality of flag bits in the receive buffer;
a flag bit writing module 21, configured to write the preset flag bits into the host receive buffer before the command is executed;
a detection and judgment module 22, configured to detect the incomplete marks in the receive buffer and determine whether the command has finished executing.
The functions of the above modules are described in the above embodiments, and are not described herein again.
In the embodiment of the invention, the command-processing overhead of the NVMe protocol (command acquisition, command-packet parsing, command-packet distribution, and completion-packet transmission) is optimized; after this optimization, the command queue is polled from the Endpoint when a command is issued; and the contents of tracking bits in the receive buffer are polled to test whether the DRAM payload has arrived, a change in a tracking bit indicating completion of a read command. This resolves the speed mismatch between NVM access and DRAM access, enables fast access to NVM memory on a heterogeneous hybrid memory platform that extends system memory with NVM, and improves the overall access performance of the system and the performance of the heterogeneous hybrid memory platform.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.
Claims (6)
1. A method for optimizing fast access to a heterogeneous hybrid memory, characterized by comprising the following steps:
optimizing the command-processing overhead of the NVMe protocol, where this overhead comprises command acquisition, command-packet parsing, command-packet distribution, and completion-packet transmission;
after the command-processing overhead of the NVMe protocol has been optimized, polling the command queue from the Endpoint when a command is issued;
polling the contents of tracking bits in the receive buffer to test whether the DRAM payload has arrived, where a change in a tracking bit indicates completion of a read command;
wherein polling the command queue from the Endpoint when a command is issued specifically comprises:
estimating the PCIe transmission time;
first acquiring a subset of the commands and performing the corresponding processing operations;
based on the estimated PCIe transmission time, proceeding to process the next command upon its arrival;
and wherein polling the contents of the tracking bits in the receive buffer to test whether the DRAM payload has arrived specifically comprises:
presetting a plurality of flag bits in the receive buffer;
before the command is executed, writing the preset flag bits into the host receive buffer;
detecting the incomplete marks in the receive buffer to determine whether the command has finished executing.
2. The method for optimizing fast access to a heterogeneous hybrid memory according to claim 1, wherein optimizing the command-processing overhead of the NVMe protocol specifically comprises:
generating a dedicated command processing unit that controls and executes command acquisition in the NVMe protocol and the transmission flow of completion command packets;
generating an arbiter that arbitrates the distribution of command packets in the NVMe protocol.
3. The method for optimizing fast access to a heterogeneous hybrid memory according to claim 2, further comprising:
adding an erase command to the I/O command set, where the erase command controls erasure of the flash and the PCM and records the erase count at each erase location of the flash and the PCM.
4. A heterogeneous hybrid memory fast access optimization system, characterized in that the system comprises:
a protocol optimization processing module, configured to optimize the command-processing overhead of the NVMe protocol, where this overhead comprises command acquisition, command-packet parsing, command-packet distribution, and completion-packet transmission;
an Endpoint polling module, configured to poll the command queue from the Endpoint when a command is issued, after the command-processing overhead of the NVMe protocol has been optimized;
a tracking bit polling module, configured to poll the contents of tracking bits in the receive buffer to test whether the DRAM payload has arrived, where a change in a tracking bit indicates completion of a read command;
wherein the Endpoint polling module specifically comprises:
a time estimation module, configured to estimate the PCIe transmission time;
a partial command processing module, configured to first acquire a subset of the commands and perform the corresponding processing operations;
a next-command processing control module, configured to proceed, based on the estimated PCIe transmission time, to process the next command upon its arrival;
and wherein the tracking bit polling module specifically comprises:
a flag bit setting module, configured to preset a plurality of flag bits in the receive buffer;
a flag bit writing module, configured to write the preset flag bits into the host receive buffer before the command is executed;
a detection and judgment module, configured to detect the incomplete marks in the receive buffer and determine whether the command has finished executing.
5. The heterogeneous hybrid memory fast access optimization system according to claim 4, wherein the protocol optimization processing module specifically comprises:
a command processing unit generating module, configured to generate a dedicated command processing unit that controls and executes command acquisition in the NVMe protocol and the transmission flow of completion command packets;
an arbiter generating module, configured to generate an arbiter that arbitrates the distribution of command packets in the NVMe protocol.
6. The heterogeneous hybrid memory fast access optimization system according to claim 5, further comprising:
an erase instruction adding module, configured to add an erase command to the I/O command set, where the erase command controls erasure of the flash and the PCM and records the erase count at each erase location of the flash and the PCM.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810541232.6A | 2018-05-30 | 2018-05-30 | Heterogeneous hybrid memory quick access optimization method and system |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN108897491A | 2018-11-27 |
| CN108897491B | 2021-07-23 |
Family
ID=64343538
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201810541232.6A | Heterogeneous hybrid memory quick access optimization method and system | 2018-05-30 | 2018-05-30 |
Country Status (1)
| Country | Link |
|---|---|
| CN | CN108897491B (en) |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107797944A | 2017-10-24 | 2018-03-13 | 郑州云海信息技术有限公司 | A hierarchical heterogeneous hybrid memory system |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9785355B2 | 2013-06-26 | 2017-10-10 | Cnex Labs, Inc. | NVM Express controller for remote access of memory and I/O over Ethernet-type networks |
| US10778765B2 | 2015-07-15 | 2020-09-15 | Cisco Technology, Inc. | Bid/ask protocol in scale-out NVMe storage |
| CN106484549B | 2015-08-31 | 2019-05-10 | 华为技术有限公司 | An interaction method, NVMe device, host, and physical machine system |
| CN106557277B | 2015-09-30 | 2019-07-19 | 成都华为技术有限公司 | Method and apparatus for reading a disk array |
| US10769098B2 | 2016-04-04 | 2020-09-08 | Marvell Asia Pte, Ltd. | Methods and systems for accessing host memory through non-volatile memory over fabric bridging with direct target access |
| CN107797759B | 2016-09-05 | 2021-05-18 | 北京忆恒创源科技有限公司 | Method, device and driver for accessing cache information |
| CN107818056B | 2016-09-14 | 2021-09-07 | 华为技术有限公司 | Queue management method and device |
Also Published As
Publication number | Publication date |
---|---|
CN108897491A (en) | 2018-11-27 |
Legal Events
| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |