
Cache space allocation method and memory storage device

Info

Publication number: CN118092807A
Application number: CN202410281262.3A
Authority: CN (China)
Prior art keywords: storage device, memory storage, data, buffer, value
Other languages: Chinese (zh)
Inventors: 郑燕, 朱凯迪, 王志, 吴宗霖, 朱启傲
Current Assignee: Hefei Kaimeng Technology Co ltd
Original Assignee: Hefei Kaimeng Technology Co ltd
Application filed by Hefei Kaimeng Technology Co ltd
Priority to CN202410281262.3A
Publication of CN118092807A
Legal status: Pending


Abstract

The invention provides a cache space allocation method and a memory storage device. The method includes the following steps: configuring a shared cache space, which includes a first shared cache space configured in a write buffer and a second shared cache space configured in a read buffer; detecting a current state of a memory storage device; in response to the memory storage device being in an idle state and a data merging operation being triggered, determining whether to allocate the shared cache space according to whether a probability value is greater than a threshold value, wherein the probability value reflects the probability that the data merging operation executed while the memory storage device is in the idle state is interrupted by a host command; allocating the shared cache space for the data merging operation in response to the probability value being not greater than the threshold value; and executing the data merging operation through the allocated shared cache space and a data merging buffer.

Description

Cache space allocation method and memory storage device
Technical Field
The present invention relates to a memory control technology, and in particular, to a cache space allocation method and a memory storage device.
Background
As the capacity of rewritable nonvolatile memory modules such as flash memory keeps increasing, rewritable nonvolatile memory modules supporting multi-channel access are becoming more popular. Generally, for convenience of management, a write buffer, a read buffer, and a garbage collection (GC) buffer of fixed capacity are disposed in the memory storage device to provide independent buffer spaces for buffering, respectively, data written to the rewritable nonvolatile memory module by the host system, data read from the rewritable nonvolatile memory module by the host system, and data accessed by garbage collection operations performed inside the rewritable nonvolatile memory module. In terms of capacity, for example, for a rewritable nonvolatile memory module with a single physical page capacity of 16KB that supports parallel access across 4 planes in triple-level cell (TLC) mode, a common configuration includes a write buffer with a capacity of 576 kilobytes (KB), a read buffer with a capacity of 256KB, and a garbage collection buffer with a capacity of 192KB.
However, for memory storage devices that further employ a multi-chip-enable (CE) architecture, the fixed-capacity write buffer and read buffer described above may be sufficient for multi-CE and/or multi-plane parallel access (including single-batch writing or reading) to the rewritable nonvolatile memory module, but the configured garbage collection buffer is generally small, so its capacity is insufficient to buffer the data synchronously read from and/or to be synchronously written to a plurality of CEs during execution of a garbage collection operation. Therefore, during the garbage collection operation, the memory storage device can only move data (i.e., valid data), including the reading and writing, for each CE of the rewritable nonvolatile memory module one at a time, and cannot perform the garbage collection operation on multiple CEs in parallel, which results in low performance of the garbage collection operation and even of the memory storage device as a whole.
Disclosure of Invention
The present invention provides a cache space allocation method and a memory storage device, which can mitigate the problems described above.
An exemplary embodiment of the present invention provides a cache space allocation method for a memory storage device, where the memory storage device includes a buffer memory, the buffer memory includes a write buffer for buffering data written to the memory storage device by a host system, a read buffer for buffering data read from the memory storage device by the host system, and a data merging buffer for buffering data accessed by a data merging operation performed inside the memory storage device, and the cache space allocation method includes: configuring a shared cache space, wherein the shared cache space includes a first shared cache space configured in the write buffer and a second shared cache space configured in the read buffer; detecting a current state of the memory storage device; in response to the memory storage device being in an idle state and the data merging operation being triggered, determining whether to allocate the shared cache space according to whether a probability value is greater than a threshold value, wherein the probability value reflects the probability that the data merging operation executed while the memory storage device is in the idle state is interrupted by a host instruction; allocating the shared cache space for the data merging operation in response to the probability value being not greater than the threshold value; and executing the data merging operation through the allocated shared cache space and the data merging buffer.
An exemplary embodiment of the invention further provides a memory storage device, which includes a connection interface unit for connecting to a host system; a rewritable nonvolatile memory module; a buffer memory; and a memory control circuit unit connected to the connection interface unit, the rewritable nonvolatile memory module, and the buffer memory, wherein the buffer memory includes a write buffer for buffering data written to the memory storage device by the host system, a read buffer for buffering data read from the memory storage device by the host system, and a data merging buffer for buffering data accessed by a data merging operation performed inside the memory storage device, and the memory control circuit unit is configured to: configure a shared cache space, wherein the shared cache space includes a first shared cache space configured in the write buffer and a second shared cache space configured in the read buffer; detect a current state of the memory storage device; in response to the memory storage device being in an idle state and the data merging operation being triggered, determine whether to allocate the shared cache space for the data merging operation according to whether a probability value is greater than a threshold value, wherein the probability value reflects the probability that the data merging operation executed while the memory storage device is in the idle state is interrupted by a host instruction; allocate the shared cache space for the data merging operation in response to the probability value being not greater than the threshold value; and execute the data merging operation through the allocated shared cache space and the data merging buffer.
Based on the foregoing, the cache space allocation method and the memory storage device according to the exemplary embodiments of the present invention may additionally configure a shared cache space in the buffer memory and detect the current state of the memory storage device in real time. In particular, the shared cache space includes a first shared cache space disposed in the write buffer and a second shared cache space disposed in the read buffer. When the memory storage device is in an idle state and the data merging operation is triggered, whether to allocate the shared cache space may be determined according to probability information. In particular, the probability information reflects the probability that the data merging operation performed while the memory storage device is in the idle state is interrupted by a host instruction. For example, in response to the probability value not being greater than a threshold value, the shared cache space may be allocated for the data merging operation. Thereafter, the data merging operation may be performed through the allocated shared cache space and the data merging buffer in the buffer memory. In this way, the efficiency of the data merging operation (e.g., garbage collection) and thus the system performance of the memory storage device may be improved in various contexts (e.g., the idle state and the busy state).
Drawings
FIG. 1 is a schematic diagram of a host system, a memory storage device, and an input/output (I/O) device, according to an example embodiment of the invention;
FIG. 2 is a schematic diagram of a host system, a memory storage device, and an I/O device according to an example embodiment of the invention;
FIG. 3 is a schematic diagram of a host system and a memory storage device according to an example embodiment of the invention;
FIG. 4 is a schematic diagram of a memory storage device according to an example embodiment of the invention;
FIG. 5 is a schematic diagram of a memory control circuit unit shown according to an example embodiment of the invention;
FIG. 6 is a schematic diagram illustrating managing a rewritable non-volatile memory module according to an example embodiment of the present invention;
FIG. 7 is a schematic diagram showing an internal configuration of a buffer memory according to an exemplary embodiment of the present invention;
FIG. 8 is a diagram illustrating a data merge operation performed by a shared buffer space and a data merge buffer according to an exemplary embodiment of the present invention;
FIG. 9 is a flowchart of a cache space allocation method according to an exemplary embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments of the present invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings and the description to refer to the same or like parts.
FIG. 1 is a schematic diagram of a host system, a memory storage device, and an input/output (I/O) device according to an example embodiment of the invention. FIG. 2 is a schematic diagram of a host system, a memory storage device, and an I/O device according to an example embodiment of the invention.
Referring to fig. 1 and 2, the host system 11 may include a processor 111, a random access memory (random access memory, RAM) 112, a Read Only Memory (ROM) 113, and a data transfer interface 114. The processor 111, the random access memory 112, the read only memory 113, and the data transfer interface 114 may be connected to a system bus 110.
Host system 11 may be coupled to memory storage device 10 through data transfer interface 114. For example, host system 11 may store data to memory storage device 10 or read data from memory storage device 10 through data transfer interface 114. In addition, host system 11 may be connected to I/O device 12 via system bus 110. For example, host system 11 may transmit output signals to I/O device 12 or receive input signals from I/O device 12 via system bus 110.
In an exemplary embodiment, the processor 111, the RAM 112, the ROM 113, and the data transfer interface 114 may be disposed on the motherboard 20 of the host system 11. The number of data transfer interfaces 114 may be one or more. The motherboard 20 may be connected to the memory storage device 10 by a wired or wireless connection through the data transmission interface 114.
In an exemplary embodiment, the memory storage device 10 may be, for example, a USB flash drive 201, a memory card 202, a solid state drive (SSD) 203, or a wireless memory storage device 204. The wireless memory storage device 204 may be, for example, a memory storage device based on any of a variety of wireless communication technologies, such as near field communication (NFC), wireless fidelity (WiFi), Bluetooth, or Bluetooth low energy (e.g., iBeacon). In addition, the motherboard 20 may also be connected to various I/O devices, such as a global positioning system (GPS) module 205, a network interface card 206, a wireless transmission device 207, a keyboard 208, a screen 209, and a speaker 210, through the system bus 110. For example, in an exemplary embodiment, the motherboard 20 may access the wireless memory storage device 204 through the wireless transmission device 207.
In an example embodiment, host system 11 is a computer system. In an example embodiment, host system 11 may be any system that may substantially cooperate with a memory storage device to store data. In an example embodiment, the memory storage device 10 and the host system 11 may include the memory storage device 30 and the host system 31 of fig. 3, respectively.
FIG. 3 is a schematic diagram of a host system and a memory storage device according to an example embodiment of the invention. Referring to FIG. 3, the memory storage device 30 may be used with the host system 31 to store data. For example, the host system 31 may be a system such as a digital camera, a video camera, a communication device, an audio player, a video player, or a tablet computer. The memory storage device 30 may be a secure digital (SD) card 32, a compact flash (CF) card 33, or an embedded storage device 34 used by the host system 31. The embedded storage device 34 includes various types of embedded storage devices, such as an embedded multimedia card (eMMC) 341 and/or an embedded multi-chip package (eMCP) 342, in which a memory module is directly connected to the substrate of the host system.
Fig. 4 is a schematic diagram of a memory storage device according to an example embodiment of the invention. Referring to fig. 4, the memory storage device 10 includes a connection interface unit 41, a memory control circuit unit 42, and a rewritable nonvolatile memory module 43.
The connection interface unit 41 is used for connecting to the host system 11. The memory storage device 10 may communicate with the host system 11 through the connection interface unit 41. For example, the connection interface unit 41 may be compatible with the peripheral component interconnect express (PCI Express) standard, the serial advanced technology attachment (SATA) standard, the parallel advanced technology attachment (PATA) standard, the Institute of Electrical and Electronics Engineers (IEEE) 1394 standard, the universal serial bus (USB) standard, the SD interface standard, the ultra high speed-I (UHS-I) interface standard, the ultra high speed-II (UHS-II) interface standard, the memory stick (MS) interface standard, the MCP interface standard, the MMC interface standard, the eMMC interface standard, the universal flash storage (UFS) interface standard, the eMCP interface standard, the integrated drive electronics (IDE) standard, or other suitable standards.
The memory control circuit unit 42 is connected to the connection interface unit 41 and the rewritable nonvolatile memory module 43. The memory control circuit unit 42 is used for controlling the rewritable nonvolatile memory module 43. For example, the memory control circuit unit 42 can instruct the rewritable nonvolatile memory module 43 to perform operations such as writing, reading and erasing of data according to the instruction of the host system 11. For example, the memory control circuit unit 42 may include a flash memory management circuit.
The rewritable nonvolatile memory module 43 is used for storing data written by the host system 11. The rewritable nonvolatile memory module 43 may include a single-level cell (SLC) NAND flash memory module (i.e., a flash memory module that can store 1 bit per memory cell), a multi-level cell (MLC) NAND flash memory module (i.e., a flash memory module that can store 2 bits per memory cell), a triple-level cell (TLC) NAND flash memory module (i.e., a flash memory module that can store 3 bits per memory cell), a quad-level cell (QLC) NAND flash memory module (i.e., a flash memory module that can store 4 bits per memory cell), other flash memory modules, or other memory modules having the same characteristics.
Fig. 5 is a schematic diagram of a memory control circuit unit according to an example embodiment of the invention. Referring to fig. 5, the memory control circuit unit 42 includes a memory management circuit 51, a host interface 52 and a memory interface 53.
The memory management circuit 51 is used for controlling the overall operation of the memory control circuit unit 42. For example, the memory management circuit 51 may include a central processing unit (CPU), or another programmable general-purpose or special-purpose microprocessor, a digital signal processor (DSP), a programmable controller, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or other similar devices or a combination of these devices.
The host interface 52 is connected to the memory management circuit 51. Memory management circuitry 51 may communicate with host system 11 through host interface 52. For example, the host interface 52 may be compatible with PCI Express standards, SATA standards, PATA standards, IEEE 1394 standards, USB standards, SD standards, UHS-I standards, UHS-II standards, MS standards, MMC standards, eMMC standards, UFS standards, CF standards, IDE standards, or other suitable data transfer standards.
The memory interface 53 is connected to the memory management circuit 51. The memory management circuit 51 may access the rewritable nonvolatile memory module 43 through the memory interface 53. For example, the memory management circuit 51 can issue operation instructions to the rewritable nonvolatile memory module 43 through the memory interface 53 to instruct the rewritable nonvolatile memory module 43 to perform various operations such as reading, writing or erasing of data.
In an exemplary embodiment, the memory control circuit unit 42 further includes an error checking and correction circuit 54, a buffer memory 55, and a power management circuit 56.
The error checking and correcting circuit 54 is connected to the memory management circuit 51 and is used for performing error checking and correcting operations to ensure the correctness of data. For example, when the memory management circuit 51 receives a write command from the host system 11, the error checking and correcting circuit 54 generates a corresponding error correcting code (ECC) and/or error detecting code (EDC) for the data corresponding to the write command, and the memory management circuit 51 writes the data corresponding to the write command and the corresponding error correcting code and/or error detecting code into the rewritable nonvolatile memory module 43. Then, when the memory management circuit 51 reads data from the rewritable nonvolatile memory module 43, the error correcting code and/or the error detecting code corresponding to the data are read at the same time, and the error checking and correcting circuit 54 performs an error checking and correcting operation on the read data according to the error correcting code and/or the error detecting code.
The buffer memory 55 is connected to the memory management circuit 51 and is used for buffering data. The power management circuit 56 is connected to the memory management circuit 51 and is used to control the power of the memory storage device 10.
FIG. 6 is a schematic diagram illustrating management of a rewritable non-volatile memory module according to an example embodiment of the present invention. Referring to FIG. 6, the memory management circuit 51 can logically group the physical units 610(0)-610(B) in the rewritable nonvolatile memory module 43 into a storage area 601 and a spare area 602 (also referred to below as the idle area or free area).
In an exemplary embodiment, a physical unit refers to a physical address or a physical programming unit. The physical programming unit is the basic unit for performing a programming operation to write data. For example, a physical programming unit may include one or more physical pages or physical sectors. In an exemplary embodiment, a physical unit may also be composed of a plurality of consecutive or non-consecutive physical addresses. In an exemplary embodiment, a physical unit may also be referred to as a virtual block (VB). A virtual block may include multiple physical addresses or multiple physical programming units. In an example embodiment, a virtual block may include one or more physical erasing units. The physical erasing unit is the basic unit for performing an erase operation to erase data. For example, a physical erasing unit may include one or more physical blocks.
In an exemplary embodiment, the physical units 610(0)-610(A) in the storage area 601 are configured to store user data (e.g., user data from the host system 11 of FIG. 1). For example, the physical units 610(0)-610(A) in the storage area 601 may store valid data and invalid data. The physical units 610(A+1)-610(B) in the idle area 602 do not store data (e.g., valid data). For example, if a physical unit does not store valid data, the physical unit may be associated with (or added to) the idle area 602. In addition, the physical units in the idle area 602 (i.e., the physical units that do not store valid data) may be erased. When new data is written, one or more physical units may be extracted from the idle area 602 to store the new data. In an exemplary embodiment, the idle area 602 is also referred to as a free pool. In an exemplary embodiment, the physical units 610(A+1)-610(B) in the idle area 602 are also referred to as idle physical units.
In an example embodiment, the memory management circuit 51 may configure the logic units 612 (0) -612 (C) to map the physical units 610 (0) -610 (A) in the memory area 601. In an exemplary embodiment, each logical unit corresponds to a logical address. For example, a logical address may include one or more logical block addresses (Logical Block Address, LBAs) or other logical management units. In an exemplary embodiment, a logic unit may also correspond to a logic programming unit or be composed of a plurality of consecutive or non-consecutive logic addresses.
In an exemplary embodiment, the memory management circuit 51 may record management data (also referred to as logic-to-entity mapping information) describing a mapping relationship between the logic units and the entity units in at least one logic-to-entity mapping table. When the host system 11 wants to read data from the memory storage device 10 or write data to the memory storage device 10, the memory management circuit 51 can access the rewritable nonvolatile memory module 43 according to the information in the logical-to-physical mapping table.
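As a purely illustrative aid (not part of the claimed subject matter), the following C sketch shows one way a logical-to-physical mapping table of the kind described above could be represented and consulted. The type and function names (l2p_table_t, l2p_lookup, and so on) are hypothetical.

```c
#include <stdint.h>

#define INVALID_PHYSICAL_UNIT 0xFFFFFFFFu

/* One entry per logical unit: the physical unit (and offset) it maps to. */
typedef struct {
    uint32_t physical_unit;   /* index of the mapped physical unit */
    uint16_t page_offset;     /* physical programming unit inside that unit */
} l2p_entry_t;

/* Hypothetical logical-to-physical mapping table. */
typedef struct {
    l2p_entry_t *entries;     /* indexed by logical unit number */
    uint32_t     entry_count;
} l2p_table_t;

/* Return the physical unit mapped by a logical unit, or INVALID_PHYSICAL_UNIT
 * if that logical unit has never been written. */
static uint32_t l2p_lookup(const l2p_table_t *table, uint32_t logical_unit)
{
    if (logical_unit >= table->entry_count)
        return INVALID_PHYSICAL_UNIT;
    return table->entries[logical_unit].physical_unit;
}
```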
In an example embodiment, the memory management circuit 51 may continuously monitor the total number of physical units 610(A+1)-610(B) in the idle area 602. The memory management circuit 51 can determine whether the total number of the physical units 610(A+1)-610(B) in the idle area 602 is less than or equal to a threshold value (also referred to as a first trigger threshold value). In response to the total number of the physical units 610(A+1)-610(B) in the idle area 602 being less than or equal to the first trigger threshold, the memory management circuit 51 may trigger and perform a data merging operation. This data merging operation serves to increase the number of physical units 610(A+1)-610(B) (i.e., idle physical units) in the idle area 602. For example, the data merging operation includes a garbage collection (GC) operation.
In an exemplary embodiment, after the data merging operation is triggered, the memory management circuit 51 may select at least one physical unit from the storage area 601 as a source unit and at least one physical unit from the idle area 602 as a target unit. In the data merging operation, the memory management circuit 51 may read valid data from the one or more physical units serving as source units and store the read valid data, in a centralized manner, into the one or more physical units serving as target units. Therefore, in the data merging operation, by centrally storing the valid data originally stored in the source units into the target units, the number of physical units currently used for storing valid data in the rewritable nonvolatile memory module 43 can be effectively reduced. In addition, after the valid data stored in a certain physical unit has been completely moved or copied to the target unit, that physical unit can be erased and associated with the idle area 602 to become a new idle physical unit. In an exemplary embodiment, re-associating a physical unit with the idle area 602 is also referred to as releasing a new idle physical unit. Thus, after the data merging operation is triggered, the memory management circuit 51 can continuously release new idle physical units by performing the data merging operation, thereby increasing the number of physical units 610(A+1)-610(B) (i.e., idle physical units) in the idle area 602.
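The paragraph above describes the core of the data merging operation: gathering valid data from source units into target units and returning emptied units to the idle area 602. The following C sketch outlines that loop under stated assumptions; the helpers declared extern (pick_source_unit, copy_valid_data, and so on) are hypothetical placeholders for flash translation layer services, not functions disclosed in this description.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helpers assumed to be provided by the flash translation layer. */
extern uint32_t pick_source_unit(void);                      /* a unit from storage area 601 */
extern uint32_t pick_target_unit(void);                      /* a unit from idle area 602 */
extern bool     copy_valid_data(uint32_t src, uint32_t dst); /* true when src holds no more valid data */
extern void     erase_and_release(uint32_t unit);            /* associate unit back to area 602 */
extern bool     merge_should_continue(void);                 /* e.g. trigger thresholds, host interruption */

/* One way the data merging (garbage collection) loop could be organized:
 * valid data is gathered from source units into a target unit, and each
 * fully emptied source unit is erased and released as a new idle unit.
 * Rotation to a new target unit when the current one fills up is omitted. */
static void data_merge_operation(void)
{
    uint32_t target = pick_target_unit();
    while (merge_should_continue()) {
        uint32_t source = pick_source_unit();
        if (copy_valid_data(source, target))
            erase_and_release(source);   /* releases a new idle physical unit */
    }
}
```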
In an example embodiment, during operation of the memory storage device 10, the memory storage device 10 may be in an idle state or a busy state. For example, if, at a certain point in time or within a certain time range, the memory storage device 10 has not received an instruction (also referred to as a host instruction) sent by the host system 11 and/or is not performing an operation indicated by a host instruction (e.g., reading data from or writing data to the physical unit mapped by a particular logical unit in response to a read or write instruction), the memory storage device 10 is in the idle state at that point in time or within that time range.
In an example embodiment, if, at a certain point in time or within a certain time range, the memory storage device 10 has received an instruction (i.e., a host instruction) sent by the host system 11 and/or is executing an operation indicated by a host instruction (e.g., reading data from the physical unit mapped by a specific logical unit in response to a read instruction, or writing data to the physical unit mapped by a specific logical unit in response to a write instruction), the memory storage device 10 is in the busy state at that point in time or within that time range. Accordingly, the memory management circuit 51 may switch the memory storage device 10 between the idle state and the busy state to meet the current operation requirement, according to whether a host instruction has been received and/or whether the operation indicated by the host instruction has been completed. Furthermore, depending on the type of host instruction (e.g., an erase instruction), the memory storage device 10 may be in the busy state and perform related operations (e.g., erase operations) while in the busy state; the present invention is not limited to any specific type of host instruction and/or operation performed by the memory storage device 10 while in the busy state.
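For illustration only, the following C sketch shows one simple way the idle/busy determination described above might be implemented, based on whether any host instruction is pending or currently being executed. The structure and names are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

typedef enum { DEVICE_IDLE, DEVICE_BUSY } device_state_t;

/* Hypothetical view of the controller's host-command bookkeeping. */
typedef struct {
    uint32_t pending_host_commands;   /* host commands received but not yet completed */
    bool     host_operation_running;  /* an operation indicated by a host command is executing */
} host_activity_t;

/* The device is busy whenever a host command is pending or its indicated
 * operation is still being carried out; otherwise it is idle. */
static device_state_t detect_current_state(const host_activity_t *act)
{
    if (act->pending_host_commands > 0 || act->host_operation_running)
        return DEVICE_BUSY;
    return DEVICE_IDLE;
}
```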
In an example embodiment, the memory management circuit 51 may trigger the data merging operation when the memory storage device 10 is in the idle state or in the busy state. In an example embodiment, in the idle state, the memory management circuit 51 may instruct the memory storage device 10 to perform the data merging operation in a background mode. Alternatively, in an example embodiment, in the busy state, the memory management circuit 51 may instruct the memory storage device 10 to perform the data merging operation in a foreground mode.
In an exemplary embodiment, when the data merging operation is performed in the background mode, the memory management circuit 51 may stop or end the data merging operation after receiving a host command. In an exemplary embodiment, when the data merging operation is performed in the foreground mode, the memory management circuit 51 may determine whether the total number of the physical units 610(A+1)-610(B) in the idle area 602 is greater than or equal to another threshold (also referred to as a second trigger threshold). The second trigger threshold may be greater than or equal to the first trigger threshold. In response to the total number of the physical units 610(A+1)-610(B) in the idle area 602 being greater than or equal to the second trigger threshold, the memory management circuit 51 may stop or end the data merging operation.
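The trigger and stop conditions described in the last two paragraphs can be summarized in the following sketch. The threshold values are made-up examples; the text itself does not fix them.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical thresholds; actual values depend on the product. */
#define FIRST_TRIGGER_THRESHOLD   8u    /* trigger merge when idle units <= this  */
#define SECOND_TRIGGER_THRESHOLD 16u    /* stop foreground merge when idle units >= this */

/* Decide whether a data merging (garbage collection) operation should be
 * triggered, given the current number of idle (spare) physical units. */
static bool should_trigger_data_merge(uint32_t idle_physical_units)
{
    return idle_physical_units <= FIRST_TRIGGER_THRESHOLD;
}

/* In background mode the merge is simply stopped once a host command arrives;
 * in foreground mode it runs until enough idle units have been released. */
static bool should_stop_data_merge(bool background_mode,
                                   bool host_command_received,
                                   uint32_t idle_physical_units)
{
    if (background_mode)
        return host_command_received;
    return idle_physical_units >= SECOND_TRIGGER_THRESHOLD;
}
```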
Fig. 7 is a schematic diagram showing an internal configuration of a buffer memory according to an exemplary embodiment of the present invention. Referring to fig. 7, in an exemplary embodiment, for convenience of management, the memory management circuit 51 may divide the buffer memory 55 into a write buffer 71, a read buffer 72 and a data merge buffer 73.
The write buffer 71 is preset to buffer data written to the memory storage device 10 (e.g. the rewritable nonvolatile memory module 43) by the host system 11. For example, during writing of data to a physical unit to which a particular logical unit is mapped according to a write instruction from host system 11, data received from host system 11 may be buffered in write buffer 71. The data buffered in the write buffer 71 may then be written (i.e., stored) to the rewritable nonvolatile memory module 43 in response to the write command.
The read buffer 72 is preset to buffer data read from the memory storage device 10 (e.g., the rewritable nonvolatile memory module 43) by the host system 11. For example, during the reading of data from the physical unit to which a particular logical unit is mapped according to a read instruction from the host system 11, the data read from the rewritable nonvolatile memory module 43 may be buffered in the read buffer 72. The data buffered in the read buffer 72 may then be transferred to the host system 11 in response to the read command.
The data merging buffer 73 is preset to buffer data accessed by a data merging operation performed inside the memory storage device 10 (e.g., the rewritable nonvolatile memory module 43). For example, during the data merge operation, the read valid data from one or more physical units in the rewritable non-volatile memory module 43 as source units may be buffered in the data merge buffer 73. The data buffered in the data merge buffer 73 may then be written (i.e., stored) into one or more physical units of the rewritable nonvolatile memory module 43 that are the target units.
In an exemplary embodiment, considering the setup cost (the greater the capacity of the buffer memory, the higher the cost), the capacities of the write buffer 71 and the read buffer 72 are both greater than the capacity of the data merging buffer 73. For example, assuming that the write buffer 71 has a capacity of 576 kilobytes (KB) and the read buffer 72 has a capacity of 256KB, the data merging buffer 73 may have a capacity of 192KB. However, in an exemplary embodiment, the respective capacities of the write buffer 71, the read buffer 72, and the data merging buffer 73 can be adjusted according to practical requirements, and the present invention is not limited thereto.
In an example embodiment, the memory management circuit 51 may configure the shared cache space in the buffer memory 55. For example, the shared cache space includes a shared cache space 701 (also referred to as a first shared cache space) disposed in the write buffer 71 and a shared cache space 702 (also referred to as a second shared cache space) disposed in the read buffer 72.
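As a non-authoritative sketch of the configuration just described, the following C structure records the three buffers of FIG. 7 together with the shared cache spaces 701 and 702. The 192KB sizes chosen here for 701 and 702 follow the examples given later in the text and are otherwise assumptions.

```c
#include <stdint.h>

#define KB(x) ((uint32_t)(x) * 1024u)

/* Hypothetical layout of the buffer memory 55, using the example capacities
 * mentioned in the text (576KB write, 256KB read, 192KB merge buffer). */
typedef struct {
    uint32_t write_buffer_size;         /* write buffer 71 */
    uint32_t read_buffer_size;          /* read buffer 72 */
    uint32_t merge_buffer_size;         /* data merge buffer 73 */
    uint32_t shared_in_write;           /* shared cache space 701 inside the write buffer */
    uint32_t shared_in_read;            /* shared cache space 702 inside the read buffer */
    int      shared_allocated_to_merge; /* nonzero once 701/702 serve the merge operation */
} buffer_layout_t;

static buffer_layout_t default_layout(void)
{
    buffer_layout_t l = {
        .write_buffer_size = KB(576),
        .read_buffer_size  = KB(256),
        .merge_buffer_size = KB(192),
        .shared_in_write   = KB(192),   /* example size for space 701 */
        .shared_in_read    = KB(192),   /* example size for space 702 */
        .shared_allocated_to_merge = 0,
    };
    return l;
}
```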
In an example embodiment, the memory management circuit 51 may dynamically determine whether to allocate the shared cache space 701 and/or 702 for the data merge operation according to the requirements. Furthermore, in an example embodiment, the memory management circuit 51 may dynamically adjust (e.g., increase or decrease) the capacity of the shared cache space 701 and/or 702 according to the requirements.
In an example embodiment, when the shared cache space 701 is allocated for the data merging operation, even though the shared cache space 701 is located in the write buffer 71, the shared cache space 701 allocated for the data merging operation may be limited to buffering only the data accessed by the data merging operation (e.g., the valid data moved or copied during the data merging operation). Therefore, when the shared cache space 701 is allocated for the data merging operation, only the remaining capacity of the write buffer 71, after deducting the shared cache space 701, is used for buffering the data written to the memory storage device 10 by the host system 11. However, when the shared cache space 701 is not allocated for the data merging operation, all of the capacity of the write buffer 71 (including the shared cache space 701) can be used to buffer the data written to the memory storage device 10 by the host system 11.
In an exemplary embodiment, assuming that the capacity of the write buffer 71 is 576KB and the capacity of the shared cache space 701 is 192KB, the remaining capacity of the write buffer 71 after deducting the shared cache space 701 is 384KB. When the shared cache space 701 is allocated for the data merging operation, this remaining capacity (i.e., 384KB) of the write buffer 71 is used to buffer the data written to the memory storage device 10 by the host system 11. In addition, the shared cache space 701 allocated for the data merging operation may be used together with the data merging buffer 73 to jointly buffer the data accessed by the data merging operation (e.g., the valid data moved or copied in the data merging operation). However, when the shared cache space 701 is not allocated for the data merging operation, all of the capacity of the write buffer 71 (i.e., 576KB) may be used to buffer the data written by the host system 11 to the memory storage device 10.
In an example embodiment, when the shared cache space 702 is allocated for the data merging operation, even though the shared cache space 702 is located in the read buffer 72, the shared cache space 702 allocated for the data merging operation is limited to buffering only the data accessed by the data merging operation (e.g., the valid data moved or copied during the data merging operation). Therefore, when the shared cache space 702 is allocated for the data merging operation, only the remaining capacity of the read buffer 72, after deducting the shared cache space 702, is used for buffering the data read from the memory storage device 10 by the host system 11. However, when the shared cache space 702 is not allocated for the data merging operation, all of the capacity of the read buffer 72 (including the shared cache space 702) may be used to buffer the data read by the host system 11 from the memory storage device 10.
In an exemplary embodiment, assuming that the capacity of the read buffer 72 is 256KB and the capacity of the shared cache space 702 is 192KB, the remaining capacity of the read buffer 72 after deducting the shared cache space 702 is 64KB. When the shared cache space 702 has been allocated for the data merging operation, this remaining capacity (i.e., 64KB) of the read buffer 72 is used to buffer the data read by the host system 11 from the memory storage device 10. In addition, the shared cache space 702 allocated for the data merging operation may be used together with the data merging buffer 73 to jointly buffer the data accessed by the data merging operation (e.g., the valid data moved or copied in the data merging operation). However, when the shared cache space 702 is not allocated for the data merging operation, all of the capacity (i.e., 256KB) of the read buffer 72 can be used to buffer the data that the host system 11 reads from the memory storage device 10.
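The capacity arithmetic in the two examples above can be summarized as follows, assuming both shared cache spaces 701 and 702 are allocated (or not allocated) together. This is an illustrative sketch only, not part of the claimed method.

```c
#include <stdbool.h>
#include <stdint.h>

/* Effective capacities under the example sizes in the text: write buffer 576KB
 * containing a 192KB shared space 701, read buffer 256KB containing a 192KB
 * shared space 702, data merge buffer 192KB. All sizes in KB. */
typedef struct {
    uint32_t host_write_kb;  /* usable for data written by the host system */
    uint32_t host_read_kb;   /* usable for data read by the host system    */
    uint32_t merge_kb;       /* usable by the data merging operation       */
} effective_capacity_kb_t;

static effective_capacity_kb_t effective_capacity(bool shared_allocated_to_merge)
{
    effective_capacity_kb_t c;
    if (shared_allocated_to_merge) {
        c.host_write_kb = 576 - 192;          /* 384KB left for host writes  */
        c.host_read_kb  = 256 - 192;          /*  64KB left for host reads   */
        c.merge_kb      = 192 + 192 + 192;    /* merge buffer plus 701 + 702 */
    } else {
        c.host_write_kb = 576;                /* whole write buffer          */
        c.host_read_kb  = 256;                /* whole read buffer           */
        c.merge_kb      = 192;                /* merge buffer only           */
    }
    return c;
}
```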
In an example embodiment, the memory management circuit 51 may detect the current state of the memory storage device 10 in real time. For example, the memory management circuit 51 may detect in real time that the current state of the memory storage device 10 is an idle state or a busy state. The descriptions of the idle state and the busy state of the memory storage device 10 are detailed above, and the detailed descriptions are omitted herein. In an example embodiment, the memory management circuit 51 may further monitor whether the data consolidation operation is triggered when the memory storage device 10 is in an idle state or a busy state.
In an example embodiment, the memory management circuit 51 may obtain probability information in response to the memory storage device 10 being in the idle state and the data merging operation being triggered. This probability information may reflect the probability that the data merging operation performed while the memory storage device 10 is in the idle state is interrupted by a host instruction. The memory management circuit 51 may then determine whether to allocate the shared cache space (i.e., the shared cache spaces 701 and 702) for the data merging operation based on the probability information. It should be noted that the data merging operation performed by the memory storage device 10 in the idle state being interrupted by a host command refers to the situation in which, while the memory storage device 10 is in the idle state and the data merging operation has been triggered, a host command (e.g., a write command, a read command, or an erase command) from the host system 11 is received, so that the memory storage device 10 interrupts the data merging operation being performed (or about to be performed) and enters the busy state.
In an exemplary embodiment, the probability information may include a probability value. For example, this probability value may be positively correlated to the probability that the data merge operation performed while the memory storage device 10 is in an idle state is interrupted by a host instruction. That is, the larger the probability value, the higher the probability that the data consolidation operation performed while the memory storage device 10 is in an idle state is interrupted by the host instructions based on historical experience or statistics.
In an exemplary embodiment, the memory management circuit 51 may determine whether the probability value is greater than a threshold value (e.g., 0.5). If the probability value is not greater than the threshold value, it means that, based on historical experience or statistics, the probability that the data merging operation performed while the memory storage device 10 is in the idle state is interrupted by a host instruction is relatively low. Thus, in response to the probability value not being greater than the threshold, the memory management circuit 51 may allocate the shared cache spaces 701 and 702 for the data merging operation to improve the performance of the subsequent data merging operation.
In an exemplary embodiment, if the probability value is greater than the threshold value, it means that, based on historical experience or statistics, the probability that the data merging operation performed while the memory storage device 10 is in the idle state is interrupted by a host instruction is relatively high. Thus, in response to the probability value being greater than the threshold, the memory management circuit 51 may refrain from allocating the shared cache spaces 701 and 702 for the data merging operation, to avoid affecting the performance of subsequent host access operations.
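A minimal sketch of the allocation decision described above is given below; the 0.5 threshold mirrors the example in the text, and the direct allocation in the busy state follows a later paragraph of this description.

```c
#include <stdbool.h>

typedef enum { DEVICE_IDLE, DEVICE_BUSY } device_state_t;

#define INTERRUPT_PROBABILITY_THRESHOLD 0.5   /* example threshold from the text */

/* Decide whether to allocate the shared cache spaces 701/702 to a triggered
 * data merging operation. In the idle state the probability value is compared
 * against the threshold; in the busy state the shared space is allocated
 * directly, without consulting the probability value. */
static bool allocate_shared_space(device_state_t state, double interrupt_probability)
{
    if (state == DEVICE_BUSY)
        return true;
    return interrupt_probability <= INTERRUPT_PROBABILITY_THRESHOLD;
}
```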
In an exemplary embodiment, by dynamically allocating or not allocating the shared cache space (i.e., shared cache spaces 701 and 702) for the data merge operation taking into account the probability that the data merge operation performed while the memory storage device 10 is in an idle state is interrupted by a host instruction, a balance is achieved between the performance of the host access operation and the performance of the data merge operation as much as possible while the memory storage device 10 is in an idle state.
In an example embodiment, the memory management circuit 51 may determine the probability information according to at least one of a plurality of probability values (also referred to as sub-probability values). In an example embodiment, the sub-probability values include a first sub-probability value, a second sub-probability value, and a third sub-probability value. The first sub-probability value reflects the probability that the data merging operation performed was interrupted by a host instruction during the last N times the memory storage device 10 was in the idle state. The second sub-probability value reflects the probability that the data merging operation performed was interrupted by a host instruction during the last M times the memory storage device 10 was in the idle state, where M is greater than N. For example, N may be 10 and M may be 1000, and the values of N and M may be adjusted according to actual requirements. In addition, the third sub-probability value may reflect the probability that the data merging operation performed was interrupted by a host instruction over all of the past times the memory storage device 10 was in the idle state. In an example embodiment, the memory management circuit 51 may continuously count and update at least one of the plurality of sub-probability values during operation of the memory storage device 10.
In an exemplary embodiment, the memory management circuit 51 may determine the probability information according to the following equation (1.1).
P = (W(1) × P(1) + W(2) × P(2) + W(3) × P(3)) / 3    (1.1)
In equation (1.1), the parameter P represents the probability information, the parameter P(1) represents the first sub-probability value, the parameter P(2) represents the second sub-probability value, the parameter P(3) represents the third sub-probability value, and the parameters W(1), W(2), and W(3) are weight coefficients. The parameters P, P(1), P(2), and P(3) are all values between 0 and 1. In equation (1.1), the parameter W(i) is the weight coefficient corresponding to the parameter P(i). In an exemplary embodiment, the memory management circuit 51 may also calculate the parameter P based only on W(i) × P(i), or on (W(i) × P(i) + W(j) × P(j)) / 2, where i and j are any two integers from 1 to 3 and i is not equal to j.
In an example embodiment, the memory management circuit 51 may dynamically adjust at least one of the parameters W (1), W (2) and W (3). In an example embodiment, the memory management circuit 51 may obtain statistics of the probability information. For example, this statistic may reflect an average of probability information calculated over the last K times. For example, K may be 10 or other integer greater than 1. The memory management circuit 51 can dynamically adjust at least one of the parameters W (1), W (2) and W (3) according to the statistic. For example, the memory management circuit 51 may adjust at least one of the parameters W (1), W (2) and W (3) according to the difference between the statistic and the parameters P (1), P (2) and P (3), respectively.
In an exemplary embodiment, the memory management circuit 51 may increase the value of the parameter W(i) in response to the difference between the statistic and the parameter P(i) being the smallest (indicating that the parameter P(i) is closest to the average of the probability information calculated over the past K times). Increasing the value of the parameter W(i) is equivalent to increasing the influence of the parameter P(i) on the parameter P in equation (1.1). Alternatively, in an exemplary embodiment, the memory management circuit 51 may decrease the value of the parameter W(j) in response to the difference between the statistic and the parameter P(j) being the largest (indicating that the parameter P(j) differs the most from the average of the probability information calculated over the past K times). Decreasing the value of the parameter W(j) is equivalent to decreasing the influence of the parameter P(j) on the parameter P in equation (1.1). In an exemplary embodiment, the accuracy of the parameter P calculated according to equation (1.1) can be effectively improved by dynamically adjusting or correcting at least one of the parameters W(1), W(2), and W(3).
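Equation (1.1) and the weight adjustment described above can be illustrated with the following sketch. The 0.1 adjustment step is an assumption; the text only states that the weight of the closest sub-probability value may be increased and that of the farthest one decreased.

```c
#include <math.h>
#include <stddef.h>

/* Weighted combination of the three sub-probability values, following
 * equation (1.1): P = (W(1)P(1) + W(2)P(2) + W(3)P(3)) / 3. */
static double combine_probability(const double w[3], const double p[3])
{
    return (w[0] * p[0] + w[1] * p[1] + w[2] * p[2]) / 3.0;
}

/* One possible adjustment rule: nudge up the weight of the sub-probability
 * value closest to the running statistic (average of the last K computed P
 * values) and nudge down the weight of the one farthest from it. The step
 * size 0.1 is an assumption, not taken from the text. */
static void adjust_weights(double w[3], const double p[3], double statistic)
{
    size_t closest = 0, farthest = 0;
    for (size_t i = 1; i < 3; i++) {
        if (fabs(statistic - p[i]) < fabs(statistic - p[closest]))
            closest = i;
        if (fabs(statistic - p[i]) > fabs(statistic - p[farthest]))
            farthest = i;
    }
    w[closest] += 0.1;
    if (w[farthest] > 0.1)
        w[farthest] -= 0.1;
}
```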
In an exemplary embodiment, equation (1.1) may also be replaced by a predictive model. For example, the predictive model may be implemented using an artificial intelligence model based on machine learning or deep learning. In an example embodiment, the memory management circuit 51 may perform the data merging operation described above using the shared cache space 701 and/or 702 allocated for the data merging operation together with the data merging buffer 73. In particular, allocating the shared cache space 701 and/or 702 for the data merging operation is conceptually equivalent to increasing the capacity of the data merging buffer 73. Thereafter, during execution of the data merging operation, the performance of the data merging operation can be effectively improved as a result of the increased capacity of the data merging buffer 73.
FIG. 8 is a diagram illustrating a data merge operation performed by a shared buffer space and data merge buffer according to an exemplary embodiment of the present invention. Referring to fig. 7 and 8, in an exemplary embodiment, the memory management circuit 51 may allocate a shared buffer space 801 for the data merging operation according to the probability information. For example, the shared buffer space 801 allocated for the data merge operation includes shared buffer spaces 701 and 702.
In an example embodiment, after the shared buffer space 801 is allocated for the data merging operation, the valid data (hatched portion) read from the source unit 81 may be buffered in the shared buffer space 801 and/or the data merging buffer 73 during the data merging operation. Then, the valid data buffered in the shared buffer space 801 and/or the data merging buffer 73 may be sequentially written into the plurality of physical units 821-824 (hatched portions) serving as target units. The physical units 821-824 may be located in one or more chip enables (e.g., a plurality of memory chips) in the memory storage device 10; the invention is not limited in this respect.
In an exemplary embodiment, if the shared buffer space 801 is not allocated for the data merging operation, the valid data read from the source unit 81 can only be stored, through the data merging buffer 73, into a portion of the physical units 821-824 (e.g., the physical unit 821) at a time, and cannot be stored in batches (e.g., in parallel) into the physical units 821-824. However, in an example embodiment, after the shared buffer space 801 is allocated for the data merging operation, more valid data may be read into the shared buffer space 801 and/or the data merging buffer 73 and then stored in batches (e.g., in parallel) into the physical units 821-824 through the shared buffer space 801 and/or the data merging buffer 73. Therefore, the execution efficiency of the data merging operation can be effectively improved.
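For illustration, the following sketch shows how valid data staged in the enlarged buffer could be programmed to several target units in one parallel batch. The nvm_program_async and nvm_wait_all functions are hypothetical placeholders for the controller's low-level flash interface, not part of the disclosure.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical descriptor for one batch of valid data staged for one
 * chip enable (CE) during the data merging operation. */
typedef struct {
    uint32_t target_physical_unit;   /* e.g. one of units 821-824 */
    const uint8_t *staged_data;      /* valid data held in 801 and/or 73 */
    size_t staged_len;
} merge_batch_t;

/* Hypothetical low-level program calls; assumed to queue writes so that
 * several CEs can be programmed in parallel. */
extern void nvm_program_async(uint32_t physical_unit, const uint8_t *data, size_t len);
extern void nvm_wait_all(void);

/* With the shared buffer space allocated, enough valid data for several CEs
 * can be staged at once and written out in one parallel batch, instead of
 * moving data for one CE at a time through the small merge buffer alone. */
static void flush_merge_batches(const merge_batch_t *batches, size_t count)
{
    for (size_t i = 0; i < count; i++)
        nvm_program_async(batches[i].target_physical_unit,
                          batches[i].staged_data, batches[i].staged_len);
    nvm_wait_all();
}
```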
In an example embodiment, in response to the memory storage device 10 being in the idle state and the data merging operation being triggered, the memory management circuit 51 may also adjust the capacity of the first shared cache space (i.e., the shared cache space 701) allocated for the data merging operation according to the write mode currently employed by the memory storage device 10. For example, assuming that the write mode currently adopted by the memory storage device 10 is an SLC mode, the memory management circuit 51 may keep the remaining capacity of the write buffer 71, after deducting the shared cache space 701, at not less than S times the amount of data in one batch write in the SLC mode. For example, for a single physical page capacity of 16KB and multi-plane parallel writing across 4 planes in the SLC mode, the batch write data amount in the SLC mode may be 64KB (64KB = 16KB × 4 planes × 1 physical page). Alternatively, assuming that the write mode currently adopted by the memory storage device 10 is a TLC mode, the memory management circuit 51 may keep the remaining capacity of the write buffer 71, after deducting the shared cache space 701, at not less than T times the amount of data in one batch write in the TLC mode. For example, for a single physical page capacity of 16KB and multi-plane parallel writing across 4 planes in the TLC mode, the batch write data amount in the TLC mode may be 192KB (192KB = 16KB × 4 planes × 3 physical pages). S and T may be any integers. This ensures that the memory storage device 10 can maintain basic data writing performance while it is in the idle state and performing the data merging operation at the same time. Meanwhile, when the memory storage device 10 is in the idle state and the data merging operation is triggered, the capacity of the second shared cache space may be configured to a first preset value.
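The batch-write arithmetic above (64KB in SLC mode, 192KB in TLC mode) and the corresponding limit on the first shared cache space can be sketched as follows; the function names and the way the limit is expressed are assumptions.

```c
#include <stdint.h>

/* Pages written per plane in one batch: 1 for SLC, 3 for TLC. */
typedef enum { WRITE_MODE_SLC = 1, WRITE_MODE_TLC = 3 } write_mode_t;

#define PAGE_KB  16u   /* single physical page capacity, example from the text */
#define PLANES    4u   /* planes written in parallel, example from the text   */

/* Batch write size for the current write mode:
 * SLC: 16KB x 4 planes x 1 page = 64KB; TLC: 16KB x 4 planes x 3 pages = 192KB. */
static uint32_t batch_write_kb(write_mode_t mode)
{
    return PAGE_KB * PLANES * (uint32_t)mode;
}

/* Largest first shared cache space (701) that still leaves the write buffer
 * with at least 'multiple' batch writes of room, so that basic host write
 * performance is preserved while the merge runs. 'multiple' plays the role
 * of S (SLC) or T (TLC) in the text. */
static uint32_t max_shared_write_space_kb(uint32_t write_buffer_kb,
                                          write_mode_t mode,
                                          uint32_t multiple)
{
    uint32_t reserved = multiple * batch_write_kb(mode);
    return (write_buffer_kb > reserved) ? (write_buffer_kb - reserved) : 0u;
}
```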
In an example embodiment, in response to the memory storage device 10 being in a busy state and the data merge operation triggering, the memory management circuit 51 may allocate a first shared cache space (i.e., the shared cache space 701) for the data merge operation. Alternatively, in an example embodiment, in response to the memory storage device 10 being in a busy state and the data merge operation triggering, the memory management circuit 51 may allocate a first shared cache space (i.e., the shared cache space 701) and a second shared cache space (i.e., the shared cache space 702) together for the data merge operation. Taking fig. 8 as an example, in the case where the memory storage device 10 is in a busy state and a data merge operation is triggered, the shared buffer space 801 allocated for the data merge operation may include the shared buffer space 701 (and the shared buffer space 702).
It should be noted that, in an exemplary embodiment, in response to the memory storage device 10 being in the busy state and the data merging operation being triggered, the memory management circuit 51 may skip the above-described use and evaluation of the probability information (e.g., the probability value) when allocating the shared cache space for the data merging operation. Triggering the data merging operation in the busy state indicates that the currently available storage space is insufficient and the data merging operation needs to be executed to release new idle physical units. In this case, the data merging operation will not be interrupted by host instructions, so the shared cache space is directly allocated for the data merging operation to improve the efficiency of the data merging operation and to free up available storage space as soon as possible.
In the busy state, once the data merging operation is triggered, the performance provided for host write operations of the memory storage device 10 is reduced from its preset level (because part of the bandwidth of the host write operation is occupied by the data merging operation). In this case, even if the capacity of the write buffer 71 is reduced, the performance of the host write operation is not affected, as long as it is ensured that the capacity taken away from the write buffer 71 (i.e., the first shared cache space) is diverted to increase the efficiency of the data merging operation.
In an example embodiment, in response to the memory storage device 10 being in a busy state and the data merge operation being triggered, the memory management circuit 51 may also adjust the capacity of the first shared cache space allocated for the data merge operation (i.e., the shared cache space 701) according to the urgency of the data merge operation. For example, the urgency of the data merge operation may be inversely related to the total number of currently remaining idle physical units (i.e., physical units in idle region 602 of FIG. 6). That is, if the total number of idle physical units remaining is large, the urgency of the data merge operation may be relatively low. However, if the total number of currently remaining idle physical units is small, the urgency of the data merge operation may be relatively high (indicating that the currently remaining idle physical units are about to run out).
In an exemplary embodiment, when the memory storage device 10 is in the busy state, the adjusted capacity of the first shared cache space may be positively related to the urgency of the data merging operation. For example, when the total number of currently remaining idle physical units is larger (i.e., the urgency of the data merging operation is relatively low), the memory management circuit 51 may reduce or maintain the capacity of the first shared cache space allocated for the data merging operation. However, when the total number of currently remaining idle physical units is smaller (i.e., the urgency of the data merging operation is relatively high), the memory management circuit 51 may increase the capacity of the first shared cache space allocated for the data merging operation to improve the efficiency of the data merging operation and accelerate the release of new idle physical units.
In an exemplary embodiment, in response to the memory storage device 10 being in the busy state and the data merging operation being triggered, the memory management circuit 51 may also keep the remaining capacity of the write buffer 71, after deducting the shared cache space 701, at not less than T times the amount of data in one batch write in the TLC mode. This ensures that the memory storage device 10 can maintain basic host write performance while it is in the busy state and performing the data merging operation at the same time. Meanwhile, when the memory storage device 10 is in the busy state and the data merging operation is triggered, the capacity of the second shared cache space may be configured to a second preset value. The first preset value may be the same as or different from the second preset value, and the present invention is not limited thereto.
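As an illustrative sketch only, the following code grows the first shared cache space with the urgency of the data merging operation in the busy state while reserving at least T TLC batch writes of room in the write buffer. The linear scaling and the constants used are assumptions, not values given in the text.

```c
#include <stdint.h>

#define WRITE_BUFFER_KB      576u   /* example capacity of write buffer 71   */
#define TLC_BATCH_KB         192u   /* 16KB x 4 planes x 3 pages             */
#define MIN_TLC_BATCHES_T      1u   /* hypothetical value of T               */
#define MAX_SHARED_WRITE_KB  192u   /* example ceiling for shared space 701  */

/* In the busy state, grow the first shared cache space as the pool of idle
 * physical units shrinks (higher urgency), while always leaving the write
 * buffer at least T TLC batch writes of room for host data. */
static uint32_t shared_write_space_when_busy(uint32_t idle_units,
                                             uint32_t idle_units_low_watermark)
{
    uint32_t ceiling = WRITE_BUFFER_KB - MIN_TLC_BATCHES_T * TLC_BATCH_KB;
    if (ceiling > MAX_SHARED_WRITE_KB)
        ceiling = MAX_SHARED_WRITE_KB;

    if (idle_units_low_watermark == 0 || idle_units >= idle_units_low_watermark)
        return ceiling / 2;   /* low urgency: a smaller share is enough */

    /* Higher urgency (fewer idle units left): scale up toward the ceiling. */
    uint32_t deficit = idle_units_low_watermark - idle_units;
    return ceiling / 2 + (ceiling / 2) * deficit / idle_units_low_watermark;
}
```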
FIG. 9 is a flowchart of a cache space allocation method according to an exemplary embodiment of the present invention. Referring to FIG. 9, in step S901, a shared cache space is configured, wherein the shared cache space includes a first shared cache space configured in a write buffer and a second shared cache space configured in a read buffer. In step S902, it is determined whether a data merging operation is triggered. In response to the data merging operation not being triggered, the process may return to step S902 or perform other procedures. In response to the data merging operation being triggered, in step S903, the current state of the memory storage device is detected. In response to the memory storage device being in the idle state and the data merging operation being triggered, in step S904, it is determined whether a probability value is greater than a threshold value, wherein the probability value reflects the probability that the data merging operation performed while the memory storage device is in the idle state is interrupted by a host instruction. In response to the probability value not being greater than the threshold value, in step S905, the shared cache space is allocated for the data merging operation. In response to the probability value being greater than the threshold value, the process may return to step S902 or perform other processes. In addition, in response to the memory storage device being in the busy state and the data merging operation being triggered, the process may also proceed to step S905 to allocate the shared cache space for the data merging operation. In step S906, the data merging operation is performed through the allocated shared cache space and the data merging buffer.
The details of each step in FIG. 9 have been described above and are not repeated here. It should be noted that each step in FIG. 9 may be implemented as a plurality of program codes or circuits, which is not limited by the present invention. In addition, the method of FIG. 9 may be used in combination with the above exemplary embodiments or may be used alone, which is not limited by the present invention.
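As a concrete illustration of one such program-code implementation, the C sketch below strings steps S901 to S906 together. All helper routines are hypothetical names standing in for the corresponding firmware functions; only the control flow mirrors FIG. 9.

#include <stdbool.h>

typedef enum { DEV_IDLE, DEV_BUSY } dev_state_t;

/* Hypothetical firmware helpers; the names are placeholders, not actual APIs. */
extern void        configure_shared_cache_space(void);        /* S901 */
extern bool        data_merge_triggered(void);                 /* S902 */
extern dev_state_t detect_device_state(void);                  /* S903 */
extern double      interruption_probability(void);             /* S904 */
extern void        allocate_shared_cache_space(void);          /* S905 */
extern void        do_data_merge_with_shared_space(void);      /* S906 */

void cache_space_allocation_flow(double threshold)
{
    configure_shared_cache_space();                            /* S901 */

    for (;;) {
        if (!data_merge_triggered())                           /* S902 */
            continue;                                          /* no merge pending; poll again (simplified) */

        dev_state_t state = detect_device_state();             /* S903 */

        if (state == DEV_IDLE &&
            interruption_probability() > threshold)            /* S904 */
            continue;   /* likely to be interrupted by a host instruction: skip allocation */

        /* Busy state, or idle state with an acceptable probability value. */
        allocate_shared_cache_space();                         /* S905 */
        do_data_merge_with_shared_space();                     /* S906 */
    }
}

The loop returns to S902 both when no merge is pending and when the probability value exceeds the threshold, matching the two "return to step S902" branches described in the text.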
In summary, the cache space allocation method and the memory storage device according to the exemplary embodiments of the present invention can allocate shared cache space in the write buffer and/or the read buffer for the data merge operation and perform the data merge operation through the allocated shared cache space, thereby effectively improving the execution efficiency of the data merge operation.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, and such modifications or replacements do not depart from the spirit of the present invention.

Claims (14)

1. A cache space allocation method for a memory storage device, wherein the memory storage device comprises a cache memory, the cache memory comprises a write buffer, a read buffer and a data merging buffer, the write buffer is used for buffering data written into the memory storage device by a host system, the read buffer is used for buffering data read from the memory storage device by the host system, and the data merging buffer is used for buffering data accessed by a data merging operation performed inside the memory storage device, the cache space allocation method comprising:
configuring a shared cache space, wherein the shared cache space comprises a first shared cache space configured in the write buffer and a second shared cache space configured in the read buffer;
detecting a current state of the memory storage device;
in response to the memory storage device being in an idle state and the data merging operation being triggered, determining whether to allocate the shared cache space for the data merging operation according to whether a probability value is greater than a threshold value, wherein the probability value reflects the probability that the data merging operation performed while the memory storage device is in the idle state is interrupted by a host instruction;
in response to the probability value being not greater than the threshold value, allocating the shared cache space for the data merging operation; and
performing the data merging operation through the allocated shared cache space and the data merging buffer.
2. The cache space allocation method according to claim 1, wherein after detecting the current state of the memory storage device, the cache space allocation method further comprises:
in response to the memory storage device being in a busy state and the data merging operation being triggered, allocating the shared cache space for the data merging operation.
3. The cache space allocation method according to claim 1, further comprising:
determining the probability value according to at least one of a first sub-probability value, a second sub-probability value and a third sub-probability value,
wherein the first sub-probability value reflects the probability that the data merging operation performed was interrupted by the host instruction during the last N times the memory storage device was in the idle state,
the second sub-probability value reflects the probability that the data merging operation performed was interrupted by the host instruction during the last M times the memory storage device was in the idle state, M being greater than N, and
the third sub-probability value reflects the probability that the data merging operation performed was interrupted by the host instruction during all of the past periods in which the memory storage device was in the idle state.
4. The cache space allocation method according to claim 3, wherein the step of determining the probability value according to the at least one of the first sub-probability value, the second sub-probability value and the third sub-probability value comprises:
the probability value is determined according to the following equation:
P=(W(1)×P(1)+W(2)×P(2)+W(3)×P(3))/3
wherein the parameter P represents the probability value, the parameter P(1) represents the first sub-probability value, the parameter P(2) represents the second sub-probability value, the parameter P(3) represents the third sub-probability value, and the parameters W(1), W(2) and W(3) are weight coefficients.
5. The cache space allocation method according to claim 4, further comprising:
obtaining a statistical value of the probability value; and
adjusting at least one of the parameters W(1), W(2) and W(3) according to differences between the statistical value and the parameters P(1), P(2) and P(3), respectively.
6. The cache space allocation method according to claim 5, wherein the step of adjusting the at least one of the parameters W(1), W(2) and W(3) according to the differences between the statistical value and the parameters P(1), P(2) and P(3), respectively, comprises:
increasing the value of the parameter W(i) in response to the difference between the statistical value and the parameter P(i) being the smallest; and
reducing the value of the parameter W(j) in response to the difference between the statistical value and the parameter P(j) being the largest, wherein i and j are any two integers from 1 to 3, and i is not equal to j.
7. The cache space allocation method according to claim 2, further comprising:
in response to the memory storage device being in the idle state and the data merging operation being triggered, adjusting the capacity of the first shared cache space according to a writing mode adopted by the memory storage device, wherein the capacity of the second shared cache space is a first preset value; and
in response to the memory storage device being in the busy state and the data merging operation being triggered, adjusting the capacity of the first shared cache space according to the urgency of the data merging operation, wherein the capacity of the second shared cache space is a second preset value.
8. A memory storage device, comprising:
a connection interface unit for connecting to a host system;
a rewritable nonvolatile memory module;
a cache memory; and
a memory control circuit unit connected to the connection interface unit, the rewritable nonvolatile memory module and the cache memory,
wherein the cache memory comprises a write buffer, a read buffer and a data merging buffer,
the write buffer is used for buffering data written into the memory storage device by the host system,
the read buffer is used for buffering data read from the memory storage device by the host system,
the data merging buffer is used for buffering data accessed by a data merging operation performed inside the memory storage device, and
the memory control circuit unit is used for:
configuring a shared cache space, wherein the shared cache space comprises a first shared cache space configured in the write buffer and a second shared cache space configured in the read buffer;
detecting a current state of the memory storage device;
in response to the memory storage device being in an idle state and the data merging operation being triggered, determining whether to allocate the shared cache space for the data merging operation according to whether a probability value is greater than a threshold value, wherein the probability value reflects the probability that the data merging operation performed while the memory storage device is in the idle state is interrupted by a host instruction;
in response to the probability value being not greater than the threshold value, allocating the shared cache space for the data merging operation; and
performing the data merging operation through the allocated shared cache space and the data merging buffer.
9. The memory storage device according to claim 8, wherein, after detecting the current state of the memory storage device, the memory control circuit unit is further used for:
in response to the memory storage device being in a busy state and the data merging operation being triggered, allocating the shared cache space for the data merging operation.
10. The memory storage device according to claim 8, wherein the memory control circuit unit is further used for:
determining the probability value according to at least one of a first sub-probability value, a second sub-probability value and a third sub-probability value,
wherein the first sub-probability value reflects the probability that the data merging operation performed was interrupted by the host instruction during the last N times the memory storage device was in the idle state,
the second sub-probability value reflects the probability that the data merging operation performed was interrupted by the host instruction during the last M times the memory storage device was in the idle state, M being greater than N, and
the third sub-probability value reflects the probability that the data merging operation performed was interrupted by the host instruction during all of the past periods in which the memory storage device was in the idle state.
11. The memory storage device according to claim 10, wherein determining the probability value according to the at least one of the first sub-probability value, the second sub-probability value and the third sub-probability value comprises:
the probability value is determined according to the following equation:
P=(W(1)×P(1)+W(2)×P(2)+W(3)×P(3))/3
wherein the parameter P represents the probability value, the parameter P(1) represents the first sub-probability value, the parameter P(2) represents the second sub-probability value, the parameter P(3) represents the third sub-probability value, and the parameters W(1), W(2) and W(3) are weight coefficients.
12. The memory storage device according to claim 11, wherein the memory control circuit unit is further used for:
obtaining a statistical value of the probability value; and
adjusting at least one of the parameters W(1), W(2) and W(3) according to differences between the statistical value and the parameters P(1), P(2) and P(3), respectively.
13. The memory storage device according to claim 12, wherein adjusting the at least one of the parameters W(1), W(2) and W(3) according to the differences between the statistical value and the parameters P(1), P(2) and P(3), respectively, comprises:
increasing the value of the parameter W(i) in response to the difference between the statistical value and the parameter P(i) being the smallest; and
reducing the value of the parameter W(j) in response to the difference between the statistical value and the parameter P(j) being the largest, wherein i and j are any two positive integers from 1 to 3, and i is not equal to j.
14. The memory storage device according to claim 9, wherein the memory control circuit unit is further used for:
in response to the memory storage device being in the idle state and the memory storage device executing the data merging operation, adjusting the capacity of the first shared cache space according to a writing mode adopted by the memory storage device, wherein the capacity of the second shared cache space is a first preset value; and
in response to the memory storage device being in the busy state and the memory storage device executing the data merging operation, adjusting the capacity of the first shared cache space according to the urgency of the data merging operation, wherein the capacity of the second shared cache space is a second preset value.
CN202410281262.3A 2024-03-12 2024-03-12 Cache space allocation method and memory storage device Pending CN118092807A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410281262.3A CN118092807A (en) 2024-03-12 2024-03-12 Cache space allocation method and memory storage device

Publications (1)

Publication Number Publication Date
CN118092807A true CN118092807A (en) 2024-05-28

Family

ID=91148661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410281262.3A Pending CN118092807A (en) 2024-03-12 2024-03-12 Cache space allocation method and memory storage device

Country Status (1)

Country Link
CN (1) CN118092807A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination