CN112131144B - Serial interface NAND memory chip and method for reading data from same - Google Patents

Serial interface NAND memory chip and method for reading data from same

Info

Publication number
CN112131144B
CN112131144B
Authority
CN
China
Prior art keywords
page
cache
data
reading
read
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011036444.2A
Other languages
Chinese (zh)
Other versions
CN112131144A (en)
Inventor
黄亚龙
徐光明
虞安华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xtx Technology Inc
Original Assignee
Xtx Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xtx Technology Inc filed Critical Xtx Technology Inc
Priority to CN202011036444.2A priority Critical patent/CN112131144B/en
Publication of CN112131144A publication Critical patent/CN112131144A/en
Application granted granted Critical
Publication of CN112131144B publication Critical patent/CN112131144B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 - Addressing or allocation; Relocation
    • G06F12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0862 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 - Addressing or allocation; Relocation
    • G06F12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0877 - Cache access modes
    • G06F12/0882 - Page mode
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 - Addressing or allocation; Relocation
    • G06F12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10 - Address translation
    • G06F12/1081 - Address translation for peripheral access to main memory, e.g. direct memory access [DMA]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 - Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10 - Providing a specific technical effect
    • G06F2212/1016 - Performance improvement
    • G06F2212/1024 - Latency reduction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 - Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/20 - Employing a main memory using a specific memory technology
    • G06F2212/202 - Non-volatile memory
    • G06F2212/2022 - Flash memory

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a serial interface NAND memory chip and a method for reading data from the same. When the serial interface NAND memory chip receives a page read command, the page data corresponding to the page read command is read into a first cache; when the page data has been read, OIP is set equal to 0 and the page data of the next page is pre-read into a second cache. When the serial interface NAND memory chip receives a new page read command, a first page address of the page to be read by the original page read command and a second page address of the page to be read by the new page read command are obtained; if the second page address is the next page address after the first page address and the pre-read of the next page has completed, OIP is set equal to 0 and the page data of the next page is pre-read into the first cache. The invention increases the speed at which an external master continuously reads data from the serial interface NAND memory chip, and the improvement becomes more pronounced as the number of pages read grows.

Description

Serial interface NAND memory chip and method for reading data from same
Technical Field
The present invention relates to the field of data reading technologies, and in particular, to a serial interface NAND memory chip and a method for reading data from the same.
Background
Serial interface NAND memory chips are mainly used in embedded systems, PONs, network communication modules, monitoring equipment and similar fields. These applications require the continuous read speed of the serial interface NAND memory chip to be as high as possible, so the key question is how to raise the continuous read speed of the chip to meet market demand.
At present, a serial interface NAND memory chip transfers data from the Flash array to the cache only after it receives the Page Read to Cache command sent by the external master, and the external master then reads the data from the cache buffer with a Read From Cache command. As a result, the speed at which the external master continuously reads data from the serial interface NAND memory chip is almost the same as the speed of discontinuous reading, which cannot meet customer requirements.
Some existing methods improve data reading by using two buffers in a ping-pong fashion. For example, Chinese patent CN105205012A discloses a data reading method and apparatus comprising: S1, a controller controls data of a designated page to be read from the memory into a first cache while the data stored in a second cache is output to the serial interface; S2, the controller controls data of a designated page to be read from the memory into the second cache while the data stored in the first cache is output to the serial interface; if S1 or S2 is executed in one data reading operation, S2 or S1 is executed in the next, and the two are executed alternately. In general the time needed to read data from the Flash into a cache is much longer than the time needed to output the cached data to the serial interface, so this alternating method saves the time of outputting the stored data from the cache to the serial interface. However, this method still has the chip move data from the Flash to the cache only after it receives the Page Read to Cache command sent by the external master, and that time is still too long. Further increases in read speed, especially for the continuous-read scenario described above, therefore remain a critical market requirement.
Disclosure of Invention
The main purpose of the present invention is to provide a serial interface NAND memory chip and a method for reading data from it, so as to solve the technical problem in the prior art that the speed at which an external master continuously reads data from a serial interface NAND memory chip is low.
To achieve the above object, an embodiment of the present invention provides a method for reading data from a serial interface NAND memory chip, where the serial interface NAND memory chip includes a first cache, a second cache, a CPU and a NAND flash memory, the method comprising:
when a serial interface NAND memory chip receives a page read command through an SPI interface, reading page data corresponding to the page read command to the first cache in a DMA mode;
when the page data reading is completed, setting OIP to be equal to 0, and immediately pre-reading page data of the next page to the second cache in a DMA mode;
when a new page reading command is received by the serial interface NAND memory chip through the SPI interface, a first page address of a page to be read corresponding to the page reading command and a second page address of the page to be read corresponding to the new page reading command are obtained, and whether the second page address is the next page address of the first page address or not is detected;
if the second page address is the next page address of the first page address and the pre-reading of the page data of the next page is completed, OIP is set to be equal to 0, and the page data of the next page is pre-read to the first cache.
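The following is a minimal C sketch of the chip-side behaviour described by the steps above. It is illustrative only: the helper functions dma_read_page() and set_oip(), the cache numbering and the state variables are assumptions introduced here rather than definitions from the patent, and in real hardware the pre-read DMA transfer runs concurrently with the external master's Read From Cache output instead of sequentially as in this single-threaded sketch.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical helpers standing in for the chip's internal hardware:
   dma_read_page(page, cache_id) moves one page from the NAND array into
   cache 0 or cache 1 by DMA; set_oip(v) drives the OIP status bit. */
void dma_read_page(uint32_t page_addr, int cache_id);
void set_oip(int value);

static uint32_t last_page_addr;   /* first page address (previous page read command) */
static int      host_cache;       /* cache the external master currently reads from  */
static bool     prefetch_done;    /* page data of the next page already pre-read     */

/* Called whenever a page read command arrives over the SPI interface. */
void on_page_read_command(uint32_t page_addr)
{
    if (prefetch_done && page_addr == last_page_addr + 1) {
        /* Consecutive read: the requested page already sits in the other cache,
           so no array read is needed before the host starts its cache read.    */
        host_cache = 1 - host_cache;
        set_oip(0);                                     /* data ready immediately  */
        dma_read_page(page_addr + 1, 1 - host_cache);   /* pre-read the next page  */
    } else {
        /* First command of a session, or a non-consecutive page address. */
        set_oip(1);
        dma_read_page(page_addr, host_cache);           /* normal array-to-cache read */
        set_oip(0);
        dma_read_page(page_addr + 1, 1 - host_cache);   /* pre-read the next page     */
    }
    prefetch_done  = true;
    last_page_addr = page_addr;
}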
In the above method for reading data according to the present invention, after the detecting whether the second page address is the next page address of the first page address, the method further includes:
and if the second page address is not the next page address of the first page address, the CPU controls the page data corresponding to the new page read command to be read to the first/second cache.
According to the serial interface NAND memory chip provided by the invention, the serial interface NAND memory chip comprises a first cache, a second cache, a CPU and a NAND flash memory, and further comprises means for reading the page data corresponding to a page read command from the NAND flash memory to the first cache when the CPU receives the page read command; means for setting OIP equal to 0 when the page data reading is completed and immediately pre-reading the page data of the next page from the NAND flash memory to the second cache; means for, when a new page read command is received, acquiring a first page address of the page to be read by the page read command and a second page address of the page to be read by the new page read command, and detecting whether the second page address is the next page address of the first page address; and means for setting OIP equal to 0 and immediately pre-reading the page data of the next page to the first cache if the second page address is the next page address of the first page address and the pre-read of the page data of the next page is completed.
In the above-described serial interface NAND memory chip according to the present invention, the CPU further includes means for reading page data from the NAND flash memory to the first/second caches by DMA.
In the above-described serial interface NAND memory chip according to the present invention, the CPU further includes means for reading the page data corresponding to the new page read command to the first/second cache memory if the second page address is not the next page address of the first page address.
When the serial interface NAND memory chip receives a page read command, the page data corresponding to the page read command is read into the first cache; when the page data reading is completed, OIP is set equal to 0 and the page data of the next page is pre-read into the second cache; when the serial interface NAND memory chip receives a new page read command, a first page address of the page to be read by the page read command and a second page address of the page to be read by the new page read command are obtained, and whether the second page address is the next page address of the first page address is detected; if the second page address is the next page address of the first page address and the pre-read of the page data of the next page is completed, OIP is set equal to 0 and the page data of the next page is pre-read into the first cache. Because the next page of data is pre-read into a cache, the chip does not wait until it receives the Page Read to Cache command sent by the external master before moving data from the Flash to the cache; the next page is pre-read immediately, which further shortens the time spent reading page data from the Flash to the cache. Especially in continuous-read scenarios, this increases the speed at which the external master continuously reads data from the serial interface NAND memory chip, and the more pages are read, the more obvious the speed increase becomes.
Drawings
FIG. 1 is a schematic diagram of a scenario in the prior art in which an external master reads data from a serial interface NAND memory chip;
FIG. 2 is a flow chart of a method for reading data from a NAND memory chip with a serial interface according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a method for reading data from a NAND memory chip with a serial interface according to an embodiment of the present invention;
FIG. 4 is a flowchart of a method for reading data from a NAND memory chip with a serial interface according to another embodiment of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic diagram of a scenario in which an external master reads data from a serial interface NAND memory chip in the prior art. As shown in fig. 1, the external master exchanges data with the serial interface NAND memory chip (hereinafter referred to as the chip) through an SPI interface. When reading starts, the external master sends page read command 1 to the chip through the SPI interface. After receiving page read command 1, the chip determines information such as the address of the designated page and locates the corresponding page data (assume this takes a time t1), then reads page data 1 corresponding to page read command 1 into the cache buffer (assume this read takes a time t2, for example about 100 microseconds), and then sets OIP to 0. Once page data 1 has been read into the cache, the external master reads page data 1 from the cache with a Read From Cache command until the read is complete (assume this takes a time t3, for example about 200 microseconds). The external master then sends page read command 2 to the chip through the SPI interface, and the subsequent process is the same as above.
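For reference, the prior-art flow just described can be sketched from the host side roughly as follows. This is only an illustrative sketch: spi_transfer() is a hypothetical platform helper, and the opcodes 13h (Page Read to Cache), 0Fh/C0h (Get Feature, whose bit 0 is OIP) and 03h (Read From Cache) are typical SPI NAND command values assumed here rather than taken from the patent; address byte counts and dummy cycles vary between parts.

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical low-level helper provided by the host platform. */
void spi_transfer(const uint8_t *tx, size_t tx_len, uint8_t *rx, size_t rx_len);

static bool oip_busy(void)
{
    uint8_t cmd[2] = { 0x0F, 0xC0 };            /* Get Feature, status register C0h */
    uint8_t status = 0;
    spi_transfer(cmd, sizeof cmd, &status, 1);
    return (status & 0x01) != 0;                /* OIP bit: 1 = busy, 0 = ready */
}

/* Prior-art read of one page: Page Read to Cache, wait for OIP == 0 (t1 + t2),
   then Read From Cache (t3). Each page pays the full t1 + t2 + t3. */
void read_one_page(uint32_t row_addr, uint8_t *buf, size_t len)
{
    uint8_t page_read[4] = { 0x13,
                             (uint8_t)(row_addr >> 16),
                             (uint8_t)(row_addr >> 8),
                             (uint8_t)row_addr };
    spi_transfer(page_read, sizeof page_read, NULL, 0);

    while (oip_busy()) { /* poll until the chip clears OIP */ }

    uint8_t read_cache[4] = { 0x03, 0x00, 0x00, 0x00 };  /* column address and dummy byte */
    spi_transfer(read_cache, sizeof read_cache, buf, len);
}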
As can be seen from the above analysis of the prior-art scenario, the external master needs about t1 + t2 + t3 (for example about 300 microseconds) each time it reads one page of data from the serial interface NAND memory chip, so if the external master needs to continuously read N pages of data from the serial interface NAND memory chip, the total time required is about N × (t1 + t2 + t3) (for example about N × 300 microseconds).
Referring to fig. 2, fig. 2 is a flowchart illustrating an embodiment of a method for reading data from a serial interface NAND memory chip according to the present invention. As shown in fig. 2, in one embodiment, a method for reading data from a serial interface NAND memory chip includes:
step S10, when a page reading command is received by a serial interface NAND memory chip, page data corresponding to the page reading command is read to a first cache;
in this step, when the serial interface NAND memory chip receives the page read command (herein, in a scenario where the external master reads data from the serial interface NAND memory chip for a certain time, the serial interface NAND memory chip receives the page read command sent by the external master for the first time), the page data corresponding to the page read command is read to the first cache, for example, when the serial interface NAND memory chip receives the page read command (page read command 1), the corresponding page data 1 is read to the first cache according to the page address included in the page read command 1. Wherein the serial interface NAND memory chip includes two caches, and the first cache may be either one of the two caches.
Step S20, when the page data reading is completed, OIP is set to be equal to 0, and page data of the next page is pre-read to the second cache;
in this step, when page data 1 has been completely read to the first cache, OIP is set equal to 0, and then page data of the next page is pre-read to the second cache. Where page read command 1 contains the page address of page 1, page data 2 of page 2 is pre-read to the second cache where page 2 is the next page of page 1 from the page address positional relationship. And since the external master will read page data 1 from the first cache by a read from cache command when page data 1 has been completely read to the first cache, this allows the external master to read page data 1 from the first cache simultaneously with the step of pre-reading page data 2 of page 2 to the second cache.
Step S30, when a new page reading command is received by the NAND memory chip with the serial interface, a first page address of a page to be read corresponding to the page reading command and a second page address of the page to be read corresponding to the new page reading command are obtained; detecting whether the second page address is a next page address to the first page address;
in this step, when the external master control finishes reading the page data 1 from the first cache through the read from cache command, a new page read command is sent to the serial interface NAND memory chip, and when the serial interface NAND memory chip receives the new page read command, it is detected whether the new page read command is a read command continuous with the page read command. Wherein detecting whether the new page read command is a read command that is consecutive to the page read command comprises:
acquiring a first page address of a page to be read corresponding to the page reading command and a second page address of the page to be read corresponding to the new page reading command; detecting whether the second page address is a next page address to the first page address; if the second page address is the next page address of the first page address, the new page read command and the page read command are continuous read commands; if the second page address is not the next page address of the first page address, the new page read command and the page read command are not consecutive read commands.
In this embodiment, the page read command (page read command 1) contains the first page address (page address 1) of the page to be read (page 1), and the new page read command (page read command 2) contains the second page address (page address 2) of the page to be read (page 2). By acquiring page address 1 and page address 2, it is detected whether page address 2 is the next page address after page address 1; if so, the new page read command and the page read command are consecutive read commands, otherwise they are not. For example, when page address 1 is (13h 00h 00h) and page address 2 is (13h 00h 01h), page address 2 is the next page address after page address 1, and the new page read command and the page read command are determined to be consecutive read commands. As another example, when page address 1 is (13h 00h 00h) and page address 2 is (13h 00h 02h), page address 2 is not the next page address after page address 1, and the two commands are determined not to be consecutive read commands.
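A minimal sketch of this continuity check is shown below, assuming the address bytes that follow the 13h opcode have already been assembled into a single row address (most significant byte first); the function name is illustrative only.

#include <stdint.h>
#include <stdbool.h>

/* Returns true when the second page address is the next page address after the
   first, i.e. when the new page read command continues the previous one. */
static bool is_consecutive(uint32_t first_page_addr, uint32_t second_page_addr)
{
    return second_page_addr == first_page_addr + 1;
}

/* With the example above: 0x0000 (from 13h 00h 00h) followed by 0x0001
   (from 13h 00h 01h) is consecutive; 0x0000 followed by 0x0002 is not. */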
In step S40, if the second page address is the next page address of the first page address and the pre-reading of the page data of the next page is completed, OIP is set equal to 0, and the page data of the next page is pre-read to the first cache.
In this step, if the new page read command and the page read command are consecutive read commands, that is, the second page address is the next page address of the first page address, the data the external master wants is page data 2 on page 2. In step S20 the pre-read of page data 2 into the second cache was already started at the same time as the external master began reading page data 1 from the first cache, so by the time the external master has finished reading page data 1 from the first cache with the Read From Cache command, the pre-read of page data 2 into the second cache has also finished. The external master can therefore read page data 2 from the second cache with a Read From Cache command, and the serial interface NAND memory chip does not need to perform a fresh read of page data 2 into a cache; instead it sets OIP equal to 0 and pre-reads the page data of the next page into the first cache, i.e. page data 3 of page 3 (the page after page 2) is pre-read into the first cache. In other words, the external master reads page data 2 from the second cache with the Read From Cache command at the same time as page data 3 is pre-read into the first cache.
It will be readily appreciated that when the external master finishes reading page data 2 from the second cache with the Read From Cache command and again sends a new page read command to the serial interface NAND memory chip requesting page data 3, then, because page data 3 has already been read into the first cache, the external master can read page data 3 from the first cache with the Read From Cache command while the serial interface NAND memory chip sets OIP equal to 0 and pre-reads page data 4 into the second cache. Similarly, whenever the serial interface NAND memory chip receives a new page read command from the external master that is consecutive with the previous page read command, the subsequent steps are executed in the same manner. In this embodiment the next page of data is pre-read only when the second page address of the page to be read by the new page read command is the next page address after the first page address of the page to be read by the previous page read command, which ensures that the pre-read data in the cache is exactly the data the external master requests and guarantees the accuracy of data reading.
FIG. 3 is a schematic diagram of a method for reading data from a serial interface NAND memory chip according to an embodiment of the invention. As shown in FIG. 3, the process of the external master reading a page of data from one cache through the SPI interface can be performed at the same time as the process of the CPU in the serial interface NAND memory chip pre-reading the next page of data from the NAND flash memory into the other cache; that is, step 1 and step 2 in FIG. 3 are performed simultaneously, and step 3 and step 4 are performed simultaneously. The time the CPU in the serial interface NAND memory chip needs to read the next page of data into the other cache is therefore hidden within the time the external master spends reading a page of data from one cache.
In this embodiment, when the serial interface NAND memory chip receives a page read command, the page data corresponding to the page read command is read into the first cache; when the page data reading is completed, OIP is set equal to 0 and the page data of the next page is pre-read into the second cache; when the serial interface NAND memory chip receives a new page read command, a first page address of the page to be read by the page read command and a second page address of the page to be read by the new page read command are obtained, and whether the second page address is the next page address of the first page address is detected; if the second page address is the next page address of the first page address and the pre-read of the page data of the next page is completed, OIP is set equal to 0 and the page data of the next page is pre-read into the first cache. With this embodiment, the serial interface NAND memory chip reads the next page of data into one cache while the external master reads the current page of data from the other cache, so the time the chip needs to read the next page of data into one cache is hidden within the time the external master needs to read a page of data from the other cache. When the external master continuously reads adjacent pages from the serial interface NAND memory chip, it only has to wait for the chip to read page data 1 into a cache; it does not have to wait for the chip to read the subsequent page data 2 through X into a cache. Suppose the external master takes 200 us to read a page of data from a cache and the serial interface NAND memory chip takes 100 us to read a page of data into a cache. In this embodiment the ping-pong operation of the two buffers offsets the t3 period, and because the page data is pre-read into a cache, the t2 period is further shortened in the continuous-read case, so the time the external master needs to continuously read N consecutive pages of data from the serial interface NAND memory chip is about N × (t1 + a shortened t2). By contrast, the total time needed in the prior art is about N × (t1 + t2 + t3), and even when the prior art uses the ping-pong operation it is about N × (t1 + t2). It can be seen that this embodiment reduces the total time the external master needs to continuously read N consecutive pages of data from the serial interface NAND memory chip, thereby increasing the speed at which the external master continuously reads data from the serial interface NAND memory chip, and the larger N is, the more obvious the speed increase.
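The comparison above can be made concrete with a small numeric sketch. The values of t2 and t3 are the example figures given in the text (100 us and 200 us); t1 and the residual, shortened t2 of the embodiment are not specified in the text and are assumed here purely for illustration.

#include <stdio.h>

int main(void)
{
    const double t1 = 10.0;        /* page lookup / command handling, assumed value (us) */
    const double t2 = 100.0;       /* array-to-cache read, example value from the text   */
    const double t3 = 200.0;       /* cache-to-host output, example value from the text  */
    const double t2_short = 10.0;  /* residual t2 with pre-read overlap, assumed value   */
    const int    N = 64;           /* number of consecutive pages, arbitrary             */

    double prior_art  = N * (t1 + t2 + t3);   /* no overlap at all                 */
    double ping_pong  = N * (t1 + t2);        /* prior art with ping-pong buffers  */
    double embodiment = N * (t1 + t2_short);  /* estimate given in the description */

    printf("prior art: %.0f us, ping-pong: %.0f us, this embodiment: %.0f us\n",
           prior_art, ping_pong, embodiment);
    return 0;
}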
Further, in an embodiment, the serial interface NAND memory chip reads page data to the first/second caches by DMA.
In this embodiment, DMA means Direct Memory Access. The page data in the NAND FLASH is read into the first/second cache by DMA.
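As a rough illustration of what the DMA transfer might look like from the on-chip CPU's point of view, the sketch below programs a hypothetical DMA channel. The register names, addresses and bit layout are entirely assumed for illustration; the patent only states that the page data is moved by DMA.

#include <stdint.h>

/* Hypothetical DMA channel registers (addresses and bit meanings are assumptions). */
#define DMA_SRC   (*(volatile uint32_t *)0x40001000u)   /* NAND array source address   */
#define DMA_DST   (*(volatile uint32_t *)0x40001004u)   /* cache buffer destination    */
#define DMA_LEN   (*(volatile uint32_t *)0x40001008u)   /* transfer length in bytes    */
#define DMA_CTRL  (*(volatile uint32_t *)0x4000100Cu)   /* bit 0: start, bit 1: busy   */

/* Copies one page from the NAND array into a cache buffer by DMA. */
void dma_copy_page(uint32_t nand_page_base, uint32_t cache_base, uint32_t page_size)
{
    DMA_SRC  = nand_page_base;
    DMA_DST  = cache_base;
    DMA_LEN  = page_size;
    DMA_CTRL = 0x1u;                 /* start the transfer                       */
    while (DMA_CTRL & 0x2u) { }      /* wait until the channel reports not busy; */
                                     /* the caller then clears OIP               */
}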
Further, in an embodiment, after the detecting whether the second page address is a next page address of the first page address, the method further includes:
and if the second page address is not the next page address of the first page address, reading the page data corresponding to the new page read command to the first/second cache.
Referring to fig. 4, fig. 4 is a flowchart illustrating a method for reading data from a serial interface NAND memory chip according to another embodiment of the invention. As shown in fig. 4, the processing flow on the right side of fig. 4 is the flow in which the external master continuously reads the page data of consecutive pages from the serial interface NAND memory chip; it has been described in detail in the above embodiments and is not repeated here. The processing flow on the left side of fig. 4 is the flow used when the new page read command and the previous page read command are not consecutive read commands. For example, when the page address 2 corresponding to the current page read command is (13h 00h 02h) and the page address 1 corresponding to the previous page read command is (13h 00h 00h), page address 2 is not the next page address after page address 1, so the current page read command and the previous page read command are determined not to be consecutive read commands, that is, the chip is not in a continuous-read state. The page data corresponding to the current read command is then read into the first or the second cache; after the data has been read, OIP is set to 0, the data of the page following page address 2 is pre-read into the other cache by DMA, and after the pre-read is completed the chip exits the read state and returns to the main loop.
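A hedged sketch of this non-consecutive branch, reusing the hypothetical helpers dma_read_page() and set_oip() from the earlier sketch, might look as follows; as before, the names are illustrative and in real hardware the pre-read overlaps the external master's cache read.

#include <stdint.h>

/* Helpers declared in the earlier sketch. */
void dma_read_page(uint32_t page_addr, int cache_id);
void set_oip(int value);

/* Non-consecutive branch of fig. 4: do a normal read of the requested page,
   release OIP, then pre-read the following page into the other cache by DMA. */
void on_nonconsecutive_read(uint32_t page_addr, int free_cache)
{
    set_oip(1);
    dma_read_page(page_addr, free_cache);          /* read the requested page          */
    set_oip(0);                                    /* host may now read from the cache */
    dma_read_page(page_addr + 1, 1 - free_cache);  /* pre-read the next page           */
    /* exit the read state and return to the main loop, as in fig. 4 */
}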
Further, the embodiment of the invention also provides a serial interface NAND memory chip, which comprises a first cache, a second cache, a CPU and a NAND flash memory, wherein the CPU is used for reading page data corresponding to a page reading command from the NAND flash memory to the first cache when the page reading command is received; when the page data reading is completed, setting OIP to be equal to 0, and pre-reading page data of the next page from the NAND flash memory to a second cache; when a new page reading command is received, a first page address of a page to be read corresponding to the page reading command and a second page address of the page to be read corresponding to the new page reading command are obtained, and whether the second page address is a next page address of the first page address or not is detected; if the second page address is the next page address of the first page address and the pre-reading of the page data of the next page is completed, OIP is set to be equal to 0, and the page data of the next page is pre-read to the first cache.
Further, in an embodiment, the CPU reads page data from the NAND flash memory to the first/second caches by DMA.
Further, in an embodiment, the CPU is further configured to:
and if the second page address is not the next page address of the first page address, reading the page data corresponding to the new page read command to the first/second cache.
The specific embodiments of the NAND memory chip with serial interface are basically the same as the embodiments of the method for reading data, and are not described herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, comprising several instructions for causing a terminal device to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (5)

1. A method of reading data from a serial interface NAND memory chip, the serial interface NAND memory chip including a first cache, a second cache, a CPU and a NAND flash memory, the method comprising:
when a serial interface NAND memory chip receives a page read command through an SPI interface, reading page data corresponding to the page read command to the first cache in a DMA mode;
when the page data reading is completed, setting OIP equal to 0 and immediately pre-reading the page data of the next page to the second cache in a DMA mode while the external master reads the page data from the first cache, without waiting until the Flash receives the Page Read to Cache command sent by the external master before transferring the data in the FLASH to the cache;
when a new page reading command is received by the serial interface NAND memory chip through the SPI interface, a first page address of a page to be read corresponding to the page reading command and a second page address of the page to be read corresponding to the new page reading command are obtained, and whether the second page address is the next page address of the first page address or not is detected;
if the second page address is the next page address of the first page address and the pre-reading of the page data of the next page is completed, setting OIP equal to 0 and pre-reading the page data of the next page to the first cache while the external master reads the page data of the next page from the second cache, without waiting until the Flash receives the Page Read to Cache command sent by the external master before transferring the data in the FLASH to the cache.
2. The method of reading data according to claim 1, further comprising, after said detecting whether the second page address is a next page address to the first page address:
and if the second page address is not the next page address of the first page address, the CPU controls the page data corresponding to the new page read command to be read to the first/second cache.
3. A serial interface NAND memory chip, characterized in that the serial interface NAND memory chip comprises a first cache, a second cache, a CPU and a NAND flash memory, and further comprises means for reading page data corresponding to a page read command from the NAND flash memory to the first cache when the CPU receives the page read command; means for setting OIP equal to 0 when the page data reading is completed and immediately pre-reading the page data of the next page from the NAND flash memory to the second cache while the external master reads the page data from the first cache, without waiting until the Flash receives the Page Read to Cache command sent by the external master before transferring the data in the Flash to the cache; means for, when a new page read command is received, acquiring a first page address of the page to be read by the page read command and a second page address of the page to be read by the new page read command, and detecting whether the second page address is the next page address of the first page address; and means for, if the second page address is the next page address of the first page address and the pre-reading of the page data of the next page is completed, setting OIP equal to 0 and immediately pre-reading the page data of the next page to the first cache while the external master reads the page data of the next page from the second cache, without waiting until the Flash receives the Page Read to Cache command sent by the external master before transferring the data in the Flash to the cache.
4. The serial interface NAND memory chip of claim 3 wherein said CPU further comprises means for reading page data from said NAND flash memory to a first/second cache memory by DMA.
5. The serial interface NAND memory chip of claim 4 wherein said CPU further comprises means for reading page data corresponding to said new page read command to the first/second cache if said second page address is not the next page address to the first page address.
CN202011036444.2A 2020-09-27 2020-09-27 Serial interface NAND memory chip and method for reading data from same Active CN112131144B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011036444.2A CN112131144B (en) 2020-09-27 2020-09-27 Serial interface NAND memory chip and method for reading data from same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011036444.2A CN112131144B (en) 2020-09-27 2020-09-27 Serial interface NAND memory chip and method for reading data from same

Publications (2)

Publication Number Publication Date
CN112131144A (en) 2020-12-25
CN112131144B (en) 2023-09-26

Family

ID=73841078

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011036444.2A Active CN112131144B (en) 2020-09-27 2020-09-27 Serial interface NAND memory chip and method for reading data from same

Country Status (1)

Country Link
CN (1) CN112131144B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104461943A (en) * 2014-12-29 2015-03-25 成都致云科技有限公司 Data reading method, device and system
WO2015196378A1 (en) * 2014-06-25 2015-12-30 华为技术有限公司 Method, device and user equipment for reading/writing data in nand flash
CN105205012A (en) * 2014-06-26 2015-12-30 北京兆易创新科技股份有限公司 Method and device for reading data
CN105930278A (en) * 2015-02-26 2016-09-07 爱思开海力士有限公司 Data storage device and operating method thereof
US9959227B1 (en) * 2015-12-16 2018-05-01 Amazon Technologies, Inc. Reducing input/output latency using a direct memory access (DMA) engine
CN108538332A (en) * 2017-03-06 2018-09-14 旺宏电子股份有限公司 The read method of NAND gate flash memory
CN108920387A (en) * 2018-06-06 2018-11-30 深圳忆联信息系统有限公司 Reduce method, apparatus, computer equipment and the storage medium of read latency
CN108958647A (en) * 2017-05-17 2018-12-07 旺宏电子股份有限公司 The read-while-write access method of memory device
CN111324282A (en) * 2018-12-14 2020-06-23 北京兆易创新科技股份有限公司 Memory device
CN111338976A (en) * 2018-12-19 2020-06-26 爱思开海力士有限公司 Memory system and operation method of memory system
CN111538679A (en) * 2020-05-12 2020-08-14 中国电子科技集团公司第十四研究所 Processor data prefetching design based on embedded DMA

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170094815A (en) * 2016-02-11 2017-08-22 삼성전자주식회사 Nonvolatile memory capabling of outputting data using wrap around scheme, computing system having the same, and read method thereof
JP6232109B1 (en) * 2016-09-27 2017-11-15 ウィンボンド エレクトロニクス コーポレーション Semiconductor memory device and continuous reading method
US10521157B2 (en) * 2018-01-15 2019-12-31 Gigadevice Semiconductor (Shanghai) Inc. Jump page cache read method in NAND flash memory and NAND flash memory

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015196378A1 (en) * 2014-06-25 2015-12-30 华为技术有限公司 Method, device and user equipment for reading/writing data in nand flash
CN105205012A (en) * 2014-06-26 2015-12-30 北京兆易创新科技股份有限公司 Method and device for reading data
CN104461943A (en) * 2014-12-29 2015-03-25 成都致云科技有限公司 Data reading method, device and system
CN105930278A (en) * 2015-02-26 2016-09-07 爱思开海力士有限公司 Data storage device and operating method thereof
US9959227B1 (en) * 2015-12-16 2018-05-01 Amazon Technologies, Inc. Reducing input/output latency using a direct memory access (DMA) engine
CN108538332A (en) * 2017-03-06 2018-09-14 旺宏电子股份有限公司 The read method of NAND gate flash memory
CN108958647A (en) * 2017-05-17 2018-12-07 旺宏电子股份有限公司 The read-while-write access method of memory device
CN108920387A (en) * 2018-06-06 2018-11-30 深圳忆联信息系统有限公司 Reduce method, apparatus, computer equipment and the storage medium of read latency
CN111324282A (en) * 2018-12-14 2020-06-23 北京兆易创新科技股份有限公司 Memory device
CN111338976A (en) * 2018-12-19 2020-06-26 爱思开海力士有限公司 Memory system and operation method of memory system
CN111538679A (en) * 2020-05-12 2020-08-14 中国电子科技集团公司第十四研究所 Processor data prefetching design based on embedded DMA

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
基于一种NAND闪存页缓存器设计的C/F读取算法研究 (Research on a C/F read algorithm based on a NAND flash page buffer design); 陈珂; 杜智超; 叶松; 王颀; 霍宗亮; 电子学报 (Acta Electronica Sinica), No. 11, pp. 2619-2625 *

Also Published As

Publication number Publication date
CN112131144A (en) 2020-12-25

Similar Documents

Publication Publication Date Title
CN109639957B (en) Image data transmission system and image data transmission method
US11379280B2 (en) Methods and systems for managing communication lanes between a universal flash storage (USF) device and a USF host
US8868851B2 (en) Data access method of a memory device
US7549066B2 (en) Automatic power savings stand-by control for non-volatile memory
US20060165109A1 (en) Data communication device
US9026746B2 (en) Signal control device and signal control method
US7725621B2 (en) Semiconductor device and data transfer method
US20220011971A1 (en) Method for processing read/write data, apparatus, and computer readable storage medium thereof
US20090006669A1 (en) Dma transfer control device and method of dma transfer
EP1187025A3 (en) Cache update method and cache update control system employing non-blocking type cache
CN112131144B (en) Serial interface NAND memory chip and method for reading data from same
US7415555B2 (en) Bus bridge device
US7103702B2 (en) Memory device
CN111949585A (en) Data conversion processing method and device
US8375238B2 (en) Memory system
CN118349508A (en) Data transmission method, device and system applied to preventing short circuit between master device and slave device
CN114036085B (en) DDR 4-based multitasking read-write scheduling method, computer equipment and storage medium
US9965183B2 (en) Method for processing data in storage device and storage device
CN109887533B (en) Multifunctional user mobile terminal control system and method
KR101574406B1 (en) Drawing control device
US20050256979A1 (en) [direct memory access method for card reader and a method for programming controller of card reader]
CN115981594B (en) Data accumulation processing method and device, FPGA chip and medium
CN118277289B (en) Data output method, device, equipment and medium
JPH03241442A (en) Store buffer control system
JP2005301714A (en) Multi-cpu system, its data transfer method, and its program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518000 Room 101, building 10, Dayun software Town, 8288 Longgang Avenue, he'ao community, Yuanshan street, Longgang District, Shenzhen City, Guangdong Province

Applicant after: XTX Technology Inc.

Address before: Floor 1, building 10, Dayun software Town, No. 8288, Henggang street, Longgang District, Shenzhen City, Guangdong Province

Applicant before: Paragon Technology (Shenzhen) Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant