CN113110878A - Memory device and operation method thereof - Google Patents

Memory device and operation method thereof

Info

Publication number: CN113110878A
Authority: CN (China)
Prior art keywords: read, memory, address, processing circuit, data
Legal status: Pending
Application number: CN202010022523.1A
Other languages: Chinese (zh)
Inventors: 余永晖, 王志伟
Current Assignee: Realtek Semiconductor Corp
Original Assignee: Realtek Semiconductor Corp
Application filed by Realtek Semiconductor Corp
Priority to CN202010022523.1A
Publication of CN113110878A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38 Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3802 Instruction prefetching
    • G06F9/3804 Instruction prefetching for branches, e.g. hedging, branch folding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0862 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch

Abstract

A memory device and a method of operating the same are provided. The memory device includes a first memory, a second memory, and an access circuit. The reference addresses of a processing circuit are interleaved to correspond to the real addresses of the first and second memories. The access circuit is configured to: receive a read command corresponding to a reference read address from the processing circuit, so as to convert the reference read address into actual read addresses of the first and second memories; simultaneously read a first set of read data from a first one of the first and second memories and read, in advance, a second set of read data from a second one of the first and second memories according to the actual read address and a next actual read address; return the first set of read data to the processing circuit; and return the second set of read data to the processing circuit when a next read command corresponding to a next reference read address is received from the processing circuit and the next reference read address corresponds to the next actual read address.

Description

Memory device and operation method thereof
Technical Field
The present invention relates to memory technology, and more particularly, to a memory device and an operating method thereof.
Background
Accessing a memory often requires multiple clock cycles. To reduce this latency while maintaining a high operating frequency, a processor usually adds a cache to compensate for the speed difference. However, such an approach significantly increases the area and cost of the processor.
Another way to compensate is to increase the bandwidth of the memory and read multiple words at a time. However, when instruction fetching encounters a branch, if the branch target is not aligned to the wider fetch width, fetching the target address and fetching the next sequential instruction both require additional cycles, reducing the access performance of the processor. In addition, for writes, an extra write buffer is often required to combine multiple words into a single write, which increases hardware cost.
Therefore, a new memory device and operating method are needed in the art to address the above shortcomings.
Disclosure of Invention
This summary is intended to provide a simplified summary of the disclosure in order to provide a basic understanding to the reader. This summary is not an extensive overview of the disclosure and is intended to neither identify key/critical elements of the embodiments nor delineate the scope of the embodiments.
It is an object of the present disclosure to provide a memory device and an operating method thereof, so as to address the problems of the prior art.
To achieve the above object, one aspect of the present invention relates to a memory device comprising a first memory, a second memory, and an access circuit. A plurality of reference addresses of a processing circuit are interleaved to correspond to a plurality of real addresses of the first memory and the second memory. The access circuit is configured to: receive a read command corresponding to a reference read address from the processing circuit, so as to convert the reference read address into an actual read address of the first memory and an actual read address of the second memory; simultaneously read a first set of read data from a first one of the first memory and the second memory and read, in advance, a second set of read data from a second one of the first memory and the second memory according to the actual read address and a next actual read address; return the first set of read data to the processing circuit; and return the second set of read data to the processing circuit when a next read command corresponding to a next reference read address is received from the processing circuit and the next reference read address corresponds to the next actual read address.
Another aspect of the present disclosure relates to a method for operating a memory device, comprising: causing an access circuit to receive a read command corresponding to a reference read address from a processing circuit, so as to convert the reference read address into actual read addresses of a first memory and a second memory, wherein a plurality of reference addresses of the processing circuit are interleaved to correspond to a plurality of actual addresses of the first memory and the second memory; causing the access circuit to simultaneously read a first set of read data from a first one of the first memory and the second memory and a second set of read data from a second one of the first memory and the second memory according to the actual read address and a next actual read address; causing the access circuit to return the first set of read data to the processing circuit; and causing the access circuit to return the second set of read data to the processing circuit when a next read command corresponding to a next reference read address is received from the processing circuit and the next reference read address corresponds to the next actual read address.
By arranging two memories, the memory device and its operating method eliminate the read latency caused by multi-cycle access through parallel reading and instruction pre-reading. In addition, because the two memories are interleaved and can be addressed independently, branch instructions are not limited by address alignment, which reduces the stall cycles caused by branch instructions.
Drawings
In order to make the aforementioned and other objects, features, and advantages of the invention more comprehensible, reference is made to the following description taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a block diagram of a computer system according to an embodiment of the present invention;
FIG. 2 is a partial lookup table of reference addresses of the processing circuit and actual addresses of the first memory and the second memory according to an embodiment of the present invention;
FIG. 3 is a timing diagram illustrating a read operation of a memory device according to an embodiment of the present invention;
FIG. 4 is a timing diagram illustrating a read operation of a memory device according to another embodiment of the present invention;
FIG. 5 is a timing diagram illustrating a write operation of a memory device according to an embodiment of the present invention;
FIG. 6 is a flow chart of a method of operating a memory device according to an embodiment of the invention; and
FIG. 7 is a flow chart of a method of operating a memory device according to an embodiment of the invention.
Detailed Description
Please refer to Fig. 1. Fig. 1 is a block diagram of a computer system 1 according to an embodiment of the present invention. The computer system 1 includes a processing circuit 100 and a memory device 110.
The processing circuit 100 may access the memory device 110 by sending access commands, such as, but not limited to, a read command RC and a write command WC. For example, when the processing circuit 100 sends the read command RC, read data RD can be read out from the memory device 110 according to the corresponding address. When the processing circuit 100 sends the write command WC, write data WD can be written into the memory device 110 according to the corresponding address.
In one embodiment, the transmission of instructions and data between the processing circuit 100 and the memory device 110 may be performed via a bus 120 therebetween.
It should be noted that the computer system 1 may actually include other components that interact with the processing circuit 100 and the memory device 110, and is not limited to the components shown in Fig. 1.
The memory device 110 includes: a first memory SRAM1, a second memory SRAM2, and an access circuit FET.
In one embodiment, the first memory SRAM1 and the second memory SRAM2 are both static random access memories. However, the invention is not limited thereto.
The access circuit FET is configured to access the first memory SRAM1 and the second memory SRAM2 according to a read command RC and a write command WC sent by the processing circuit 100.
When the memory device 110 performs a read operation, the access circuit FET receives a read command RC from the processing circuit 100, converts the reference read address corresponding to the read command RC into actual read addresses of the first memory SRAM1 and the second memory SRAM2, and reads the corresponding data from the first memory SRAM1 and the second memory SRAM2. The memory device 110 further includes a first read buffer BUF1 and a second read buffer BUF2 for temporarily storing the data read by the access circuit FET, and the access circuit FET returns the data to the processing circuit 100.
On the other hand, the memory device 110 does not include a write buffer. When the memory device 110 performs a write operation, the access circuit FET receives a write command WC from the processing circuit 100, converts a reference write address corresponding to the write command WC into actual write addresses of the first memory SRAM1 and the second memory SRAM2, and writes corresponding data into the first memory SRAM1 and the second memory SRAM2 without temporary storage.
The structure and operation of the memory device 110 will be further explained below by way of a more detailed example.
The plurality of reference addresses of the processing circuit 100 are interleaved to correspond to the plurality of real addresses of the first memory SRAM1 and the second memory SRAM2. More specifically, in one embodiment, the Mth real address of the first memory corresponds to the (2M-1)th reference address of the processing circuit, the Mth real address of the second memory corresponds to the (2M)th reference address of the processing circuit, and M is a positive integer greater than or equal to 1.
Please refer to Fig. 2. Fig. 2 is a partial lookup table of the reference addresses of the processing circuit 100 and the actual addresses of the first memory SRAM1 and the second memory SRAM2 according to an embodiment of the present invention.
As shown in Fig. 2, the 1st to 4th real addresses of the first memory SRAM1, expressed in hexadecimal, are the consecutive values 0x0, 0x1, 0x2, and 0x3. The 1st to 4th real addresses of the second memory SRAM2, expressed in hexadecimal, are likewise the consecutive values 0x0, 0x1, 0x2, and 0x3. Each real address of the first memory SRAM1 and the second memory SRAM2 corresponds to one word, that is, 32 bits of data.
On the other hand, the 1st to 8th reference addresses of the processing circuit 100, expressed in hexadecimal, are the consecutive values 0x0, 0x4, 0x8, 0xC, 0x10, 0x14, 0x18, and 0x1C.
Therefore, under this correspondence, the 1st to 4th actual addresses of the first memory SRAM1 correspond to the 1st, 3rd, 5th, and 7th reference addresses of the processing circuit 100, respectively, and the 1st to 4th actual addresses of the second memory SRAM2 correspond to the 2nd, 4th, 6th, and 8th reference addresses of the processing circuit 100, respectively.
Note that Fig. 2 shows only a part of the lookup table. The processing circuit 100, the first memory SRAM1, and the second memory SRAM2 may actually include more corresponding reference addresses and actual addresses.
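As an illustration of the interleaving rule above, the following C sketch maps a reference address of the processing circuit 100 to one of the two memories and to an actual word address. It is a hypothetical model written under my own assumptions (reference addresses are byte addresses, one word is 32 bits, and the least-significant word-index bit selects the memory); it is not code taken from the patent itself.

#include <stdint.h>
#include <stdio.h>

typedef struct {
    int      bank;        /* 0 = first memory SRAM1, 1 = second memory SRAM2 */
    uint32_t actual_addr; /* word address inside the selected memory         */
} mapped_addr_t;

static mapped_addr_t map_reference_address(uint32_t ref_byte_addr)
{
    uint32_t word_index = ref_byte_addr >> 2;   /* 4 bytes per 32-bit word   */
    mapped_addr_t m;
    m.bank        = (int)(word_index & 1u);     /* interleave on the LSB     */
    m.actual_addr = word_index >> 1;            /* drop the bank-select bit  */
    return m;
}

int main(void)
{
    /* Reproduces the correspondence of Fig. 2 for reference addresses 0x0-0x1C. */
    for (uint32_t ref = 0x0; ref <= 0x1C; ref += 4) {
        mapped_addr_t m = map_reference_address(ref);
        printf("reference 0x%02X -> SRAM%d actual 0x%X\n",
               (unsigned)ref, m.bank + 1, (unsigned)m.actual_addr);
    }
    return 0;
}

Running this sketch prints the same pairing as the lookup table of Fig. 2: reference addresses 0x0, 0x8, 0x10, and 0x18 fall on actual addresses 0x0 to 0x3 of the first memory SRAM1, while 0x4, 0xC, 0x14, and 0x1C fall on actual addresses 0x0 to 0x3 of the second memory SRAM2.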
Taking a read operation as an example, when the access circuit FET receives a read command RC from the processing circuit 100, the corresponding reference read address is converted into actual read addresses of the first memory SRAM1 and the second memory SRAM2.
Please refer to Fig. 3. Fig. 3 is a timing diagram illustrating a read operation performed by the memory device 110 according to an embodiment of the present invention.
In Fig. 3, CPUADD represents the reference read address corresponding to a read command RC of the processing circuit 100, CPUDA represents the data returned by the access circuit FET to the processing circuit 100, SRAM1 represents the data read from the first memory SRAM1, SRAM2 represents the data read from the second memory SRAM2, BUF1 represents the data temporarily stored in the first read buffer BUF1, and BUF2 represents the data temporarily stored in the second read buffer BUF2.
As shown in Fig. 3, the signal and data transmission among the processing circuit 100, the first memory SRAM1, and the second memory SRAM2 operates according to the clock signal CLK. A read command RC from the processing circuit 100 takes one clock cycle to transmit, while reading data from the first memory SRAM1 or the second memory SRAM2 takes two clock cycles.
At clock cycle T0, the access circuit FET receives a first read command RC1 from the processing circuit 100 via, for example, but not limited to, the bus 120, and the read command RC1 corresponds to the reference read address 0x0.
The access circuit FET then simultaneously reads the first set of read data RD1 from the first memory SRAM1 and reads, in advance, the second set of read data RD2 from the second memory SRAM2 according to the actual read address (the actual address 0x0 of the first memory SRAM1, corresponding to the reference read address 0x0 of the processing circuit 100) and the next actual read address (the actual address 0x0 of the second memory SRAM2, corresponding to the reference read address 0x4 of the processing circuit 100).
At clock cycle T1, the processing circuit 100 is stalled, since the first memory SRAM1 and the second memory SRAM2 require two clock cycles to read.
At clock cycle T2, the first set of read data RD1 and the second set of read data RD2 are buffered in the first read buffer BUF1 and the second read buffer BUF2, respectively. At this time, the access circuit FET returns the first set of read data RD1 to the processing circuit 100 in one clock cycle.
It should be noted that, in practice, since the first memory SRAM1 requires two clock cycles for reading, the first set of read data RD1 can also be selectively returned by the access circuit FET directly to the processing circuit 100 at clock cycle T2 without being buffered in the first read buffer BUF1. However, the present invention is not limited thereto.
At the same time, the access circuit FET receives the next read command RC2 from the processing circuit 100. In this embodiment, the next reference read address corresponding to the read command RC2 is 0x4.
The access circuit FET determines that this next reference read address corresponds to the aforementioned next actual read address, and therefore reads, in advance, the third set of read data RD3 from the first memory SRAM1 and the fourth set of read data RD4 from the second memory SRAM2 according to the second-next actual read address (the actual address 0x1 of the first memory SRAM1, corresponding to the reference read address 0x8 of the processing circuit 100) and the third-next actual read address (the actual address 0x1 of the second memory SRAM2, corresponding to the reference read address 0xC of the processing circuit 100).
At clock cycle T3, since the reference read address corresponding to the read command RC2 received by the access circuit FET at clock cycle T2 is 0x4, which corresponds to the second set of read data RD2 previously read from the second memory SRAM2, the access circuit FET returns the second set of read data RD2 from the second read buffer BUF2 to the processing circuit 100.
At the same time, the access circuit FET receives the next read command RC3 from the processing circuit 100. In this embodiment, the reference read address corresponding to the read command RC3 is 0x8. The access circuit FET determines that this reference read address corresponds to the previously read third set of read data RD3, so no further read operation is required.
At clock cycle T4, since the reference read address corresponding to the read command RC3 received by the access circuit FET at clock cycle T3 is 0x8, which corresponds to the third set of read data RD3 previously read from the first memory SRAM1, the access circuit FET returns the third set of read data RD3 from the first read buffer BUF1 to the processing circuit 100.
Similarly, as mentioned above, in practice, since the first memory SRAM1 requires two clock cycles to read, the third set of read data RD3 can also be selectively returned by the access circuit FET directly to the processing circuit 100 at clock cycle T4 without being buffered in the first read buffer BUF1. However, the present invention is not limited thereto.
At the same time, the access circuit FET receives the next read command RC4 from the processing circuit 100. In this embodiment, the reference read address corresponding to the read command RC4 is 0xC. The access circuit FET determines that this reference read address corresponds to the previously read fourth set of read data RD4, and reads two further sets of read data in advance from the first memory SRAM1 and the second memory SRAM2 according to the following actual read addresses (the actual address 0x2 of the first memory SRAM1, corresponding to the reference read address 0x10 of the processing circuit 100, and the actual address 0x2 of the second memory SRAM2, corresponding to the reference read address 0x14 of the processing circuit 100).
Therefore, when the access circuit FET receives a plurality of read commands RC from the processing circuit 100 corresponding to consecutive reference read addresses, data can be read continuously without interruption up to clock cycle T5.
At clock cycle T6, the access circuit FET does not receive a read command RC because the processing circuit 100 is idle, but it still reads two sets of read data in advance. Therefore, when the reference read address of the read command RC received by the access circuit FET at clock cycle T7 is still consecutive with the previous read address, the pre-read data can be used directly and no unnecessary stall occurs.
Please refer to Fig. 4. Fig. 4 is a timing diagram illustrating a read operation performed by the memory device 110 according to another embodiment of the present invention. The reference labels in Fig. 4 are the same as those in Fig. 3 and are therefore not described again. Further, the operations at clock cycles T0-T3 in Fig. 4 are the same as those at clock cycles T0-T3 in Fig. 3 and are likewise not repeated.
In this embodiment, at clock cycle T4, the reference read address corresponding to the read command RC4 received by the access circuit FET is 0x84 instead of 0xC. This reference read address is not consecutive with the reference read address 0x8 corresponding to the previous read command RC3. Thus, the read command RC4 is a branch instruction that jumps away from what would otherwise be a sequential address read.
Although the first read buffer BUF1 and the second read buffer BUF2 temporarily store the third set of read data RD3 and the fourth set of read data RD4 that were read in advance and correspond to the reference read addresses 0x8 and 0xC, respectively, the access circuit FET can only return the third set of read data RD3 to the processing circuit 100 according to the read command RC3 at clock cycle T4; it cannot return the fourth set of read data RD4 to the processing circuit 100 according to the read command RC4 at the next clock cycle T5.
Therefore, at clock cycle T4, the access circuit FET additionally reads the fifth set of read data RD5 (corresponding to the reference read address 0x84 of the processing circuit 100) and the sixth set of read data RD6 (corresponding to the reference read address 0x88 of the processing circuit 100) according to the read command RC4. For example, under the correspondence of the lookup table shown in Fig. 2, the fifth set of read data RD5 is read from the second memory SRAM2, and the sixth set of read data RD6 is read from the first memory SRAM1.
Since two clock cycles are required to read the data, the processing circuit 100 stalls at clock cycle T5.
At clock cycle T6, the access circuit FET receives the read command RC5 from the processing circuit 100, and the reference read address corresponding to the read command RC5 is 0x88, corresponding to the sixth set of read data RD6 read in advance.
Therefore, in addition to returning the fifth set of read data RD5 to the processing circuit 100, the access circuit FET also reads, in advance, the seventh set of read data RD7 and the eighth set of read data RD8, corresponding to the reference read addresses 0x8C and 0x90, from the second memory SRAM2 and the first memory SRAM1, respectively. The subsequent operations at each clock cycle are the same as in the case of consecutive reference read addresses described above, and thus are not described again.
In some techniques, although the access width of a single memory is increased so that two words can be read at a time, the target address to be read must be aligned to 64 bits. When a branch instruction is encountered during a memory read and the branch target address is not aligned to 64 bits, reading the target address and fetching the next sequential instruction each require two clock cycles, and the processing circuit must suspend access for multiple cycles.
Through the arrangement of two memories, the memory device 110 of the present invention eliminates the read latency caused by multi-cycle access by means of parallel reading and instruction pre-reading. Moreover, because the two memories are interleaved and their addresses can be configured independently, branch instructions are not limited by 64-bit alignment, which reduces the stall cycles caused by branch instructions.
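To summarize the read behaviour of Figs. 3 and 4, the following C sketch counts the stall cycles seen by the processing circuit under my own simplifying assumptions: once a sequential stream is established, the requested word is always either buffered or in flight, so only the very first access and branch targets cost one stall cycle. The helper and the trace are illustrative only and are not taken from the patent.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool     stream_valid  = false;
static uint32_t next_expected = 0;      /* next sequential reference read address */

/* Returns the number of stall cycles the processing circuit sees for one read. */
static int stall_cycles_for_read(uint32_t ref_addr)
{
    if (stream_valid && ref_addr == next_expected) {
        next_expected = ref_addr + 4;   /* word-sized reference addresses        */
        return 0;                       /* hit on data that was read in advance  */
    }
    /* First access or a branch target: a fresh dual-bank fetch is needed.       */
    stream_valid  = true;
    next_expected = ref_addr + 4;
    return 1;
}

int main(void)
{
    /* Reference read addresses of Figs. 3 and 4, with a branch to 0x84.         */
    const uint32_t trace[] = { 0x0, 0x4, 0x8, 0xC, 0x84, 0x88, 0x8C };
    for (size_t i = 0; i < sizeof trace / sizeof trace[0]; ++i)
        printf("read 0x%02X -> %d stall cycle(s)\n",
               (unsigned)trace[i], stall_cycles_for_read(trace[i]));
    return 0;
}

For the address sequence of Figs. 3 and 4 (0x0, 0x4, 0x8, 0xC, then a branch to 0x84), this prints one stall for the first access, one stall for the branch target 0x84, and zero stalls everywhere else.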
On the other hand, taking a write operation as an example, when the access circuit FET receives a write command WC corresponding to a reference write address from the processing circuit 100, the reference write address is converted into actual write addresses of the first memory SRAM1 and the second memory SRAM2 to determine where the data is actually to be written.
Please refer to Fig. 5. Fig. 5 is a timing diagram illustrating a write operation of the memory device 110 according to an embodiment of the present invention.
As shown in Fig. 5, the access circuit FET sequentially receives the write commands WC1-WC4 during clock cycles T0-T3. Each of the write commands WC1-WC4 is received in one clock cycle, and the write commands WC1-WC4 correspond to the consecutive reference write addresses 0x0, 0x4, 0x8, and 0xC, respectively.
The reference write addresses 0x0, 0x4, 0x8, and 0xC correspond to the real address 0x0 of the first memory SRAM1, the real address 0x0 of the second memory SRAM2, the real address 0x1 of the first memory SRAM1, and the real address 0x1 of the second memory SRAM2, respectively.
Although the first memory SRAM1 and the second memory SRAM2 each require two clock cycles to be written, they can be accessed independently of each other. The access circuit FET therefore sequentially writes the write data WD1-WD4 corresponding to the write commands WC1-WC4 into the corresponding addresses of the first memory SRAM1 and the second memory SRAM2 at clock cycles T0-T3. The operation from clock cycle T4 onward follows the same writing method and is therefore not described again.
As a result, the access circuit FET can write one set of data every clock cycle without any stall and without a write buffer.
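The following C sketch mirrors the write trace of Fig. 5 under my own assumptions (two clock cycles per write within a single memory, independent access to the two memories, and the interleaving rule of Fig. 2); the bank bookkeeping is illustrative and not part of the patent.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int busy_until[2] = { 0, 0 };                     /* cycle when SRAM1/SRAM2 are free  */
    const uint32_t refs[] = { 0x0, 0x4, 0x8, 0xC };   /* write commands WC1-WC4 of Fig. 5 */

    for (int cycle = 0; cycle < 4; ++cycle) {
        uint32_t word_index = refs[cycle] >> 2;
        int      bank   = (int)(word_index & 1u);     /* interleaving rule of Fig. 2      */
        uint32_t actual = word_index >> 1;

        if (busy_until[bank] > cycle) {               /* would stall, but never happens   */
            printf("cycle T%d: stall (SRAM%d busy)\n", cycle, bank + 1);
            continue;                                 /* for the alternating trace above  */
        }
        busy_until[bank] = cycle + 2;                 /* a write occupies the bank for    */
        printf("cycle T%d: WD%d -> SRAM%d actual 0x%X\n",   /* two clock cycles           */
               cycle, cycle + 1, bank + 1, (unsigned)actual);
    }
    return 0;
}

Because consecutive word addresses alternate between the two memories, each memory only sees a new write every other cycle, so the two-cycle write time never causes a stall and the stall branch in the sketch is never taken for this trace.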
FIG. 6 is a flow chart of a method 600 for operating a memory device according to an embodiment of the invention.
The memory device operation method 600 may be applied to the memory device 110 shown in Fig. 1, and the memory device operation method 600 is used to perform a read operation by the access circuit FET on the first memory SRAM1 and the second memory SRAM2. The method 600 includes the following steps (it should be understood that the steps mentioned in this embodiment, except where their order is specifically stated, can be performed simultaneously or partially simultaneously, and their order can be adjusted according to actual requirements).
In step 601, the access circuit FET is enabled to receive a read command RC corresponding to a reference read address from the processing circuit 100.
In step 602, the access circuit FET is enabled to convert the reference read address into the actual read addresses of the first memory SRAM1 and the second memory SRAM2, wherein the reference addresses of the processing circuit 100 are interleaved with the actual addresses of the first memory SRAM1 and the second memory SRAM2.
In step 603, the access circuit FET is enabled to simultaneously read the first set of read data RD1 from a first one of the first memory SRAM1 and the second memory SRAM2 and the second set of read data RD2 from a second one of the first memory SRAM1 and the second memory SRAM2 according to the actual read address and the next actual read address.
In step 604, the access circuit FET is enabled to return the first set of read data RD1 to the processing circuit 100.
In step 605, the access circuit FET is caused to receive a next read command RC corresponding to a next reference read address and determine whether the next reference read address corresponds to a next actual read address.
In step 606, the access circuit FET is enabled to return the pre-read data, such as the second set of read data RD2, to the processing circuit 100 when the next reference read address of the next read command RC corresponds to the next actual read address.
In step 607, the access circuit FET reads the next two sets of read data in advance and stores them in the first read buffer BUF1 and the second read buffer BUF2, respectively.
More specifically, the access circuit FET reads, in advance, the third set of read data RD3 from the first memory SRAM1 and the fourth set of read data RD4 from the second memory SRAM2 according to the second-next actual read address and the third-next actual read address, and stores the read data RD3 and the read data RD4 in the first read buffer BUF1 and the second read buffer BUF2, respectively.
After step 607, the process returns to step 605 to continue determining the next received read command.
When the access circuit FET determines in step 605 that the next reference read address does not correspond to the next actual read address, the access circuit FET determines in step 608 whether the processing circuit 100 has simply not issued any read command.
When the access circuit FET determines that the processing circuit 100 has not issued a read command, the flow proceeds to step 607 to continue pre-reading. When the access circuit FET determines that the processing circuit 100 has issued a read command (i.e., a branch has occurred), the flow returns to step 602 to read the first memory SRAM1 and the second memory SRAM2 according to the new read address.
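A structural C sketch of method 600 is given below for illustration. The helpers receive_read_command(), dual_read(), and respond() are placeholders of my own for the real bus and SRAM interfaces, and the cycle timing and the idle-time pre-reading of step 607 are simplified, so only the control flow of steps 601-608 is shown; it is not an implementation provided by the patent.

#include <stdbool.h>
#include <stdint.h>

extern bool receive_read_command(uint32_t *ref);   /* false while the CPU is idle        */
extern void dual_read(uint32_t ref,                /* reads ref from one memory and      */
                      uint32_t *lo, uint32_t *hi); /* ref+4 from the other, in parallel  */
extern void respond(uint32_t data);                /* return read data to the CPU        */

void method_600(void)
{
    uint32_t pair[2];   /* BUF1/BUF2: pre-read data for addresses base and base+4        */
    uint32_t base, expect, ref;

    while (!receive_read_command(&ref)) { }        /* step 601                           */

    for (;;) {
        /* steps 602-603: convert the reference address and read both memories at once.  */
        base = ref;
        dual_read(base, &pair[0], &pair[1]);
        respond(pair[0]);                          /* step 604: return RD1               */
        expect = base + 4;                         /* next sequential reference address  */

        for (;;) {
            uint32_t next;
            if (!receive_read_command(&next))      /* step 608: CPU idle, keep the       */
                continue;                          /* already pre-read data              */

            if (next != expect) {                  /* step 605 fails: branch target      */
                ref = next;
                break;                             /* back to step 602                   */
            }

            /* step 606: respond with the word that was read in advance.                 */
            respond(next == base ? pair[0] : pair[1]);

            if (next == base + 4) {                /* pair fully consumed:               */
                base = next + 4;                   /* step 607: pre-read the next two    */
                dual_read(base, &pair[0], &pair[1]);  /* words into BUF1/BUF2            */
            }
            expect = next + 4;
        }
    }
}

For the address sequences of Figs. 3 and 4, this control flow returns a buffered word on every sequential read, and it starts a fresh dual read and restarts from step 602 only when a branch target such as 0x84 arrives.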
FIG. 7 is a flow chart of a method 700 for operating a memory device according to an embodiment of the invention.
The memory device operation method 700 can be applied to the memory device 110 shown in Fig. 1, and the memory device operation method 700 is used to perform a write operation by the access circuit FET on the first memory SRAM1 and the second memory SRAM2. The method 700 for operating a memory device includes the following steps (it should be understood that the steps mentioned in this embodiment, except where their order is specifically stated, can be performed simultaneously or partially simultaneously, and their order can be adjusted according to actual needs).
In step 701, the access circuit FET receives a write command WC corresponding to a reference write address from the processing circuit 100.
In step 702, the access circuit FET converts the reference write address into the actual write addresses of the first memory SRAM1 and the second memory SRAM2.
In step 703, the access circuit FET writes the write data WD to the actual write address according to the write command WC.
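For completeness, a matching hypothetical sketch of method 700 follows, reusing the same assumed placeholder interfaces as the method 600 sketch above; only the write-side control flow of steps 701-703 is illustrated.

#include <stdbool.h>
#include <stdint.h>

/* Placeholder interfaces, assumed for illustration only. */
extern bool receive_write_command(uint32_t *ref, uint32_t *data);
extern void write_word(int bank, uint32_t actual_addr, uint32_t data);

void method_700(void)
{
    uint32_t ref, data;
    while (receive_write_command(&ref, &data)) {     /* step 701: receive WC and WD      */
        uint32_t word_index = ref >> 2;              /* step 702: convert the reference  */
        int      bank   = (int)(word_index & 1u);    /* write address into a bank and an */
        uint32_t actual = word_index >> 1;           /* actual write address             */
        write_word(bank, actual, data);              /* step 703: write the data         */
    }
}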
Although the foregoing embodiments have been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes and modifications may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
[Description of reference numerals]
1 … computer system
100 … processing circuit
110 … memory device
120 … bus
600 … memory device operation method
700 … memory device operation method
601-608 … steps
701-703 … steps
BUF1 … first read buffer
BUF2 … second read buffer
CLK … clock signal
CPUADD … reference read address
CPUDA … data
FET … access circuit
RC, RC1-RC6 … read commands
RD, RD1-RD8 … read data
SRAM1 … first memory
SRAM2 … second memory
T0-T7 … clock cycles
WC, WC1-WC4 … write commands
WD, WD1-WD4 … write data

Claims (10)

1. A memory device, comprising:
a first memory and a second memory, wherein a plurality of reference addresses of a processing circuit are interleaved corresponding to a plurality of real addresses of the first memory and the second memory; and
an access circuit configured to:
receiving a read command corresponding to a reference read address from the processing circuit to convert the reference read address into an actual read address of the first memory and the second memory;
simultaneously reading a first set of read data from a first one of the first memory and the second memory and reading, in advance, a second set of read data from a second one of the first memory and the second memory according to the actual read address and a next actual read address adjacent to the actual read address;
returning the first set of read data to the processing circuit; and
returning the second set of read data to the processing circuit when a next read command corresponding to a next reference read address is received from the processing circuit and the next reference read address corresponds to the next actual read address.
2. The memory device of claim 1, wherein when the next reference read address corresponds to the next actual read address, the access circuit simultaneously pre-reads a third set of read data and a fourth set of read data from the first memory and the second memory according to a second-next actual read address and a third-next actual read address; and
when the next reference read address does not correspond to the next actual read address, the access circuit converts the next reference read address into a corresponding memory read address and a next memory read address adjacent to the memory read address, and pre-reads a fifth set of read data and a sixth set of read data from the first memory and the second memory.
3. The memory device of claim 1, wherein the read command is received from the processing circuit in one clock cycle, the first set of read data and the second set of read data are read from the first memory and the second memory in two clock cycles, and the first set of read data and the second set of read data are each read by the processing circuit in one clock cycle.
4. The memory device of claim 1, further comprising a first read buffer and a second read buffer configured to temporarily store the first set of read data and the second set of read data, respectively.
5. The memory device of claim 1, wherein the real addresses of the first memory and the second memory correspond to data of one word length.
6. The memory device of claim 1, wherein the Mth real address of the first memory corresponds to the (2M-1)th reference address of the processing circuit, the Mth real address of the second memory corresponds to the (2M)th reference address of the processing circuit, and M is a positive integer greater than or equal to 1.
7. The memory device of claim 1, wherein the access circuit is further configured to receive a write command corresponding to a reference write address from the processing circuit, to convert the reference write address into a real write address of the first memory and the second memory, and to write a set of write data into the real write address according to the write command.
8. The memory device of claim 6, wherein the write command is received from the processing circuit in one clock cycle, and the write data is written in two clock cycles.
9. The memory device of claim 1, wherein the memory device does not include a write buffer.
10. A method of memory device operation, comprising:
enabling an access circuit to receive a read command corresponding to a reference read address from a processing circuit to convert the reference read address into a real read address of a first memory and a second memory, wherein a plurality of reference addresses of the processing circuit are interleaved with a plurality of real addresses of the first memory and the second memory;
enabling the access circuit to simultaneously read a first set of read data from a first one of the first memory and the second memory and a second set of read data from a second one of the first memory and the second memory according to the actual read address and a next actual read address;
causing the access circuit to return the first set of read data to the processing circuit; and
causing the access circuit to return the second set of read data to the processing circuit when a next read command corresponding to a next reference read address is received from the processing circuit and the next reference read address corresponds to the next actual read address.
CN202010022523.1A 2020-01-09 2020-01-09 Memory device and operation method thereof Pending CN113110878A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010022523.1A CN113110878A (en) 2020-01-09 2020-01-09 Memory device and operation method thereof

Publications (1)

Publication Number Publication Date
CN113110878A true CN113110878A (en) 2021-07-13

Family

ID=76708587

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4888679A (en) * 1988-01-11 1989-12-19 Digital Equipment Corporation Method and apparatus using a cache and main memory for both vector processing and scalar processing by prefetching cache blocks including vector data elements
US4918587A (en) * 1987-12-11 1990-04-17 Ncr Corporation Prefetch circuit for a computer memory subject to consecutive addressing
CN1694076A (en) * 2004-04-27 2005-11-09 威盛电子股份有限公司 Interlock mapping method and device of memory access and its application method
CN102648456A (en) * 2009-09-21 2012-08-22 飞思卡尔半导体公司 Memory device and method
US20130138867A1 (en) * 2011-11-30 2013-05-30 International Business Machines Corporation Storing Multi-Stream Non-Linear Access Patterns in a Flash Based File-System
US20140136748A1 (en) * 2011-10-03 2014-05-15 Choon Gun Por System and method for performance optimization in usb operations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination