CN111881068A - Multi-entry fully associative cache memory and data management method - Google Patents

Multi-entry fully associative cache memory and data management method

Info

Publication number
CN111881068A
Authority
CN
China
Prior art keywords
module
management module
read
address
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010614315.0A
Other languages
Chinese (zh)
Inventor
谭吉来
黄涛
刘雨婷
李瑞鹏
侯子超
王东琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Si Lang Science And Technology Co., Ltd.
Original Assignee
Beijing Si Lang Science And Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Si Lang Science And Technology Co., Ltd.
Priority to CN202010614315.0A
Publication of CN111881068A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0877: Cache access modes
    • G06F 12/0884: Parallel mode, e.g. in parallel with main memory or CPU
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses a multi-entry fully associative cache memory and a data management method. When the memory receives multiple read requests and at least two of them carry the same read address, it performs only one read: the requests sharing an address are answered simultaneously, and the cache is read only once when that group of requests hits. When multiple master devices burst-access the same address interval, or access overlapping address intervals, the cache can respond to all of them with a single data read instead of re-accessing the same address once per master device, which reduces power consumption and improves efficiency. The cache can also process multiple groups of same-address read-write requests in parallel, so overall access efficiency is high.

Description

Multi-entry fully associative cache memory and data management method
Technical Field
The present application relates to the field of memory, and more particularly, to a multi-entry fully associative cache memory and a data management method.
Background
With the rise of communications and the growing use of smart devices for daily life and office work, data-reading technology is used ever more frequently.
A buffer memory (cache) is a small but fast memory placed between the CPU and the main memory of a computer. Because the CPU runs much faster than main memory, it must wait when it accesses data directly from main memory; the buffer memory can store data that the CPU has just used or uses repeatedly. When the CPU needs that data again, it can be fetched directly from the buffer memory, avoiding repeated reads and writes of main memory, reducing CPU wait time, and improving system efficiency. However, buffer memory implementations in the related art are structurally complex, and when read requests arrive from multiple users they can only be processed one at a time, which leads to low data-reading efficiency.
Specifically, because the addressing information issued by the CPU is directed at main memory, a cache memory (Cache) stores not only data but also the main-memory address information (Tag) of the stored data. To speed up lookup, a Tag together with its corresponding data is called a cache line, and a cache line usually also contains a Valid bit (Valid) that marks whether the line holds valid data. In the related art, whenever read requests arrive, the cache memory processes each one individually in sequence, which makes it slow, inefficient, and costly.
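For background, the cache line just described can be modeled with a short sketch (an illustrative Python model, not taken from the patent; the field layout, widths, and names are assumptions):

```python
from dataclasses import dataclass

@dataclass
class CacheLine:
    """A conventional cache line: main-memory address info plus its data."""
    tag: int     # main-memory address information (Tag) of the stored data
    valid: bool  # Valid bit: marks whether the line holds valid data
    data: bytes  # the cached data block itself

line = CacheLine(tag=0x1F40, valid=True, data=bytes(64))
print(line.valid, hex(line.tag))  # True 0x1f40
```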
Disclosure of Invention
The embodiments of the application provide a multi-entry fully associative cache memory and a data management method implemented with it.
According to one aspect of the application, a multi-entry fully associative cache memory is disclosed, comprising a data synchronization management module and an on-chip memory management module, the on-chip memory management module comprising an arbitration module and an on-chip memory bank, wherein: the arbitration module is connected with a plurality of master devices and with the on-chip memory bank, and is used for processing read-write requests sent by the master devices in parallel; the data synchronization management module is connected with the arbitration module and with an off-chip memory bank, and is used for synchronizing data.
Further, the data synchronization management module comprises a miss checking module and a dynamic address management module, wherein: the input end of the miss checking module is connected with the plurality of master devices and is used for selecting missed read-write request addresses; the output end of the miss checking module is connected with the dynamic address management module and is used for sending the missed read-write request addresses to the dynamic address management module, and the dynamic address management module is used for allocating a memory segment in the on-chip memory bank for each missed read-write request address.
Further, the data synchronization management module further includes a segment state management module and a priority policy module, wherein: the input end of the segment state management module is connected with the plurality of master devices, and the output end of the segment state management module is connected with the priority policy module; the segment state management module is used for counting segment states according to the read-write requests sent by the master devices and providing the segment state information to the priority policy module.
Further, the priority policy module comprises a first priority policy module, wherein the segment state management module is connected in sequence with the first priority policy module and the dynamic address management module; the segment state management module is used for providing segment state information to the dynamic address management module.
Further, the priority policy module further comprises a second priority policy module, wherein the segment state management module is connected in sequence with the second priority policy module and an available space management module; the segment state management module is used for providing segment state information to the available space management module.
Further, the data synchronization management module comprises an inward synchronization module, wherein: the inward synchronization module is connected with the dynamic address management module, the arbitration module, and the off-chip memory bank, and is used for synchronizing data of the off-chip memory bank to the on-chip memory bank.
Further, the data synchronization management module comprises an outward synchronization module, wherein: the outward synchronization module is connected with the available space management module, the arbitration module and the off-chip memory bank and is used for synchronizing data of the on-chip memory bank to the off-chip memory bank.
Further, the arbitration module comprises a plurality of sub-modules and the on-chip memory bank comprises a plurality of sub-memory banks, wherein: the plurality of master devices are connected with each sub-module, and each sub-module is connected with its corresponding sub-memory bank, for processing a plurality of read-write requests in parallel; the arbitration module is used for processing multiple groups of same-address read-write requests in parallel, each group comprising several read-write requests sent by several master devices; responses are serialized only when different requests must be answered by the same sub-memory bank, and are parallel otherwise. Hence the larger the number of sub-memory banks, the fewer the response conflicts and the higher the overall access efficiency of the buffer memory.
According to another aspect of the present application, there is disclosed a data management method implemented using the above cache memory, the method comprising: a plurality of master devices initiate read requests; whenever there are at least two read requests having the same address, those read requests are answered simultaneously with a single read.
According to another aspect of the present application, there is disclosed a data management method implemented using the above cache memory, the method comprising: a plurality of master devices initiate read requests; based on a policy that gives the smallest address value the highest priority, when the master devices access the same address interval, or overlapping address intervals, in an address-incrementing burst mode, the buffer memory responds to all of them with a single data read instead of repeatedly accessing the same address for each master device.
According to the memory proposed by the application, when multiple read requests are received and at least two of them carry the same read address, those requests are answered with a single read: the buffer memory is read only once when that group of requests hits. Based on the policy that gives the smallest address value the highest priority, when multiple master devices access the same address interval, or overlapping address intervals, in an address-incrementing burst (Burst) mode, the buffer memory responds to all of them with one data read rather than re-accessing the same address once per master device, which reduces power consumption, improves efficiency, and avoids the prior-art problem of processing each read request one by one in sequence. Multiple groups of same-address read-write requests can therefore be processed in parallel, each group comprising several read-write requests from several master devices; responses are serialized only when different requests need the same sub-memory bank, and are parallel otherwise. Hence the larger the number of sub-memory banks, the fewer the response conflicts and the higher the overall access efficiency of the buffer memory.
The technical solution of the present application is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description, serve to explain the principles of the application.
The present application may be more clearly understood from the following detailed description with reference to the accompanying drawings, in which:
FIG. 1 is a diagram of a cache memory architecture according to the present application;
FIG. 2 is a block diagram of an on-chip memory management module of the buffer memory proposed in the present application;
FIG. 3 is a block diagram of the data synchronization management module of the buffer memory proposed in the present application;
FIG. 4 is a flowchart of the data management method implemented by the buffer memory proposed in the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the application, its application, or uses. Like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
In addition, technical solutions from the various embodiments of the present application may be combined with each other, provided that a person skilled in the art can realize the combination; when technical solutions contradict each other or a combination cannot be realized, that combination should be considered absent and outside the protection scope of the present application.
FIG. 1 schematically illustrates the overall architecture of the multi-entry fully associative cache memory. As shown in FIG. 1, the cache memory 100 includes a data synchronization management module 120 and an on-chip memory management module 110, where the on-chip memory management module 110 includes an arbitration module 111 and an on-chip memory bank 112. The arbitration module 111 is connected to a plurality of master devices 200 and to the on-chip memory bank 112, and is configured to process read-write requests sent by the master devices in parallel; the data synchronization management module 120 is connected to the arbitration module 111 and to an off-chip memory bank 300, and is configured to synchronize data. The plurality of master devices 200 comprises master 1, master 2, …, master N.
In the cache memory of the present application, masters 1 to N are connected to the inputs of the arbitration module 111 and of the data synchronization management module 120 (the latter connection is not shown in FIG. 1). The on-chip memory bank 112 is an on-chip memory array (RAM Array) connected to the data synchronization management module 120 and the arbitration module 111, and the output of the arbitration module 111 serves as the output of the multi-entry fully associative cache memory. The data synchronization management module 120 is connected to an off-chip memory bank 300, which may be a DDR SDRAM.
FIG. 2 is a schematic diagram of the on-chip memory management module 110 of the cache memory of the present application. The arbitration module 111 includes sub-modules ARB(0) to ARB(m), and the on-chip memory bank 112 includes sub-memory banks RAM(0) to RAM(m). Masters 1 to N are connected to the sub-modules ARB(0) to ARB(m), and each sub-module is connected to its corresponding sub-memory bank, so that multiple read-write requests are processed in parallel. For example, masters 1 to N are connected to sub-module ARB(0) for transmitting requests and data, and ARB(0) is connected to its corresponding sub-memory bank RAM(0) for reading and writing data.
The arbitration module 111 is configured to process read-write requests in parallel. Only when several different requests all require the same sub-memory bank to respond does "many-to-one" resource contention arise; the sub-arbitration module of that sub-memory bank then arbitrates one request per cycle, so the requests on that sub-memory bank are answered serially in sequence. When the sub-memory banks required by the requests differ from one another, every sub-memory bank receiving a request is in a one-to-one state and all requests are answered concurrently.
Whenever several read requests carry the same address, the arbitration module treats them as one request that can be answered with a single read, simultaneously for all of them. There may be several such groups: for example, if requests A and B share one address and C, D, and E share another, the five requests count as two. The larger the number of sub-memory banks, the fewer the response conflicts and the higher the overall access efficiency of the buffer memory.
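The coalescing and arbitration behavior just described can be pictured with a small scheduling sketch (a hypothetical Python model; the address-interleaved bank mapping and the function names are assumptions, since the patent does not specify how addresses map to sub-memory banks):

```python
from collections import defaultdict

NUM_SUB_BANKS = 4  # illustrative; the patent does not fix the number m

def schedule(read_addresses):
    """Coalesce same-address reads into single accesses, then count how many
    cycles the sub-bank arbiters need. Returns (unique_accesses, cycles)."""
    # Requests with the same address are treated as one request and answered
    # with a single read ("same-address simultaneous response").
    unique = set(read_addresses)

    # Map each unique access to a sub-bank (simple interleaving is assumed).
    # Accesses to different sub-banks proceed in parallel; accesses colliding
    # on one sub-bank are served serially, one per cycle.
    per_bank = defaultdict(int)
    for addr in unique:
        per_bank[addr % NUM_SUB_BANKS] += 1
    cycles = max(per_bank.values(), default=0)
    return len(unique), cycles

# The example from the text: A and B share one address, C, D, E share another,
# so five incoming reads collapse to two actual accesses.
print(schedule([0x100, 0x100, 0x201, 0x201, 0x201]))  # -> (2, 1): banks 0 and 1
```

In this model, raising NUM_SUB_BANKS spreads the unique accesses over more banks, which lowers the cycle count, the same trade-off the passage describes.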
FIG. 3 illustrates the internal architecture of the data synchronization management module 120 of the cache memory and its interfaces to the surrounding modules. The data synchronization management module 120 includes a miss checking module 121, a dynamic address management module 122, a segment state management module 123, a first priority policy module 124, a second priority policy module 125, an available space management module 126, an inward synchronization module 127, and an outward synchronization module 128.
the input end of the data synchronization management module 120 is connected to the main devices 1 to N, and the main devices 1 to N serve as the input ends of the miss detection module 121 and the segment status management module 123. The output of the miss checking module 121 terminates the dynamic address management module 122, and the other input of the dynamic address management module 122 is from the output of the first priority policy module 124, and the input of the first priority policy module 124 is one of the outputs of the segment status management module 123. The output of the dynamic address management module 122 is connected to the inbound synchronization module 127, the input of the data synchronization management module from the off-chip memory bank 300 (such as DDR SDRAM) serves as another input of the inbound synchronization module 127, and the output of the inbound synchronization module 127 serves as the output of the data synchronization management module 120 and is connected to the arbitration module 111. The other output terminal of the segment status management module 123 is connected to the second priority policy module 125, the output terminal of the second priority policy module 125 is connected to the available space management module 126, the output terminal of the available space management module 126 is connected to the outbound synchronization module 128, the input terminal of the data synchronization management module 120 from the arbitration module 111 is used as the other input terminal of the outbound synchronization module 128, and the output terminal of the outbound synchronization module 128 is connected to the off-chip memory bank 300 (such as DDR SDRAM) as the output terminal of the data synchronization management module 120.
Specifically, the input end of the miss checking module 121 is connected to masters 1 to N and is configured to select the missed read-write request addresses; the output end of the miss checking module 121 is connected to the dynamic address management module 122 and is configured to send the missed read-write request addresses to it; the dynamic address management module 122 is configured to allocate a memory segment in the on-chip memory bank for each missed read-write request address.
Specifically, the input end of the segment state management module 123 is connected to masters 1 to N, and its output ends are connected to the first priority policy module 124 and the second priority policy module 125; it is configured to count the segment states according to the read-write requests sent by masters 1 to N and to provide the segment state information to the first priority policy module 124 and the second priority policy module 125.
Specifically, the segment state management module 123 is connected in sequence to the first priority policy module 124 and the dynamic address management module 122, to provide segment state information to the dynamic address management module 122; it is likewise connected in sequence to the second priority policy module 125 and the available space management module 126, to provide segment state information to the available space management module 126.
Specifically, the inward synchronization module 127 is connected to the dynamic address management module 122, the arbitration module 111, and the off-chip memory bank 300, and is configured to synchronize data of the off-chip memory bank to the on-chip memory bank. The outward synchronization module 128 is connected to the available space management module 126, the arbitration module 111, and the off-chip memory bank 300, and is configured to synchronize data of the on-chip memory bank to the off-chip memory bank.
Specifically, the on-chip memory bank 112 of the cache memory of the present application is a memory array (RAM Array) composed of a plurality of physical RAMs. Each physical RAM is divided into a plurality of virtual data segments (Data Segments); each virtual data segment has a dynamic address tag (Tag) and covers a segment of data at contiguous addresses.
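This segmented organization might be modeled as follows (an illustrative Python sketch; the segment size and field names are assumptions, since the patent does not fix them):

```python
from dataclasses import dataclass, field
from typing import List

SEGMENT_WORDS = 256  # assumed segment size; the patent does not specify one

@dataclass
class DataSegment:
    """One virtual data segment: a retargetable slice of a physical RAM."""
    tag: int = -1        # dynamic address tag: base address currently covered
    valid: bool = False  # whether the segment currently holds synchronized data
    data: List[int] = field(default_factory=lambda: [0] * SEGMENT_WORDS)

    def covers(self, addr: int) -> bool:
        """True if addr lies in the contiguous range this segment covers."""
        return self.valid and self.tag <= addr < self.tag + SEGMENT_WORDS
```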
FIG. 4 schematically shows the data management method of the cache memory of the present application. When masters 1 to N issue a read-write request to the cache memory 100, the arbitration module 111 compares the request address in parallel against the dynamic address tags of all virtual data segments. On a read hit, the data required by the request is returned; on a write hit, the data is written into the on-chip memory array (RAM Array) 112 and a write-success acknowledgment is returned. On a miss, the miss checking module 121 of the data synchronization management module 120 sorts and selects the missed addresses and sends them to the dynamic address management module 122, which updates the address tag (Tag) of some virtual data segment and notifies the inward synchronization module 127 to synchronize the corresponding data in the off-chip memory bank (e.g., DDR SDRAM) 300 into the RAM where that virtual data segment resides. The segment state management module 123 counts the usage state of the segments in the cache memory according to the read-write requests from masters 1 to N; the first priority policy module 124 selects a specific virtual data segment for the dynamic address management module 122 to use, and the second priority policy module 125 selects a specific virtual data segment for the available space management module 126. The available space management module 126 is responsible for maintaining enough free space for the dynamic address management module 122 to use dynamically, and, according to its policy, notifies the outward synchronization module 128 to synchronize the data of a virtual data segment out to the off-chip memory bank (e.g., DDR SDRAM) 300.
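Putting the pieces together, the read path just described might look like the following sketch (it reuses the hypothetical DataSegment model above; off_chip_read stands in for the inward synchronization from DDR SDRAM, and taking the first segment as the victim is a placeholder for the priority policy, both assumptions rather than the patent's actual interfaces):

```python
def read(segments, addr, off_chip_read):
    """Fully associative lookup: compare addr against every segment's dynamic
    tag (in hardware the comparison is parallel; a loop models it here)."""
    # Read hit: some segment's dynamic tag covers the requested address.
    for seg in segments:
        if seg.covers(addr):
            return seg.data[addr - seg.tag]

    # Miss: dynamic address management retargets a segment and the inward
    # synchronization fills it from the off-chip memory bank.
    victim = segments[0]
    base = (addr // SEGMENT_WORDS) * SEGMENT_WORDS
    victim.tag, victim.valid = base, True
    victim.data = off_chip_read(base, SEGMENT_WORDS)  # inward synchronization
    return victim.data[addr - base]

def fake_ddr(base, n):
    # Stand-in for the off-chip memory bank: each word equals its address.
    return list(range(base, base + n))

segs = [DataSegment() for _ in range(4)]
print(read(segs, 0x123, fake_ddr))  # miss, fill, return 0x123
print(read(segs, 0x124, fake_ddr))  # hit in the freshly filled segment
```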
Based on the data management method above, the application further provides the following data reading methods.
A data reading method of the cache memory according to the present application includes: a plurality of master devices initiate read requests; whenever at least two read requests carry the same address, they are answered simultaneously with a single read (the "same-address simultaneous response" feature).
According to the application, the data reading method further includes: a plurality of master devices initiate read requests; given that master-device accesses to the buffer are dominated by address-incrementing bursts (Burst), and building on the same-address simultaneous response feature and a policy that gives the smallest address value the highest priority, when the master devices access the same address interval, or overlapping address intervals, in address-incrementing burst mode, the buffer memory responds to all of them with a single data read instead of repeatedly accessing the same address for each master device.
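One way to picture the smallest-address-first policy is the sketch below (a hypothetical Python model; the request-tuple format is an assumption):

```python
def next_to_serve(pending):
    """Among the pending read requests, the smallest address value has the
    highest priority. With address-incrementing bursts over the same or
    overlapping ranges, each address is then read exactly once, and every
    master whose burst has reached that address is answered by that one read."""
    return min(pending, key=lambda req: req["addr"])

# Two masters bursting over overlapping ranges: the lowest outstanding address
# wins arbitration, and both masters consume it when they share it.
pending = [
    {"master": 1, "addr": 0x200},
    {"master": 2, "addr": 0x200},  # same address: served by the same read
    {"master": 2, "addr": 0x204},
]
print(next_to_serve(pending))  # {'master': 1, 'addr': 0x200}
```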
According to the present application, the data management method further includes: a plurality of master devices initiate read-write requests; if a request address misses, the miss checking module sends the missed address to the dynamic address management module, which updates the address information of some data segment and notifies the inward synchronization module to synchronize the data in the off-chip memory bank into the on-chip memory bank where that data segment resides.
According to the present application, the data management method further includes: a plurality of master devices initiate read-write requests; the segment state management module counts segment states according to the requests, selects a first data segment for the dynamic address management module and a second data segment for the available space management module; the available space management module notifies the outward synchronization module to synchronize the data of a data segment out to the off-chip memory bank, so as to keep enough free space for the dynamic address management module to use dynamically.
In summary, when the memory provided by the application receives multiple read requests and at least two of them carry the same read address, it performs only one read: the requests sharing an address are answered simultaneously, and the buffer memory is read only once when that group of requests hits. Based on the policy that gives the smallest address value the highest priority, when multiple master devices burst-access the same address interval, or overlapping address intervals, the buffer memory responds to all of them with one data read rather than re-accessing the same address once per master device, further reducing power consumption, improving efficiency, and avoiding the prior-art inefficiency of processing each read request one by one in sequence. Multiple groups of same-address read-write requests can thus be processed in parallel, each group comprising several read-write requests from several master devices; responses are serialized only when different requests need the same sub-memory bank, and are parallel otherwise. Hence the larger the number of sub-memory banks, the fewer the response conflicts and the higher the overall access efficiency of the buffer memory.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations following its general principles, including such departures from the present disclosure as come within known or customary practice in the art. It is intended that the specification and examples be considered exemplary only, with the true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A multi-entry fully associative cache memory, comprising a data synchronization management module and an on-chip memory management module, wherein the on-chip memory management module comprises an arbitration module and an on-chip memory bank, wherein:
the arbitration module is connected with a plurality of master devices and with the on-chip memory bank, and is used for processing read-write requests sent by the master devices in parallel;
the data synchronization management module is connected with the arbitration module and with an off-chip memory bank, and is used for synchronizing data.
2. The buffer memory of claim 1, wherein the data synchronization management module comprises a miss checking module and a dynamic address management module, wherein:
the input end of the miss checking module is connected with the plurality of master devices and is used for selecting missed read-write request addresses;
the output end of the miss checking module is connected with the dynamic address management module and is used for sending the missed read-write request addresses to the dynamic address management module,
and the dynamic address management module is used for allocating a memory segment in the on-chip memory bank for each missed read-write request address.
3. The buffer memory of claim 1, wherein the data synchronization management module further comprises a segment state management module and a priority policy module, wherein:
the input end of the segment state management module is connected with the plurality of master devices, and the output end of the segment state management module is connected with the priority policy module; the segment state management module is used for counting segment states according to the read-write requests sent by the master devices and providing the segment state information to the priority policy module.
4. The buffer memory of claim 3, wherein the priority policy module comprises a first priority policy module, wherein
the segment state management module is connected in sequence with the first priority policy module and the dynamic address management module; the segment state management module is used for providing segment state information to the dynamic address management module.
5. The buffer memory of claim 4, wherein the priority policy module further comprises a second priority policy module, wherein
the segment state management module is connected in sequence with the second priority policy module and an available space management module; the segment state management module is used for providing segment state information to the available space management module.
6. The buffer memory of claim 2 or 5, wherein the data synchronization management module comprises an inward synchronization module, wherein:
the inward synchronization module is connected with the dynamic address management module, the arbitration module, and the off-chip memory bank, and is used for synchronizing data of the off-chip memory bank to the on-chip memory bank.
7. The buffer memory of claim 5, wherein the data synchronization management module comprises an outward synchronization module, wherein:
the outward synchronization module is connected with the available space management module, the arbitration module and the off-chip memory bank and is used for synchronizing data of the on-chip memory bank to the off-chip memory bank.
8. The buffer memory of claim 1, wherein the arbitration module comprises a plurality of sub-modules, the on-chip memory bank comprises a plurality of sub-memory banks, wherein:
the plurality of master devices are connected with each sub-module, and each sub-module is connected with its corresponding sub-memory bank, for processing a plurality of read-write requests in parallel;
the arbitration module is used for processing multiple groups of same-address read-write requests in parallel, each group comprising a plurality of read-write requests sent by a plurality of master devices; responses are serialized only when different requests need the same sub-memory bank to respond, and are parallel otherwise, so that the larger the number of sub-memory banks, the fewer the response conflicts and the higher the overall access efficiency of the buffer memory.
9. A method for managing data, comprising:
a plurality of master devices initiate read requests;
whenever there are at least two read requests having the same address, those read requests are answered simultaneously with a single read.
10. A method for managing data, comprising:
a plurality of master devices initiate read requests;
based on a policy that gives the smallest address value the highest priority, when the master devices access the same address interval, or overlapping address intervals, in an address-incrementing burst (Burst) mode, the buffer memory responds to all of them with a single data read instead of repeatedly accessing the same address for each master device.
CN202010614315.0A 2020-06-30 2020-06-30 Multi-entry fully associative cache memory and data management method Pending CN111881068A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010614315.0A CN111881068A (en) 2020-06-30 2020-06-30 Multi-entry fully associative cache memory and data management method


Publications (1)

Publication Number Publication Date
CN111881068A true CN111881068A (en) 2020-11-03

Family

ID=73158175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010614315.0A Pending CN111881068A (en) 2020-06-30 2020-06-30 Multi-entry fully associative cache memory and data management method

Country Status (1)

Country Link
CN (1) CN111881068A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101206624A (en) * 2006-12-21 2008-06-25 扬智科技股份有限公司 Method and device for reading external memory
CN101609438A (en) * 2008-06-19 2009-12-23 索尼株式会社 Accumulator system, its access control method and computer program
US20150186208A1 (en) * 2012-11-05 2015-07-02 Mitsubishi Electric Corporation Memory control apparatus
CN104346285A (en) * 2013-08-06 2015-02-11 华为技术有限公司 Memory access processing method, device and system
CN104572528A (en) * 2015-01-27 2015-04-29 东南大学 Method and system for processing access requests by second-level Cache
CN110275841A (en) * 2019-06-20 2019-09-24 上海燧原智能科技有限公司 Access request processing method, device, computer equipment and storage medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113051194A (en) * 2021-03-02 2021-06-29 长沙景嘉微电子股份有限公司 Buffer memory, GPU (graphic processing unit), processing system and cache access method
WO2023029729A1 (en) * 2021-09-03 2023-03-09 International Business Machines Corporation Using track status information on active or inactive status of track to determine whether to process a host request on a fast access channel
US11720500B2 (en) 2021-09-03 2023-08-08 International Business Machines Corporation Providing availability status on tracks for a host to access from a storage controller cache
GB2623732A (en) * 2021-09-03 2024-04-24 Ibm Using track status information on active or inactive status of track to determine whether to process a host request on a fast access channel
CN116107929A (en) * 2023-04-13 2023-05-12 摩尔线程智能科技(北京)有限责任公司 Data access method and device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN111881068A (en) Multi-entry fully associative cache memory and data management method
CN103246616B (en) A kind of globally shared buffer replacing method of access frequency within long and short cycle
KR100667384B1 (en) Methods and apparatus for prioritization of access to external devices
US6725336B2 (en) Dynamically allocated cache memory for a multi-processor unit
US5664151A (en) System and method of implementing read resources to maintain cache coherency in a multiprocessor environment permitting split transactions
CN100593217C (en) Flash memory control apparatus, memory management method, and memory chip
CN110347331B (en) Memory module and memory system including the same
US20140372696A1 (en) Handling write requests for a data array
KR102048762B1 (en) Method, device and system for refreshing dynamic random access memory(dram)
JPH02503722A (en) set associative memory
CN1822224B (en) Memory device capable of refreshing data using buffer and refresh method thereof
US9697111B2 (en) Method of managing dynamic memory reallocation and device performing the method
KR20010081016A (en) Methods and apparatus for detecting data collision on data bus for different times of memory access execution
US10108555B2 (en) Memory system and memory management method thereof
CN117389914B (en) Cache system, cache write-back method, system on chip and electronic equipment
US9104531B1 (en) Multi-core device with multi-bank memory
US20170040050A1 (en) Smart in-module refresh for dram
CN105487988B (en) The method for improving the effective access rate of SDRAM bus is multiplexed based on memory space
CN114691571A (en) Data processing method, reordering buffer and interconnection equipment
WO2018094620A1 (en) Memory allocation method and apparatus
US6687786B1 (en) Automated free entry management for content-addressable memory using virtual page pre-fetch
KR20010086034A (en) Universal resource access controller
CN108509151B (en) Line caching method and system based on DRAM memory controller
US9116814B1 (en) Use of cache to reduce memory bandwidth pressure with processing pipeline
CN100472422C (en) Device with dual write-in functions and memory control device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination