CN115237602B - Normalized RAM (random Access memory) and distribution method thereof - Google Patents


Info

Publication number
CN115237602B
CN115237602B (application CN202210980856.4A)
Authority
CN
China
Prior art keywords
ram
normalized
service
bank
banks
Prior art date
Legal status
Active
Application number
CN202210980856.4A
Other languages
Chinese (zh)
Other versions
CN115237602A (en)
Inventor
胡新立
刘浩
马超
蒋丹崴
张钰勃
Current Assignee
Moore Threads Technology Co Ltd
Original Assignee
Moore Threads Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Moore Threads Technology Co Ltd filed Critical Moore Threads Technology Co Ltd
Priority to CN202210980856.4A priority Critical patent/CN115237602B/en
Publication of CN115237602A publication Critical patent/CN115237602A/en
Application granted granted Critical
Publication of CN115237602B publication Critical patent/CN115237602B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022Mechanisms to release resources
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses a normalized RAM. The RAM comprises a plurality of RAM storage units of the same type, divided into a plurality of banks, each bank containing the same number of storage units. The RAM further comprises an interface for reading data from and writing data to the banks; a register for storing occupancy information of the corresponding banks, the occupancy information representing the occupancy of the RAM storage units of each bank; and a multiplexer (MUX) for selecting among the interface and the storage units of the banks according to the instruction and the occupancy information returned by the register. The present disclosure further provides an allocation method for the normalized RAM.

Description

Normalized RAM (random Access memory) and distribution method thereof
Technical Field
The application relates to the technical field of processors, in particular to a normalized RAM and an allocation method thereof.
Background
Random access memory (RAM) is an internal memory that exchanges data directly with the processor. RAM is typically integrated on a semiconductor chip and contains a large number of memory cells arranged in rows and columns in matrix form, mostly distributed over individually addressable banks.
In the prior art, because a graphics processor (GPU) or a general-purpose processor (CPU) may process data in different manners after reading it from system memory, many different types of RAM (differing in read-write bit width, depth, number of independently addressed banks, addressing manner, etc.) need to be provided inside the chip to facilitate data processing. That is, multiple types of RAM, each dedicated to a particular kind of service, must be preset in the chip. Moreover, different services use the various types of RAM with different frequencies, so the overall RAM use efficiency across services is not optimal.
The disadvantage of the prior art is that each sub-divided RAM is solidified: its type is fixed at the beginning of the design and cannot be changed later, which prevents flexible allocation and full utilization of the RAM storage units.
Disclosure of Invention
The application aims to solve the problems of difficult maintenance and low use efficiency caused by fixing the type of a processor's internal RAM at design time.
The application normalizes the RAM: the normalized RAM includes a plurality of RAM storage units of the same type, divided into a plurality of banks with the same number of storage units in each bank. The type of the storage units of each bank can be configured by software through an internal register according to the service requirement, and a hardware circuit processes the various read-write requests according to the configuration information.
Because the type of the storage units of each bank in the normalized RAM is configured by software according to the service requirement, the use efficiency of the storage units in the RAM is improved and overall RAM space is saved.
According to an aspect of the present application, a normalized RAM is provided. The RAM comprises a plurality of RAM storage units of the same type, divided into a plurality of banks, each bank containing the same number of storage units. The RAM further comprises an interface for reading data from and writing data to the banks; a register for storing occupancy information of the corresponding banks, the occupancy information representing the occupancy of the RAM storage units of each bank; and a multiplexer (MUX) for selecting among the interface and the storage units of the banks according to the instruction and the occupancy information returned by the register.
In some embodiments, the RAM is internal to a graphics processor (GPU) or a general-purpose processor (CPU).
In some embodiments, the types include size and addressing mode.
In some embodiments, the size includes the read-write bit width, the depth, and the number of independently addressed banks.
In some embodiments, after a service is completed, the memory cells used by the service are released and marked as unused, and the released memory cells may be reassigned.
In some embodiments, when no new service runs after a service finishes, the storage units are released by powering down; after the next power-up, the storage units in the RAM are reallocated.
In some embodiments, when the chip scale is large, two or more RAMs are arranged in a multi-point distribution manner.
In some embodiments, when the processor supports multiple threads, one or more banks are allocated when each thread starts, and specific partitioning for the services is then performed within the allocated bank or banks.
According to another aspect of the present application, a method for allocating a normalized RAM is provided, wherein the normalized RAM includes a plurality of RAM storage units of the same type, divided into a plurality of banks, each bank containing the same number of storage units. The allocation method comprises: determining the type of RAM storage unit required by the service; reading the bank occupancy information stored in the register of the normalized RAM; and configuring the normalized RAM based on the required storage-unit type and the occupancy information so that it matches the type of RAM storage unit required by the service.
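The three steps of the allocation method can be sketched in software. The following Python model is purely illustrative: the bitmap encoding of the occupancy register, the reduction of a required "type" to a flat unit count, and all names are assumptions, not the patent's implementation.

```python
# Illustrative model of the allocation flow: determine the required units,
# read the per-bank occupancy bitmaps, then mark units as occupied.
# The bitmap encoding and all names are assumptions for illustration.

UNITS_PER_BANK = 8  # every bank holds the same number of same-type units

def allocate(occupancy, units_needed):
    """occupancy: list of per-bank bitmaps (bit i set => unit i in use).
    Returns the (bank, unit) pairs allocated, or None if not enough free."""
    grabbed = []
    for bank, bits in enumerate(occupancy):
        for unit in range(UNITS_PER_BANK):
            if not (occupancy[bank] >> unit) & 1:
                grabbed.append((bank, unit))
                occupancy[bank] |= 1 << unit
                if len(grabbed) == units_needed:
                    return grabbed
    for bank, unit in grabbed:      # roll back on failure
        occupancy[bank] &= ~(1 << unit)
    return None

occ = [0, 0]                        # two banks, all units free
print(allocate(occ, 3))             # -> [(0, 0), (0, 1), (0, 2)]
print(occ)                          # -> [7, 0]
```

Note that a failed request rolls back its partial grabs, so the modeled occupancy register is left unchanged when the RAM cannot satisfy a service.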
In contrast to the prior art, in which RAMs of different types are distributed at multiple locations, the application normalizes them: the normalized RAM comprises RAM storage units of the same type and can be flexibly configured for different services, thereby improving RAM utilization efficiency.
Drawings
Embodiments of the application will now be described in more detail and with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram schematically illustrating an example RAM employing prior art distributed storage techniques;
FIG. 2 is a schematic diagram schematically illustrating an example normalized RAM employing centralized storage techniques, according to an embodiment of the application; and
FIG. 3 is a schematic diagram schematically illustrating an extended RAM employing a centralized storage technique, according to an embodiment of the present application.
Detailed Description
The technical solutions of the present application will be clearly and completely described below with reference to the accompanying drawings. The described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
In the prior art, the size of each RAM is designed according to product requirements, and the RAMs are divided into several functionally different types, each independently storing different data for the processor to handle different services. The type of each RAM is determined at the beginning of the design. Moreover, the RAMs are hardware-isolated: each type of RAM is used only for its particular service. Each service can use only the specific type of RAM designed for it, and each RAM can serve only its specific service.
An example of the prior art is shown in fig. 1, a schematic diagram of an example RAM employing a distributed storage technique. In this example, the chip contains three RAMs. The first RAM includes an interface, a multiplexer (MUX), and two banks, bank0 and bank1. Each memory cell in the banks of the first RAM is set at design time to a specific type one, such as 37bit x 32, and this type does not change afterward. The first RAM is therefore suitable only for the first type of service (which requires 37bit x 32 memory cells); services requiring 64bit x 32, 28bit x 16, or 32bit x 78 cells cannot use it.
With continued reference to fig. 1, the second RAM includes one interface and one bank; with only one of each, no MUX is needed. Each memory cell in the bank of the second RAM is set at design time to a specific type two, such as 64bit x 32, and this type cannot be changed afterward. The second RAM is therefore suitable only for the second type of service (which requires 64bit x 32 memory cells); services requiring 37bit x 32, 28bit x 16, or 32bit x 78 cells cannot use it.
As shown in fig. 1, the third RAM includes two interfaces, one MUX, and three banks: bank0, bank1, and bank2. Each memory cell in the banks of the third RAM is set at design time to a specific type three, such as 28bit x 16, and this type does not change afterward. The third RAM is therefore suitable only for the third type of service (which requires 28bit x 16 memory cells); services requiring 37bit x 32, 64bit x 32, or 32bit x 78 cells cannot use it.
It can be seen that, in the prior-art situation shown in fig. 1, three different types of RAM are required for three different services. No RAM is generic: each is suitable only for certain types of service and not for others.
In addition, when services differ, the usage of each RAM differs, and hardware isolation prevents the RAMs from being managed comprehensively, reducing use efficiency. Because each RAM differs in type (e.g., size and addressing mode), each must be maintained independently, which requires more manpower and makes problems more likely than with a single type of RAM. Furthermore, MUX and arbitration circuits must be provided per RAM; since the prior art sets up a type of RAM for each type of service, multiple MUX and arbitration circuits are needed, increasing chip area.
Therefore, the RAM needs to be improved: normalizing it increases its use efficiency and thus saves overall RAM space.
To solve the above problems, the inventors propose an improvement to the prior-art RAM. In the improved scheme, the normalized RAM comprises a plurality of RAM storage units of the same type, divided into a plurality of banks, each bank containing the same number of storage units. For each specific service, the type of RAM storage unit required by the service is determined, the bank occupancy information stored in the register of the normalized RAM is read, and the normalized RAM is configured based on the required storage-unit type and the occupancy information so that it matches the type required by the service; the service is then processed. The type of the RAM storage units is thus configured according to the service, and the same normalized RAM may be configured as different types for different services. For example, when needed for a first service, the normalized RAM is configured as a first type suitable for the first service; when needed for a second service, it is configured as a second type suitable for the second service.
When another service runs after one service finishes, the storage units used by the finished service are released and marked as unused, and the released storage units can be reallocated. When no new service runs after a service finishes, the storage units can be released by powering down, and after the next power-up the storage units in the RAM are allocated anew, further improving the use efficiency of the RAM.
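The release-and-reallocate behavior described here can be sketched as follows; the bitmap occupancy encoding and all names are illustrative assumptions rather than the actual hardware mechanism.

```python
# Illustrative sketch: releasing a finished service's storage units simply
# clears their occupancy bits so they can be reallocated to a later service.
# The (bank, unit) bitmap representation is an assumption for illustration.

def release(occupancy, grabbed):
    """Mark every (bank, unit) pair in `grabbed` as unused again."""
    for bank, unit in grabbed:
        occupancy[bank] &= ~(1 << unit)

occ = [0b00000111, 0]               # units 0-2 of bank0 in use
release(occ, [(0, 0), (0, 1), (0, 2)])
print(occ)                          # -> [0, 0]
```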
Compared with prior-art RAM, the normalized RAM of the embodiments of the application is of a single type, which eases maintenance for designers. Since the normalized RAM is one block of RAM comprising a plurality of storage units of the same type, only one MUX and arbitration selection circuit is required, saving chip area.
The following description is made with reference to the accompanying drawings.
A RAM according to an embodiment of the application may refer to fig. 2. FIG. 2 is a schematic diagram schematically illustrating an example normalized RAM employing centralized storage techniques, according to an embodiment of the application.
As shown in fig. 2, the normalized RAM according to the embodiment of the present application has a plurality of interfaces, such as interface A, interface B, interface C, and interface D, used for reading data from and writing data to the banks.
The normalized RAM includes a plurality of RAM storage units of the same type, divided into a plurality of banks; for example, fig. 2 shows six banks, bank0 through bank5. The storage units of every bank in the normalized RAM are of the same type, and the storage units of each bank can thereafter be independently configured by software according to the service requirements.
The normalized RAM also has a plurality of registers (REG) for storing occupancy information of the corresponding banks; the occupancy information represents the occupancy of the RAM storage units of each bank.
As shown in fig. 2, the normalized RAM further includes a MUX for selecting among the interfaces and the storage units of the banks according to the instruction and the occupancy information returned by the registers. Since there is only one RAM, only one MUX is needed, which saves chip area.
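The interaction between the occupancy registers and the MUX might be modeled in software roughly as follows; the `OCC_REGS` bitmaps and the `select` helper are hypothetical illustrations, not the patent's circuit.

```python
# Hypothetical software model of the register/MUX interaction: before
# connecting an interface to a bank, the MUX consults the occupancy word
# returned by that bank's register. All names are illustrative assumptions.

UNITS_PER_BANK = 8
OCC_REGS = [0b00001111, 0b00000000, 0b11111111]  # one occupancy word per bank

def select(bank, units_wanted):
    """Return indices of free units the MUX may connect an interface to in
    `bank`, or None if the bank cannot satisfy the request."""
    bits = OCC_REGS[bank]
    free = [u for u in range(UNITS_PER_BANK) if not (bits >> u) & 1]
    return free[:units_wanted] if len(free) >= units_wanted else None

print(select(0, 2))   # -> [4, 5]  (units 0-3 of bank0 are occupied)
print(select(2, 1))   # -> None    (bank2 is full)
```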
For the prior-art RAM shown in fig. 1, when storage units must be allocated for two services, one using 37bit x 32 and 64bit x 32 cells and the other using 28bit x 16 and 32bit x 78 cells, four independently designed, dedicated RAM blocks, one per type, are required.
With the normalized RAM according to the embodiment of the present application shown in fig. 2, however, only one block of general-purpose RAM is required, with no dedicated RAM per type of service.
The normalized RAM shown in fig. 2 includes a plurality of RAM storage units of the same type, such as 8bit x 32. Each service is allocated storage in multiples of this smallest storage unit. For example, to serve the two services above (37bit x 32 and 64bit x 32; 28bit x 16 and 32bit x 78), two blocks of 8bit x 5 x 32 and 8bit x 8 x 32 storage units may be allocated in the centrally stored normalized RAM for the first service. When the first service finishes, the storage units it used are released, and two blocks of 8bit x 4 x 16 and 8bit x 4 x 78 storage units are allocated from the normalized RAM for the second service.
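The unit counts in this example can be sanity-checked: with an 8-bit-wide elementary unit, a request W bits wide needs ceil(W / 8) units in parallel. The helper below is illustrative only.

```python
# Sanity check of the figures above: a request of `bit_width` bits is built
# from ceil(bit_width / 8) parallel 8-bit units. `units_wide` is an
# illustrative name, not part of the patent.
import math

UNIT_BITS = 8

def units_wide(bit_width):
    return math.ceil(bit_width / UNIT_BITS)

for width in (37, 64, 28, 32):
    print(width, "->", units_wide(width))
# 37 -> 5, 64 -> 8, 28 -> 4, 32 -> 4, matching the 8bit x 5, x 8, x 4,
# and x 4 allocations in the example.
```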
It can be seen that the normalized RAM and the allocation method thereof according to the embodiments of the present application have at least the following advantages:
(1) Less RAM may be used than in the prior art; for example, one normalized RAM can meet the needs of different services, reducing the amount of RAM required. Because the normalized RAM comprises a plurality of storage units of the same type, the type of each bank's storage units can be flexibly configured by software for the specific service at hand, so the RAM suits different services without dedicated RAMs being set up per service as in the prior art, further saving overall RAM space.
(2) As the amount of RAM required is reduced, the required MUX and arbitration circuitry is correspondingly reduced, thereby reducing chip area.
(3) Since only one normalized RAM needs to be provided, internal storage is more centralized and easier to maintain than in the prior art.
(4) Compared with the multiple types of RAM in the prior art, the normalized RAM of the embodiments of the application, being of a single type, is less likely to have problems, does not require independent maintenance, and needs less manpower.
(5) Because the normalized RAM comprises a plurality of storage units of the same type, the type of each bank's storage units can be flexibly configured according to the service requirements when facing different services, enhancing the RAM's versatility and improving its use efficiency.
In addition, the inventors have found that when the chip scale is large, the chip layout may be affected by that scale, increasing the RAM interface trace length and thereby increasing delay. In this case, for example when multiple functional blocks at different locations access the centralized RAM and timing becomes unsatisfactory, a multi-point distribution can be adopted, as shown in fig. 3.
In fig. 3, the RAM on the left is a single normalized RAM according to an embodiment of the present application, while on the right the same storage is arranged as two RAMs in a multi-point distribution. As shown in fig. 3, the furthest access distance of the right-hand arrangement is significantly smaller than that of the left-hand one, so the multi-point distribution reduces trace length and thus delay. Further, for each service, RAM storage units are allocated from the RAM with the smaller trace length to the functional block used by that service.
The application is also applicable to multithreading. When the processor supports multiple threads, the design is based on the RAM storage units required by the services as a whole: one or more banks are allocated when each thread starts, specific partitioning for the services is then performed within the allocated banks, and when the thread finishes the currently allocated RAM storage units are released for subsequent use.
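This per-thread bank handling can be sketched as a simple pool; the `BankPool` class and its policy (refuse a thread when not enough banks are free) are illustrative assumptions, not the patent's scheduler.

```python
# Illustrative model of per-thread bank handling: each thread reserves whole
# banks at start and returns them at exit. Names and the refusal policy are
# assumptions for illustration only.

class BankPool:
    def __init__(self, num_banks):
        self.free = set(range(num_banks))
        self.owner = {}  # bank -> thread id

    def thread_start(self, tid, banks_needed):
        """Reserve `banks_needed` banks for thread `tid`, or None if scarce."""
        if len(self.free) < banks_needed:
            return None
        got = [self.free.pop() for _ in range(banks_needed)]
        for b in got:
            self.owner[b] = tid
        return got

    def thread_finish(self, tid):
        """Release every bank owned by `tid` back to the free pool."""
        done = [b for b, t in self.owner.items() if t == tid]
        for b in done:
            del self.owner[b]
            self.free.add(b)

pool = BankPool(6)
pool.thread_start(1, 2)   # thread 1 takes 2 banks
pool.thread_start(2, 3)   # thread 2 takes 3 banks, 1 left free
pool.thread_finish(1)     # thread 1's banks return to the pool
print(len(pool.free))     # -> 3
```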
It should be understood that the above embodiments are described by way of example only. While the embodiments have been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive, and the scope of the application is not limited to the disclosed embodiments.
Terms such as "first," "second," and the like, may be used herein to describe various devices, elements, components, or portions, but are not intended to limit these devices, elements, components, or portions in order or importance. These terms are only used to distinguish one device, element, component, or section from another device, element, component, or section.
Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in view of the drawings, the disclosure, and the appended claims. In the present disclosure, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. Features which are listed in mutually different embodiments can be combined without conflict. The recited order of various steps of a method according to an embodiment of the present application in the context of the present application should not be understood as limiting the order in which the various steps are performed unless explicitly defined.

Claims (14)

1. A normalized RAM, the RAM comprising:
the system comprises a plurality of RAM storage units with the same type, wherein the plurality of RAM storage units are divided into a plurality of banks, the number of the RAM storage units included in each bank is the same, and the type of the storage unit of each bank is configured by software according to the service requirement;
the interface is used for reading and writing data from the bank;
the register is used for storing occupation information of corresponding banks, and the occupation information represents occupation conditions of RAM storage units of each bank;
the multiplexer MUX is used for selecting the memory units of the interface and the bank according to the instruction and the occupation information returned by the register;
the processor with the RAM supports multiple threads, one or more banks are allocated when each thread is started, then specific division is carried out on services in the allocated one or more banks, and the currently allocated RAM storage unit is released for subsequent use after the thread is finished.
2. The normalized RAM of claim 1, wherein the RAM is RAM internal to a graphics processor GPU or a general purpose CPU.
3. The normalized RAM of claim 1 or 2, wherein the types include size and addressing mode.
4. The normalized RAM of claim 3, wherein the size includes a read-write bit width, a depth, and a number of independent banks per addressing.
5. A normalized RAM according to claim 1 or 2, wherein after a service is completed, the memory cells used by said service are released and marked as unused, the released memory cells being reassignable.
6. The normalized RAM of claim 5, wherein when no new service is running after the service is finished, the memory cell is released by powering down; and after the next power-up, the memory cells in the RAM are reassigned.
7. The normalized RAM of claim 1 or 2, wherein two or more RAMs are arranged in a multi-point distribution manner when the chip scale is large.
8. The method for distributing normalized RAM is characterized in that the normalized RAM comprises a plurality of RAM storage units with the same type, the plurality of RAM storage units are divided into a plurality of banks, the number of the RAM storage units included in each bank is the same, the type of the storage unit of each bank is configured by software according to the service requirement, and the method for distributing the normalized RAM comprises the following steps:
determining the type of a RAM storage unit required by the service;
reading the bank occupation information stored in the register of the normalized RAM;
based on the type of the RAM storage unit required by the service and the occupation information, configuring the normalized RAM to be matched with the type of the RAM storage unit required by the service;
the processor with the RAM supports multiple threads, one or more banks are allocated when each thread is started, then specific division is carried out on services in the allocated one or more banks, and the currently allocated RAM storage unit is released for subsequent use after the thread is finished.
9. The allocation method of claim 8, wherein the RAM is RAM internal to a graphics processor GPU or a general purpose CPU.
10. The allocation method according to claim 8 or 9, wherein said types include size and addressing mode.
11. The allocation method of claim 10, wherein the size comprises a read-write bit width, a depth, and a number of independent banks per addressing.
12. A method of allocating as claimed in claim 8 or 9, wherein after a service has ended, the memory cells used by the service are released and marked as unused, the released memory cells being re-allocated.
13. The allocation method according to claim 12, wherein when no new service runs after the service is finished, the memory cell is released by powering down; and after the next power-up, the memory cells in the RAM are reassigned.
14. The allocation method according to claim 8 or 9, wherein when the chip scale is large and two or more RAMs are set in a multipoint distribution manner, RAM memory cells are allocated to the service use from RAMs having a smaller wiring length for the functional block used by the service.
CN202210980856.4A 2022-08-16 2022-08-16 Normalized RAM (random Access memory) and distribution method thereof Active CN115237602B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210980856.4A CN115237602B (en) 2022-08-16 2022-08-16 Normalized RAM (random Access memory) and distribution method thereof


Publications (2)

Publication Number Publication Date
CN115237602A CN115237602A (en) 2022-10-25
CN115237602B true CN115237602B (en) 2023-09-05

Family

ID=83678569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210980856.4A Active CN115237602B (en) 2022-08-16 2022-08-16 Normalized RAM (random Access memory) and distribution method thereof

Country Status (1)

Country Link
CN (1) CN115237602B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116028388B (en) * 2023-01-17 2023-12-12 摩尔线程智能科技(北京)有限责任公司 Caching method, caching device, electronic device, storage medium and program product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180165092A1 (en) * 2016-12-14 2018-06-14 Qualcomm Incorporated General purpose register allocation in streaming processor
CN109785882A (en) * 2017-11-15 2019-05-21 三星电子株式会社 SRAM with Dummy framework and the system and method including it
CN112199039A (en) * 2020-09-04 2021-01-08 厦门星宸科技有限公司 Virtual storage management method and processor
CN112368676A (en) * 2019-09-29 2021-02-12 深圳市大疆创新科技有限公司 Method and apparatus for processing data
CN114356223A (en) * 2021-12-16 2022-04-15 深圳云天励飞技术股份有限公司 Memory access method and device, chip and electronic equipment


Also Published As

Publication number Publication date
CN115237602A (en) 2022-10-25

Similar Documents

Publication Publication Date Title
US7660951B2 (en) Atomic read/write support in a multi-module memory configuration
EP2313890B1 (en) Independently controllable and reconfigurable virtual memory devices in memory modules that are pin-compatible with standard memory modules
EP1754229B1 (en) System and method for improving performance in computer memory systems supporting multiple memory access latencies
US9251899B2 (en) Methods for upgrading main memory in computer systems to two-dimensional memory modules and master memory controllers
US7516264B2 (en) Programmable bank/timer address folding in memory devices
US8990490B2 (en) Memory controller with reconfigurable hardware
DE19983745B9 (en) Use of page label registers to track a state of physical pages in a storage device
US20140075101A1 (en) Methods for two-dimensional main memory
US10162557B2 (en) Methods of accessing memory cells, methods of distributing memory requests, systems, and memory controllers
US20070005890A1 (en) Automatic detection of micro-tile enabled memory
US6459646B1 (en) Bank-based configuration and reconfiguration for programmable logic in a system on a chip
EP3910488A1 (en) Systems, methods, and devices for near data processing
CN115237602B (en) Normalized RAM (random Access memory) and distribution method thereof
CN111916120B (en) Bandwidth boosted stacked memory
DE112020003733T5 (en) STORAGE CONTROLLER FOR NON-DISRUPTIVE ACCESSES TO NON-VOLATILE STORAGE BY VARIOUS MASTERS AND RELATED SYSTEMS AND METHODS
US20130031327A1 (en) System and method for allocating cache memory
US6094710A (en) Method and system for increasing system memory bandwidth within a symmetric multiprocessor data-processing system
JPH10260895A (en) Semiconductor storage device and computer system using the same
US20100122039A1 (en) Memory Systems and Accessing Methods
US20100058025A1 (en) Method, apparatus and software product for distributed address-channel calculator for multi-channel memory
JP2938453B2 (en) Memory system
US7760577B1 (en) Programmable power down scheme for embedded memory block
CN115934364A (en) Memory management method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant