CN115237602A - Normalized RAM and distribution method thereof - Google Patents


Info

Publication number
CN115237602A
CN115237602A CN202210980856.4A
Authority
CN
China
Prior art keywords
ram
normalized
service
bank
banks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210980856.4A
Other languages
Chinese (zh)
Other versions
CN115237602B (en)
Inventor
胡新立
刘浩
马超
蒋丹崴
张钰勃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Moore Threads Technology Co Ltd
Original Assignee
Moore Threads Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Moore Threads Technology Co Ltd filed Critical Moore Threads Technology Co Ltd
Priority to CN202210980856.4A priority Critical patent/CN115237602B/en
Publication of CN115237602A publication Critical patent/CN115237602A/en
Application granted granted Critical
Publication of CN115237602B publication Critical patent/CN115237602B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022Mechanisms to release resources
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses a normalized RAM. The RAM comprises a plurality of RAM storage units of the same type, divided into a plurality of banks, with each bank containing the same number of storage units. The RAM further comprises an interface for reading data from and writing data to the banks, and a register for storing occupancy information of the corresponding bank, the occupancy information indicating how the RAM storage units of each bank are occupied. The RAM also comprises a multiplexer (MUX) for selecting the interface and the storage units of the banks according to the instruction and the occupancy information returned by the register. In addition, the disclosure provides an allocation method for the normalized RAM.

Description

Normalized RAM and distribution method thereof
Technical Field
The present application relates to the field of processor technology, and more particularly, to a normalized RAM and a method for allocating the same.
Background
Random access memory (RAM) is internal memory that exchanges data directly with a processor. RAM is typically integrated on a semiconductor chip and contains a large number of memory cells, arranged in rows and columns in a matrix and mostly distributed over individually addressable memory banks (banks).
In the prior art, because a graphics processing unit (GPU) or a general-purpose processor (CPU) may process data read from system memory in different ways, many different types of RAM (differing in read/write bit width, depth, number of independently addressed banks, addressing mode, and so on) have to be arranged inside a chip to support that processing. That is, multiple types of RAM, each dedicated to a particular class of service, must be provisioned in the chip in advance. Moreover, different services use the various RAM types with different frequency, so overall RAM usage efficiency is not optimal across services.
The drawback of this prior art is that the individually subdivided RAMs are fixed: their types are frozen at the start of the design and cannot be changed afterwards, which prevents flexible allocation and full utilization of the RAM storage units.
Disclosure of Invention
The application aims to solve the problems that the fixed RAM types inside a processor are difficult to maintain and are used inefficiently.
To this end, the RAM is normalized: the normalized RAM comprises a plurality of RAM storage units of the same type, divided into a plurality of banks with the same number of storage units per bank. The storage type presented by each bank can be configured by software through an internal register according to service requirements, and a hardware circuit handles the various read and write requests according to this configuration information.
Because software configures the storage type of each bank in the normalized RAM according to service requirements, the usage efficiency of the storage units in the RAM is improved and overall RAM space is saved.
According to one aspect of the present application, a normalized RAM is provided. The RAM comprises a plurality of RAM storage units of the same type, divided into a plurality of banks, with each bank containing the same number of storage units. The RAM further comprises an interface for reading data from and writing data to the banks, and a register for storing occupancy information of the corresponding bank, the occupancy information indicating how the RAM storage units of each bank are occupied. The RAM also comprises a multiplexer (MUX) for selecting the interface and the storage units of a bank according to the instruction and the occupancy information returned by the register.
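The structure described above, same-type units grouped into banks with a per-bank register reporting occupancy, can be modeled in a short sketch. The following is an illustrative Python model, not the patented hardware; the unit count, bank count, and every name in it are assumptions made for the example.

```python
from dataclasses import dataclass, field

BANK_UNITS = 32  # assumed number of same-type RAM storage units per bank


@dataclass
class Bank:
    """One bank of identical storage units plus its occupancy register (REG)."""
    occupancy: list = field(default_factory=lambda: [False] * BANK_UNITS)

    def free_units(self) -> int:
        # Count of units the register reports as unoccupied.
        return self.occupancy.count(False)


@dataclass
class NormalizedRAM:
    """Several banks of identical units; one REG per bank tracks occupancy."""
    banks: list = field(default_factory=lambda: [Bank() for _ in range(6)])

    def read_occupancy(self) -> list:
        # The information the MUX relies on: per-bank occupancy from the REGs.
        return [bank.free_units() for bank in self.banks]


ram = NormalizedRAM()
# Initially every unit in each of the six banks is free.
```
A configuring agent (software, in the patent's scheme) would read `read_occupancy()` before deciding which bank to configure for an incoming service.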
In some embodiments, the RAM is RAM internal to a graphics processor GPU or a general purpose CPU.
In some embodiments, the types include size and addressing mode.
In some embodiments, the sizes include read and write bit widths, depths, and the number of independent banks per addressing.
In some embodiments, after a service is finished, the memory locations used by the service are released and marked as unused, and the released memory locations may be reallocated.
In some embodiments, when no new service runs after a service finishes, the storage units are released by powering down, and the storage units in the RAM are reallocated after the next power-up.
In some embodiments, when the chip is large, two or more such RAMs are arranged in a multi-point distribution.
In some embodiments, when the processor supports multithreading, one or more banks are allocated to each thread at start-up, and services are then subdivided within the bank or banks allocated to that thread.
According to another aspect of the present application, an allocation method for a normalized RAM is provided, wherein the normalized RAM comprises a plurality of RAM storage units of the same type, divided into a plurality of banks, with each bank containing the same number of storage units. The allocation method comprises: determining the type of RAM storage unit required by a service; reading the bank occupancy information stored in the register of the normalized RAM; and configuring the normalized RAM, based on the required storage-unit type and the occupancy information, so that it matches the type of RAM storage unit required by the service.
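The three claimed steps can be sketched as follows. This is an illustrative Python sketch, assuming an 8bit x 32 base unit (the size used in the later example) and a simple first-fit policy within a single bank; the function names and the policy are assumptions, not part of the patent.

```python
import math


def units_needed(width_bits: int, depth: int,
                 unit_width: int = 8, unit_depth: int = 32) -> int:
    # Step 1: translate the type a service requires (bit width x depth)
    # into a count of base storage units, rounding each dimension up.
    return math.ceil(width_bits / unit_width) * math.ceil(depth / unit_depth)


def allocate(occupancy: list, need: int) -> int:
    # Step 2: `occupancy` holds the free-unit count read from each bank's
    # register. Step 3: configure the first bank with room, marking its
    # units as occupied, and return the chosen bank index.
    for bank, free in enumerate(occupancy):
        if free >= need:
            occupancy[bank] -= need
            return bank
    raise MemoryError("no single bank has enough free units")


occ = [32, 32, 32]                           # three banks, all free
bank = allocate(occ, units_needed(37, 32))   # a 37bit x 32 request
```
Here a 37bit x 32 request rounds up to five 8-bit-wide base units, which fit in the first bank; `occ` afterwards reflects the configuration.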
Compared with the prior art, in which RAMs of different types are distributed at multiple locations, the normalized RAM comprises many RAM storage units of a single type and can be configured flexibly for different services, thereby improving RAM usage efficiency.
Drawings
Embodiments of the present application will now be described in more detail and with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram schematically illustrating an example RAM employing distributed storage techniques in the prior art;
FIG. 2 is a schematic diagram that schematically illustrates an example normalized RAM that employs centralized storage techniques, in accordance with an embodiment of the present application; and
FIG. 3 is a schematic diagram schematically illustrating an extended RAM employing a centralized storage technique, according to an embodiment of the present application.
Detailed Description
The technical solutions in the present application will be described clearly and completely with reference to the accompanying drawings in the present application. The described embodiments are only some embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
In the prior art, each RAM is sized according to product requirements and divided into several functionally distinct types, each independently storing different data for the processor's different services. The type of each RAM is fixed at the start of the design, and the RAMs are hardware-isolated: each type of RAM serves only its particular service, and each service can use only the particular type of RAM designed for it.
An example of the prior art is shown in FIG. 1, a schematic diagram of an example RAM employing a distributed storage technique. In this example the chip contains three RAMs. The first RAM includes an interface, a multiplexer MUX, and two banks, bank0 and bank1. Each memory cell in the banks of the first RAM is set at the start of the design to a first specific type, such as 37bit x 32, and this type cannot be changed afterwards. The first RAM is therefore suitable only for the first class of service (which requires 37bit x 32 memory cells); services that need, for example, 64bit x 32, 28bit x 16, or 32bit x 78 memory cells cannot use it.
With continued reference to FIG. 1, the second RAM includes one interface and one bank; since there is only one of each, no multiplexer MUX is needed. Each memory cell in its bank is set at the start of the design to a second specific type, for example 64bit x 32, which cannot be changed afterwards. The second RAM is therefore suitable only for the second class of service (which requires 64bit x 32 memory cells) and cannot serve services that need, for example, 37bit x 32, 28bit x 16, or 32bit x 78 memory cells.
As shown in FIG. 1, the third RAM includes two interfaces, one multiplexer MUX, and three banks, bank0, bank1, and bank2. Each memory cell in each bank of the third RAM is set at the start of the design to a third specific type, such as 28bit x 16, and this type cannot be changed afterwards. The third RAM is therefore suitable only for the third class of service (which requires 28bit x 16 memory cells) and cannot serve services that need, for example, 37bit x 32, 64bit x 32, or 32bit x 78 memory cells.
It can be seen that in the prior art, for the situation shown in FIG. 1, three different types of RAM are needed for three different services. No RAM is generic: each is suitable only for one particular type of service and cannot be used for the others.
Furthermore, because different services use the RAMs differently and the RAMs cannot be managed centrally, the hardware isolation hurts usage efficiency. Because the RAMs differ in type (size, addressing mode, and so on), each must be maintained independently, which takes more manpower and is more error-prone than maintaining a single RAM type. Finally, a MUX and arbitration circuit must be provisioned per RAM; since the prior art provisions separate RAM types for the various services, multiple MUXes and arbitration circuits are needed, increasing chip area.
Therefore, an improved, normalized RAM is needed, one with higher usage efficiency that thereby saves overall RAM space.
To solve the above problems, the inventors propose an improvement over the prior-art RAM. In the improved scheme, the normalized RAM comprises a plurality of RAM storage units of the same type, divided into a plurality of banks, with each bank containing the same number of storage units. Then, for each specific service, the type of RAM storage unit required by the service is determined, the bank occupancy information stored in the register of the normalized RAM is read, and the normalized RAM is configured, based on the required storage-unit type and the occupancy information, to match that type before the service is processed. In other words, the type of the RAM storage is configured per service. The same normalized RAM may be configured into different types for different services: when needed by a first service, it is configured into a first type suitable for the first service; when needed by a second service, it is configured into a second type suitable for the second service.
After one service finishes and the next service is to run, the storage units used by the finished service are released and marked as unused, and the released units are reallocated. When no new service runs after a service finishes, the storage units can be released by powering down, and the storage units in the RAM are allocated anew after the next power-up, further improving RAM usage efficiency.
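The release-and-reallocate step can be sketched as follows. This is an illustrative Python sketch; the service name, the bookkeeping dictionary, and the function name are all hypothetical, introduced only to show units returning to the free pool.

```python
def release(occupancy: list, allocations: dict, service: str) -> None:
    # On service end, return the service's units to each bank's free count
    # and drop the allocation record, so the next service can reuse them.
    for bank, count in allocations.pop(service, {}).items():
        occupancy[bank] += count


occ = [27, 24, 32]                    # free units per bank while svc_a runs
allocs = {"svc_a": {0: 5, 1: 8}}      # svc_a holds 5 units in bank0, 8 in bank1
release(occ, allocs, "svc_a")
# Every unit is free again and can be reallocated to the next service.
```
The power-down variant in the text amounts to the same end state: after the next power-up, all occupancy registers report every unit as unused.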
Compared with the prior-art RAMs, the normalized RAM has a single type, which eases maintenance for designers. And since the normalized RAM is one block of RAM built from same-type storage units, only one MUX and arbitration selection circuit is required, saving chip area.
The following description is made with reference to the accompanying drawings.
A RAM according to an embodiment of the present application is shown in FIG. 2, a schematic diagram that schematically illustrates an example normalized RAM employing centralized storage techniques, in accordance with an embodiment of the present application.
As shown in FIG. 2, the normalized RAM according to the embodiment of the present application has a plurality of interfaces, such as interface A, interface B, interface C, and interface D, used for reading data from and writing data to the banks.
The normalized RAM includes a plurality of RAM storage units of the same type, divided into a plurality of banks; the example of FIG. 2 includes six banks, bank0 through bank5. The storage units of every bank are physically identical, and the storage type presented by each bank can then be configured independently by software according to the needs of a service.
The normalized RAM is also provided with a plurality of registers REG for storing the occupancy information of the corresponding banks, the occupancy information indicating how the RAM storage units of each bank are occupied.
As shown in FIG. 2, the normalized RAM further includes a MUX for selecting the interface and the storage units of a bank according to the instruction and the occupancy information returned by the registers. Since there is only one RAM, only one MUX is needed, which saves chip area.
With the prior-art RAM of FIG. 1, allocating memory cells for two services that require 37bit x 32 and 64bit x 32 cells, and 28bit x 16 and 32bit x 78 cells respectively, means designing four types of dedicated RAM blocks independently.
With the normalized RAM according to the embodiment of the present application shown in FIG. 2, however, only one RAM of a general type is required, and no dedicated RAM needs to be provided separately for each class of service.
The normalized RAM of FIG. 2 consists of RAM storage units of one type, for example 8bit x 32, and each service is allocated storage in multiples of this smallest unit. For the services above, the 37bit x 32 and 64bit x 32 requirements of the first service can be met by allocating regions of 8bit x 5 x 32 and 8bit x 8 x 32 (the requested bit width rounded up to whole base units) in the centrally stored normalized RAM. After the first service finishes, the used storage units are released, and regions of 8bit x 4 x 16 and 8bit x 4 x 78 are allocated from the normalized RAM for the second service.
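The rounding in this worked example is plain ceiling division of the requested bit width by the 8-bit base-unit width. A minimal Python check of the four figures quoted above:

```python
import math


def units_wide(width_bits: int, unit_width: int = 8) -> int:
    # Round a requested bit width up to a whole number of base units.
    return math.ceil(width_bits / unit_width)


# First service: 37bit x 32 and 64bit x 32.
assert units_wide(37) == 5   # allocated as 8bit x 5 x 32
assert units_wide(64) == 8   # allocated as 8bit x 8 x 32
# Second service, after the first service's units are released.
assert units_wide(28) == 4   # allocated as 8bit x 4 x 16
assert units_wide(32) == 4   # allocated as 8bit x 4 x 78
```
The widths match the patent's example exactly; the depth dimension of each region is taken from the service's request, as in the text.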
Therefore, the normalized RAM and the distribution method thereof according to the embodiment of the application have at least the following advantages:
(1) Compared with the prior art, less RAM is needed: a single normalized RAM can meet the requirements of different services, reducing the amount of RAM required. Because the normalized RAM consists of storage units of the same type, the storage type of each bank can be configured flexibly by software according to the needs of the specific service at hand. The RAM thus suits different services without dedicated RAMs having to be provisioned per service as in the prior art, saving overall RAM space.
(2) As the amount of RAM required is reduced, the number of multiplexers MUX and arbitration circuits required is correspondingly reduced, thereby reducing chip area.
(3) Since only one normalized RAM needs to be provided, the internal storage is made more centralized and easier to maintain than in the prior art.
(4) Compared with the multiple RAM types of the prior art, the normalized RAM of the embodiment of the application has a lower probability of problems; because there is a single type, the RAMs need not be maintained independently, and maintenance requires less manpower.
(5) Because the normalized RAM comprises a plurality of RAM storage units of the same type, the storage types of its banks can be flexibly configured according to the requirements of different services, which enhances the universality of the RAM and improves its usage efficiency.
In addition, the inventors found that when the chip is large, the chip layout lengthens the traces to the RAM interfaces and therefore increases delay. In such a situation, for example when timing cannot be met because functional blocks at different positions access the centralized RAM, a multi-point distribution can be adopted, as shown in FIG. 3.
In FIG. 3, the RAM on the left is a single normalized RAM according to an embodiment of the present application, while on the right the storage is arranged as two RAMs in a multi-point distribution. As shown in FIG. 3, the farthest access distance on the right is significantly smaller than on the left, so this further improved, multi-point arrangement shortens traces and thus reduces delay. In addition, for each service, RAM storage units are allocated from whichever RAM has the shorter trace length to the functional blocks that the service uses.
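The per-service placement rule in the multi-point arrangement reduces to a minimum over trace lengths. A one-line Python sketch, with hypothetical RAM names and lengths chosen only for illustration:

```python
def pick_ram(trace_lengths: dict) -> str:
    # Serve a functional block from the RAM with the shortest trace to it.
    return min(trace_lengths, key=trace_lengths.get)


# Hypothetical trace lengths (in mm) from one functional block to each RAM.
choice = pick_ram({"ram_left": 9.0, "ram_right": 3.5})
```
With these assumed lengths the block is served from `ram_right`, mirroring FIG. 3's point that the nearer RAM yields the smaller delay.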
The invention also applies to multithreading. When the processor supports multithreading, the design must account for the RAM storage units needed by the services as a whole: one or more banks are allocated when each thread starts, services are then subdivided within the bank or banks allocated to that thread, and the RAM storage units currently allocated to a thread are released for subsequent use when the thread ends.
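The per-thread bank grant described above can be sketched as follows. This is an illustrative Python sketch, assuming six banks and two banks per thread; the function and variable names are hypothetical.

```python
def start_threads(num_banks: int, banks_per_thread: int) -> dict:
    # Grant each starting thread a disjoint set of whole banks; services
    # are later subdivided inside each thread's own banks, and a thread's
    # banks return to the free list when it ends.
    free = list(range(num_banks))
    grants = {}
    thread_id = 0
    while len(free) >= banks_per_thread:
        grants[thread_id] = [free.pop(0) for _ in range(banks_per_thread)]
        thread_id += 1
    return grants


grants = start_threads(num_banks=6, banks_per_thread=2)
# Three threads, each owning two banks of the six.
```
Keeping the grant granularity at whole banks means each thread's occupancy registers can be managed independently, which matches the bank-level isolation the embodiment describes.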
It should be understood that the above embodiments are described by way of example only. While the embodiments have been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive, and the scope of the application is not limited to the disclosed embodiments.
Terms such as "first," "second," and the like may be used in this application to describe various devices, elements, components or sections, but are not intended to limit the devices, elements, components or sections in terms of sequence or importance. These terms are only used to distinguish one device, element, component or section from another device, element, component or section.
Other variations to the disclosed embodiments can be understood and effected by those skilled in the art from a study of the drawings, the disclosure, and the appended claims. In this application, the word "comprising" does not exclude other elements or steps, and the indefinite article "a", "an", etc. does not exclude a plurality. Features which are listed in mutually different embodiments may be combined without conflict. The order in which the steps of a method according to the embodiments of the present application are recited in the context of the present application should not be construed as limiting the order in which the steps are performed, unless explicitly defined otherwise.

Claims (16)

1. A normalized RAM, the RAM comprising:
the RAM memory units are of the same type and divided into a plurality of banks, and the number of the RAM memory units included in each bank is the same;
the interface is used for reading and writing data from the bank;
the register is used for storing occupation information of corresponding banks, and the occupation information represents the occupation condition of the RAM storage unit of each bank;
and the multiplexer MUX is used for selecting the interface and the storage unit of the bank according to the instruction and the occupancy information returned by the register.
2. The normalized RAM of claim 1, wherein the RAM is a RAM internal to a graphics processor, GPU, or general purpose CPU.
3. The normalized RAM of claim 1 or 2, wherein the type comprises size and addressing mode.
4. The normalized RAM of claim 3, wherein the sizes comprise read and write bit widths, depths, and number of independent banks per addressing.
5. The normalized RAM of claim 1 or 2, wherein after a service is finished, the memory locations used by the service are released and marked as unused, and the released memory locations can be reallocated.
6. The normalized RAM of claim 5, wherein when no new service is running after a service is over, the storage unit is released in a power-down manner; and the memory cells in the RAM are reallocated after the next power-up.
7. The normalized RAM of claim 1 or 2, wherein when the chip size is large, two or more RAMs are arranged in a multi-point distribution.
8. The normalized RAM of claim 1 or 2, wherein when the processor supports multithreading, each thread is started with one or more banks allocated and then is further partitioned specifically for traffic within the allocated one or more banks.
9. An allocation method for a normalized RAM, wherein the normalized RAM comprises a plurality of RAM storage units of the same type, the plurality of RAM storage units are divided into a plurality of banks, and each bank includes the same number of RAM storage units, the allocation method comprising the following steps:
determining the type of a RAM storage unit required by the service;
reading bank occupation information stored in a register of the normalized RAM;
and configuring the normalized RAM based on the type of the RAM storage unit required by the service and the occupancy information, so that the normalized RAM matches the type of the RAM storage unit required by the service.
10. The allocation method according to claim 9, wherein said RAM is a RAM internal to a graphics processor GPU or a general purpose CPU.
11. The allocation method according to claim 9 or 10, wherein the type comprises size and addressing mode.
12. The allocation method according to claim 11, wherein said sizes include read-write bit width, depth and number of independent banks per addressing.
13. The allocation method according to claim 9 or 10, wherein after a service is finished, the memory locations used by the service are released and marked as unused, and the released memory locations can be reallocated.
14. The allocation method according to claim 13, wherein when no new service runs after the service is finished, the storage units are released by powering down; and the storage units in the RAM are reallocated after the next power-up.
15. The allocation method according to claim 9 or 10, wherein when the chip size is large and two or more RAMs are arranged in a multi-point distribution, the RAM storage units are allocated to a service from the RAM with the smaller trace length to the functional blocks used by the service.
16. The allocation method according to claim 9 or 10, wherein when the processor supports multithreading, one or more banks are allocated when each thread is started, and services are then specifically divided within the allocated one or more banks.
CN202210980856.4A 2022-08-16 2022-08-16 Normalized RAM (Random Access Memory) and distribution method thereof Active CN115237602B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210980856.4A CN115237602B (en) 2022-08-16 2022-08-16 Normalized RAM (Random Access Memory) and distribution method thereof


Publications (2)

Publication Number Publication Date
CN115237602A true CN115237602A (en) 2022-10-25
CN115237602B CN115237602B (en) 2023-09-05

Family

ID=83678569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210980856.4A Active CN115237602B (en) 2022-08-16 2022-08-16 Normalized RAM (random Access memory) and distribution method thereof

Country Status (1)

Country Link
CN (1) CN115237602B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180165092A1 (en) * 2016-12-14 2018-06-14 Qualcomm Incorporated General purpose register allocation in streaming processor
CN112368676A (en) * 2019-09-29 2021-02-12 深圳市大疆创新科技有限公司 Method and apparatus for processing data
CN114356223A (en) * 2021-12-16 2022-04-15 深圳云天励飞技术股份有限公司 Memory access method and device, chip and electronic equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117524279A (en) * 2017-11-15 2024-02-06 三星电子株式会社 SRAM with virtual-body architecture, and system and method including the same
CN112199039B (en) * 2020-09-04 2022-08-05 星宸科技股份有限公司 Virtual storage management method and processor


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116028388A (en) * 2023-01-17 2023-04-28 摩尔线程智能科技(北京)有限责任公司 Caching method, caching device, electronic device, storage medium and program product
CN116028388B (en) * 2023-01-17 2023-12-12 摩尔线程智能科技(北京)有限责任公司 Caching method, caching device, electronic device, storage medium and program product

Also Published As

Publication number Publication date
CN115237602B (en) 2023-09-05

Similar Documents

Publication Publication Date Title
US8645893B1 (en) Method of generating a layout of an integrated circuit comprising both standard cells and at least one memory instance
CN102292778B (en) Memory devices and methods for managing error regions
EP2313890B1 (en) Independently controllable and reconfigurable virtual memory devices in memory modules that are pin-compatible with standard memory modules
EP1896961B1 (en) Automatic detection of micro-tile enabled memory
US10162557B2 (en) Methods of accessing memory cells, methods of distributing memory requests, systems, and memory controllers
US20070008328A1 (en) Identifying and accessing individual memory devices in a memory channel
US8305834B2 (en) Semiconductor memory with memory cell portions having different access speeds
US6459646B1 (en) Bank-based configuration and reconfiguration for programmable logic in a system on a chip
EP3910488A1 (en) Systems, methods, and devices for near data processing
CN115237602A (en) Normalized RAM and distribution method thereof
KR101183739B1 (en) Integrated circuit with multiported memory supercell and data path switching circuitry
US6094710A (en) Method and system for increasing system memory bandwidth within a symmetric multiprocessor data-processing system
CN112368676A (en) Method and apparatus for processing data
CN111916120A (en) Bandwidth boosted stacked memory
CN115757260A (en) Data interaction method, graphics processor and graphics processing system
US20100122039A1 (en) Memory Systems and Accessing Methods
US20040095796A1 (en) Multi-bank memory array architecture utilizing topologically non-uniform blocks of sub-arrays and input/output assignments in an integrated circuit memory device
KR100715525B1 (en) Multi-port memory device including clk and dq power which are independent
CN106649136B (en) Data storage method and storage device
US7760577B1 (en) Programmable power down scheme for embedded memory block
US20230186976A1 (en) Method and apparatus for recovering regular access performance in fine-grained dram
US7376802B2 (en) Memory arrangement
US10216454B1 (en) Method and apparatus of performing a memory operation in a hierarchical memory assembly
JPH01109447A (en) Memory system
US9367456B1 (en) Integrated circuit and method for accessing segments of a cache line in arrays of storage elements of a folded cache

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant