CN114942895A - Address mapping strategy design method based on reinforcement learning - Google Patents
Address mapping strategy design method based on reinforcement learning
- Publication number: CN114942895A
- Application number: CN202210714310.4A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/10—Address translation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention relates to an address mapping strategy design method based on reinforcement learning. A binary invertible matrix (BIM) is used to represent mainstream address mapping strategies, and an address mapping strategy with an optimal row-cache hit rate is trained with a reinforcement learning model. The invertibility of the BIM guarantees an effective mapping between physical addresses and the addresses of memory storage units, and the BIM offers advantages such as flexible expression of address mapping strategies and low hardware overhead.
Description
Technical Field
The invention relates to an address mapping strategy design method based on reinforcement learning.
Background
In computer architecture, processor performance has long improved faster than memory performance, so memory access latency has become an important factor limiting system performance. Since the "memory wall" problem was raised, improving the performance of hardware accelerators has been a key research topic in computer architecture, and the memory controller is one of the key components for improving accelerator performance. Researchers at home and abroad have optimized the memory controller from many angles to reduce system latency. However, most address mapping strategies are highly application-specific, cannot be generalized to other applications, and lack the flexibility needed to achieve high-performance memory access in domain-specific accelerators.
Disclosure of Invention
The invention aims to provide an address mapping strategy design method based on reinforcement learning, in which a binary invertible matrix is used to represent mainstream address mapping strategies and an address mapping strategy with an optimal row-cache hit rate is trained with a reinforcement learning model.
In order to achieve this purpose, the technical scheme of the invention is as follows: an address mapping strategy design method based on reinforcement learning, in which a binary invertible matrix BIM represents the address mapping strategy and an address mapping strategy with an optimal row-cache hit rate is trained with a reinforcement learning model. The implementation is as follows: the one-dimensional expansion of the binary invertible matrix BIM serves as the input of the reinforcement learning model; the row-cache hit rate of the initial BIM is taken as the current optimum H_best of the model; the model selects actions according to the output probabilities to obtain candidate BIMs; when the row-cache hit rate computed for a candidate BIM is higher than that of the current BIM, the model replaces the current BIM with the candidate; the reward value is then recalculated and the parameters of the model are updated; the model keeps iterating and optimizing in this way and converges according to a preset stopping rule, yielding a trained BIM and, with it, the address mapping strategy with the highest row-cache hit rate.
Compared with the prior art, the invention has the following beneficial effects: the invention combines a binary invertible matrix BIM with reinforcement learning for the address mapping strategy design of a memory controller for the first time. The binary invertible matrix BIM is extremely flexible in expressing address mapping strategies and can correctly express all current mainstream address mapping strategies. In addition, the invention combines a reinforcement learning model based on policy gradients so that the BIM learns the address mapping strategy with the highest row-cache hit rate for the different access patterns of a neural network accelerator. The trained BIM is then implemented in hardware in the memory controller.
Drawings
Fig. 1 is a representation of an address mapping policy.
Fig. 2 is a schematic diagram of a mainstream address mapping policy represented by BIM.
Fig. 3 is a schematic diagram of policy network optimization BIM for reinforcement learning.
FIG. 4 shows the iterative BIM optimization algorithm.
FIG. 5 shows the Mini-batch training algorithm for the reinforcement learning model.
FIG. 6 is a schematic diagram of a reinforcement learning model system workflow.
Detailed Description
The technical scheme of the invention is specifically explained below with reference to the accompanying drawings.
The invention relates to an address mapping strategy design method based on reinforcement learning, which uses a binary invertible matrix BIM to represent the address mapping strategy and trains an address mapping strategy with an optimal row-cache hit rate with a reinforcement learning model. The implementation is as follows: the one-dimensional expansion of the binary invertible matrix BIM serves as the input of the reinforcement learning model; the row-cache hit rate of the initial BIM is taken as the current optimum H_best of the model; the model selects actions according to the output probabilities to obtain candidate BIMs; when the row-cache hit rate computed for a candidate BIM is higher than that of the current BIM, the model replaces the current BIM with the candidate; the reward value is then recalculated and the parameters of the model are updated; the model keeps iterating and optimizing in this way and converges according to a preset stopping rule, yielding a trained BIM and, with it, the address mapping strategy with the highest row-cache hit rate.
The following is a specific implementation process of the invention.
The principle of a memory address mapping strategy is that physical addresses are mapped to specific DRAM cell positions according to a certain rule. FIG. 1 shows the current mainstream memory address mapping strategies; the invention uses simplified 8-bit physical addresses to illustrate the different DRAM address mapping strategies. The first 2 bits are the bank bits, followed by 4 row bits, and the last 2 bits are the column address bits. Fig. 1(a) shows BRC, in which memory access addresses are mapped to physical addresses in bank-row-column order. Fig. 1(b) shows RBC, which swaps the bank bits and the row bits, placing the row bits before the bank bits while the column bits stay unchanged. Fig. 1(c) shows bit reversal, i.e., the initial bank bits and row bits are arranged in reverse order. Fig. 1(d) shows a permutation-based (XOR) strategy, which XORs the bank bits with part of the row address bits to generate new bank address bits. Fig. 1(e) shows a memory address mapping strategy based on a binary invertible matrix (BIM), in which the initial physical address is multiplied by the binary invertible matrix to obtain the mapped address.
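The simplified 8-bit layouts above can be sketched in code. This is a minimal illustration; the exact field positions assumed for RBC and for the XOR scheme are assumptions made here, not taken from the patent figures.

```python
# Sketch of the simplified 8-bit address layouts of Fig. 1 (2 bank bits,
# 4 row bits, 2 column bits). Field positions for RBC and the XOR scheme
# are illustrative assumptions.

def brc(addr):
    """BRC: bank | row | column, from high bits to low bits."""
    return (addr >> 6) & 0b11, (addr >> 2) & 0b1111, addr & 0b11

def rbc(addr):
    """RBC: row bits moved in front of the bank bits; column bits unchanged."""
    row = (addr >> 4) & 0b1111
    bank = (addr >> 2) & 0b11
    return bank, row, addr & 0b11

def xor_bank(addr):
    """XOR scheme: bank bits XORed with the low row bits form new bank bits."""
    bank, row, col = brc(addr)
    return bank ^ (row & 0b11), row, col

addr = 0b10110101
print(brc(addr))       # (bank, row, col) under BRC
print(rbc(addr))
print(xor_bank(addr))
```

The same physical address lands in different banks and rows under each policy, which is exactly what the BIM representation unifies.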
All of the strategies described above can be represented by a binary invertible matrix BIM. The strategy is implemented by multiplying the original address by the BIM to obtain the desired address mapping. Because the BIM consists only of 0s and 1s, the memory address mapping can be implemented in hardware with only AND gates and XOR gates, which realize multiplication and addition, respectively; this effectively reduces the hardware overhead of memory address mapping. The invertibility of the matrix guarantees an effective mapping from physical addresses to the addresses of memory storage units. As shown in Fig. 2, the mainstream address mapping strategies in Figs. 2(a)-(d) can all be represented by a BIM. Since the BIM combines this expressiveness with low hardware overhead, choosing the BIM as the carrier of the memory address mapping strategy gives the reinforcement-learning-based memory controller clear advantages.
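The AND/XOR implementation of the BIM mapping can be sketched as a GF(2) matrix-vector product. This is a minimal software model of the idea, not the patented hardware design:

```python
# Minimal sketch of BIM-based mapping: the mapped address is the GF(2)
# matrix-vector product of the BIM and the address bits, so hardware needs
# only AND gates (multiplication) and XOR gates (addition).

def addr_to_bits(addr, n):
    return [(addr >> (n - 1 - i)) & 1 for i in range(n)]  # MSB first

def bits_to_addr(bits):
    out = 0
    for b in bits:
        out = (out << 1) | b
    return out

def bim_map(bim, addr):
    x = addr_to_bits(addr, len(bim))
    y = []
    for row in bim:
        acc = 0
        for a, b in zip(row, x):
            acc ^= a & b          # XOR-accumulate the AND terms
        y.append(acc)
    return bits_to_addr(y)

I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
print(bim_map(I4, 0b1010))        # identity BIM: address unchanged
P = [I4[3], I4[1], I4[2], I4[0]]  # identity with rows 0 and 3 swapped
print(bim_map(P, 0b1100))         # swaps the highest and lowest address bits
```

Because the matrix is invertible, every physical address maps to a distinct storage-unit address, which is the property the description relies on.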
1. BIM optimization for reinforcement learning
The BIM optimization in the invention mainly applies elementary matrix transformations to a binary identity matrix within a policy gradient algorithm. The action space of the reinforcement learning model consists of all possible row/column swap actions of the binary invertible matrix.
(1) Policy network design
In the invention, a policy network π is used to learn actions that optimize the BIM address mapping policy toward higher memory access efficiency. The policy network consists of two cascaded fully connected layers; ReLU is used as the activation function of the first layer to introduce nonlinearity, and the output of the second fully connected layer is fed to a Softmax function. Fig. 3 shows an example of BIM optimization. The binary invertible matrix BIM is expanded row by row into one-dimensional data as the input of the policy network. According to the output probability distribution, the model selects an action from the action space as the current optimization action for the BIM. The BIM is transformed by this action into a new binary invertible matrix, which becomes the input of the policy network at the next time step. In the example, the BIM is simplified to a 6 × 6 binary invertible matrix as the address mapping strategy; its optimization selects the corresponding row/column transformation according to the model output, and this procedure is iterated in a loop.
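A numeric sketch of such a two-layer policy network follows. The hidden width and the weight initialization are assumptions for illustration; the patent does not specify them:

```python
import numpy as np

# Sketch of the two-layer policy network: the flattened b*b BIM passes through
# FC + ReLU, then FC + Softmax over the b actions. Sizes are assumed.
rng = np.random.default_rng(0)
b, hidden = 6, 32
W1 = rng.normal(0.0, 0.1, (hidden, b * b)); b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.1, (b, hidden));     b2 = np.zeros(b)

def policy_forward(bim):
    x = np.asarray(bim, dtype=float).reshape(-1)  # row-by-row flattening
    h = np.maximum(0.0, W1 @ x + b1)              # first FC layer + ReLU
    z = W2 @ h + b2                               # second FC layer
    e = np.exp(z - z.max())                       # numerically stable Softmax
    return e / e.sum()

probs = policy_forward(np.eye(b))                 # 6x6 BIM as in Fig. 3
action = rng.choice(b, p=probs)                   # sample an action by probability
print(probs.round(3), action)
```

The Softmax output is the probability distribution from which the model samples its next row transformation.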
(2) Action space optimization
In the binary invertible matrix BIM model, the number of actions of the reinforcement learning model is b(b − 1), where b is the number of rows/columns of the binary invertible matrix. For a 32 × 32 BIM the total action space is 992, i.e., 992 transformation choices are available for each BIM transformation. Since training the reinforcement learning model requires many iterations, the action space for optimizing the BIM is a very large search space, and searching such a space for too long degrades the performance of the model. To solve the problem of an oversized action space, this subsection compresses the action space during BIM optimization.
From linear algebra, any sequence of row/column swaps of a binary invertible matrix can be realized by a finite number of row swaps. A row swap of the BIM is performed by multiplying the BIM on the left by a swap matrix M_pre, and a column swap by multiplying the BIM on the right by a swap matrix M_post, as shown in formula (1):

BIM′ = M_pre · BIM,  BIM′ = BIM · M_post  formula (1)
The binary invertible matrices involved satisfy the commutativity and associativity needed here, so a series of row/column transformations can be realized equivalently using row transformations alone. Thus, this study compresses the action space into a set containing only row transformation actions. The transformation expression is as follows:

BIM · M_post = (BIM · M_post · BIM⁻¹) · BIM = M′_pre · BIM
after the above-mentioned motion space compression, the motion space has been reduced by half. To optimize the motion search space to a greater extent, this study forces the transformation of BIM into an exchange of the first row and the other rows, with a total of b-1 possible motions. The feasibility basis of such a design is that no matter which two rows of BIMs are exchanged, the exchange between the first row and the other two rows can be completed, so that the optimization result of BIMs is not affected. At the same time, the design also adds a maintained motion NOP in the motion space. In summary, the motion space optimized by the BIM model is finally optimized into b motions, and if b is 32, the number of the motion spaces is 32.
(3) Iterative optimization
The reinforcement learning model optimizes the address mapping strategy BIM iteratively. First, the BIM is expanded row by row into a one-dimensional vector as input, and the row-cache hit rate H of the initial BIM address mapping strategy is measured and taken as the current optimum H_best of the model. The optimization runs for k iterations; after each iteration the row hit rate of the result is tested, and if it is higher than H_best, that BIM becomes the best address mapping policy so far. In this way the BIM is optimized iteratively, and H_best increases monotonically over the iterations. Pseudocode for the iterative optimization of the address mapping policy BIM is shown in Fig. 4.
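The iteration of Fig. 4 can be sketched with a stand-in hit-rate evaluator; in the real system the evaluator replays a memory access trace against the mapped addresses. The toy target matrix and hit-rate function below are assumptions for demonstration only:

```python
import random

# Sketch of the Fig. 4 loop: a candidate BIM is adopted only when its row
# hit rate beats H_best. hit_rate here is a toy stand-in for the real
# trace-driven row-cache hit rate test.

def optimize_bim(bim, hit_rate, select_action, apply_action, k):
    h_best = hit_rate(bim)
    for _ in range(k):
        cand = apply_action(bim, select_action(bim))
        h = hit_rate(cand)
        if h > h_best:            # adopt the candidate only on improvement
            bim, h_best = cand, h
    return bim, h_best

def swap_first(bim, a):           # first-row swap actions; a == 0 is the NOP
    bim = [row[:] for row in bim]
    if a:
        bim[0], bim[a] = bim[a], bim[0]
    return bim

random.seed(0)
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
target = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]   # toy "optimal" BIM (assumed)
hit = lambda m: sum(r == t for r, t in zip(m, target)) / 3
best, h = optimize_bim(I, hit, lambda m: random.randrange(3), swap_first, 200)
print(h)
```

With random action selection the loop still converges on this toy problem; the policy network replaces the random selector so that improving actions are chosen with increasing probability.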
2. Model training
During model training, the policy network generates the next action a_t at the current time step, and the current BIM is transformed by this action into the BIM of the next time step. After k cycles, the policy network obtains the reward value r_k = H_k. Reinforcement learning maximizes the cumulative reward, and at the same time yields the BIM-based address mapping strategy with the highest row hit rate.
The present invention iteratively optimizes the model using the policy gradient algorithm mentioned above. The formula for the cumulative reward is:
R_t = γ^(k+1) · r_k  formula (3)
where γ is the discount factor. The value function V_φ(BIM_t) is mainly used to predict the cumulative reward; the neural network containing the parameter φ is updated by the policy gradient method.
The value network has the same intermediate structure as the policy network and also consists of two fully connected layers; the difference is that the output of the value network is a single value describing the predicted cumulative reward. The advantage formula expresses how much better the reward of the action selected by the agent in the current environment is than that of an action selected at random by the policy network. The specific formula is:
A_t = R_t − V_t  formula (4)
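A toy numeric check of formulas (3) and (4) as stated; the hit rate r_k and the value-network prediction V_t used here are assumed numbers:

```python
# Check of R_t = gamma**(k+1) * r_k (formula (3)) and A_t = R_t - V_t
# (formula (4)) with assumed inputs.
gamma, k = 0.99, 10
r_k = 0.85                      # row-cache hit rate after k iterations (assumed)
R_t = gamma ** (k + 1) * r_k    # discounted cumulative reward, formula (3)
V_t = 0.70                      # value network's predicted return (assumed)
A_t = R_t - V_t                 # advantage of the selected action, formula (4)
print(round(R_t, 4), round(A_t, 4))
```

A positive advantage means the selected action did better than the value network's baseline prediction, so the policy gradient increases its probability.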
The maximized objective function is:

J(θ) = E_{π_θ}[R_t]

The policy gradient, i.e., the partial derivative of the maximized objective function with respect to θ, is:

∇_θ J(θ) = E_{π_θ}[A_t · ∇_θ log π_θ(a_t | BIM_t)]

The loss function of the value network is:

L(φ) = (R_t − V_φ(BIM_t))²

The gradient of the value network is:

∇_φ L(φ) = −2(R_t − V_φ(BIM_t)) · ∇_φ V_φ(BIM_t)
In the network model, the parameter gradients are computed by the back-propagation algorithm; lr_π and lr_v are the learning rates of the policy network and the value network, respectively (see Fig. 5 for the specific formulas), both set to 0.001 in this work.
The invention updates the model parameters according to the Mini-batch method. In the experiments, the batch size is set to 64, meaning that within one batch the policy network is updated iteratively 64 times. These 64 iterations yield a pool of experience (actions, rewards, etc.) that is used to update the parameters of the model. However, a naive Mini-batch method stores all input data and intermediate results, which incurs significant storage overhead. To solve this problem, the experiments accumulate the gradients within one batch as the parameter gradient; the accumulated gradients of the policy network and the value network are g_θ and g_φ (see Fig. 5 for the specific formulas). The algorithm for training the reinforcement learning model with the Mini-batch method is shown in Fig. 5.
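The gradient-accumulation trick can be sketched as follows. The per-step gradients here are random stand-ins for the actual backward-pass results, and the tiny parameter vectors are placeholders for the real network weights:

```python
import numpy as np

# Sketch of the storage-saving trick: per-step gradients are summed into the
# accumulators g_theta and g_phi instead of keeping all 64 iterations of
# inputs and intermediate results; one parameter update is applied per batch.
rng = np.random.default_rng(0)
theta = np.zeros(4)               # stand-in policy network parameters
phi = np.zeros(4)                 # stand-in value network parameters
lr_pi = lr_v = 0.001              # learning rates, 0.001 as in the text
BATCH = 64

g_theta = np.zeros_like(theta)
g_phi = np.zeros_like(phi)
for _ in range(BATCH):
    g_theta += rng.normal(size=theta.shape)  # stand-in per-step policy gradient
    g_phi += rng.normal(size=phi.shape)      # stand-in per-step value gradient

theta = theta + lr_pi * g_theta   # ascent on the policy objective J(theta)
phi = phi - lr_v * g_phi          # descent on the value network loss
print(theta, phi)
```

Only the two accumulators persist across the 64 iterations, which is the memory saving the text describes.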
3. Workflow process
The overall process of the iterative optimization training is shown in Fig. 6. A 32 × 32 binary invertible matrix BIM is expanded one-dimensionally as the input of the policy network and the value network, and a forward pass is performed through both networks. Whether the BIM is updated is decided from the row hit rate obtained during the policy network's forward pass. The model selects an action according to the output probabilities to obtain a candidate BIM. When the row-cache hit rate computed for the candidate BIM is higher than that of the current BIM, the reinforcement learning system replaces the current BIM with the candidate. Subsequently, the row-cache hit rate of the new BIM is recalculated, i.e., the reward value is computed, and the parameters of the two networks are updated. The system iterates and optimizes in this way and converges according to the preset stopping rule to obtain a trained BIM strategy, and with it the address mapping strategy with the highest row hit rate, which can be ported to an FPGA MIG IP for hardware implementation.
The above are preferred embodiments of the present invention; all changes made according to the technical scheme of the present invention that produce equivalent effects without exceeding its scope belong to the protection scope of the present invention.
Claims (5)
1. A method for designing an address mapping strategy based on reinforcement learning is characterized in that a binary reversible matrix BIM is used for representing the address mapping strategy, and an address mapping strategy with an optimal row cache hit rate is trained by combining a reinforcement learning model.
2. The address mapping strategy design method based on reinforcement learning of claim 1, wherein the method is specifically realized in the following manner: the one-dimensional expansion of a binary invertible matrix BIM is used as the input of the reinforcement learning model; the row-cache hit rate of the initial BIM is taken as the current optimum H_best of the model; the model selects actions according to the output probabilities to obtain candidate BIMs; when the row-cache hit rate computed for a candidate BIM is higher than that of the current BIM, the model replaces the current BIM with the candidate BIM; the reward value is then recalculated and the parameters of the reinforcement learning model are updated; the model keeps iterating and optimizing in this way and converges according to a preset stopping rule to obtain a trained BIM, and with it the address mapping strategy with the highest row-cache hit rate.
3. The method as claimed in claim 2, wherein the action space of the reinforcement learning model consists of all possible row/column swap actions of the binary invertible matrix BIM, that is, b(b − 1) actions, where b is the number of rows/columns of the binary invertible matrix; in order to solve the problem that the action space of the reinforcement learning model is too large, the action space of the reinforcement learning model is compressed; the method comprises the following steps:
the row swapping of the BIM is performed by multiplying the BIM on the left by a swap matrix M_pre, and the column swapping of the BIM is performed by multiplying the BIM on the right by a swap matrix M_post:

BIM′ = M_pre · BIM,  BIM′ = BIM · M_post;
the BIM satisfies the commutativity and associativity needed here, so a series of row/column transformations can be equivalently realized by row transformations; thus the action space is compressed into a set of only row transformation actions; the transformation expression is as follows:

BIM · M_post = (BIM · M_post · BIM⁻¹) · BIM = M′_pre · BIM;
after the action space is compressed, the action space is reduced by half; in order to further reduce the action search space, the transformation of the binary invertible matrix BIM is restricted to swaps between the first row and the other rows, giving b − 1 possible actions; meanwhile, a no-op action NOP is added to the action space; the action space of the reinforcement learning model is finally reduced to b actions.
4. The address mapping strategy design method based on reinforcement learning of claim 2, wherein the reinforcement learning model consists of a policy network and a value network; the policy network consists of two cascaded fully connected layers, with ReLU serving as the activation function of the first layer and the output of the second layer connected to a Softmax function; during training of the reinforcement learning model, the policy network generates the next action a_t at the current time step, and the BIM at the current time step is transformed by this action to generate the BIM of the next time step; after a preset number of iterations k, the policy network obtains the reward value r_k = H_k, where H_k is the row-cache hit rate of the BIM after k iterations; the formula for the cumulative reward is:
R_t = γ^(k+1) · r_k
wherein γ is the discount factor;
the value network has the same intermediate structure as the policy network and also consists of two fully connected layers; the difference is that the output of the value network is a single value used to describe the predicted cumulative reward; the advantage formula expresses the advantage of the reward of the selected action in the current environment over an action randomly selected by the policy network; the specific formula is:
A_t = R_t − V_t
wherein A_t is the advantage function, and V_t is the return estimated after selecting an action according to strategy π in state s_t;
the maximized objective function is:

J(θ) = E_{π_θ}[R_t]

wherein J(θ) is the maximized objective function, and maximizing J(θ) continuously optimizes the parameter θ of the neural network model; π_θ is the policy of the policy gradient algorithm, which parameterizes the strategy π as π_θ and learns a strategy that obtains the maximum cumulative reward in the corresponding environment; BIM_t represents the current binary invertible matrix;

the policy gradient, i.e., the partial derivative of the maximized objective function, is calculated as:

∇_θ J(θ) = E_{π_θ}[A_t · ∇_θ log π_θ(a_t | BIM_t)]

the loss function of the value network is:

L(φ) = (R_t − V_φ(BIM_t))²

the gradient of the value network is:

∇_φ L(φ) = −2(R_t − V_φ(BIM_t)) · ∇_φ V_φ(BIM_t)
value function V φ (BIM t ) Used for predicting the cumulative reward value, through the method of tactics gradient to upgrade the neural network comprising parameter φ;
in the reinforcement learning model, the parameter gradients are computed by the back-propagation algorithm; lr_π and lr_v are the learning rates of the policy network and the value network, respectively.
5. The method of claim 4, wherein the parameters of the reinforcement learning model are updated according to the Mini-batch method: the gradients of one batch are accumulated as the parameter gradient, and the accumulated gradients of the policy network and the value network are g_θ and g_φ, respectively.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210714310.4A CN114942895A (en) | 2022-06-22 | 2022-06-22 | Address mapping strategy design method based on reinforcement learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114942895A true CN114942895A (en) | 2022-08-26 |
Family
ID=82911016
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170177470A1 (en) * | 2015-07-14 | 2017-06-22 | Western Digital Technologies, Inc. | Access network for address mapping in non-volatile memories |
CN111858396A (en) * | 2020-07-27 | 2020-10-30 | 福州大学 | Memory self-adaptive address mapping method and system |
CN113568845A (en) * | 2021-07-29 | 2021-10-29 | 北京大学 | Memory address mapping method based on reinforcement learning |
Non-Patent Citations (1)
Title |
---|
Shen Huanghui; Wang Zhensong; Zheng Weimin: "An efficient memory access strategy for image transposition and block processing", Journal of Computer Research and Development, no. 01, 15 January 2013 (2013-01-15), pages 188-196 *
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |