CN112380147B - Computing device and method for loading or updating data - Google Patents

Computing device and method for loading or updating data

Info

Publication number: CN112380147B (application CN202011260174.3A)
Authority: CN (China)
Prior art keywords: processing unit, address, memory, addresses, data
Prior art date: 2020-11-12
Application number: CN202011260174.3A
Other languages: Chinese (zh)
Other versions: CN112380147A (en)
Inventor: not disclosed (不公告发明人)
Current Assignee: Shanghai Bi Ren Technology Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Shanghai Biren Intelligent Technology Co., Ltd.
Priority and filing date: 2020-11-12 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Publication dates: 2021-02-19 (CN112380147A), 2022-06-10 (CN112380147B)
Application filed by Shanghai Biren Intelligent Technology Co., Ltd.; priority to CN202011260174.3A; application granted
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed); anticipated expiration date not listed

Classifications

    All classifications fall under G (Physics), G06 (Computing; Calculating or Counting), G06F (Electric Digital Data Processing):
    • G06F 12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing (under G06F 12/02 Addressing or allocation; Relocation; G06F 12/00 Accessing, addressing or allocating within memory systems or architectures)
    • G06F 13/404 Coupling between buses using bus bridges with address mapping (under G06F 13/40 Bus structure; G06F 13/38 Information transfer, e.g. on bus; G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units)
    • G06F 15/7825 Globally asynchronous, locally synchronous, e.g. network on chip (under G06F 15/7807 System on chip; G06F 15/78 Architectures of general purpose stored program computers comprising a single central processing unit; G06F 15/76 Architectures of general purpose stored program computers; G06F 15/00 Digital computers in general; Data processing equipment in general)
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect (under G06F 3/0601 Interfaces specially adapted for storage systems; G06F 3/06 Digital input from, or digital output to, record carriers; G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit)

Abstract

Embodiments of the present disclosure relate to computing devices and methods for updating data, and relate to the field of computers. According to the method, a first processing unit sends a write instruction to a first memory via a network on chip, the write instruction including a first address and a plurality of items of write data; the first processing unit issues a flush instruction; the first processing unit sends an update instruction to a near memory processing unit via the network on chip, the update instruction including the first address and a plurality of second addresses; the near memory processing unit, in response to the update instruction, performs a predetermined operation on a plurality of data items at the plurality of second addresses of the first memory and the plurality of items of write data at the first address to generate an updated plurality of data items; and the near memory processing unit stores the updated plurality of data items to the plurality of second addresses. Thereby, data can be flexibly processed in the near memory processing unit, and the data movement overhead of the data updating process can be reduced.

Description

Computing device and method for loading or updating data
Technical Field
Embodiments of the present disclosure generally relate to the field of computers, and more particularly, to a computing device, a method for loading data, and a method for updating data.
Background
Embedding table lookup is widely used in many computer applications, particularly in artificial intelligence applications such as personalized recommendation models. In the Deep Learning Recommendation Model (DLRM), sparse embedding operations, typified by SparseLengthsSum (SLS), consist of two steps: first, a small number of sparse lookups are performed in a large embedding table, and then a reduction, such as pooling, is applied to the fetched embedding table entries.
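For orientation only, the following short sketch (added to this text; it is not part of the patent disclosure, and the names table, indices and lengths are placeholders) shows what an SLS-style operation computes: a few sparse lookups followed by a per-user element-wise sum.

    # Minimal sketch of a SparseLengthsSum (SLS)-style operation: gather a few rows
    # from a large embedding table, then reduce each user's rows by element-wise sum.
    def sparse_lengths_sum(table, indices, lengths):
        results = []
        offset = 0
        for count in lengths:
            rows = [table[i] for i in indices[offset:offset + count]]   # sparse lookups
            results.append([sum(col) for col in zip(*rows)])            # pooling (reduction)
            offset += count
        return results

    # Example: a 6-row, 4-wide embedding table; two users with 2 and 3 lookups each.
    table = [[float(r + c) for c in range(4)] for r in range(6)]
    print(sparse_lengths_sum(table, indices=[0, 3, 1, 2, 5], lengths=[2, 3]))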
The conventional scheme implements the sparse embedding operation through Near Memory Processing (NMP). Referring to FIG. 1, there is shown a schematic block diagram of an architecture 100 for near memory data processing according to the prior art. As shown in FIG. 1, the architecture 100 includes a vector processing unit 110, a Network On Chip (NOC) 120, a near memory processing unit 130, and a main memory 140 of a Central Processing Unit (CPU), where the vector processing unit 110 is connected to the near memory processing unit 130 and the main memory 140 via the network on chip 120, and the near memory processing unit 130 is coupled to the main memory 140. The main memory 140 stores a plurality of embedding tables 1-n. The vector processing unit 110 initiates sparse embedding operations on the plurality of embedding tables 1-n in the main memory 140. The near memory processing unit 130 performs the sparse embedding operations on the plurality of embedding tables 1-n, thereby increasing the efficiency of embedding table lookup operations.
However, how to efficiently transfer the read data from the main memory 140 to the vector processing unit 110 over the network on chip remains a challenge. Furthermore, reading entries back to the vector processing unit 110 so that weight gradients can be added to them is still costly, and transferring the write data over the network on chip likewise faces challenges.
Disclosure of Invention
A computing device and a method for loading data are provided, which enable flexible processing of data at a near memory processing unit and efficient reading of operation results back to a first processing unit. In addition, another computing device and a method for updating data are provided, which can flexibly process data in a near memory processing unit and reduce the data movement overhead of the data updating process.
According to a first aspect of the present disclosure, a computing device is provided. The computing device includes: a first processing unit; a first memory connected to the first processing unit via a network on chip; and a near memory processing unit coupled with the first memory. The first processing unit is configured to send a first instruction to the near memory processing unit via the network on chip, the first instruction including a first address, a plurality of second addresses, and an operation type, the first address and the plurality of second addresses being associated with the first memory. The near memory processing unit is configured to, in response to the first instruction, perform an operation associated with the operation type on a plurality of data items at the plurality of second addresses of the first memory to generate an operation result, and is further configured to store the operation result to the first address of the first memory. The first processing unit is further configured to issue a flush instruction for making the operation result at the first address visible to the first processing unit, and to issue a read instruction for reading the operation result at the first address back to the first processing unit.
According to a second aspect of the present disclosure, a method for loading data is provided. The method includes: a first processing unit sending a first instruction to a near memory processing unit via a network on chip, the first instruction including a first address, a plurality of second addresses, and an operation type, the first address and the plurality of second addresses being associated with a first memory; the near memory processing unit, in response to the first instruction, performing an operation associated with the operation type on a plurality of data items at the plurality of second addresses of the first memory to generate an operation result; the near memory processing unit storing the operation result to the first address of the first memory; the first processing unit issuing a flush instruction for making the operation result at the first address visible to the first processing unit; and the first processing unit issuing a read instruction for reading the operation result at the first address back to the first processing unit.
According to a third aspect of the present disclosure, another computing device is provided. The computing device includes: a first processing unit; a first memory connected to the first processing unit via a network on chip; and a near memory processing unit coupled with the first memory. The first processing unit is configured to send a write instruction to the first memory via the network on chip, the write instruction including a first address and a plurality of items of write data, so as to write the plurality of items of write data to the first address of the first memory. The first processing unit is further configured to issue a flush instruction for making the plurality of items of write data at the first address visible to the first processing unit, and to send an update instruction to the near memory processing unit via the network on chip, the update instruction including the first address and a plurality of second addresses, the plurality of second addresses being associated with the first memory. The near memory processing unit is configured to, in response to the update instruction, perform a predetermined operation on a plurality of data items at the plurality of second addresses of the first memory and the plurality of items of write data at the first address to generate an updated plurality of data items, and is further configured to store the updated plurality of data items to the plurality of second addresses.
According to a fourth aspect of the present disclosure, a method for updating data is provided. The method includes: a first processing unit sending a write instruction to a first memory via a network on chip, the write instruction including a first address and a plurality of items of write data to be written to the first address of the first memory; the first processing unit issuing a flush instruction for making the plurality of items of write data at the first address visible to the first processing unit; the first processing unit sending an update instruction to a near memory processing unit via the network on chip, the update instruction including the first address and a plurality of second addresses, the plurality of second addresses being associated with the first memory; the near memory processing unit, in response to the update instruction, performing a predetermined operation on a plurality of data items at the plurality of second addresses of the first memory and the plurality of items of write data at the first address to generate an updated plurality of data items; and the near memory processing unit storing the updated plurality of data items to the plurality of second addresses.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of embodiments of the present disclosure will become more apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements.
FIG. 1 is a schematic block diagram of an architecture 100 for near memory data processing according to the prior art.
Fig. 2 is a schematic block diagram of a computing device 200 according to an embodiment of the present disclosure.
FIG. 3 is a schematic diagram of a method 300 for loading data in accordance with an embodiment of the present disclosure.
Fig. 4 is a schematic diagram of a method 400 for updating data, according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The term "include" and variations thereof as used herein is meant to be inclusive in an open-ended manner, i.e., "including but not limited to". Unless specifically stated otherwise, the term "or" means "and/or". The term "based on" means "based at least in part on". The terms "one example embodiment" and "one embodiment" mean "at least one example embodiment". The term "another embodiment" means "at least one additional embodiment". The terms "first," "second," and the like may refer to different or the same object. Other explicit and implicit definitions are also possible below.
As described above, how to efficiently transfer the read data from the main memory 140 to the vector processing unit 110 over the network on chip is a challenge. Furthermore, there is still significant overhead in reading entries back to the vector processing unit 110 so that weight gradients can be added to them, and transferring the write data over the network on chip likewise faces challenges.
To address at least one of the above issues, the present disclosure provides a computing device, a method for loading data, and a method for updating data.
Fig. 2 shows a schematic block diagram of a computing device 200 according to an embodiment of the present disclosure. As shown in Fig. 2, the computing device 200 includes at least one first processing unit 210, a network on chip 220, at least one near memory processing unit 230, at least one first memory 240, and a main memory 250. It should be understood that although two first processing units 210, two near memory processing units 230, and two first memories 240 are shown in Fig. 2, this is merely an example; the number of first processing units 210, near memory processing units 230, and first memories 240 may be greater or smaller, and the scope of the present disclosure is not limited in this respect.
With respect to the first processing unit 210, it may be used to process instructions for vectors and/or scalars. The first processing unit 210 includes, for example but not limited to, a vector processing unit, such as a vector processor, or a hardware circuit, an application specific integrated circuit, or the like that implements a certain function.
With regard to the network on chip 220, it includes, for example but not limited to, a bus or the like connected between the first processing unit 210 and the first memory 240. The bus bandwidth of the network on chip is, for example but not limited to, 1024 bits or 2048 bits.
With respect to the near memory processing unit 230, it is coupled to a first memory 240. The near memory processing unit 230 may receive instructions associated with data in the first memory 240 from the first processing unit 210 via the network on chip 220 and perform related operations on the data in the first memory 240 in response to the instructions.
Regarding the first memory 240, it is connected to the first processing unit 210 via the network on chip 220. The first memory 240 includes, for example but not limited to, a Dynamic Random Access Memory (DRAM) or a High Bandwidth Memory (HBM). The first memory 240 serves as a local memory of the first processing unit 210. The first memory 240 may store a plurality of data items, including, for example but not limited to, a plurality of embedding tables or a plurality of entries in an embedding table.
In some embodiments, the storage space occupied by the plurality of data items stored in the first memory 240 is less than or equal to the product of the length of a data item and the number of addresses that can be represented by the length of the second address. For example, the entries of an embedding table have fixed-length data elements, such as 32-bit floating point numbers. In this case, the storage space in the first memory 240 for the embedding entries may be determined to be less than or equal to 16 GB, so that the physical address of an embedding entry can be represented in 32 bits. Thus, the range in the first memory 240 used for the embedding tables enables a more compact representation of an address pointing to an embedding table entry than the usual 64-bit general address.
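A quick check of this sizing rule, added here as an illustration (the variable names are not from the original text):

    # With element-granular 32-bit second addresses and 32-bit (4-byte) data elements,
    # the embedding-table region spans at most 2**32 elements * 4 bytes = 16 GiB,
    # which is why a 32-bit address suffices instead of a generic 64-bit address.
    address_bits = 32   # length of a second address
    element_bytes = 4   # 32-bit floating point data element
    max_region_bytes = (2 ** address_bits) * element_bytes
    print(max_region_bytes, max_region_bytes == 16 * 2 ** 30)   # 17179869184 True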
Regarding main memory 250, it may be associated with a Central Processing Unit (CPU) (not shown) and may be connected to first processing unit 210 and first memory 240 through, for example, PCIe. Main memory 250 includes, for example, but is not limited to, Dynamic Random Access Memory (DRAM). A plurality of data items may be stored in main memory 250, including, for example and without limitation, a plurality of embedded tables 1-n.
A plurality of data items in the embedding tables 1-n, such as at least a portion of an embedding table or a plurality of entries in an embedding table, may be copied from the main memory 250 to the first memory 240 via Direct Memory Access (DMA).
In some embodiments, the first processing unit 210 may be configured to send a first instruction to the near memory processing unit 230 via the network on chip 220, the first instruction including a first address, a plurality of second addresses, and an operation type, the first address and the plurality of second addresses being associated with the first memory 240.
The first address and the plurality of second addresses may be 32 bits in length, for example. The number of second addresses included in the first instruction may be determined based on a bandwidth of the on-chip network and a length of the second addresses, e.g., the number of second addresses in the first instruction may be determined by dividing a bus bandwidth of the on-chip network by the length of the second addresses.
For example, in the case where the bus bandwidth of the network-on-chip 220 is 1024 bits and the second address length is 32 bits, it may be determined that 32 second addresses may be included in the first instruction. For another example, it may be determined that 32 second addresses may be included in the first instruction in the case where the bus bandwidth of the on-chip network 220 is 2048 bits and the second address length is 64 bits, or it may be determined that 64 second addresses may be included in the first instruction in the case where the bus bandwidth of the on-chip network 220 is 2048 bits and the second address length is 32 bits. Thus, the number of second addresses can fill the bus bandwidth of the network on chip, thereby achieving better bus utilization.
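Expressed as code, this packing rule is a single integer division (an illustrative sketch; the function name is a placeholder):

    # Number of second addresses that fill one bus transfer of the network on chip.
    def second_address_count(bus_width_bits, address_bits):
        return bus_width_bits // address_bits

    print(second_address_count(1024, 32))   # 32
    print(second_address_count(2048, 64))   # 32
    print(second_address_count(2048, 32))   # 64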
The near memory processing unit 230 may be configured to perform, in response to the first instruction, an operation associated with the operation type on a plurality of data items on a plurality of second addresses of the first memory 240 to generate an operation result. The operation types include, but are not limited to, pooling, splicing, for example.
Specifically, the near memory processing unit 230 may be configured to, in response to the first instruction, fetch a plurality of data items from a plurality of second addresses of the first memory 240, and perform an operation associated with an operation type on the fetched plurality of data items to generate an operation result.
After generating the operation result, the near memory processing unit 230 may be further configured to store the operation result to the first address of the first memory 240.
The first processing unit 210 is further configured to issue a flush instruction for making the operation result at the first address visible to the first processing unit 210. For example, the first instruction may be cached or queued somewhere in the network on chip; issuing the flush instruction causes such a cached or queued instruction to be executed, so that the operation result is generated and stored at the first address.
The first processing unit 210 is further configured to issue a read instruction for reading the operation result at the first address to the first processing unit 210.
Thereby, data of one or more users can be flexibly processed by allocating an address space for the operation result per near memory unit, and then the processed result can be efficiently read back to the first processing unit since the cache in the network on chip efficiently fills the bus of the network on chip.
In some embodiments, the plurality of second addresses in the first instruction may form a first sequence, such as A[0], A[1], ..., A[31]. The first instruction may also include a second sequence indicating the numbers of consecutive addresses associated with a plurality of users. For example, for m users, the number of consecutive addresses of user i is n[i], where i = 0, 1, ..., m-1.
The near memory processing unit 230 may be further configured to divide the first sequence into a plurality of address sets based on the second sequence.
As exemplified above for the first and second sequences, the first sequence can be divided into m address sets A[0]-A[n[0]-1], A[n[0]]-A[n[0]+n[1]-1], A[n[0]+n[1]]-A[n[0]+n[1]+n[2]-1], ..., A[n[0]+n[1]+...+n[m-2]]-A[31]. Specifically, taking m as 3 and the second sequence as 10, 15, 7 for example, the first sequence can be divided into 3 address sets A[0]-A[9], A[10]-A[24], and A[25]-A[31].
The near memory processing unit 230 may be further configured to perform an operation associated with the operation type on a plurality of data sets on the plurality of address sets, respectively, to generate a plurality of operation results associated with a plurality of users, a data set of the plurality of data sets including at least one data item of the plurality of data items.
Taking the 3 address sets A[0]-A[9], A[10]-A[24], and A[25]-A[31] and a pooling operation as an example: adding the 10 data items at A[0]-A[9] generates operation result 1 for user 1, adding the 15 data items at A[10]-A[24] generates operation result 2 for user 2, and adding the 7 data items at A[25]-A[31] generates operation result 3 for user 3.
The near memory processing unit 230 may be further configured to determine a plurality of third addresses associated with the plurality of users based on the first address, the positions in the second sequence of the numbers of consecutive addresses associated with the plurality of users, and the length of a data item.
In particular, the third address associated with a user may, for example, be equal to the first address + (the position in the second sequence of that user's number of consecutive addresses × the length of a data item). Further, taking m as 3, the second sequence as 10, 15, 7, and the data item length as 32 bits for example: the number 10 of consecutive addresses of user 1 is at position 0 in the second sequence, so the third address associated with user 1 may be equal to the first address + (0 × 32 bits); the number 15 of consecutive addresses of user 2 is at position 1 in the second sequence, so the third address associated with user 2 may be equal to the first address + (1 × 32 bits); and the number 7 of consecutive addresses of user 3 is at position 2 in the second sequence, so the third address associated with user 3 may be equal to the first address + (2 × 32 bits).
The near memory processing unit 230 may be further configured to store a plurality of operation results to a plurality of third addresses of the first memory.
Continuing with the example above, operation result 1 may be stored at the first address + (0 x data item length), operation result 2 may be stored at the first address + (1 x data item length), and operation result 3 may be stored at the first address + (2 x data item length).
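The address arithmetic above can be sketched as follows (an illustration added to this text, not the patent's own code; second_sequence and item_length are placeholder names, and the sketch measures offsets in bytes assuming 32-bit, i.e. 4-byte, data items):

    # Sketch: per-user third addresses at which the operation results are stored,
    # following the rule third_address = first_address + position * item_length.
    def third_addresses(first_address, second_sequence, item_length):
        return [first_address + position * item_length
                for position in range(len(second_sequence))]

    # Example from the text: 3 users, second sequence 10, 15, 7, 4-byte data items.
    print(third_addresses(first_address=0x1000, second_sequence=[10, 15, 7], item_length=4))
    # [4096, 4100, 4104] -> first address + 0, + 4, + 8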
Therefore, data processing and result storage of multiple users can be realized for the first instructions associated with multiple users, and result reading for multiple users is facilitated. In case that the data address of a single user is not enough to fill the bus bandwidth of the network on chip, by combining the data addresses of multiple users in the first instruction, the bus bandwidth can be filled, the bus utilization rate is improved, and the data can be efficiently transmitted back to the first processing unit.
Alternatively or additionally, in some embodiments, the first processing unit 210 may be configured to send a write instruction to the first memory 240 via the network on chip 220, the write instruction including a first address and a plurality of items of write data, so as to write the plurality of items of write data to the first address of the first memory 240. The write data include, for example but not limited to, weight gradients to be added to corresponding entries, such as weights.
For example, the plurality of items of write data may be written into an address space in the first memory 240 starting at the first address. Taking 3 items of write data as an example, the 1st item of write data may be written at the first address, the 2nd item at the address corresponding to the first address + one data item length, and the 3rd item at the address corresponding to the first address + two data item lengths.
The first processing unit 210 may be further configured to issue a flush instruction for making the plurality of items of write data at the first address visible to the first processing unit 210. For example, the write instruction may be buffered or queued somewhere in the network on chip, and the issued flush instruction will cause the buffered or queued write instruction in the network on chip 220 to be executed, thereby causing the plurality of write data to be written to the first address of the first memory 240 so as to be visible to the first processing unit 210.
The first processing unit 210 may be further configured to send an update instruction to the near memory processing unit 230 via the network on chip 220, the update instruction including the first address and a plurality of second addresses. The update instruction instructs the near memory processing unit 230 to perform a predetermined operation on the plurality of items of write data at the first address and the plurality of data items at the plurality of second addresses, and to write the operation results back to the plurality of second addresses as the updated values of the plurality of data items. It should be understood that the number of items of write data and the number of data items match. In some embodiments, the plurality of second addresses are associated with one or more users.
The number of second addresses included in the update instruction may be determined based on the bandwidth of the network on chip and the length of the second addresses; e.g., the number of second addresses in the update instruction may be determined by dividing the bus bandwidth of the network on chip by the length of the second addresses. Other descriptions of the first address and the second addresses can be found above and are not repeated here. The data items include, for example but not limited to, embedding entries, also commonly referred to as weights.
The near memory processing unit 230 may be further configured to perform a predetermined operation on a plurality of data items on a plurality of second addresses of the first memory 240 with a plurality of write data items on the first address in response to the update instruction to generate an updated plurality of data items. The predetermined operations include, for example, but are not limited to, addition, subtraction.
Specifically, the near memory processing unit 230 may be configured to, in response to an update instruction, read a plurality of data items from a plurality of second addresses of the first memory 240 and read a plurality of write data items from a first address of the first memory 240, and perform a predetermined operation on the read plurality of data items and the plurality of write data items, respectively, to generate an updated plurality of data items.
The near memory processing unit 230 may also be configured to store the updated plurality of data items to the plurality of second addresses.
Thus, by performing a predetermined operation such as addition on data at the near memory processing unit, fetching data such as an entry to the first processing unit such as a vector processor is avoided, thus greatly reducing data movement overhead. Furthermore, data of one or more users can be flexibly processed for one update by allocating an address space for intermediate write data per near memory unit. The way of transmitting a large block of data once through the network on chip is also more efficient than transmitting a small block of data multiple times through the network on chip.
FIG. 3 shows a schematic diagram of a method 300 for loading data, in accordance with an embodiment of the present disclosure. It should be understood that method 300 may also include additional blocks not shown and/or may omit blocks shown, as the scope of the disclosure is not limited in this respect.
At block 302, the first processing unit 210 sends a first instruction to the near memory processing unit 230 via the network on chip 220, the first instruction including a first address, a plurality of second addresses, and an operation type, the first address and the plurality of second addresses being associated with the first memory 240.
At block 304, the near memory processing unit 230, in response to the first instruction, performs an operation associated with the operation type on a plurality of data items on a plurality of second addresses of the first memory 240 to generate an operation result.
Specifically, the near memory processing unit 230 may, in response to the first instruction, fetch a plurality of data items from a plurality of second addresses of the first memory 240, and then perform an operation associated with an operation type on the fetched plurality of data items to generate an operation result.
At block 306, the near memory processing unit 230 stores the operation result to a first address of the first memory 240.
At block 308, the first processing unit 210 issues a flush instruction for making the result of the operation on the first address visible to the first processing unit 210.
At block 310, the first processing unit 210 issues a read instruction for reading the result of the operation at the first address to the first processing unit 210.
Thereby, data of one or more users can be flexibly processed by allocating an address space for the operation result per near memory unit, and then the processed result can be efficiently read back to the first processing unit since the cache in the network on chip efficiently fills the bus of the network on chip.
In some embodiments, the plurality of second addresses in the first instruction may form a first sequence. The first instruction may also include a second sequence indicating a number of consecutive addresses associated with the plurality of users.
In this case, the near memory processing unit 230 may divide the first sequence into a plurality of address sets based on the second sequence.
Near memory processing unit 230 may perform operations associated with the operation type on a plurality of data sets on the plurality of address sets, respectively, to generate a plurality of operation results associated with a plurality of users, each of the plurality of data sets including at least one data item of the plurality of data items.
Near memory processing unit 230 may determine a plurality of third addresses associated with the plurality of users based on the first address, a position in the second sequence of a plurality of consecutive addresses associated with the plurality of users, and a length of the data item.
The near memory processing unit 230 may store a plurality of operation results to a plurality of third addresses of the first memory 240.
An example logic flow executed at the near memory processing unit 230 when the operation type is pooling is shown below.
(The original publication presents this logic flow as a figure.)
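Because that figure is not reproduced here, the following rough sketch (written for this text under stated assumptions, not the patent's own pseudocode) illustrates such a pooling flow; it treats data items as equal-length vectors and uses placeholder names such as first_sequence and second_sequence:

    # Rough sketch of a pooling-type first instruction at the near memory processing unit:
    # partition the second addresses by user, gather and element-wise sum each user's
    # data items, and store each pooled result at that user's third address.
    def pool_first_instruction(read_item, write_item, first_address,
                               first_sequence, second_sequence, item_length):
        offset = 0
        for position, count in enumerate(second_sequence):
            address_set = first_sequence[offset:offset + count]          # this user's second addresses
            rows = [read_item(addr) for addr in address_set]             # fetch data items (vectors)
            pooled = [sum(values) for values in zip(*rows)]              # element-wise sum (pooling)
            write_item(first_address + position * item_length, pooled)   # store at the third address
            offset += count

    # Minimal in-memory stand-ins for the accesses to the first memory.
    store = {addr: [1.0, 2.0] for addr in range(100, 132)}
    pool_first_instruction(store.get, store.__setitem__, first_address=0,
                           first_sequence=list(range(100, 132)),
                           second_sequence=[10, 15, 7], item_length=8)
    print(store[0], store[8], store[16])   # [10.0, 20.0] [15.0, 30.0] [7.0, 14.0]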
Therefore, data processing and result storage of multiple users can be realized for the first instructions associated with multiple users, and result reading for multiple users is facilitated. In case that the data address of a single user is not enough to fill the bus bandwidth of the network on chip, by combining the data addresses of multiple users in the first instruction, the bus bandwidth can be filled, the bus utilization rate is improved, and the data can be efficiently transmitted back to the first processing unit.
Fig. 4 shows a schematic diagram of a method 400 for updating data according to an embodiment of the present disclosure. It should be understood that method 400 may also include additional blocks not shown and/or may omit blocks shown, as the scope of the disclosure is not limited in this respect.
At block 402, the first processing unit 210 sends a write instruction to the first memory 240 via the network on chip 220, the write instruction including a first address and a plurality of items of write data for writing the plurality of items of write data to the first address of the first memory 240.
At block 404, the first processing unit 210 issues a flush instruction for making the plurality of write data at the first address visible to the first processing unit 210.
At block 406, the first processing unit 210 sends an update instruction to the near memory processing unit 230 via the network on chip 220, the update instruction including a first address and a plurality of second addresses.
At block 408, the near memory processing unit 230 performs a predetermined operation on a plurality of data items on a plurality of second addresses of the first memory with a plurality of write data items on the first address in response to the update instruction to generate an updated plurality of data items.
At block 410, the near memory processing unit 230 stores the updated plurality of data items to a plurality of second addresses.
An example logic flow executed at the near memory processing unit 230 when the predetermined operation is addition is shown below.
(The original publication presents this logic flow as a figure.)
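Because that figure is likewise not reproduced here, the following minimal sketch (an illustration written for this text, not the patent's pseudocode; it assumes scalar data items and placeholder names such as second_addresses) shows the shape of such an update flow:

    # Minimal sketch of an update instruction at the near memory processing unit:
    # read each item of write data staged at the first address, add it to the data item
    # at the matching second address, and write the updated data item back.
    def handle_update_instruction(memory, first_address, second_addresses, item_length):
        for i, second_address in enumerate(second_addresses):
            write_datum = memory[first_address + i * item_length]   # staged write data item i
            data_item = memory[second_address]                      # current data item (e.g., a weight)
            memory[second_address] = data_item + write_datum        # predetermined operation: addition

    # Example: three weights at addresses 100, 104, 108 updated by gradients staged at address 0.
    memory = {100: 1.0, 104: 2.0, 108: 3.0, 0: 0.1, 4: 0.2, 8: 0.3}
    handle_update_instruction(memory, first_address=0,
                              second_addresses=[100, 104, 108], item_length=4)
    print(memory[100], memory[104], memory[108])   # 1.1 2.2 3.3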
Thus, by performing a predetermined operation such as addition on data at the near memory processing unit, fetching data such as an entry to the first processing unit such as a vector processor is avoided, thus greatly reducing data movement overhead. Furthermore, data of one or more users can be flexibly handled for one update by allocating an address space for intermediate write data per near memory unit. The way of transmitting a large block of data once through the network on chip is also more efficient than transmitting a small block of data multiple times through the network on chip.
It will be appreciated by a person skilled in the art that the method steps described herein are not limited to the order shown schematically in the figures, but may be performed in any other feasible order.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (18)

1. A method for updating data, comprising:
the method comprises the steps that a first processing unit sends a write instruction to a first memory through a network on chip, wherein the write instruction comprises a first address and a plurality of items of write data, and the plurality of items of write data are used for being written into the first address of the first memory;
the first processing unit issuing a flush instruction for making a plurality of write data at the first address visible to the first processing unit;
the first processing unit sending an update instruction to a near memory processing unit via the network on chip, the update instruction including the first address and a plurality of second addresses, the plurality of second addresses associated with the first memory;
the near memory processing unit performs a predetermined operation on a plurality of data items on the plurality of second addresses of the first memory and a plurality of write data items on the first address in response to the update instruction to generate an updated plurality of data items; and
the near memory processing unit storing the updated plurality of data items to the plurality of second addresses.
2. The method of claim 1, wherein the plurality of data items are copied from a main memory to the plurality of second addresses via direct memory access.
3. The method of claim 1, wherein the number of the second addresses is determined based on a bandwidth of the network on chip and a length of the second addresses.
4. The method of claim 1, wherein the first address and the plurality of second addresses are 32 bits in length.
5. The method of claim 1, wherein the first processing unit comprises a vector processing unit.
6. The method of claim 1, wherein a storage space occupied by the plurality of data items is less than or equal to a product between a length of a data item and a number of addresses corresponding to a length of the second address.
7. The method of claim 1, wherein the plurality of data items comprises a plurality of embedded entries.
8. The method of claim 1, wherein the predetermined operation comprises adding or subtracting.
9. The method of claim 1, wherein the plurality of second addresses are associated with one or more users.
10. A computing device, comprising:
a first processing unit;
a first memory connected to the first processing unit via a network on chip;
a near memory processing unit coupled with the first memory;
the first processing unit is configured to send a write instruction to the first memory via the network on chip, the write instruction including a first address and a plurality of items of write data for writing the plurality of items of write data to the first address of the first memory;
the first processing unit is further configured to issue a flush instruction for making a plurality of write data items at the first address visible to the first processing unit;
the first processing unit is further configured to send an update instruction to the near memory processing unit via the network on chip, the update instruction including the first address and a plurality of second addresses, the plurality of second addresses associated with the first memory;
the near memory processing unit is further configured to perform a predetermined operation on a plurality of data items on the plurality of second addresses of the first memory and a plurality of write data items on the first address in response to the update instruction to generate an updated plurality of data items; and
the near memory processing unit is further configured to store the updated plurality of data items to the plurality of second addresses.
11. The computing device of claim 10, wherein the plurality of data items are copied from a main memory to the plurality of second addresses via direct memory access.
12. The computing device of claim 10, wherein the number of the second address is determined based on a bandwidth of the network on chip and a length of the second address.
13. The computing device of claim 10, wherein the first address and the plurality of second addresses are 32 bits in length.
14. The computing device of claim 10, wherein the first processing unit comprises a vector processing unit.
15. The computing device of claim 10, wherein a storage space occupied by the plurality of data items is less than or equal to a product between a length of a data item and a number of addresses corresponding to a length of the second address.
16. The computing device of claim 10, wherein the plurality of data items comprises a plurality of embedded entries.
17. The computing device of claim 10, wherein the predetermined operation comprises an addition or a subtraction.
18. The computing device of claim 10, wherein the plurality of second addresses are associated with one or more users.
CN202011260174.3A 2020-11-12 2020-11-12 Computing device and method for loading or updating data Active CN112380147B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011260174.3A CN112380147B (en) 2020-11-12 2020-11-12 Computing device and method for loading or updating data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011260174.3A CN112380147B (en) 2020-11-12 2020-11-12 Computing device and method for loading or updating data

Publications (2)

Publication Number  Publication Date
CN112380147A (en)   2021-02-19
CN112380147B (en)   2022-06-10

Family

ID=74583192

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011260174.3A Active CN112380147B (en) 2020-11-12 2020-11-12 Computing device and method for loading or updating data

Country Status (1)

Country Link
CN (1) CN112380147B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113297111B (en) * 2021-06-11 2023-06-23 Shanghai Biren Intelligent Technology Co., Ltd. Artificial intelligence chip and operation method thereof

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103946811B * 2011-09-30 2017-08-11 Intel Corporation Apparatus and method for realizing the multi-level store hierarchy with different operation modes
US8832530B2 (en) * 2012-09-26 2014-09-09 Intel Corporation Techniques associated with a read and write window budget for a two level memory system
US9519804B2 (en) * 2013-02-05 2016-12-13 Hackproof Technologies, Inc. Domain-specific hardwired symbolic machine that validates and maps a symbol
US20180336034A1 (en) * 2017-05-17 2018-11-22 Hewlett Packard Enterprise Development Lp Near memory computing architecture
US10534719B2 (en) * 2017-07-14 2020-01-14 Arm Limited Memory system for a data processing network
US10592424B2 (en) * 2017-07-14 2020-03-17 Arm Limited Range-based memory system
US11669454B2 (en) * 2019-05-07 2023-06-06 Intel Corporation Hybrid directory and snoopy-based coherency to reduce directory update overhead in two-level memory
US10803549B1 (en) * 2019-06-24 2020-10-13 Intel Corporation Systems and method for avoiding duplicative processing during generation of a procedural texture

Also Published As

Publication number Publication date
CN112380147A (en) 2021-02-19

Similar Documents

Publication Publication Date Title
US11074190B2 (en) Slot/sub-slot prefetch architecture for multiple memory requestors
JP6505132B2 (en) Memory controller utilizing memory capacity compression and associated processor based system and method
JP5203358B2 (en) Apparatus and method for prefetching data
EP3493084B1 (en) Method for processing data in bloom filter and bloom filter
JP6599898B2 (en) Providing memory bandwidth compression using a compression memory controller (CMC) in a system with a central processing unit (CPU)
CN101361049B (en) Patrol snooping for higher level cache eviction candidate identification
WO2012023986A1 (en) High speed memory systems and methods for designing hierarchical memory systems
CN107430550B (en) Asymmetric set combined cache
US11023410B2 (en) Instructions for performing multi-line memory accesses
CN105144120A (en) Storing data from cache lines to main memory based on memory addresses
CN109918131B (en) Instruction reading method based on non-blocking instruction cache
CN113934655B (en) Method and apparatus for solving ambiguity problem of cache memory address
JP2018503924A (en) Providing memory bandwidth compression using continuous read operations by a compressed memory controller (CMC) in a central processing unit (CPU) based system
CN112380147B (en) Computing device and method for loading or updating data
CN112380150B (en) Computing device and method for loading or updating data
WO2013030628A1 (en) Integrated circuit device, memory interface module, data processing system and method for providing data access control
CN109710309B (en) Method for reducing memory bank conflict
US6684267B2 (en) Direct memory access controller, and direct memory access control method
JP3935871B2 (en) MEMORY SYSTEM FOR COMPUTER CIRCUIT HAVING PIPELINE AND METHOD FOR PROVIDING DATA TO PIPELINE FUNCTIONAL UNIT
TWI759397B (en) Apparatus, master device, processing unit, and method for compare-and-swap transaction
CN113656330B (en) Method and device for determining access address
CN114924794A (en) Address storage and scheduling method and device for transmission queue of storage component
CN113656331A (en) Method and device for determining access address based on high and low bits
CN112199400A (en) Method and apparatus for data processing
CN114258533A (en) Optimizing access to page table entries in a processor-based device

Legal Events

Code  Title / Description
PB01  Publication
SE01  Entry into force of request for substantive examination
GR01  Patent grant
CP03  Change of name, title or address
      Address after: Room 1302, 13/F, Building 16, 2388 Chenhang Road, Minhang District, Shanghai, 201114 (China)
      Patentee after: Shanghai Bi Ren Technology Co., Ltd.
      Address before: Room 1302, 13/F, Building 16, 2388 Chenhang Road, Minhang District, Shanghai, 201114 (China)
      Patentee before: Shanghai Bilin Intelligent Technology Co., Ltd.