CN108009126B - Calculation method and related product - Google Patents

Calculation method and related product

Info

Publication number
CN108009126B
CN108009126B · Application CN201711362406.4A
Authority
CN
China
Prior art keywords
vector
instruction
matrix
address
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711362406.4A
Other languages
Chinese (zh)
Other versions
CN108009126A (en)
Inventor
Hu Shuai (胡帅)
Liu Enhe (刘恩赫)
Zhang Yao (张尧)
Meng Xiaofu (孟小甫)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Cambricon Information Technology Co Ltd
Original Assignee
Anhui Cambricon Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Cambricon Information Technology Co Ltd filed Critical Anhui Cambricon Information Technology Co Ltd
Priority to CN201711362406.4A
Publication of CN108009126A
Application granted
Publication of CN108009126B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003Arrangements for executing specific machine instructions
    • G06F9/30007Arrangements for executing specific machine instructions to perform operations on data operands
    • G06F9/3001Arithmetic instructions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3867Concurrent instruction execution, e.g. pipeline, look ahead using instruction pipelines

Abstract

The present disclosure provides an information processing method applied in a computing device, where the computing device comprises: a storage medium, a register unit, and a matrix computing unit. The method comprises the following steps: the computing device controls the matrix computing unit to obtain a first operation instruction, where the first operation instruction comprises a vector read indication required for executing the instruction; the computing device controls the matrix computing unit to send a read command to the storage medium according to the vector read indication; and the computing device controls the matrix computing unit to read, in a batch reading mode, the vector corresponding to the vector read indication and to execute the first operation instruction on the vector. The technical solution provided by the present application has the advantages of fast computation and high efficiency.

Description

Calculation method and related product
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a computing method and a related product.
Background
Data processing comprises the steps or stages required by most algorithms. Since the computer was introduced into the field of data processing, more and more data processing has been implemented by computers; in existing solutions, however, computing devices perform matrix data computation slowly and inefficiently.
Summary of the Application
The embodiments of the present application provide a computing method and a related product, which can improve the processing speed and the efficiency of a computing device.
In a first aspect, a computing method is provided, which is applied in a computing device, where the computing device includes a storage medium, a register unit, and a matrix operation unit, and the method includes:
the computing device controls the matrix operation unit to obtain a first operation instruction, where the first operation instruction is used to implement operations between vectors and matrices, the first operation instruction includes a vector read indication required for executing the instruction, the required vector is at least one vector, and the vectors of the at least one vector are of the same length or of different lengths;
the computing device controls the matrix operation unit to send a reading command to the storage medium according to the vector reading instruction;
and the computing device controls the matrix operation unit to read the vector corresponding to the vector reading instruction from the storage medium in a batch reading mode and execute the first operation instruction on the vector.
In some possible embodiments, the executing the first operation instruction on the vector comprises:
and the computing device controls the matrix operation unit to adopt a multi-stage pipeline-level computing mode to execute the first operation instruction on the vector.
In some possible embodiments, each of the multiple pipeline stages includes preset fixed operators, and the fixed operators differ from one pipeline stage to another;
the computing device controls the matrix operation unit to adopt a multi-level pipeline computing mode, and the executing of the first operation instruction on the vector comprises the following steps:
the computing device controls the matrix operation unit to, according to the computation network topology of the first operation instruction, perform calculation on the vector with the selection operator in the K1-th pipeline stage to obtain a first result, then input the first result to the selection operator in the K2-th pipeline stage to perform calculation and obtain a second result, and so on, until the (i-1)-th result is input to the selection operator in the Kj-th pipeline stage to perform calculation and obtain the i-th result; the i-th result is input to the storage medium for storage;
wherein Kj denotes any one of the i pipeline stages, j ≤ i, and j and i are positive integers; the number i of the multiple pipeline stages, the selected execution order K1, K2, …, Kj of the multiple pipeline stages, and the selection operator in each Kj-th pipeline stage are determined according to the computation topology of the first operation instruction, and each selection operator is one of the fixed operators.
In some possible embodiments, the fixed operators, and the number of fixed operators, included in each of the multiple pipeline stages are custom-set by the user side or by the computing device side.
In some possible embodiments, the operators in each of the multiple pipeline stages comprise any one or a combination of more of: a matrix addition operator, a matrix multiplication operator, a matrix scalar multiplication operator, a nonlinear operator, and a matrix comparison operator.
In some possible embodiments, the first operation instruction comprises any one of: a vector derivative instruction VDIER, a vector generation diagonal matrix instruction VDIAG, and a vector multiplication transpose matrix instruction VMULT.
In some possible embodiments, the instruction format of the first operation instruction includes an opcode and at least one operation field. The opcode indicates the function of the operation instruction, and the operation unit can perform different vector operations by identifying the opcode. The operation field indicates the data information of the operation instruction, where the data information may be an immediate value or a register number; for example, to obtain a vector, the vector start address and vector length may be read from the corresponding register according to the register number, and the vector stored at the corresponding address is then obtained from the storage medium according to the vector start address and the vector length. Optionally, any one or a combination of the following information may be obtained from the corresponding registers: the number of rows, the number of columns, the data type, the identification, the storage address (head address), and the dimension length of the vector required by the instruction, where the dimension length refers to the length of the vector row and/or the length of the vector column.
In some possible embodiments, the multi-stage pipeline stage is a three-stage pipeline stage, the first stage pipeline stage includes a preset matrix multiplication operator, the second stage pipeline stage includes a preset matrix addition operator and a matrix comparison operator, and the third stage pipeline stage includes a preset nonlinear operator and a matrix scalar multiplication operator; the first operation instruction is a vector derivative instruction VDIER,
the computing device controls the matrix operation unit to adopt a multi-level pipeline computing mode, and the executing of the first operation instruction on the vector comprises the following steps:
the computing device controls the matrix operation unit to input the vector to the nonlinear operator in the third pipeline stage to perform the matrix element partial derivative computation, obtaining a first result; and to input the first result to the storage medium for storage.
In some possible embodiments, the multi-stage pipeline stage is a three-stage pipeline stage, the first stage pipeline stage includes a preset matrix multiplication operator, the second stage pipeline stage includes a preset matrix addition operator and a matrix comparison operator, and the third stage pipeline stage includes a preset nonlinear operator and a matrix scalar multiplication operator; the first operation instruction generates a diagonal matrix instruction VDIAG for a vector,
the computing device controls the matrix operation unit to adopt a multi-level pipeline computing mode, and the executing of the first operation instruction on the vector comprises the following steps:
the computing device controls the matrix operation unit to input the vector to the matrix comparison operator in the second pipeline stage to perform matrix element address comparison and write the corresponding vector element at every (n+1)-th position, obtaining a first result; and to input the first result to the storage medium for storage.
In some possible embodiments, the multi-stage pipeline stage is a three-stage pipeline stage, the first stage pipeline stage includes a preset matrix multiplication operator, the second stage pipeline stage includes a preset matrix addition operator and a matrix comparison operator, and the third stage pipeline stage includes a preset nonlinear operator and a matrix scalar multiplication operator; the first operation instruction is a vector multiply transpose matrix instruction VMULT,
the computing device controls the matrix operation unit to adopt a multi-level pipeline computing mode, and the executing of the first operation instruction on the vector comprises the following steps:
the computing device controls the matrix arithmetic unit to input the vector to a matrix multiplication arithmetic unit in a first-stage pipeline stage for vector multiplication computation to obtain a first result; and inputting the first result to the storage medium for storage.
In some possible embodiments, the vector read indication comprises: a memory address of a vector required by the instruction or an identification of a vector required by the instruction.
In some possible embodiments, when the vector read indication is an identification of a vector required by the instruction,
the computing device controlling the matrix operation unit to send a read command to the storage medium according to the vector read indication comprises:
the computing device controls the matrix operation unit to read the storage address corresponding to the identifier from the register unit in a unit reading mode according to the identifier;
and the computing device controls the matrix operation unit to send a reading command for reading the storage address to the storage medium and obtains the vector by adopting a batch reading mode.
In some possible embodiments, the computing device further comprises: a cache unit, the method further comprising:
the computing device caches operation instructions to be executed in the cache unit.
In some possible embodiments, before the computing device controls the matrix operation unit to obtain the first operation instruction, the method further comprises:
the computing device determines whether the first operation instruction is associated with a second operation instruction before the first operation instruction, if so, the first operation instruction is cached in the cache unit, and after the second operation instruction is executed, the first operation instruction is extracted from the cache unit and transmitted to the operation unit;
the determining whether the first operation instruction and a second operation instruction before the first operation instruction have an association relationship includes:
extracting a first storage address interval of a required vector in the first operation instruction according to the first operation instruction, extracting a second storage address interval of the required vector in the second operation instruction according to the second operation instruction, determining that the first operation instruction and the second operation instruction have an association relationship if the first storage address interval and the second storage address interval have an overlapped area, and determining that the first operation instruction and the second operation instruction do not have an association relationship if the first storage address interval and the second storage address interval do not have an overlapped area.
In a second aspect, a computing device is provided, comprising functional units for performing the method of the first aspect described above.
In a third aspect, a computer-readable storage medium is provided, which stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method provided in the first aspect.
In a fourth aspect, there is provided a computer program product comprising a non-transitory computer readable storage medium having a computer program stored thereon, the computer program being operable to cause a computer to perform the method provided by the first aspect.
In a fifth aspect, there is provided a chip comprising a computing device as provided in the second aspect above.
In a sixth aspect, a chip packaging structure is provided, which includes the chip provided in the fifth aspect.
In a seventh aspect, a board is provided, where the board includes the chip packaging structure provided in the sixth aspect.
In an eighth aspect, an electronic device is provided, which includes the board card provided in the seventh aspect.
In some embodiments, the electronic device comprises a data processing apparatus, a robot, a computer, a printer, a scanner, a tablet, a smart terminal, a cell phone, a tachograph, a navigator, a sensor, a camera, a server, a cloud server, a camcorder, a projector, a watch, a headset, a mobile storage, a wearable device, a vehicle, a household appliance, and/or a medical device.
In some embodiments, the vehicle comprises an aircraft, a ship, and/or a vehicle; the household appliances comprise a television, an air conditioner, a microwave oven, a refrigerator, an electric cooker, a humidifier, a washing machine, an electric lamp, a gas stove and a range hood; the medical equipment comprises a nuclear magnetic resonance apparatus, a B-ultrasonic apparatus and/or an electrocardiograph.
The embodiment of the application has the following beneficial effects:
It can be seen that, in the embodiments of the present application, the computing apparatus is provided with a register unit and a storage medium that store scalar data and vector data respectively. The present application assigns a unit reading mode and a batch reading mode to these two memories; by assigning each kind of data a reading mode matched to its characteristics, the bandwidth can be well utilized and the bandwidth bottleneck is prevented from limiting the computation speed.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of an arithmetic unit according to an embodiment of the present application.
Fig. 3 is a schematic flowchart of a calculation method according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of an architecture of a pipeline stage according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of a pipeline stage according to an embodiment of the present application.
Fig. 6A and fig. 6B are schematic diagrams of formats of two instruction sets provided by an embodiment of the present application.
Fig. 7 is a schematic structural diagram of another computing device according to an embodiment of the present application.
Fig. 8 is a flowchart illustrating a computing device executing a vector derivation instruction according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The matrix referred to in the present application may specifically be an m × n matrix, where m and n are integers greater than or equal to 1; when m or n is 1, it is a 1 × n matrix or an m × 1 matrix, which may also be called a vector; when m and n are both 1, it can be regarded as a special 1 × 1 matrix. The matrix in the present application can be any of these three types; this is not repeated below.
The embodiment of the application provides a computing method which can be applied to a computing device. Fig. 1 is a schematic structural diagram of a possible computing device according to an embodiment of the present invention. The computing device shown in fig. 1 includes:
the storage medium 201 is used for storing a matrix (which may also be a vector). The storage medium can be a high-speed temporary storage memory which can support matrix data (vector data) with different lengths; the application temporarily stores necessary calculation data on a scratch pad Memory (Scratcchpad Memory), so that the arithmetic device can more flexibly and effectively support data with different lengths in the matrix operation process. The storage medium may also be an off-chip database, a database or other medium capable of storage, etc.
Register unit 202 is used to store scalar data, where the scalar data includes, but is not limited to: the storage address of matrix data or vector data (also referred to herein as matrix/vector) within the storage medium 201, and the scalars used in vector and matrix operations. In one embodiment, the register unit may be a scalar register file that provides the scalar registers needed during operations; the scalar registers store not only matrix addresses but also scalar data. It should be understood that a matrix address (i.e., the storage address of a matrix, such as its head address) is itself a scalar. When matrix and vector operations are involved, the operation unit needs to obtain not only the matrix address from the register unit but also the corresponding scalars, such as the number of rows and columns of the matrix, the matrix data type (which may also be called the data type), and the matrix dimension lengths (specifically, the length of a matrix row, the length of a matrix column, etc.).
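As a concrete illustration, the following is a minimal software sketch of such a scalar register file holding both plain scalars and matrix/vector metadata; the class and field names (MatrixDescriptor, start_address, and so on) are assumptions for illustration and do not come from the patent.

```python
from dataclasses import dataclass

@dataclass
class MatrixDescriptor:
    start_address: int  # head address of the matrix in storage medium 201
    rows: int           # number of matrix rows
    cols: int           # number of matrix columns
    dtype: str          # matrix data type, e.g. "fp16"

class ScalarRegisterFile:
    """Register unit 202 modeled as a scalar register file."""
    def __init__(self, num_registers: int = 32):
        self.regs = [0] * num_registers

    def write(self, reg_no: int, value):
        self.regs[reg_no] = value  # value: a scalar or a MatrixDescriptor

    def read(self, reg_no: int):
        return self.regs[reg_no]   # one entry per access ("unit reading mode")

# Example: register 3 describes a 4 x 8 fp16 matrix starting at address 0x1000.
rf = ScalarRegisterFile()
rf.write(3, MatrixDescriptor(start_address=0x1000, rows=4, cols=8, dtype="fp16"))
print(rf.read(3).start_address)  # 4096
```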
The arithmetic unit 203 (also referred to as a matrix arithmetic unit 203 in this application) is configured to obtain and execute a first arithmetic instruction. As shown in fig. 2, the arithmetic unit includes a plurality of arithmetic units, which include but are not limited to: a matrix addition operator 2031, a matrix multiplication operator 2032, a size comparison operator 2033 (which may also be a matrix comparison operator), a nonlinear operator 2034, and a matrix scalar multiplication operator 2035.
The method, as shown in fig. 3, includes the following steps:
step S301, the arithmetic unit 203 obtains a first arithmetic instruction, where the first arithmetic instruction is used to implement the operation of a vector and a matrix, and the first arithmetic instruction includes: the vector read indication required to execute the instruction.
In step S301, the vector read indication required for executing the instruction may take various forms. For example, in an optional technical solution of the present application, it may be the storage address of the required vector. As another example, in another optional technical solution of the present application, it may be an identifier of the required vector, and the identifier may be expressed in various forms, for example, the name of the vector, an identification code of the vector, or the register number or storage address of the vector in the register unit.
The following practical example describes the vector read indication required for executing the first operation instruction. Assume the vector operation formula is f(x) = A + B, where A and B are vectors. Then, in addition to carrying the vector operation formula, the first operation instruction may carry the storage addresses of the vectors required by the formula; specifically, for example, the storage address of A is 0000-0FFF and the storage address of B is 1000-1FFF. Alternatively, the identifiers of A and B may be carried; for example, the identifier of A is 0101 and the identifier of B is 1010.
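A minimal sketch of the two forms of read indication in the example above; the dictionary layout and the opcode name VADD are illustrative assumptions, not the patent's instruction encoding.

```python
# Form 1: the instruction carries the storage address ranges of A and B directly.
ADD_BY_ADDRESS = {
    "opcode": "VADD",  # hypothetical opcode for f(x) = A + B
    "operands": [
        {"kind": "address", "range": (0x0000, 0x0FFF)},  # vector A
        {"kind": "address", "range": (0x1000, 0x1FFF)},  # vector B
    ],
}

# Form 2: the instruction carries identifiers, resolved to storage addresses
# later through the register unit.
ADD_BY_IDENTIFIER = {
    "opcode": "VADD",
    "operands": [
        {"kind": "identifier", "id": 0b0101},  # identifier of A
        {"kind": "identifier", "id": 0b1010},  # identifier of B
    ],
}
```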
In step S302, the arithmetic unit 203 sends a read command to the storage medium 201 according to the vector read instruction.
The implementation method of the step S302 may specifically be:
if the vector reading instruction can be the memory address of the required vector, the arithmetic unit 203 sends the reading command for reading the memory address to the storage medium 201 and obtains the corresponding vector by using a batch reading method.
If the vector reading instruction can be an identifier of a required vector, the arithmetic unit 203 reads a storage address corresponding to the identifier from the register unit in a unit reading manner according to the identifier, and then the arithmetic unit 203 sends a reading command for reading the storage address to the storage medium 201 and obtains a corresponding vector in a batch reading manner.
The unit reading mode specifically means that a single unit of data, i.e., 1-bit data, is read each time. The unit reading mode (i.e., the 1-bit reading mode) is provided because scalar data occupies very little capacity; with the batch data reading mode, the amount of data read would easily exceed the required capacity and waste bandwidth. Scalar data is therefore read in the unit reading mode to reduce bandwidth waste.
In step S303, the arithmetic unit 203 reads the vector corresponding to the instruction in a batch reading manner, and executes the first operation instruction on the vector.
The batch reading mode in step S303 specifically means that multiple bits of data are read each time, for example 16, 32, or 64 bits; that is, a fixed number of bits is read per access regardless of the amount of data actually required. This batch reading mode is well suited to reading large blocks of data.
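For illustration only, here is a minimal software model of the two read modes; the memory layout and burst width are assumptions, and the real device reads bits rather than Python list items.

```python
MEMORY = list(range(100))  # stand-in for storage medium 201

def unit_read(address: int):
    """Unit reading mode: one smallest-granularity item per access,
    avoiding wasted bandwidth when fetching scalars."""
    return MEMORY[address]

def batch_read(address: int, burst: int = 32):
    """Batch reading mode: a fixed-size burst per access regardless of the
    amount actually needed, suited to long vectors."""
    return MEMORY[address:address + burst]

scalar = unit_read(5)    # e.g. a matrix head address held as a scalar
vector = batch_read(10)  # 32 consecutive elements of a stored vector
```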
The computing device of the technical solution provided in the present application is provided with a register unit and a storage medium, which store scalar data and vector data respectively. The present application assigns the unit reading mode and the batch reading mode to the two memories; by assigning vector data a reading mode matched to its characteristics, the bandwidth can be well utilized and the bandwidth bottleneck is prevented from affecting the computation speed. In addition, since the register unit stores scalar data, a scalar reading mode is provided for it, which further improves bandwidth utilization. The technical solution provided in the present application therefore makes good use of the bandwidth and avoids its influence on the computation speed, giving the advantages of fast computation and high efficiency.
Optionally, the executing the first operation instruction on the vector may specifically be:
the operation unit 203 may be implemented by a multi-stage pipeline stage, where the multi-stage pipeline stage may be set in advance by a user or the computing device, that is, it is designed fixedly. For example, the computing device described herein is designed with i-level pipeline stages. The following are specific embodiments:
the arithmetic unit can select and utilize the Kth arithmetic network topology according to the first arithmetic instruction1The selection arithmetic unit in the stage pipeline stage executes calculation on the vector to obtain a first result, and then the Kth result is selected and utilized2The selection arithmetic unit in the stage-pipeline stage executes calculation on the first result to obtain a second result, and the rest is done in the same way, and the Kth result is selectedjAnd the selection arithmetic unit in the stage pipeline stage executes calculation on the (i-1) th result to obtain an ith result until the operation of the first operation instruction is completed. Here, the ith result is an output result (specifically, an output matrix). Further, the arithmetic unit 203 may store the output result to the storage medium 201.
Wherein, the said is moreNumber of pipeline stages i, execution order of the pipeline stages i.e. selection KjA pipelined stage) and the KthjThe selection arithmetic units in the stage pipeline stages are determined according to the calculation topology of the first operation instruction, and i is a positive integer. Typically, i ═ 3. A respective operator may be provided in each pipeline stage, including, but not limited to, any one or combination of: matrix addition operators, matrix multiplication operators, matrix scalar multiplication operators, nonlinear operators, matrix comparison operators, and other matrix operators. That is, the number of fixed operators and fixed operators included in each pipeline stage may be set by a user side or the computing device side in a self-defined manner, and is not limited.
It should be understood that, in the above computing device of the present application, the K1-th, K2-th, …, Kj-th pipeline stages selected for each execution, and the selection operators within those stages, may be selected repeatedly; that is, the number of times each pipeline stage executes is not limited. The description below takes a vector derivation instruction as an example of the first operation instruction.
In a specific implementation, fig. 4 shows an architecture diagram of the pipeline stages. As shown in fig. 4, there may be a fully connected bypass design (i.e., the illustrated bypass circuit) between the i pipeline stages, which is used to select the pipeline stage and the operator within it (i.e., the selection operator in the present application) that are currently required, according to the computation network topology corresponding to the first operation instruction. Optionally, the bypass circuit is also used for data transmission among the pipeline stages; for example, the output result of the third pipeline stage may be forwarded to the first pipeline stage as an input, the original input may enter any one of the three pipeline stages, the output of any one of the three pipeline stages may be the final output of the arithmetic unit, and so on.
Taking i = 3, a three-stage pipeline, as an example, the arithmetic unit may select, through the bypass circuit, the execution order of the pipeline stages and the operators (which may also be referred to as arithmetic units) required in each stage. Fig. 5 shows a flow chart of the operation of the pipeline stages. Accordingly, the arithmetic unit performs the calculation of the first pipeline stage on the vector to obtain a first result, (optionally) inputs the first result to the second pipeline stage to perform the calculation of the second pipeline stage and obtain a second result, (optionally) inputs the second result to the third pipeline stage to perform the calculation of the third pipeline stage and obtain a third result, and (optionally) stores the third result in the storage medium 201.
The first pipeline stage includes, but is not limited to: matrix multiplication operators, and the like.
The second pipeline stage includes, but is not limited to: matrix addition operators, size comparison operators, and the like.
The third pipeline stage includes, but is not limited to: nonlinear operators, matrix scalar multiplication operators, and the like.
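To make the stage/operator selection concrete, the following is a hedged software model of the three-stage pipeline and bypass circuit; the operator implementations and the encoding of a computation topology as ordered (stage, operator) pairs are illustrative assumptions, not the patent's hardware.

```python
# Fixed operators per stage, mirroring the lists above.
PIPELINE = {
    1: {"matrix_mult": lambda a, b: [[sum(x * y for x, y in zip(row, col))
                                      for col in zip(*b)] for row in a]},
    2: {"matrix_add": lambda a, b: [[x + y for x, y in zip(ra, rb)]
                                    for ra, rb in zip(a, b)],
        "matrix_compare": lambda a, b: a == b},
    3: {"nonlinear": lambda a: [[max(0.0, x) for x in row] for row in a],  # e.g. ReLU
        "scalar_mult": lambda a, s: [[s * x for x in row] for row in a]},
}

def run(topology, *inputs):
    """topology: the (stage, operator) pairs the bypass circuit selects.
    Stages may be skipped or revisited; each result feeds the next selection."""
    result = inputs
    for stage, op in topology:
        result = (PIPELINE[stage][op](*result),)
    return result[0]

# Example: multiply in stage 1, then apply the stage-3 nonlinear operator,
# bypassing stage 2 entirely.
out = run([(1, "matrix_mult"), (3, "nonlinear")],
          [[1.0, -2.0], [3.0, 4.0]], [[1.0, 0.0], [0.0, 1.0]])
print(out)  # [[1.0, 0.0], [3.0, 4.0]]
```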
For vector calculation, taking a general-purpose processor as an example, the calculation steps may specifically be: the processor performs a calculation on the vector to obtain a first result and stores the first result in memory; it then reads the first result from memory, performs a second calculation to obtain a second result, and stores the second result in memory; it then reads the second result from memory, performs a third calculation to obtain a third result, and stores the third result in memory. As these steps show, when a general-purpose processor performs vector calculation it does not compute in pipeline stages; the computed data must be stored after each calculation and read again before the next one, so this scheme requires data to be stored and read repeatedly.
In another embodiment of the present application, the pipeline components may be freely combined, or a single pipeline stage may be adopted. For example, the second pipeline stage may be merged with the third pipeline stage, or the first, second, and third pipeline stages may all be merged, or each pipeline stage may be responsible for different operations. For example, the first pipeline stage is responsible for comparison operations and part of the multiplication operations, and the second pipeline stage is responsible for a combination of nonlinear operations and matrix scalar multiplication, etc. That is, the i pipeline stages designed in the present application support parallel connection, serial connection, and the combination of any number of pipeline stages into different permutations and combinations, which is not limited in the present application.
It should be noted that the operators in each pipeline stage of the computing apparatus are custom-set in advance and, once determined, cannot be changed; that is, the i pipeline stages may be designed as any permutation and combination of the operators, but once fixed they are not changed, and different operation instructions may be mapped onto differently designed i-stage pipeline devices. The computing device may adaptively increase/decrease the number of pipeline stages as required by a particular instruction. Finally, pipeline devices designed for different instructions may be combined together to form the computing device.
By adopting the above computing device (i.e., with the operators/operation components in each pipeline stage fixed by design), the following beneficial effects are achieved: besides improving the bandwidth, there is no extra selection-signal judgment overhead, no overlap or redundancy of the same operation components between different pipeline stages, high reusability, and small area.
Optionally, the computing device may further include: a cache unit 204, configured to cache the first operation instruction. When an instruction is executed, if it is the earliest instruction among the uncommitted instructions in the instruction cache unit, the instruction is committed; once committed, the change in device state caused by the operation of the instruction cannot be undone. In one embodiment, the instruction cache unit may be a reorder buffer.
Optionally, before step S301, the method may further include:
and determining whether the first operation instruction is associated with a second operation instruction before the first operation instruction, if so, extracting the first operation instruction from the cache unit and transmitting the first operation instruction to the operation unit 203 after the second operation instruction is completely executed. If the first operation instruction is not related to the instruction before the first operation instruction, the first operation instruction is directly transmitted to the operation unit.
The specific implementation method for determining whether the first operation instruction and the second operation instruction before the first operation instruction have an association relationship may be:
extracting a first storage address interval of a required vector in the first operation instruction according to the first operation instruction, extracting a second storage address interval of the required vector in the second operation instruction according to the second operation instruction, and determining that the first operation instruction and the second operation instruction have an association relation if the first storage address interval and the second storage address interval have an overlapped area. And if the first storage address interval and the second storage address interval are not overlapped, determining that the first operation instruction and the second operation instruction do not have an association relation.
The reason for using interval overlap rather than identity of storage regions as the association criterion is as follows. An overlapping area in the storage address intervals indicates that the first operation instruction and the second operation instruction access the same storage space. Since the space for storing a vector is relatively large, if identity of the storage region were used as the condition for judging association, the storage region accessed by the second operation instruction might merely contain the storage region accessed by the first operation instruction. For example, suppose the second operation instruction accesses the storage areas of the A vector, the B vector, and the C vector; if the A and B areas are adjacent, or the A and C areas are adjacent, the storage region accessed by the second operation instruction is the combined A-and-B area plus the C area, or the combined A-and-C area plus the B area. In this case, if the first operation instruction accesses the storage areas of the A vector and the D vector, the storage region it accesses cannot be literally identical to the region accessed by the second operation instruction, and under an identity-based condition the two instructions would be judged not associated; in practice, however, they are associated. The present application therefore judges association by whether the address intervals overlap, which avoids misjudgment in the above situation.
The following describes, by way of an actual example, which cases constitute an association relationship and which do not. It is assumed here that the vectors required by the first operation instruction are an A vector and a D vector, where the storage area of the A vector is [0001, 0FFF] and the storage area of the D vector is [A000, AFFF]; the vectors required by the second operation instruction are an A vector, a B vector, and a C vector, whose corresponding storage areas are [0001, 0FFF], [1000, 1FFF], and [B000, BFFF] respectively. The storage areas corresponding to the first operation instruction are therefore [0001, 0FFF] and [A000, AFFF], and those corresponding to the second operation instruction are [0001, 1FFF] and [B000, BFFF]; the storage areas of the second operation instruction overlap those of the first operation instruction in [0001, 0FFF], so the first operation instruction and the second operation instruction have an association relationship.
It is assumed here instead that the vectors required by the first operation instruction are an E vector and a D vector, where the storage area of the E vector is [C000, CFFF] and the storage area of the D vector is [A000, AFFF]; the vectors required by the second operation instruction are again an A vector, a B vector, and a C vector, with corresponding storage areas [0001, 0FFF], [1000, 1FFF], and [B000, BFFF]. The storage areas corresponding to the first operation instruction are therefore [C000, CFFF] and [A000, AFFF], and those corresponding to the second operation instruction are [0001, 1FFF] and [B000, BFFF]; since these storage areas have no overlapping region, the first operation instruction and the second operation instruction have no association relationship.
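A small sketch of this association check, replaying the two examples above; modeling the address areas as inclusive (start, end) pairs is an assumption for illustration.

```python
def intervals_overlap(a, b):
    """Inclusive intervals a and b overlap iff each starts before the other ends."""
    return a[0] <= b[1] and b[0] <= a[1]

def associated(first_areas, second_areas):
    """Two instructions are associated iff any of their address areas overlap."""
    return any(intervals_overlap(f, s) for f in first_areas for s in second_areas)

# First example: A and D versus the merged A/B area and C -> associated.
print(associated([(0x0001, 0x0FFF), (0xA000, 0xAFFF)],
                 [(0x0001, 0x1FFF), (0xB000, 0xBFFF)]))  # True

# Second example: E and D versus the same areas -> not associated.
print(associated([(0xC000, 0xCFFF), (0xA000, 0xAFFF)],
                 [(0x0001, 0x1FFF), (0xB000, 0xBFFF)]))  # False
```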
In this application, as shown in fig. 6A, an operation instruction includes an opcode and at least one operation field. The opcode indicates the function of the operation instruction, and the operation unit can perform different vector operations by identifying the opcode. The operation field indicates the data information of the operation instruction, where the data information may be an immediate value or a register number; for example, to obtain a vector, the vector start address and vector length can be read from the corresponding register according to the register number, and the vector stored at the corresponding address is then obtained from the storage medium according to the vector start address and the vector length.
That is, the first operation instruction may include: one or more operation domains and at least one opcode. For example, in the vector operation instruction shown in Table 1, register 0, register 1, register 2, register 3, and register 4 may each be an operation domain, each used to identify the number of a register (one or more registers). It should be understood that the number of registers in the operation domain is not limited, and each register is used to store data information associated with the operation instruction.
[Table 1: the vector operation instruction format, with the opcode followed by operation domains register 0 through register 4; the table is reproduced only as an image in the original publication.]
Fig. 6B is a schematic diagram of the format of another instruction (which may be a first operation instruction, also referred to as an operation instruction) provided in the present application. As shown in fig. 6B, the instruction includes at least two opcodes and at least one operation field, where the at least two opcodes include a first opcode and a second opcode (shown as opcode 1 and opcode 2 respectively). Opcode 1 indicates the type of the instruction (i.e., the class of instructions), and may specifically be an IO instruction, a logic instruction, an operation instruction, etc. Opcode 2 indicates the function of the instruction (i.e., the specific instruction within that class), such as a matrix operation instruction among the operation instructions (e.g., the matrix-multiply-vector instruction MMUL, the matrix inversion instruction MINV, etc.) or a vector operation instruction (e.g., the vector derivation instruction VDIER, etc.); this is not limited in the present application.
It should be understood that the format of the instructions may be custom-set either on the user side or on the computing device side. The opcode of an instruction may be designed with a fixed length, such as 8 bits or 16 bits. The instruction format shown in fig. 6A has the following beneficial features: the opcode occupies fewer bits and the decoding system is simple to design. The instruction format shown in fig. 6B has the following beneficial features: variable length and higher average decoding efficiency; when a class of instructions contains few specific instructions that are called frequently, the second opcode (i.e., opcode 2) can be designed to be short, which improves decoding efficiency. In addition, the readability and extensibility of the instructions are enhanced, and the encoding structure of the instruction set can be optimized.
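A hedged sketch contrasting the two layouts of fig. 6A and fig. 6B; every bit width and code value below is an assumption chosen for illustration, not the patent's encoding.

```python
def encode_single_opcode(opcode: int, fields: int) -> int:
    # Fig. 6A style: one fixed-length (here 8-bit) opcode, then operation fields.
    return (opcode << 24) | fields

def encode_dual_opcode(op1: int, op2: int, op2_bits: int, fields: int) -> int:
    # Fig. 6B style: opcode 1 selects the instruction class (IO / logic /
    # operation), opcode 2 the function within that class. Giving frequent
    # instructions a short opcode 2 raises the average decoding efficiency.
    return (op1 << (op2_bits + 16)) | (op2 << 16) | fields

word = encode_dual_opcode(op1=0b10, op2=0b01, op2_bits=2, fields=0x1234)
print(hex(word))  # 0x91234
```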
In the embodiment of the present application, the instruction set includes operation instructions with different functions, which may specifically be:
a Vector Derivation Instruction (VDIER) according to which the apparatus fetches matrix data of a set length from a specified address of a memory (preferably a scratch pad memory or a scalar register file), performs an operation on a vector (i.e., solves a jacobian matrix) in an arithmetic unit, and writes back the result. Preferably, and writes the results of the computations back to the specified address of the memory (preferably a scratch pad memory or scalar register file).
A vector generation diagonal matrix instruction (VDIAG) according to which the apparatus fetches matrix data of a set length from a specified address of a memory (preferably a scratch pad memory or a scalar register file), performs an operation of generating a diagonal matrix by mapping vector elements to matrix diagonal elements in an operation unit, and writes back the result. Preferably, and writes the results of the computations back to the specified address of the memory (preferably a scratch pad memory or scalar register file).
A vector multiply self-transpose generator matrix instruction (VMULT, also referred to herein as a vector multiply transpose instruction) according to which the apparatus fetches matrix data of a set length from a specified address of a memory (preferably a scratch pad memory or a scalar register file), performs an operation of vector multiply self-transpose generator matrix in an operation unit, and writes back the result. Preferably, and writes the results of the computations back to the specified address of the memory (preferably a scratch pad memory or scalar register file).
It should be understood that the operation/operation instructions proposed in the present application are mainly used to expand the vectors into matrix form, which facilitates the operation in the appropriate dimension between the matrices. To achieve the above functions, the arithmetic units designed in each pipeline stage include, but are not limited to, any one or a combination of more than one of the following: a matrix addition operator, a matrix multiplication operator, a matrix scalar multiplication operator, a nonlinear operator, and a matrix comparison operator.
The following exemplifies calculation of an operation instruction (i.e., a first operation instruction) according to the present application.
Taking the first operation instruction as the vector derivation instruction VDIER as an example: given two vectors, solve the derivative matrix of one vector with respect to the other, i.e., solve the Jacobian matrix. In a specific implementation, given a vector X and a vector Y, the derivative matrix of Y with respect to X is solved as follows.
$$\frac{\partial Y}{\partial X}=\begin{bmatrix}\dfrac{\partial y_1}{\partial x_1}&\cdots&\dfrac{\partial y_1}{\partial x_n}\\\vdots&\ddots&\vdots\\\dfrac{\partial y_m}{\partial x_1}&\cdots&\dfrac{\partial y_m}{\partial x_n}\end{bmatrix}$$
where $x_n$ is the n-th element of the vector X, $y_m$ is the m-th element of the vector Y, and m and n are both positive integers.
Accordingly, the instruction format of the vector derivation instruction VDIER is specifically:
[The instruction format table of the vector derivation instruction VDIER is reproduced only as an image in the original publication.]
With reference to the foregoing embodiments, the arithmetic unit may obtain the vector derivation instruction VDIER, decode it, and then, through the bypass circuit, select and use the nonlinear operator in the third pipeline stage to perform the derivation calculation on the vector (i.e., perform the matrix element partial derivative calculation), obtaining a first result (i.e., the output result). Optionally, the first result is stored in the storage medium.
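The patent specifies the Jacobian layout but not how Y depends on X; as a purely illustrative stand-in for what VDIER computes, the following sketch assumes Y = f(X) for a caller-supplied f and approximates the m × n matrix of partial derivatives by central finite differences.

```python
def jacobian(f, x, eps=1e-6):
    """Approximate J[i][j] = dy_i/dx_j for y = f(x), with x and y as lists."""
    y = f(x)
    m, n = len(y), len(x)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += eps
        xm[j] -= eps
        yp, ym = f(xp), f(xm)
        for i in range(m):
            J[i][j] = (yp[i] - ym[i]) / (2 * eps)
    return J

# Example: f(x) = (x0*x1, x0 + x1) has Jacobian [[x1, x0], [1, 1]].
print(jacobian(lambda v: [v[0] * v[1], v[0] + v[1]], [2.0, 3.0]))
# approximately [[3.0, 2.0], [1.0, 1.0]]
```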
Taking the first operation instruction as the vector-generates-diagonal-matrix instruction VDIAG as an example: the diagonal matrix of a given vector is computed. In a specific implementation, given a vector X, each element of the vector is taken in turn as a diagonal element of the diagonal matrix according to the following formula, yielding the corresponding diagonal matrix A.
$$A=\operatorname{diag}(x_1,x_2,\ldots,x_n)=\begin{bmatrix}x_1&0&\cdots&0\\0&x_2&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&x_n\end{bmatrix}$$
Correspondingly, the instruction format of the vector generation diagonal matrix instruction VDIAG is specifically:
[The instruction format table of the vector-generates-diagonal-matrix instruction VDIAG is reproduced only as an image in the original publication.]
With reference to the foregoing embodiments, the arithmetic unit may obtain the vector-generates-diagonal-matrix instruction VDIAG, decode it, and then, through the bypass circuit, select and use the matrix comparison operator in the second pipeline stage to perform element address comparison on the vector and write the corresponding vector element at every (n+1)-th position, obtaining a first result (i.e., the output result, a diagonal matrix). Optionally, the first result is stored in the storage medium.
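A minimal sketch of the VDIAG result, written element-wise to mirror the "write at every (n+1)-th position" description for a row-major n × n matrix; this illustrates the result, not the hardware datapath.

```python
def vdiag(x):
    n = len(x)
    flat = [0.0] * (n * n)         # row-major n x n matrix, initially zero
    for k, value in enumerate(x):
        flat[k * (n + 1)] = value  # diagonal entries sit n+1 positions apart
    return [flat[r * n:(r + 1) * n] for r in range(n)]

print(vdiag([1.0, 2.0, 3.0]))
# [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 3.0]]
```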
Taking the first operation instruction as the vector multiply transpose matrix instruction VMULT as an example: the product of one given vector and the transpose of another given vector is computed. In a specific implementation, given two vectors X and Y, the output matrix A is computed according to the following formula. X and Y may each be a row vector or a column vector, without limitation.
$$A = x \times y^{T}$$
Correspondingly, the instruction format of the vector multiply transpose matrix instruction VMULT is specifically:
[The instruction format table of the vector multiply transpose matrix instruction VMULT is reproduced only as an image in the original publication.]
With reference to the foregoing embodiments, the arithmetic unit may obtain the vector multiply transpose matrix instruction VMULT, decode it, and then, through the bypass circuit, select and use the matrix multiplication operator in the first pipeline stage to perform the vector multiplication calculation, obtaining a first result (i.e., the output result). Optionally, the first result is stored in the storage medium.
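A minimal sketch of the VMULT result A = x × yᵀ as an outer product: entry A[i][j] is x[i] · y[j]. Purely illustrative.

```python
def vmult(x, y):
    """Outer product: treat x as a column vector and y^T as a row vector."""
    return [[xi * yj for yj in y] for xi in x]

print(vmult([1.0, 2.0], [3.0, 4.0, 5.0]))
# [[3.0, 4.0, 5.0], [6.0, 8.0, 10.0]]
```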
It should be noted that the fetching and decoding of the various operation instructions will be described in detail later. It should be understood that implementing the computation of each operation instruction (such as the vector derivation instruction VDIER) with the structure of the above computing apparatus yields the following beneficial effects: the vector scale is variable, which reduces the instruction count and simplifies the use of instructions; matrices with different storage formats (row-major and column-major order) can be processed, avoiding the cost of converting matrices; and matrices stored at certain intervals are supported, avoiding the execution overhead of converting the matrix storage format and the space occupied by storing intermediate results.
The set length in the above operation instruction (i.e., the vector operation instruction / first operation instruction) can be set by the user. In an alternative embodiment, the user may set the set length to one value; in practical applications, the user may also set it to multiple values. The specific value(s) and the number of set lengths are not limited in the embodiments of the present application. To make the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and specific embodiments.
Referring to fig. 7, fig. 7 is a block diagram of another computing device 50 according to an embodiment of the present disclosure. As shown in fig. 7, the computing device 50 includes: a storage medium 501, a register unit 502 (preferably, a scalar data storage unit, a scalar register unit), an operation unit 503 (may also be referred to as a matrix operation unit 503), and a control unit 504;
a storage medium 501 for storing a matrix (which may also be a vector);
a scalar data storage unit 502 for storing scalar data including at least: a storage address of the vector within the storage medium;
a control unit 504, configured to control the arithmetic unit to obtain a first arithmetic instruction, where the first arithmetic instruction is used to implement an operation between a vector and a matrix, and the first arithmetic instruction includes a vector read instruction required to execute the instruction;
an arithmetic unit 503, configured to send a read command to the storage medium according to the vector read instruction; and executing the first operation instruction on the vector according to the vector corresponding to the vector reading instruction read by adopting a batch reading mode.
Optionally, the vector reading indication includes: a memory address of a vector required by the instruction or an identification of a vector required by the instruction.
Optionally, if the vector read indication is the identification of a vector required by the instruction,
a control unit 504, configured to control the arithmetic unit to read, according to the identifier, the storage address corresponding to the identifier from the register unit in a unit reading manner, control the arithmetic unit to send a read command for reading the storage address to the storage medium, and obtain the vector in a batch reading manner.
Optionally, the operation unit 503 is specifically configured to execute the first operation instruction on the vector in a multi-level pipeline computing manner.
Optionally, each pipeline stage in the multiple pipeline stages includes a preset fixed operator, and the fixed operators in each pipeline stage are different;
an operation unit 503, specifically configured to, according to the computation topology of the first operation instruction, perform calculation on the vector with the selection operator in the K1-th pipeline stage to obtain a first result, then input the first result to the selection operator in the K2-th pipeline stage to perform calculation and obtain a second result, and so on, until the (i-1)-th result is input to the selection operator in the Kj-th pipeline stage to perform calculation and obtain the i-th result; and to input the i-th result to the storage medium for storage;
wherein Kj denotes any one of the i pipeline stages, j ≤ i, and j and i are positive integers; the number i of the multiple pipeline stages, the selected execution order K1, K2, …, Kj of the multiple pipeline stages, and the selection operator in each Kj-th pipeline stage are determined according to the computation topology of the first operation instruction, and each selection operator is one of the fixed operators.
Optionally, the multi-stage pipeline stage is a three-stage pipeline stage, the first stage pipeline stage includes a preset matrix multiplication operator, the second stage pipeline stage includes a preset matrix addition operator and a matrix comparison operator, and the third stage pipeline stage includes a preset nonlinear operator and a matrix scalar multiplication operator; the first operation instruction is a vector derivative instruction VDIER,
the operation unit 503 is configured to input the vector to the nonlinear operator in the third pipeline stage to perform the matrix element partial derivative calculation, obtaining a first result; and to input the first result to the storage medium for storage.
Optionally, the multi-stage pipeline stage is a three-stage pipeline stage, the first stage pipeline stage includes a preset matrix multiplication operator, the second stage pipeline stage includes a preset matrix addition operator and a matrix comparison operator, and the third stage pipeline stage includes a preset nonlinear operator and a matrix scalar multiplication operator; the first operation instruction generates a diagonal matrix instruction VDIAG for a vector,
an operation unit 503, configured to input the vector to the matrix comparison operator in the second pipeline stage, which performs matrix element address comparison and writes the corresponding vector elements at intervals of n+1 positions to obtain a first result; and to input the first result to the storage medium for storage.
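The interval of n+1 follows from row-major addressing: in a flat n-by-n buffer, consecutive diagonal elements sit exactly n+1 addresses apart, so writing one vector element every n+1 positions yields the diagonal matrix. A minimal sketch (the function name is hypothetical):

```python
def vdiag_like(vector):
    n = len(vector)
    flat = [0.0] * (n * n)            # zero-initialised n-by-n output matrix
    for i, x in enumerate(vector):
        flat[i * (n + 1)] = x         # one write every n + 1 addresses
    return flat

# vdiag_like([1.0, 2.0, 3.0]) -> [1,0,0, 0,2,0, 0,0,3] as a flat buffer
```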
Optionally, the multi-stage pipeline comprises three pipeline stages: the first pipeline stage includes a preset matrix multiplication operator, the second pipeline stage includes a preset matrix addition operator and a matrix comparison operator, and the third pipeline stage includes a preset nonlinear operator and a matrix scalar multiplication operator. Where the first operation instruction is the vector multiply transpose matrix instruction VMULT,
an operation unit 503, configured to input the vector to the matrix multiplication operator in the first pipeline stage to perform the vector multiplication and obtain a first result; and to input the first result to the storage medium for storage.
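One plausible semantics for VMULT, given its operands (vectors X and Y and matrix A), is the outer product A = X · Y-transpose; this is an assumption, shown only to illustrate the role of the first-stage matrix multiplication operator:

```python
def vmult_like(x, y):
    # assumed semantics: A[i][j] = X[i] * Y[j], an N-by-M result matrix
    return [[xi * yj for yj in y] for xi in x]

# vmult_like([1.0, 2.0], [3.0, 4.0, 5.0]) -> [[3,4,5], [6,8,10]] (2 x 3)
```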
Optionally, the computing apparatus further includes:
a cache unit 505, configured to cache an operation instruction to be executed;
the control unit 504 is configured to cache an operation instruction to be executed in the cache unit 505.
Optionally, the control unit 504 is configured to determine whether an association relationship exists between the first operation instruction and a second operation instruction before the first operation instruction, if the association relationship exists between the first operation instruction and the second operation instruction, cache the first operation instruction in the cache unit, and after the second operation instruction is executed, extract the first operation instruction from the cache unit and transmit the first operation instruction to the operation unit;
the determining whether the first operation instruction and a second operation instruction before the first operation instruction have an association relationship includes:
extracting, from the first operation instruction, a first storage address interval of the vector it requires, and extracting, from the second operation instruction, a second storage address interval of the vector it requires; if the first storage address interval and the second storage address interval overlap, determining that the first operation instruction and the second operation instruction have an association relationship, and if they do not overlap, determining that the first operation instruction and the second operation instruction have no association relationship.
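The association test reduces to an interval-overlap check. A minimal sketch follows; the representation of an interval as an inclusive (start, end) pair is an assumption:

```python
def has_association(first_interval, second_interval):
    a_start, a_end = first_interval
    b_start, b_end = second_interval
    return a_start <= b_end and b_start <= a_end   # non-empty overlap

# has_association((0x100, 0x1FF), (0x180, 0x27F)) -> True: the first
# instruction waits in the cache unit until the second has executed.
```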
Optionally, the control unit 504 may be configured to obtain an operation instruction from the instruction cache unit, process the operation instruction, and provide it to the operation unit. The control unit 504 may be divided into three modules: an instruction fetching module 5031, a decoding module 5032 and an instruction queue module 5033,
the instruction fetching module 5031 is configured to obtain an operation instruction from the instruction cache unit;
a decoding module 5032, configured to decode the obtained operation instruction;
the instruction queue module 5033 is configured to store the decoded operation instructions in order and, in view of possible dependencies among the registers of different instructions, to hold a decoded instruction and issue it once its register dependencies are satisfied.
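A sketch of this in-order issue discipline, in which a queued instruction is held while any earlier, still-executing instruction writes a register it reads; all structures below are illustrative, not the patented microarchitecture:

```python
from collections import deque

class InstructionQueue:
    def __init__(self):
        self.queue = deque()           # (instr, reads, writes), decode order
        self.in_flight = []            # write sets of issued, unfinished ops

    def push(self, instr, reads, writes):
        self.queue.append((instr, set(reads), set(writes)))

    def try_issue(self):
        if not self.queue:
            return None
        instr, reads, writes = self.queue[0]
        if any(reads & w for w in self.in_flight):
            return None                # a register dependency is unsatisfied
        self.queue.popleft()
        self.in_flight.append(writes)  # track until the instruction retires
        return instr

    def retire(self, writes):
        self.in_flight.remove(writes)  # dependency cleared for later issues
```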
Referring to fig. 8, fig. 8 is a flowchart illustrating the execution of an operation instruction by a computing device according to an embodiment of the present invention. As shown in fig. 8, the hardware structure of the computing device is the structure shown in fig. 7, in which the storage medium is exemplified by a scratch pad memory. The process of executing the vector derivation instruction VDIER includes:
in step S601, the computing apparatus controls the instruction fetching module to fetch a vector derivative instruction, and sends the vector derivative instruction to the decoding module.
In step S602, the decoding module decodes the vector derivative instruction and sends the vector derivative instruction to the instruction queue.
In step S603, in the instruction queue, the vector derivation instruction needs to obtain, from the scalar register file, data in scalar registers corresponding to five operation domains in the instruction, where the data includes an input vector X address, an input vector X length, an output matrix address, an input vector Y address, and an input vector Y length.
In step S604, the control unit determines whether the vector derivation instruction has an association relationship with the operation instructions before it, stores the vector derivation instruction in the cache unit if an association relationship exists, and transmits it to the operation unit if no association relationship exists.
In step S605, the operation unit fetches the required data from the scratch pad memory according to the data in the scalar registers corresponding to the five operation domains, and then completes the vector derivation operation in the operation unit.
In step S606, after the operation unit completes the operation, the result is written to the designated address in the memory (preferably, the scratch pad memory or the scalar register file), and the vector derivation instruction in the reorder buffer is committed.
Optionally, in step S605, when the operation unit performs the vector derivation operation, the calculation device may use a non-linear operator to perform the matrix element partial derivation calculation to obtain the corresponding derivative matrix.
In a specific implementation, after the decoding module decodes the vector derivation instruction, a bypass circuit, driven by the control signal generated during decoding, selects to input the vector acquired in S603 to the nonlinear operator in the third pipeline stage, which performs the element-wise partial derivative calculation to obtain a first result; the control signal then identifies the first result as the output result. Accordingly, the first result is written back as the output of the operation unit or transmitted directly to the output.
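The bypass selection can be pictured as a per-stage enable mask derived from the decoded control signal: stages whose enable bit is clear simply pass their input through unchanged. A sketch, with the boolean-mask encoding of the control signal as an assumption:

```python
def run_with_bypass(stages, control_signal, vector):
    """stages: per-stage callables; control_signal: per-stage booleans,
    True = compute in this stage, False = bypass it unchanged."""
    data = vector
    for stage_fn, active in zip(stages, control_signal):
        data = stage_fn(data) if active else data
    return data                        # identified as the output result

# For VDIER only the third stage computes: control = (False, False, True)
```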
The operation instruction in fig. 8 is exemplified by the vector derivation instruction; in practical applications, the vector derivation instruction in the embodiment shown in fig. 8 may be replaced by the vector generation diagonal matrix instruction VDIAG, the vector multiply transpose matrix instruction VMULT, or other matrix operation instructions, which are not described again here.
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute some or all of the steps of any implementation described in the above method embodiments.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any implementation described in the above method embodiments.
An embodiment of the present application further provides an acceleration apparatus, including: a memory storing executable instructions, and a processor configured to execute the executable instructions in the memory, where, when executing the instructions, the processor operates according to the embodiments described in the above method embodiments.
The processor may be a single processing unit or may include two or more processing units. In addition, the processor may include a general-purpose processor (CPU) or a graphics processor (GPU), and may further include a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC) to set up and operate the neural network. The processor may also include on-chip memory (i.e., memory within the processing device) for caching.
In some embodiments, a chip is also disclosed, which includes the neural network processor for performing the above method embodiments.
In some embodiments, a chip packaging structure is disclosed, which includes the above chip.
In some embodiments, a board card is disclosed, which includes the above chip package structure.
In some embodiments, an electronic device is disclosed that includes the above board card.
The electronic device comprises a data processing device, a robot, a computer, a printer, a scanner, a tablet computer, an intelligent terminal, a mobile phone, a vehicle data recorder, a navigator, a sensor, a camera, a server, a cloud server, a video camera, a projector, a watch, an earphone, a mobile storage, a wearable device, a vehicle, a household appliance, and/or a medical device.
The vehicle comprises an airplane, a ship and/or a vehicle; the household appliances comprise a television, an air conditioner, a microwave oven, a refrigerator, an electric cooker, a humidifier, a washing machine, an electric lamp, a gas stove and a range hood; the medical equipment comprises a nuclear magnetic resonance apparatus, a B-ultrasonic apparatus and/or an electrocardiograph.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; for instance, the division of the units is only one type of logical-function division, and other divisions are possible in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical or take other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in whole or in part in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, and a magnetic or optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (7)

1. A computing method applied in a computing apparatus including a storage medium, a register unit, and a matrix operation unit, the method comprising:
the computing device controls the matrix operation unit to obtain a first operation instruction, wherein the first operation instruction is used for implementing an operation between a vector and a matrix, the first operation instruction comprises a vector read instruction required to execute the instruction, the required vector is at least one vector, and the at least one vector is of the same length or of different lengths;
the computing device controls the matrix operation unit to send a read command to the storage medium according to the vector read instruction;
the computing device controls the matrix operation unit to read the vector corresponding to the vector read instruction from the storage medium in a batch reading mode and to execute the first operation instruction on the vector in a multi-stage pipeline computing manner;
the first operation instruction comprises any one of: a vector derivation instruction VDIER, a vector generation diagonal matrix instruction VDIAG, and a vector multiply transpose matrix instruction VMULT;
the VDIER includes: an opcode and an operation domain, the operation domain comprising: TYPE, N, X', INCX, M, Y', INCY, A'; where TYPE is the data type involved in the vector operation, N is the length of the vector X, X' is the first address of the vector X, INCX is the interval between elements of the vector X, M is the length of the vector Y, Y' is the first address of the vector Y, INCY is the interval between elements of the vector Y, and A' is the first address of the matrix A;
the VDIAG includes: an opcode and an operation domain, the operation domain comprising: TYPE, X', INCX, A', LDA; where TYPE is the data type involved in the vector operation, X' is the first address of the vector X, INCX is the interval between elements of the vector X, A' is the first address of the matrix A, and LDA indicates the row-major or column-major order of the matrix A;
the VMULT includes: an opcode and an operation domain, the operation domain comprising: TYPE, N, X', INCX, M, Y', INCY, A', LDA; where TYPE is the data type involved in the vector operation, N is the length of the vector X, X' is the first address of the vector X, INCX is the interval between elements of the vector X, M is the length of the vector Y, Y' is the first address of the vector Y, INCY is the interval between elements of the vector Y, A' is the first address of the matrix A, and LDA indicates the row-major or column-major order of the matrix A.
2. The method according to claim 1, wherein each pipeline stage in the multiple pipeline stages comprises preset fixed operators, and the fixed operators in each pipeline stage are different;
the executing the first operation instruction on the vector in a multi-stage pipeline computing manner comprises:
the computing device controls the matrix operation unit to, according to the computation topology of the first operation instruction, compute the vector with the selected operator in the K1-th pipeline stage to obtain a first result, input the first result to the selected operator in the K2-th pipeline stage to obtain a second result, and so on, until the (i-1)-th result is input to the selected operator in the Kj-th pipeline stage to obtain the i-th result;
inputting the i-th result into the storage medium for storage;
where Kj is any one of the i pipeline stages, j is less than or equal to i, and j and i are positive integers; the number i of pipeline stages, the selected execution order Kj of the pipeline stages, and the selected operator in the Kj-th pipeline stage are all determined by the computation topology of the first operation instruction, the selected operators being operators among the fixed operators.
3. The method of claim 1, wherein each of the multiple pipeline stages comprises fixed operators and the number of fixed operators is custom set by a user side or the computing device side; or, the fixed arithmetic unit in each pipeline stage in the multi-stage pipeline stages comprises any one or combination of more of the following items: a matrix addition operator, a matrix multiplication operator, a matrix scalar multiplication operator, a nonlinear operator, and a matrix comparison operator.
4. A computing device, comprising a storage medium, a register unit, a matrix operation unit, and a controller unit;
the storage medium is used for storing vectors;
the register unit is configured to store scalar data, where the scalar data at least includes: a storage address of the vector within the storage medium;
the controller unit is configured to control the matrix operation unit to obtain a first operation instruction, where the first operation instruction is used to implement an operation between a vector and a matrix, the first operation instruction includes a vector read instruction required to execute the instruction, the required vector is at least one vector, and the at least one vector is of the same length or of different lengths;
the matrix operation unit is configured to send a read command to the storage medium according to the vector read instruction, to read the vector corresponding to the vector read instruction in a batch reading mode, and to execute the first operation instruction on the vector in a multi-stage pipeline computing manner;
the first operation instruction comprises any one of: a vector derivation instruction VDIER, a vector generation diagonal matrix instruction VDIAG, and a vector multiply transpose matrix instruction VMULT;
the VDIER includes: an opcode and an operation domain, the operation domain comprising: TYPE, N, X', INCX, M, Y', INCY, A'; where TYPE is the data type involved in the vector operation, N is the length of the vector X, X' is the first address of the vector X, INCX is the interval between elements of the vector X, M is the length of the vector Y, Y' is the first address of the vector Y, INCY is the interval between elements of the vector Y, and A' is the first address of the matrix A;
the VDIAG includes: an opcode and an operation domain, the operation domain comprising: TYPE, X', INCX, A', LDA; where TYPE is the data type involved in the vector operation, X' is the first address of the vector X, INCX is the interval between elements of the vector X, A' is the first address of the matrix A, and LDA indicates the row-major or column-major order of the matrix A;
the VMULT includes: an opcode and an operation domain, the operation domain comprising: TYPE, N, X', INCX, M, Y', INCY, A', LDA; where TYPE is the data type involved in the vector operation, N is the length of the vector X, X' is the first address of the vector X, INCX is the interval between elements of the vector X, M is the length of the vector Y, Y' is the first address of the vector Y, INCY is the interval between elements of the vector Y, A' is the first address of the matrix A, and LDA indicates the row-major or column-major order of the matrix A.
5. A chip characterized in that it comprises a computing device as claimed in claim 4 above.
6. An electronic device, characterized in that it comprises a chip as claimed in claim 5 above.
7. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to carry out the method according to any one of claims 1-3.
CN201711362406.4A 2017-12-15 2017-12-15 Calculation method and related product Active CN108009126B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711362406.4A CN108009126B (en) 2017-12-15 2017-12-15 Calculation method and related product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711362406.4A CN108009126B (en) 2017-12-15 2017-12-15 Calculation method and related product

Publications (2)

Publication Number Publication Date
CN108009126A CN108009126A (en) 2018-05-08
CN108009126B true CN108009126B (en) 2021-02-09

Family

ID=62059714

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711362406.4A Active CN108009126B (en) 2017-12-15 2017-12-15 Calculation method and related product

Country Status (1)

Country Link
CN (1) CN108009126B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108388446A (en) 2018-02-05 2018-08-10 上海寒武纪信息科技有限公司 Computing module and method
CN111062483A (en) * 2018-10-16 2020-04-24 上海寒武纪信息科技有限公司 Operation method, operation device, computer equipment and storage medium
CN111353124A (en) * 2018-12-20 2020-06-30 上海寒武纪信息科技有限公司 Operation method, operation device, computer equipment and storage medium
CN111124497B (en) * 2018-10-11 2022-03-29 上海寒武纪信息科技有限公司 Operation method, operation device, computer equipment and storage medium
CN111047030A (en) * 2018-10-11 2020-04-21 上海寒武纪信息科技有限公司 Operation method, operation device, computer equipment and storage medium
CN111275197B (en) * 2018-12-05 2023-11-10 上海寒武纪信息科技有限公司 Operation method, device, computer equipment and storage medium
CN111382390B (en) * 2018-12-28 2022-08-12 上海寒武纪信息科技有限公司 Operation method, device and related product
CN111831543A (en) * 2019-04-18 2020-10-27 中科寒武纪科技股份有限公司 Data processing method and related product
US11847554B2 (en) 2019-04-18 2023-12-19 Cambricon Technologies Corporation Limited Data processing method and related products
CN110780921B (en) * 2019-08-30 2023-09-26 腾讯科技(深圳)有限公司 Data processing method and device, storage medium and electronic device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9558156B1 (en) * 2015-11-24 2017-01-31 International Business Machines Corporation Sparse matrix multiplication using a single field programmable gate array module
CN107704433A (en) * 2016-01-20 2018-02-16 南京艾溪信息科技有限公司 A kind of matrix operation command and its method
CN106250103A (en) * 2016-08-04 2016-12-21 东南大学 A kind of convolutional neural networks cyclic convolution calculates the system of data reusing

Also Published As

Publication number Publication date
CN108009126A (en) 2018-05-08

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100000 room 644, No. 6, No. 6, South Road, Beijing Academy of Sciences

Applicant after: Zhongke Cambrian Technology Co., Ltd

Address before: 100000 room 644, No. 6, No. 6, South Road, Beijing Academy of Sciences

Applicant before: Beijing Zhongke Cambrian Technology Co., Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20201214

Address after: Room 611-194, R & D center building, China (Hefei) international intelligent voice Industrial Park, 3333 Xiyou Road, hi tech Zone, Hefei City, Anhui Province

Applicant after: Anhui Cambrian Information Technology Co., Ltd

Address before: 100000 room 644, research complex, 6 South Road, Haidian District Science Academy, Beijing.

Applicant before: Zhongke Cambrian Technology Co.,Ltd.

GR01 Patent grant