WO2021223645A1 - Data processing method and device and related products - Google Patents

Data processing method and device and related products

Info

Publication number
WO2021223645A1
WO2021223645A1 · PCT/CN2021/090676 · CN2021090676W
Authority
WO
WIPO (PCT)
Prior art keywords
data
vector
address
expansion
destination
Prior art date
Application number
PCT/CN2021/090676
Other languages
English (en)
French (fr)
Other versions
WO2021223645A9 (zh)
Inventor
马旭研
吴健华
刘少礼
葛祥轩
刘瀚博
张磊
Original Assignee
安徽寒武纪信息科技有限公司 (Anhui Cambricon Information Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 安徽寒武纪信息科技有限公司
Priority to EP21800900.9A priority Critical patent/EP4148561A4/en
Priority to US17/619,781 priority patent/US20240126553A1/en
Publication of WO2021223645A1 publication Critical patent/WO2021223645A1/zh
Publication of WO2021223645A9 publication Critical patent/WO2021223645A9/zh

Classifications

    • GPHYSICS — G06 COMPUTING; CALCULATING OR COUNTING — G06F ELECTRIC DIGITAL DATA PROCESSING — G06F9/00 Arrangements for program control, e.g. control units — G06F9/06 using stored programs, i.e. using an internal store of processing equipment to receive or retain programs — G06F9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30032 Movement instructions, e.g. MOVE, SHIFT, ROTATE, SHUFFLE
    • G06F9/30036 Instructions to perform operations on packed data, e.g. vector, tile or matrix operations
    • G06F9/30018 Bit or string instructions
    • G06F9/3004 Arrangements for executing specific machine instructions to perform operations on memory
    • G06F9/30043 LOAD or STORE instructions; Clear instruction
    • G06F9/30145 Instruction analysis, e.g. decoding, instruction word fields
    • G06F9/30185 Instruction operation extension or modification according to one or more bits in the instruction, e.g. prefix, sub-opcode
    • G06F9/3455 Addressing or accessing the instruction operand or the result; addressing modes of multiple operands or results using stride

Definitions

  • the present disclosure relates to the field of computer technology, and in particular to a data processing method and device and related products.
  • a data processing method including:
  • when the decoded processing instruction is a vector expansion instruction, determining the source data address, destination data address, and expansion parameters of the data corresponding to the processing instruction;
  • expanding the first vector data of the source data address according to the expansion parameter to obtain expanded second vector data;
  • storing the second vector data to the destination data address,
  • wherein the source data address and the destination data address include consecutive data addresses.
  • a data processing device including:
  • An address determination module configured to determine the source data address, destination data address, and extended parameters of the data corresponding to the processing instruction when the decoded processing instruction is a vector expansion instruction;
  • a data expansion module configured to expand the first vector data of the source data address according to the expansion parameter to obtain expanded second vector data
  • a data storage module for storing the second vector data to the destination data address
  • the source data address and the destination data address include consecutive data addresses.
  • an artificial intelligence chip is provided, and the chip includes the above-mentioned data processing device.
  • an electronic device including the above-mentioned artificial intelligence chip.
  • a board card comprising: a storage device, an interface device, a control device, and the aforementioned artificial intelligence chip;
  • the artificial intelligence chip is connected to the storage device, the control device, and the interface device respectively; the storage device is used to store data;
  • the interface device is used to implement data transmission between the artificial intelligence chip and external equipment
  • the control device is used to monitor the state of the artificial intelligence chip.
  • vector expansion and storage can be realized through the expansion parameter in the vector expansion instruction, and the expanded vector data can be obtained, thereby simplifying the processing process and reducing the data overhead.
  • Fig. 1 shows a schematic diagram of a processor of a data processing method according to an embodiment of the present disclosure.
  • Fig. 2 shows a flowchart of a data processing method according to an embodiment of the present disclosure.
  • Fig. 3 shows a block diagram of a data processing device according to an embodiment of the present disclosure.
  • Fig. 4 shows a structural block diagram of a board according to an embodiment of the present disclosure.
  • the term “if” can be interpreted as “when” or “once” or “in response to determination” or “in response to detection” depending on the context.
  • the phrase “if determined” or “if [the described condition or event] is detected” can be interpreted, depending on the context, as “once determined”, “in response to determining”, “once [the described condition or event] is detected”, or “in response to detecting [the described condition or event]”.
  • the data processing method can be applied to a processor, and the processor can be a general-purpose processor, such as a CPU (Central Processing Unit), or an artificial intelligence processor (IPU) for performing artificial intelligence operations.
  • Artificial intelligence operations can include machine learning operations, brain-like operations, and so on. Among them, machine learning operations include neural network operations, k-means operations, and support vector machine operations.
  • the artificial intelligence processor may include, for example, one or a combination of a GPU (Graphics Processing Unit), an NPU (Neural-Network Processing Unit), a DSP (Digital Signal Processing) unit, and an FPGA (Field-Programmable Gate Array) chip.
  • the processor mentioned in the present disclosure may include multiple processing units, and each processing unit can independently run the various tasks assigned to it, such as convolution tasks, pooling tasks, or fully connected tasks.
  • the present disclosure does not limit the processing unit and the tasks run by the processing unit.
  • Fig. 1 shows a schematic diagram of a processor of a data processing method according to an embodiment of the present disclosure.
  • the processor 100 includes multiple processing units 101 and a storage unit 102.
  • the multiple processing units 101 are used to execute instruction sequences, and the storage unit 102 is used to store data and may include random access memory (RAM) and a register file.
  • the multiple processing units 101 in the processor 100 can not only share part of the storage space, for example, share part of the RAM storage space and the register file, but also have their own storage space at the same time.
  • Fig. 2 shows a flowchart of a data processing method according to an embodiment of the present disclosure. As shown in Figure 2, the method includes:
  • step S11 when the decoded processing instruction is a vector expansion instruction, determine the source data address, destination data address, and expansion parameters of the data corresponding to the processing instruction;
  • step S12 expand the first vector data of the source data address according to the expansion parameter to obtain expanded second vector data
  • step S13 store the second vector data to the destination data address
  • the source data address and the destination data address include consecutive data addresses.
  • vector expansion and storage can be realized through the expansion parameter in the vector expansion instruction, and the expanded vector data can be obtained, thereby simplifying the processing process and reducing the data overhead.
  • the method further includes: decoding the received processing instruction to obtain the decoded processing instruction.
  • the decoded processing instruction includes an operation code, and the operation code is used to instruct to perform vector expansion processing.
  • the processor when it receives a processing instruction, it can decode the received processing instruction to obtain a decoded processing instruction.
  • the decoded processing instruction includes an operation code and an operation field.
  • the operation code is used to indicate the processing type of the processing instruction, and the operation field is used to indicate the data to be processed and data parameters. If the operation code of the decoded processing instruction indicates vector extension processing, the instruction is a vector extension instruction (Vector Extension).
  • the source data address, destination data address, and extension parameters of the data corresponding to the processing instruction can be determined in step S11.
  • the data corresponding to the processing instruction is the first vector data indicated by the operation domain of the processing instruction, and the first vector data includes a plurality of data points.
  • the source data address represents the current storage addresses of the multiple data points in the data storage space and is a continuous data address; the destination data address represents the storage addresses for the multiple data points of the expanded second vector data and is likewise a continuous data address.
  • the data storage space where the source data address is located and the data storage space where the destination data address is located may be the same or different, which is not limited in the present disclosure.
  • the processor may read the multiple data points of the first vector data from the source data address in step S12, and expand each of the data points according to the expansion parameter to obtain the multiple data points of the expanded second vector data, thereby realizing vector expansion.
  • multiple data points of the expanded second vector data may be sequentially stored in the destination data address to obtain the second vector data, thereby completing the vector expansion process.
  • the original vector can be expanded into a new vector through the vector expansion instruction and stored in a continuous address space, thereby simplifying the processing process and reducing the data overhead.
  • step S11 may include: determining the source data addresses of the multiple data points according to the source data base address and the data size of the multiple data points of the first vector data in the operation field of the processing instruction.
  • the vector expansion instruction may have an operation field for indicating the parameters of the vector data to be expanded.
  • the operation field may include, for example, the source data base address (Source Data Base Address), the destination data base address (Destination Data Base Address), the size of a single data point (Single Point Data Size), the number of single data points (Single Point Data Number), expansion parameters, and so on.
  • the source data base address may indicate the current base address of the multiple data points of the first vector data in the data storage space; the destination data base address may indicate the base address of the multiple data points of the expanded second vector data in the data storage space; the size of a single data point may represent the data size of each data point of the first vector data and the second vector data (for example, 4 or 8 bits); the number of single data points may represent the number N of data points of the first vector data (N is an integer greater than 1);
  • the expansion parameter may indicate the expansion mode of the N data points of the first vector data.
  • the present disclosure does not limit the number and types of specific parameters in the operation domain of the vector extension instruction.
  • the operation field of the vector extension instruction may include the source data base address (Source Data Base Address) and the size of a single data point (Single Point Data Size). Since the source data address is a continuous data address, the source data address of each data point can be determined directly according to the data size of the data point and the serial number of each data point.
  • the source data address of the nth data point can be expressed as:
  • Single Point Src Addr[n] = Source Data Base Address + n × Single Point Data Size,
  • where Single Point Src Addr[n] represents the source data address of the nth data point.
  • for example, if the source data base address is Addr1[0,3], the size of a single data point is 4 bits, and n is 3, it can be determined that the source data address of the 3rd data point is Addr1[12,15].
  • the source data address of each data point can be determined separately, so that each data point of the first vector data can be read from the source data address.
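The address calculation above can be sketched in a few lines of Python. This is only an illustration of the arithmetic; the function and variable names are mine, not from the disclosure, and addresses are modeled as plain bit offsets:

```python
def source_addr(base: int, point_size: int, n: int) -> int:
    """Source address of the nth data point of the first vector data.

    Because the source data address is continuous, it is simply the
    source data base address plus n times the size of a single point.
    """
    return base + n * point_size

# Example from the text: base at bit 0, 4-bit data points;
# the 3rd data point occupies bits 12..15, i.e. Addr1[12,15].
assert source_addr(0, 4, 3) == 12
```

The same formula applies for any point size; with 8-bit points, the 3rd data point would instead start at offset 24.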
  • the first vector data includes N data points, and N is an integer greater than 1, and correspondingly, the extended parameter includes N extended parameter bits corresponding to the N data points.
  • step S12 may include:
  • determining, according to the nth data point of the first vector data and the nth expansion parameter bit corresponding to the nth data point, the k_n data points at the nth data position of the second vector data, where 1 ≤ n ≤ N and k_n ≥ 0;
  • the second vector data is determined according to the data points of the N data positions of the second vector data.
  • the expansion parameter may include N expansion parameter bits, which respectively represent the number of times k_n that each of the N data points of the first vector data is copied.
  • for example, when N is 5, the expansion parameter may be expressed as [1,2,0,3,1], meaning that the 5 data points are copied 1, 2, 0, 3, and 1 times, respectively.
  • when the nth expansion parameter bit corresponding to the nth data point is k_n (k_n ≥ 0), the nth data position of the second vector data holds k_n copies of the nth data point of the first vector data.
  • for example, if the first vector data is [A,B,C,D,E] and the expansion parameter is [1,2,0,3,1], the second vector data obtained is [A,B,B,D,D,D,E].
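The copy-count semantics above can be expressed compactly in Python. This is a minimal sketch of the expansion step only; the function name is illustrative, not from the disclosure:

```python
def expand_vector(first, params):
    """Expand the first vector data according to the expansion parameter.

    params[n] is the expansion parameter bit k_n: the nth data point is
    copied k_n times (k_n == 0 drops the point from the result).
    """
    second = []
    for point, k in zip(first, params):
        second.extend([point] * k)
    return second

# The example from the text:
assert expand_vector(["A", "B", "C", "D", "E"], [1, 2, 0, 3, 1]) == [
    "A", "B", "B", "D", "D", "D", "E"
]
```

Note that the result length is the sum of the parameter bits (here 7), so the destination region is generally larger or smaller than the source.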
  • the expansion parameter may also include other expansion content (for example, enlarging or reducing the value of each data point by a certain multiple) and may also take other forms, which can be set by those skilled in the art according to the actual situation; the present disclosure does not limit this.
  • step S13 may include: sequentially storing each data point of the second vector data according to the destination data base address and the data size of the destination data address.
  • the second vector data can be stored in a preset destination data address.
  • the operation field of the vector extension instruction may include a destination data base address, and the destination data address of each data point of the second vector data may be determined according to the destination data base address and the data size of a single data point.
  • Single Point Dest Addr[m] = Destination Data Base Address + m × Single Point Data Size, where Single Point Dest Addr[m] represents the destination data address of the mth data point of the second vector data (the second vector data includes M data points, 1 ≤ m ≤ M, and M is an integer greater than 1).
  • for example, if the destination data base address is Addr2[14,17], the size of a single data point is 4 bits, and m is 3, it can be determined that the destination data address of the 3rd data point is Addr2[26,29].
  • each data point of the second vector data can be sequentially stored in the destination data address, thereby completing the entire process of vector expansion.
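Putting steps S11–S13 together, the whole instruction can be simulated end to end. Memory is modeled here as a flat Python list indexed by data point; this is a sketch under that simplification, not the hardware implementation, and all names are illustrative:

```python
def execute_vector_extension(memory, src_base, dst_base, n_points, params):
    """Simulate the vector expansion instruction on a flat memory model."""
    # S11/S12: read N data points from consecutive source addresses and
    # expand each point according to its expansion parameter bit.
    first = [memory[src_base + n] for n in range(n_points)]
    second = []
    for point, k in zip(first, params):
        second.extend([point] * k)
    # S13: store the expanded points at consecutive destination addresses.
    for m, point in enumerate(second):
        memory[dst_base + m] = point

# First vector data at addresses 0..4; destination region starts at 5.
mem = ["A", "B", "C", "D", "E"] + ["-"] * 7
execute_vector_extension(mem, src_base=0, dst_base=5, n_points=5,
                         params=[1, 2, 0, 3, 1])
assert mem[5:] == ["A", "B", "B", "D", "D", "D", "E"]
```

Because both address ranges are contiguous, the whole operation needs only the two base addresses, the point size, N, and the parameter bits, which is what makes a single instruction sufficient.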
  • the vector can be expanded through the vector expansion instruction, so that when the vector data needs to be expanded in application scenarios such as image recognition, the original vector can be expanded into a new vector and stored in the continuous Address space, thereby simplifying the processing process and reducing data overhead.
  • although the steps in the flowchart are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and they may be executed in other orders. Moreover, at least part of the steps in the flowchart may include multiple sub-steps or stages; these sub-steps or stages are not necessarily executed at the same time, but may be executed at different times, and their execution order is not necessarily sequential: they may be performed in turn or alternately with at least part of the other steps, or of the sub-steps or stages of other steps.
  • Fig. 3 shows a block diagram of a data processing device according to an embodiment of the present disclosure. As shown in Figure 3, the device includes:
  • the address determining module 31 is configured to determine the source data address, destination data address, and extended parameters of the data corresponding to the processing instruction when the decoded processing instruction is a vector expansion instruction;
  • the data expansion module 32 is configured to expand the first vector data of the source data address according to the expansion parameter to obtain expanded second vector data;
  • the data storage module 33 is configured to store the second vector data to the destination data address,
  • the source data address and the destination data address include consecutive data addresses.
  • the address determining module includes:
  • the source address determining submodule is used to determine the source data addresses of the multiple data points according to the source data base addresses and data sizes of the multiple data points of the first vector data in the operation domain of the processing instruction.
  • the first vector data includes N data points
  • the extended parameter includes N extended parameter bits corresponding to the N data points
  • N is an integer greater than 1
  • the data expansion module includes:
  • the data point determination submodule is used to determine the k_n data points at the nth data position of the second vector data according to the nth data point of the first vector data and the nth expansion parameter bit corresponding to the nth data point, where 1 ≤ n ≤ N and k_n ≥ 0;
  • the data determining sub-module is configured to determine the second vector data according to the data points of the N data positions of the second vector data.
  • the data storage module includes:
  • the storage sub-module is configured to sequentially store each data point of the second vector data according to the destination data base address and the data size of the destination data address.
  • the device further includes:
  • the decoding module is used to decode the received processing instructions to obtain the decoded processing instructions
  • the decoded processing instruction includes an operation code, and the operation code is used to instruct to perform vector expansion processing.
  • the above device embodiments are only illustrative, and the device of the present disclosure may also be implemented in other ways.
  • the division of units/modules in the above-mentioned embodiments is only a logical function division, and there may be other division methods in actual implementation.
  • multiple units, modules or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the functional units/modules in the various embodiments of the present disclosure may be integrated into one unit/module, each unit/module may exist alone physically, or two or more units/modules may be integrated together.
  • the above-mentioned integrated unit/module can be realized in the form of hardware or in the form of a software program module.
  • the hardware may be a digital circuit, an analog circuit, and so on.
  • the physical realization of the hardware structure includes but is not limited to transistors, memristors and so on.
  • the artificial intelligence processor may be any appropriate hardware processor, such as CPU, GPU, FPGA, DSP, ASIC, and so on.
  • the storage unit may be any suitable magnetic storage medium or magneto-optical storage medium, such as resistive random access memory (RRAM), dynamic random access memory (DRAM), static random access memory (SRAM), enhanced dynamic random access memory (EDRAM), high-bandwidth memory (HBM), hybrid memory cube (HMC), and so on.
  • the integrated unit/module is implemented in the form of a software program module and sold or used as an independent product, it can be stored in a computer readable memory.
  • the technical solution of the present disclosure, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a memory and includes a number of instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • the aforementioned memory includes: USB flash drives, read-only memory (ROM), random access memory (RAM), removable hard disks, magnetic disks, optical disks, and other media that can store program codes.
  • an artificial intelligence chip is also disclosed, which includes the above-mentioned data processing device.
  • an electronic device is also disclosed, and the electronic device includes the aforementioned artificial intelligence chip.
  • a board card is provided, which includes a storage device, an interface device, a control device, and the aforementioned artificial intelligence chip; wherein the artificial intelligence chip is connected to the storage device, the control device, and the interface device, respectively; the storage device is used to store data; the interface device is used to realize data transmission between the artificial intelligence chip and an external device; and the control device is used to monitor the state of the artificial intelligence chip.
  • Fig. 4 shows a structural block diagram of a board card according to an embodiment of the present disclosure.
  • the board card may include other supporting components in addition to the chip 389 described above.
  • the supporting components include, but are not limited to: a storage device 390, Interface device 391 and control device 392;
  • the storage device 390 is connected to the artificial intelligence chip through a bus for storing data.
  • the storage device may include multiple groups of storage units 393, each group being connected to the artificial intelligence chip through a bus. It can be understood that each group of storage units may be DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory).
  • the storage device may include 4 groups of the storage units, and each group may include a plurality of DDR4 chips.
  • the artificial intelligence chip may include four 72-bit DDR4 controllers; in each 72-bit DDR4 controller, 64 bits are used for data transmission and 8 bits are used for ECC checking. It can be understood that when DDR4-3200 chips are used in each group of storage units, the theoretical bandwidth of data transmission can reach 25600 MB/s.
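The quoted figure follows directly from the bus width and transfer rate: DDR4-3200 performs 3200 mega-transfers per second, and 64 of the controller's 72 bits carry data (8 bytes per transfer), so the theoretical per-controller bandwidth works out as below:

```python
transfers_per_s = 3200   # DDR4-3200: 3200 mega-transfers per second
data_bits = 64           # 64 data bits per transfer (the other 8 are ECC)

# bytes per transfer times transfer rate gives MB/s
bandwidth_mb_s = transfers_per_s * data_bits // 8
assert bandwidth_mb_s == 25600  # matches the 25600 MB/s stated above
```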
  • each group of the storage unit includes a plurality of double-rate synchronous dynamic random access memories arranged in parallel.
  • DDR can transmit data twice in one clock cycle.
  • a controller for controlling the DDR is provided in the chip for controlling the data transmission and data storage of each storage unit.
  • the interface device is electrically connected with the artificial intelligence chip.
  • the interface device is used to implement data transmission between the artificial intelligence chip and an external device (such as a server or a computer).
  • the interface device may be a standard PCIE interface.
  • the data to be processed is transferred from the server to the chip through a standard PCIE interface to realize data transfer.
  • the interface device may also be another interface; the present disclosure does not limit the specific form of such other interfaces, as long as the interface unit can realize the transfer function.
  • the calculation result of the artificial intelligence chip is likewise transmitted back to an external device (such as a server) by the interface device.
  • the control device is electrically connected with the artificial intelligence chip.
  • the control device is used to monitor the state of the artificial intelligence chip.
  • the artificial intelligence chip and the control device may be electrically connected through an SPI interface.
  • the control device may include a microcontroller (Micro Controller Unit, MCU).
  • the artificial intelligence chip may include multiple processing chips, multiple processing cores, or multiple processing circuits, and can drive multiple loads. Therefore, the artificial intelligence chip can be in different working states such as multi-load and light-load.
  • the control device can regulate the working states of the multiple processing chips, multiple processing cores, and/or multiple processing circuits in the artificial intelligence chip.
  • an electronic device which includes the aforementioned artificial intelligence chip.
  • Electronic equipment includes data processing devices, robots, computers, printers, scanners, tablets, smart terminals, mobile phones, driving recorders, navigators, sensors, webcams, servers, cloud servers, cameras, video cameras, projectors, watches, earphones, mobile storage, wearable devices, vehicles, household appliances, and/or medical equipment.
  • the transportation means include airplanes, ships, and/or motor vehicles;
  • the household appliances include TVs, air conditioners, microwave ovens, refrigerators, rice cookers, humidifiers, washing machines, electric lights, gas stoves, and range hoods;
  • the medical equipment includes nuclear magnetic resonance instruments, B-mode ultrasound machines, and/or electrocardiographs.
  • a method of data processing including:
  • when the decoded processing instruction is a vector expansion instruction, determining the source data address, destination data address, and expansion parameters of the data corresponding to the processing instruction;
  • expanding the first vector data of the source data address according to the expansion parameter to obtain expanded second vector data;
  • storing the second vector data to the destination data address,
  • the source data address and the destination data address include consecutive data addresses.
  • determining the source data address, destination data address, and expansion parameters of the data corresponding to the processing instruction includes:
  • the first vector data includes N data points
  • the expansion parameters include N expansion parameter bits corresponding to the N data points
  • N is an integer greater than 1
  • the expanding the first vector data of the source data address according to the expansion parameter to obtain the expanded second vector data includes:
  • determining, according to the nth data point of the first vector data and the nth expansion parameter bit corresponding to the nth data point, the k_n data points at the nth data position of the second vector data, where 1≤n≤N and k_n≥0;
  • the second vector data is determined according to the data points of the N data positions of the second vector data.
  • storing the second vector data to the destination data address includes:
  • each data point of the second vector data is sequentially stored.
  • A5. The method according to any one of A1-A4, the method further comprising:
  • the decoded processing instruction includes an operation code, and the operation code is used to instruct to perform vector expansion processing.
  • a data processing device including:
  • An address determination module configured to determine the source data address, destination data address, and extended parameters of the data corresponding to the processing instruction when the decoded processing instruction is a vector expansion instruction;
  • a data expansion module configured to expand the first vector data of the source data address according to the expansion parameter to obtain expanded second vector data
  • a data storage module for storing the second vector data to the destination data address
  • the source data address and the destination data address include consecutive data addresses.
  • the address determining module includes:
  • the source address determination submodule is used to determine the source data addresses of the multiple data points according to the source data base address and data size of the multiple data points of the first vector data in the operation domain of the processing instruction.
  • the first vector data includes N data points
  • the expansion parameters include N expansion parameter bits corresponding to the N data points
  • N is an integer greater than 1
  • the data expansion module includes:
  • the data point determination submodule is used to determine, according to the nth data point of the first vector data and the nth expansion parameter bit corresponding to the nth data point, the k_n data points at the nth data position of the second vector data, where 1≤n≤N and k_n≥0;
  • the data determining sub-module is configured to determine the second vector data according to the data points of the N data positions of the second vector data.
  • the data storage module includes:
  • the storage sub-module is configured to sequentially store each data point of the second vector data according to the destination data base address and the data size of the destination data address.
  • A10. The device according to any one of A6-A9, the device further comprising:
  • the decoding module is used to decode the received processing instructions to obtain the decoded processing instructions
  • the decoded processing instruction includes an operation code, and the operation code is used to instruct to perform vector expansion processing.
  • An artificial intelligence chip comprising the data processing device according to any one of A6-A10.
  • A12. An electronic device comprising the artificial intelligence chip as described in A11.
  • a board comprising: a storage device, an interface device, a control device, and the artificial intelligence chip as described in A11;
  • the artificial intelligence chip is connected to the storage device, the control device, and the interface device respectively;
  • the storage device is used to store data
  • the interface device is used to implement data transmission between the artificial intelligence chip and external equipment
  • the control device is used to monitor the state of the artificial intelligence chip.

Abstract

The present disclosure relates to a data processing method and apparatus and related products. The product includes a control module, and the control module includes an instruction cache unit, an instruction processing unit, and a storage queue unit. The instruction cache unit is used to store computation instructions associated with artificial neural network operations; the instruction processing unit is used to parse a computation instruction to obtain multiple operation instructions; and the storage queue unit is used to store an instruction queue, which includes multiple operation instructions or computation instructions to be executed in the order of the queue. Through the above method, the present disclosure can improve the operation efficiency of related products when performing neural network model operations.

Description

Data processing method and apparatus and related products
This application claims priority to Chinese patent application No. 202010383677.3, entitled "Data processing method and apparatus and related products", filed with the Chinese Patent Office on May 8, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of computer technology, and in particular to a data processing method and apparatus and related products.
Background
With the development of artificial intelligence technology, good results have been achieved in fields such as image recognition. During image recognition, a large amount of vector data may need to be processed (for example, difference operations, expansion, and deformation). However, the processing in the related art is relatively complex and incurs high data overhead.
Summary
In view of this, it is necessary to provide a data processing method and apparatus and related products to address the above technical problem.
According to one aspect of the present disclosure, a data processing method is provided, including:
when a decoded processing instruction is a vector expansion instruction, determining a source data address, a destination data address, and expansion parameters of the data corresponding to the processing instruction;
expanding first vector data at the source data address according to the expansion parameters to obtain expanded second vector data; and
storing the second vector data to the destination data address,
wherein the source data address and the destination data address include consecutive data addresses.
According to another aspect of the present disclosure, a data processing apparatus is provided, including:
an address determination module configured to, when a decoded processing instruction is a vector expansion instruction, determine a source data address, a destination data address, and expansion parameters of the data corresponding to the processing instruction;
a data expansion module configured to expand first vector data at the source data address according to the expansion parameters to obtain expanded second vector data; and
a data storage module configured to store the second vector data to the destination data address,
wherein the source data address and the destination data address include consecutive data addresses.
According to another aspect of the present disclosure, an artificial intelligence chip is provided, which includes the above data processing apparatus.
According to another aspect of the present disclosure, an electronic device is provided, which includes the above artificial intelligence chip.
According to another aspect of the present disclosure, a board card is provided, which includes: a storage device, an interface device, a control device, and the above artificial intelligence chip;
wherein the artificial intelligence chip is connected to the storage device, the control device, and the interface device respectively; the storage device is used to store data;
the interface device is used to implement data transmission between the artificial intelligence chip and external equipment; and
the control device is used to monitor the state of the artificial intelligence chip.
According to embodiments of the present disclosure, vector expansion and storage can be implemented through the expansion parameters in a vector expansion instruction to obtain expanded vector data, thereby simplifying the processing and reducing data overhead.
Deriving from the technical features in the claims can achieve beneficial effects corresponding to the technical problem in the background. Other features and aspects of the present disclosure will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features, and aspects of the present disclosure together with the specification, and serve to explain the principles of the present disclosure.
Fig. 1 shows a schematic diagram of a processor for a data processing method according to an embodiment of the present disclosure.
Fig. 2 shows a flowchart of a data processing method according to an embodiment of the present disclosure.
Fig. 3 shows a block diagram of a data processing apparatus according to an embodiment of the present disclosure.
Fig. 4 shows a structural block diagram of a board card according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present disclosure. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
It should be understood that the terms "include" and "comprise" used in the specification and claims of the present disclosure indicate the presence of the described features, wholes, steps, operations, elements, and/or components, but do not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components, and/or collections thereof.
It should also be understood that the terms used in this specification are only for the purpose of describing particular embodiments and are not intended to limit the present disclosure. As used in the specification and claims of the present disclosure, the singular forms "a", "an", and "the" are intended to include the plural forms unless the context clearly indicates otherwise. It should be further understood that the term "and/or" used in the specification and claims refers to any and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in this specification and claims, the term "if" may be interpreted as "when", "once", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]", depending on the context.
The data processing method according to the embodiments of the present disclosure may be applied to a processor. The processor may be a general-purpose processor, such as a CPU (Central Processing Unit), or an artificial intelligence processor (IPU) for performing artificial intelligence operations. Artificial intelligence operations may include machine learning operations, brain-inspired operations, and the like, where machine learning operations include neural network operations, k-means operations, support vector machine operations, and the like. The artificial intelligence processor may include, for example, one or a combination of a GPU (Graphics Processing Unit), an NPU (Neural-Network Processing Unit), a DSP (Digital Signal Processing unit), and an FPGA (Field-Programmable Gate Array) chip. The present disclosure does not limit the specific type of the processor.
In one possible implementation, the processor mentioned in the present disclosure may include multiple processing units, each of which can independently run its assigned tasks, such as convolution operation tasks, pooling tasks, or fully-connected tasks. The present disclosure does not limit the processing units or the tasks they run.
Fig. 1 shows a schematic diagram of a processor for a data processing method according to an embodiment of the present disclosure. As shown in Fig. 1, the processor 100 includes multiple processing units 101 and a storage unit 102. The multiple processing units 101 are used to execute instruction sequences, and the storage unit 102 is used to store data and may include a random access memory (RAM) and a register file. The multiple processing units 101 in the processor 100 may share part of the storage space, for example sharing part of the RAM storage space and the register file, and may also each have their own storage space.
Fig. 2 shows a flowchart of a data processing method according to an embodiment of the present disclosure. As shown in Fig. 2, the method includes:
step S11: when a decoded processing instruction is a vector expansion instruction, determining a source data address, a destination data address, and expansion parameters of the data corresponding to the processing instruction;
step S12: expanding first vector data at the source data address according to the expansion parameters to obtain expanded second vector data; and
step S13: storing the second vector data to the destination data address,
wherein the source data address and the destination data address include consecutive data addresses.
According to embodiments of the present disclosure, vector expansion and storage can be implemented through the expansion parameters in a vector expansion instruction to obtain expanded vector data, thereby simplifying the processing and reducing data overhead.
In one possible implementation, the method further includes: decoding a received processing instruction to obtain the decoded processing instruction, where the decoded processing instruction includes an operation code, and the operation code is used to instruct vector expansion processing.
For example, when the processor receives a processing instruction, it may decode the received instruction to obtain a decoded processing instruction. The decoded processing instruction includes an operation code and an operation domain: the operation code indicates the processing type of the instruction, and the operation domain indicates the data to be processed and the data parameters. If the operation code of the decoded processing instruction indicates vector expansion processing, the instruction is a vector expansion (Vector Extension) instruction.
In one possible implementation, if the decoded processing instruction is a vector expansion instruction, the source data address, destination data address, and expansion parameters of the data corresponding to the processing instruction may be determined in step S11. The data corresponding to the processing instruction is the first vector data indicated by the operation domain of the instruction, and the first vector data includes multiple data points. The source data address indicates the current storage addresses of the multiple data points in the data storage space and is a range of consecutive data addresses; the destination data address indicates the storage addresses for the multiple data points of the expanded second vector data and is also a range of consecutive data addresses. The data storage space where the source data address is located and that where the destination data address is located may be the same or different, which is not limited in the present disclosure.
In one possible implementation, after determining the source data address and the expansion parameters, in step S12 the processor may read the multiple data points of the first vector data from the source data address and expand each of the read data points according to the expansion parameters to obtain the multiple data points of the expanded second vector data, thereby implementing vector expansion.
In one possible implementation, the multiple data points of the expanded second vector data may be sequentially stored at the destination data address to obtain the second vector data, thereby completing the vector expansion process.
In this way, when vector data needs to be expanded in application scenarios such as image recognition, the original vector can be expanded into a new vector through the vector expansion instruction and stored in a consecutive address space, thereby simplifying the processing and reducing data overhead.
In one possible implementation, step S11 may include: determining the source data addresses of multiple data points of the first vector data according to the source data base address and data size of the multiple data points in the operation domain of the processing instruction.
For example, the vector expansion instruction may have an operation domain that indicates the parameters of the vector data to be expanded. The operation domain may include, for example, a Source Data Base Address, a Destination Data Base Address, a Single Point Data Size, a Single Point Data Number, and expansion parameters.
The source data base address may indicate the current base address of the multiple data points of the first vector data in the data storage space; the destination data base address may indicate the base address of the multiple data points of the expanded second vector data in the data storage space; the single point data size may indicate the data size of each data point of the first and second vector data (for example, 4 bits or 8 bits); the single point data number may indicate the number N of data points of the first vector data (N being an integer greater than 1); and the expansion parameters may indicate the expansion manner for the N data points of the first vector data. The present disclosure does not limit the specific number and types of parameters in the operation domain of the vector expansion instruction.
In one possible implementation, the operation domain of the vector expansion instruction may include a Source Data Base Address and a Single Point Data Size. Since the source data addresses are consecutive, the source data address of each data point can be determined directly from the data size and the index of each data point. The source data address of the nth data point can be expressed as:
Single Point Src Addr[n] = Source Data Base Address + n * Single Point Data Size   (1)
In formula (1), Single Point Src Addr[n] denotes the source data address of the nth data point. For example, when the source data base address is Addr1[0,3], the single point data size is 4 bits, and n is 3, the source data address of the 3rd data point is determined to be Addr1[12,15].
In this way, the source data address of each data point can be determined separately, so that each data point of the first vector data can be read from its source data address.
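The address computation in formula (1) can be sketched as follows (a minimal illustration; the function name and the 0-indexed `n` are assumptions made for clarity, not part of the instruction set):

```python
def src_addr(source_base: int, point_size: int, n: int) -> int:
    """Source address of the n-th data point per formula (1):
    points occupy consecutive addresses, each point_size units wide."""
    return source_base + n * point_size

# With a base address of 0 and 4-unit data points, point n=3 starts at
# offset 12, matching the Addr1[12,15] example in the text.
print(src_addr(0, 4, 3))  # → 12
```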
In one possible implementation, the first vector data includes N data points, N being an integer greater than 1, and correspondingly the expansion parameters include N expansion parameter bits corresponding to the N data points. Step S12 may include:
determining, according to the nth data point of the first vector data and the nth expansion parameter bit corresponding to the nth data point, the k_n data points at the nth data position of the second vector data, where 1≤n≤N and k_n≥0; and
determining the second vector data according to the data points at the N data positions of the second vector data.
For example, the expansion parameters may include N expansion parameter bits that respectively indicate the copy counts k_n of the N data points of the first vector data. For example, when N=5, the expansion parameters may be expressed as [1,2,0,3,1], indicating that the 5 data points are copied 1 time, 2 times, 0 times, 3 times, and 1 time respectively.
In one possible implementation, for the nth data point of the first vector data (1≤n≤N), if the nth expansion parameter bit corresponding to that data point is k_n (k_n≥0), then the nth data position of the second vector data has k_n copies of the nth data point of the first vector data. In this way, by expanding the N data points of the first vector data respectively, the data points at the N data positions of the second vector data can be determined. For example, if the first vector data is [A,B,C,D,E] and the expansion parameters are [1,2,0,3,1], the expanded second vector data is [A,B,B,D,D,D,E].
It should be understood that the expansion parameters may also include other expansion content (for example, scaling the value of each data point by a certain factor) and may be represented in other ways, which can be set by those skilled in the art according to the actual situation and is not limited in the present disclosure.
In this way, the expanded second vector data can be obtained.
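The repeat-count semantics of the expansion parameter bits can be sketched as follows (an illustrative software model, not the hardware implementation; function and variable names are assumed):

```python
def expand_vector(first, params):
    """Build the second vector: the n-th data point of the first vector
    appears k_n times at the n-th data position (k_n may be 0)."""
    assert len(first) == len(params), "one expansion parameter bit per data point"
    second = []
    for point, k in zip(first, params):
        second.extend([point] * k)  # k copies at this data position
    return second

# The example from the text: [A,B,C,D,E] expanded by [1,2,0,3,1]
print(expand_vector(["A", "B", "C", "D", "E"], [1, 2, 0, 3, 1]))
# → ['A', 'B', 'B', 'D', 'D', 'D', 'E']
```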
In one possible implementation, step S13 may include: sequentially storing each data point of the second vector data according to the destination data base address of the destination data address and the data size.
For example, after the expanded second vector data is obtained, it may be stored to the preset destination data address. The operation domain of the vector expansion instruction may include a Destination Data Base Address, and the destination data address of each data point of the second vector data can be determined from the destination data base address and the single point data size:
Single Point Dest Addr[m] = Destination Data Base Address + m * Single Point Data Size   (2)
In formula (2), Single Point Dest Addr[m] denotes the destination data address of the mth data point of the second vector data (the second vector data includes M data points, 1≤m≤M, M being an integer greater than 1). For example, when the destination data base address is Addr2[14,17], the single point data size is 4 bits, and m is 3, the destination data address of the 3rd data point is determined to be Addr2[26,29].
In this way, each data point of the second vector data can be sequentially stored at the destination data address, thereby completing the entire vector expansion process.
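Putting the pieces together, the whole expand-and-store flow of steps S12-S13 can be modeled as follows (memory is modeled as a plain dict from address to data point; all names are illustrative assumptions, not the actual hardware interface):

```python
def expand_and_store(memory, first, params, dest_base, point_size):
    """Expand `first` by its expansion parameter bits, then write each
    resulting data point at the consecutive destination addresses
    dest_base + m * point_size, per formula (2)."""
    second = [p for point, k in zip(first, params) for p in [point] * k]
    for m, point in enumerate(second):
        memory[dest_base + m * point_size] = point  # consecutive addresses
    return second

mem = {}
expand_and_store(mem, ["A", "B", "C"], [2, 0, 1], dest_base=100, point_size=4)
print(mem)  # → {100: 'A', 104: 'A', 108: 'C'}
```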
According to the data processing method of the embodiments of the present disclosure, a vector can be expanded through a vector expansion instruction, so that when vector data needs to be expanded in application scenarios such as image recognition, the original vector can be expanded into a new vector and stored in a consecutive address space, thereby simplifying the processing and reducing data overhead.
It should be noted that, for the sake of simple description, the foregoing method embodiments are all expressed as a series of action combinations, but those skilled in the art should know that the present disclosure is not limited by the described order of actions, because according to the present disclosure, certain steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also know that the embodiments described in the specification are all optional embodiments, and the actions and modules involved are not necessarily required by the present disclosure.
It should be further noted that, although the steps in the flowchart are shown sequentially as indicated by the arrows, these steps are not necessarily executed in the order indicated. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in the flowchart may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times, and their execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Fig. 3 shows a block diagram of a data processing apparatus according to an embodiment of the present disclosure. As shown in Fig. 3, the apparatus includes:
an address determination module 31 configured to, when a decoded processing instruction is a vector expansion instruction, determine a source data address, a destination data address, and expansion parameters of the data corresponding to the processing instruction;
a data expansion module 32 configured to expand first vector data at the source data address according to the expansion parameters to obtain expanded second vector data; and
a data storage module 33 configured to store the second vector data to the destination data address,
wherein the source data address and the destination data address include consecutive data addresses.
In one possible implementation, the address determination module includes:
a source address determination submodule configured to determine the source data addresses of multiple data points of the first vector data according to the source data base address and data size of the multiple data points in the operation domain of the processing instruction.
In one possible implementation, the first vector data includes N data points, the expansion parameters include N expansion parameter bits corresponding to the N data points, N being an integer greater than 1, and
the data expansion module includes:
a data point determination submodule configured to determine, according to the nth data point of the first vector data and the nth expansion parameter bit corresponding to the nth data point, the k_n data points at the nth data position of the second vector data, where 1≤n≤N and k_n≥0; and
a data determination submodule configured to determine the second vector data according to the data points at the N data positions of the second vector data.
In one possible implementation, the data storage module includes:
a storage submodule configured to sequentially store each data point of the second vector data according to the destination data base address of the destination data address and the data size.
In one possible implementation, the apparatus further includes:
a decoding module configured to decode a received processing instruction to obtain the decoded processing instruction,
wherein the decoded processing instruction includes an operation code, and the operation code is used to instruct vector expansion processing.
It should be understood that the above apparatus embodiments are only illustrative, and the apparatus of the present disclosure may also be implemented in other ways. For example, the division of units/modules in the above embodiments is only a logical functional division, and there may be other divisions in actual implementation. For example, multiple units, modules, or components may be combined or integrated into another system, or some features may be omitted or not executed.
In addition, unless otherwise specified, the functional units/modules in the embodiments of the present disclosure may be integrated in one unit/module, each unit/module may exist physically alone, or two or more units/modules may be integrated together. The above integrated units/modules may be implemented in the form of hardware or in the form of software program modules.
If the integrated units/modules are implemented in the form of hardware, the hardware may be digital circuits, analog circuits, and the like. Physical implementations of the hardware structure include but are not limited to transistors, memristors, and the like. Unless otherwise specified, the artificial intelligence processor may be any suitable hardware processor, such as a CPU, GPU, FPGA, DSP, ASIC, and the like. Unless otherwise specified, the storage unit may be any suitable magnetic or magneto-optical storage medium, such as resistive random access memory (RRAM), dynamic random access memory (DRAM), static random access memory (SRAM), enhanced dynamic random access memory (EDRAM), high-bandwidth memory (HBM), hybrid memory cube (HMC), and the like.
If the integrated units/modules are implemented in the form of software program modules and sold or used as independent products, they may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned memory includes various media that can store program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), removable hard disk, magnetic disk, or optical disc.
In one possible implementation, an artificial intelligence chip is also disclosed, which includes the above data processing apparatus.
In one possible implementation, an electronic device is also disclosed, which includes the above artificial intelligence chip.
In one possible implementation, a board card is also disclosed, which includes a storage device, an interface device, a control device, and the above artificial intelligence chip; wherein the artificial intelligence chip is connected to the storage device, the control device, and the interface device respectively; the storage device is used to store data; the interface device is used to implement data transmission between the artificial intelligence chip and external equipment; and the control device is used to monitor the state of the artificial intelligence chip.
Fig. 4 shows a structural block diagram of a board card according to an embodiment of the present disclosure. Referring to Fig. 4, in addition to the above chip 389, the board card may also include other supporting components, including but not limited to: a storage device 390, an interface device 391, and a control device 392.
The storage device 390 is connected to the artificial intelligence chip via a bus and is used to store data. The storage device may include multiple groups of storage units 393. Each group of storage units is connected to the artificial intelligence chip via a bus. It can be understood that each group of storage units may be DDR SDRAM (Double Data Rate SDRAM).
DDR can double the speed of SDRAM without increasing the clock frequency. DDR allows data to be read on both the rising and falling edges of the clock pulse, so the speed of DDR is twice that of standard SDRAM. In one embodiment, the storage device may include 4 groups of storage units, and each group may include multiple DDR4 granules (chips). In one embodiment, the artificial intelligence chip may internally include four 72-bit DDR4 controllers, in which 64 bits are used for data transmission and 8 bits for ECC checking. It can be understood that when DDR4-3200 granules are used in each group of storage units, the theoretical bandwidth of data transmission can reach 25600 MB/s.
In one embodiment, each group of storage units includes multiple double data rate synchronous dynamic random access memories arranged in parallel. DDR can transmit data twice in one clock cycle. A controller for controlling DDR is provided in the chip to control the data transmission and data storage of each storage unit.
The interface device is electrically connected to the artificial intelligence chip and is used to implement data transmission between the artificial intelligence chip and external equipment (such as a server or a computer). For example, in one embodiment, the interface device may be a standard PCIe interface; for instance, the data to be processed is transferred from the server to the chip through a standard PCIe interface to implement data transfer. Preferably, when a PCIe 3.0 x16 interface is used for transmission, the theoretical bandwidth can reach 16000 MB/s. In another embodiment, the interface device may also be another interface; the present disclosure does not limit the specific form of the other interface, as long as the interface unit can implement the transfer function. In addition, the computation results of the artificial intelligence chip are still transmitted back to the external equipment (such as a server) by the interface device.
The control device is electrically connected to the artificial intelligence chip and is used to monitor the state of the artificial intelligence chip. Specifically, the artificial intelligence chip and the control device may be electrically connected through an SPI interface. The control device may include a micro controller unit (MCU). The artificial intelligence chip may include multiple processing chips, multiple processing cores, or multiple processing circuits, and can drive multiple loads; therefore, the artificial intelligence chip can be in different working states such as multi-load and light-load. Through the control device, the working states of the multiple processing chips, multiple processing cores, and/or multiple processing circuits in the artificial intelligence chip can be regulated.
In one possible implementation, an electronic device is disclosed, which includes the above artificial intelligence chip. Electronic equipment includes data processing devices, robots, computers, printers, scanners, tablets, smart terminals, mobile phones, driving recorders, navigators, sensors, webcams, servers, cloud servers, still cameras, video cameras, projectors, watches, headsets, mobile storage, wearable devices, vehicles, household appliances, and/or medical equipment. The vehicles include airplanes, ships, and/or cars; the household appliances include TVs, air conditioners, microwave ovens, refrigerators, rice cookers, humidifiers, washing machines, electric lights, gas stoves, and range hoods; the medical equipment includes nuclear magnetic resonance instruments, B-ultrasound machines, and/or electrocardiographs.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments. The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.
The foregoing may be better understood according to the following clauses:
Clause A1. A data processing method, including:
when a decoded processing instruction is a vector expansion instruction, determining a source data address, a destination data address, and expansion parameters of the data corresponding to the processing instruction;
expanding first vector data at the source data address according to the expansion parameters to obtain expanded second vector data; and
storing the second vector data to the destination data address,
wherein the source data address and the destination data address include consecutive data addresses.
A2. The method according to A1, wherein when the decoded processing instruction is a vector expansion instruction, determining the source data address, destination data address, and expansion parameters of the data corresponding to the processing instruction includes:
determining the source data addresses of multiple data points of the first vector data according to the source data base address and data size of the multiple data points in the operation domain of the processing instruction.
A3. The method according to A1 or A2, wherein the first vector data includes N data points, the expansion parameters include N expansion parameter bits corresponding to the N data points, N being an integer greater than 1, and
expanding the first vector data at the source data address according to the expansion parameters to obtain the expanded second vector data includes:
determining, according to the nth data point of the first vector data and the nth expansion parameter bit corresponding to the nth data point, the k_n data points at the nth data position of the second vector data, where 1≤n≤N and k_n≥0; and
determining the second vector data according to the data points at the N data positions of the second vector data.
A4. The method according to any one of A1-A3, wherein storing the second vector data to the destination data address includes:
sequentially storing each data point of the second vector data according to the destination data base address of the destination data address and the data size.
A5. The method according to any one of A1-A4, further including:
decoding a received processing instruction to obtain the decoded processing instruction,
wherein the decoded processing instruction includes an operation code, and the operation code is used to instruct vector expansion processing.
A6. A data processing apparatus, including:
an address determination module configured to, when a decoded processing instruction is a vector expansion instruction, determine a source data address, a destination data address, and expansion parameters of the data corresponding to the processing instruction;
a data expansion module configured to expand first vector data at the source data address according to the expansion parameters to obtain expanded second vector data; and
a data storage module configured to store the second vector data to the destination data address,
wherein the source data address and the destination data address include consecutive data addresses.
A7. The apparatus according to A6, wherein the address determination module includes:
a source address determination submodule configured to determine the source data addresses of multiple data points of the first vector data according to the source data base address and data size of the multiple data points in the operation domain of the processing instruction.
A8. The apparatus according to A6 or A7, wherein the first vector data includes N data points, the expansion parameters include N expansion parameter bits corresponding to the N data points, N being an integer greater than 1, and the data expansion module includes:
a data point determination submodule configured to determine, according to the nth data point of the first vector data and the nth expansion parameter bit corresponding to the nth data point, the k_n data points at the nth data position of the second vector data, where 1≤n≤N and k_n≥0; and
a data determination submodule configured to determine the second vector data according to the data points at the N data positions of the second vector data.
A9. The apparatus according to any one of A6-A8, wherein the data storage module includes:
a storage submodule configured to sequentially store each data point of the second vector data according to the destination data base address of the destination data address and the data size.
A10. The apparatus according to any one of A6-A9, further including:
a decoding module configured to decode a received processing instruction to obtain the decoded processing instruction,
wherein the decoded processing instruction includes an operation code, and the operation code is used to instruct vector expansion processing.
A11. An artificial intelligence chip, including the data processing apparatus according to any one of A6-A10.
A12. An electronic device, including the artificial intelligence chip according to A11.
A13. A board card, including: a storage device, an interface device, a control device, and the artificial intelligence chip according to A11;
wherein the artificial intelligence chip is connected to the storage device, the control device, and the interface device respectively;
the storage device is used to store data;
the interface device is used to implement data transmission between the artificial intelligence chip and external equipment; and
the control device is used to monitor the state of the artificial intelligence chip.
The embodiments of the present disclosure have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present disclosure, and the descriptions of the above embodiments are only intended to help understand the method of the present disclosure and its core idea. Meanwhile, changes or modifications made by those skilled in the art based on the idea of the present disclosure, on the specific implementations and the application scope, all fall within the protection scope of the present disclosure. In summary, the content of this specification should not be construed as limiting the present disclosure.

Claims (13)

  1. A data processing method, characterized by comprising:
    when a decoded processing instruction is a vector expansion instruction, determining a source data address, a destination data address, and expansion parameters of the data corresponding to the processing instruction;
    expanding first vector data at the source data address according to the expansion parameters to obtain expanded second vector data; and
    storing the second vector data to the destination data address,
    wherein the source data address and the destination data address comprise consecutive data addresses.
  2. The method according to claim 1, characterized in that, when the decoded processing instruction is a vector expansion instruction, determining the source data address, destination data address, and expansion parameters of the data corresponding to the processing instruction comprises:
    determining the source data addresses of multiple data points of the first vector data according to the source data base address and data size of the multiple data points in the operation domain of the processing instruction.
  3. The method according to claim 1 or 2, characterized in that the first vector data comprises N data points, the expansion parameters comprise N expansion parameter bits corresponding to the N data points, N being an integer greater than 1, and
    expanding the first vector data at the source data address according to the expansion parameters to obtain the expanded second vector data comprises:
    determining, according to the nth data point of the first vector data and the nth expansion parameter bit corresponding to the nth data point, the k_n data points at the nth data position of the second vector data, where 1≤n≤N and k_n≥0; and
    determining the second vector data according to the data points at the N data positions of the second vector data.
  4. The method according to any one of claims 1-3, characterized in that storing the second vector data to the destination data address comprises:
    sequentially storing each data point of the second vector data according to the destination data base address of the destination data address and the data size.
  5. The method according to any one of claims 1-4, characterized by further comprising:
    decoding a received processing instruction to obtain the decoded processing instruction,
    wherein the decoded processing instruction comprises an operation code, and the operation code is used to instruct vector expansion processing.
  6. A data processing apparatus, characterized by comprising:
    an address determination module configured to, when a decoded processing instruction is a vector expansion instruction, determine a source data address, a destination data address, and expansion parameters of the data corresponding to the processing instruction;
    a data expansion module configured to expand first vector data at the source data address according to the expansion parameters to obtain expanded second vector data; and
    a data storage module configured to store the second vector data to the destination data address,
    wherein the source data address and the destination data address comprise consecutive data addresses.
  7. The apparatus according to claim 6, characterized in that the address determination module comprises:
    a source address determination submodule configured to determine the source data addresses of multiple data points of the first vector data according to the source data base address and data size of the multiple data points in the operation domain of the processing instruction.
  8. The apparatus according to claim 6 or 7, characterized in that the first vector data comprises N data points, the expansion parameters comprise N expansion parameter bits corresponding to the N data points, N being an integer greater than 1, and
    the data expansion module comprises:
    a data point determination submodule configured to determine, according to the nth data point of the first vector data and the nth expansion parameter bit corresponding to the nth data point, the k_n data points at the nth data position of the second vector data, where 1≤n≤N and k_n≥0; and
    a data determination submodule configured to determine the second vector data according to the data points at the N data positions of the second vector data.
  9. The apparatus according to any one of claims 6-8, characterized in that the data storage module comprises:
    a storage submodule configured to sequentially store each data point of the second vector data according to the destination data base address of the destination data address and the data size.
  10. The apparatus according to any one of claims 6-9, characterized by further comprising:
    a decoding module configured to decode a received processing instruction to obtain the decoded processing instruction,
    wherein the decoded processing instruction comprises an operation code, and the operation code is used to instruct vector expansion processing.
  11. An artificial intelligence chip, characterized in that the chip comprises the data processing apparatus according to any one of claims 6-10.
  12. An electronic device, characterized in that the electronic device comprises the artificial intelligence chip according to claim 11.
  13. A board card, characterized in that the board card comprises: a storage device, an interface device, a control device, and the artificial intelligence chip according to claim 11;
    wherein the artificial intelligence chip is connected to the storage device, the control device, and the interface device respectively;
    the storage device is used to store data;
    the interface device is used to implement data transmission between the artificial intelligence chip and external equipment; and
    the control device is used to monitor the state of the artificial intelligence chip.
PCT/CN2021/090676 2020-05-08 2021-04-28 Data processing method and apparatus, and related product WO2021223645A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21800900.9A EP4148561A4 (en) 2020-05-08 2021-04-28 DATA PROCESSING METHOD AND APPARATUS, AND RELATED PRODUCT
US17/619,781 US20240126553A1 (en) 2020-05-08 2021-04-28 Data processing method and apparatus, and related product

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010383677.3A 2020-05-08 Data processing method and apparatus, and related product CN113626076A (zh)
CN202010383677.3 2020-05-08

Publications (2)

Publication Number Publication Date
WO2021223645A1 true WO2021223645A1 (zh) 2021-11-11
WO2021223645A9 WO2021223645A9 (zh) 2022-04-14

Family

ID=78377375

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/090676 2021-04-28 Data processing method and apparatus, and related product WO2021223645A1 (zh)

Country Status (4)

Country Link
US (1) US20240126553A1 (zh)
EP (1) EP4148561A4 (zh)
CN (1) CN113626076A (zh)
WO (1) WO2021223645A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104011652A (zh) * 2011-12-30 2014-08-27 英特尔公司 打包选择处理器、方法、系统和指令
CN104011648A (zh) * 2011-12-23 2014-08-27 英特尔公司 用于执行向量打包压缩和重复的系统、装置以及方法
CN109952559A (zh) * 2016-12-30 2019-06-28 德州仪器公司 具有单独可选元素及成组复制的流式传输引擎
US20200089495A1 (en) * 2013-07-09 2020-03-19 Texas Instruments Incorporated Vector load and duplicate operations
CN112416256A (zh) * 2020-12-01 2021-02-26 海光信息技术股份有限公司 数据写入方法、装置及数据读取方法、装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9557994B2 (en) * 2004-07-13 2017-01-31 Arm Limited Data processing apparatus and method for performing N-way interleaving and de-interleaving operations where N is an odd plural number
EP2584460A1 (en) * 2011-10-20 2013-04-24 ST-Ericsson SA Vector processing system comprising a replicating subsystem and method
CN105653499B (zh) * 2013-03-15 2019-01-01 甲骨文国际公司 用于单指令多数据处理器的高效硬件指令
EP3336692B1 (en) * 2016-12-13 2020-04-29 Arm Ltd Replicate partition instruction

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104011648A (zh) * 2011-12-23 2014-08-27 英特尔公司 用于执行向量打包压缩和重复的系统、装置以及方法
CN104011652A (zh) * 2011-12-30 2014-08-27 英特尔公司 打包选择处理器、方法、系统和指令
US20200089495A1 (en) * 2013-07-09 2020-03-19 Texas Instruments Incorporated Vector load and duplicate operations
CN109952559A (zh) * 2016-12-30 2019-06-28 德州仪器公司 具有单独可选元素及成组复制的流式传输引擎
CN112416256A (zh) * 2020-12-01 2021-02-26 海光信息技术股份有限公司 数据写入方法、装置及数据读取方法、装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4148561A4 *

Also Published As

Publication number Publication date
CN113626076A (zh) 2021-11-09
EP4148561A1 (en) 2023-03-15
WO2021223645A9 (zh) 2022-04-14
EP4148561A4 (en) 2024-03-13
US20240126553A1 (en) 2024-04-18

Similar Documents

Publication Publication Date Title
WO2021223642A1 (zh) Data processing method and apparatus, and related product
WO2021027972A1 (zh) Data synchronization method and apparatus, and related product
WO2021083101A1 (zh) Data processing method and apparatus, and related product
WO2021223645A1 (zh) Data processing method and apparatus, and related product
WO2021223638A1 (zh) Data processing method and apparatus, and related product
WO2021223639A1 (zh) Data processing apparatus and related product
WO2021018313A1 (zh) Data synchronization method and apparatus, and related product
WO2021027973A1 (zh) Data synchronization method and apparatus, and related product
CN111047005A (zh) Operation method and apparatus, computer device, and storage medium
WO2021082723A1 (zh) Computing apparatus
WO2021223644A1 (zh) Data processing method and apparatus, and related product
WO2021082746A1 (zh) Computing apparatus and related product
CN113626083B (zh) Data processing apparatus and related product
CN112306949B (zh) Data processing method and apparatus, and related product
CN111061507A (zh) Operation method and apparatus, computer device, and storage medium
CN111047030A (zh) Operation method and apparatus, computer device, and storage medium
CN111339060B (zh) Operation method and apparatus, computer device, and storage medium
WO2021082724A1 (zh) Operation method and related product
CN112765539B (zh) Computing apparatus and method, and related product
WO2021169914A1 (zh) Data quantization processing method and apparatus, electronic device, and storage medium
WO2021082747A1 (zh) Computing apparatus and related product
CN111338694B (zh) Operation method and apparatus, computer device, and storage medium
WO2021185261A1 (zh) Computing apparatus and method, board card, and computer-readable storage medium
CN113296736A (zh) Random number-based data processing method, and random number generation method and apparatus
CN111813376A (zh) Operation method and apparatus, and related product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21800900

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021800900

Country of ref document: EP

Effective date: 20221208

NENP Non-entry into the national phase

Ref country code: DE