CN111210012A - Data processing method and device and related products - Google Patents


Publication number
CN111210012A
CN111210012A (application CN201811392262.1A)
Authority
CN
China
Prior art keywords
data
operation signal
data operation
jump
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811392262.1A
Other languages
Chinese (zh)
Other versions
CN111210012B
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Cambricon Information Technology Co Ltd
Original Assignee
Shanghai Cambricon Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to CN201811392262.1A priority Critical patent/CN111210012B/en
Application filed by Shanghai Cambricon Information Technology Co Ltd filed Critical Shanghai Cambricon Information Technology Co Ltd
Priority to KR1020207033053A priority patent/KR20200139829A/en
Priority to EP21217802.4A priority patent/EP4009185A1/en
Priority to JP2020569113A priority patent/JP7060720B2/en
Priority to PCT/CN2019/111977 priority patent/WO2020078470A1/en
Priority to KR1020207034145A priority patent/KR102539574B1/en
Priority to EP21217811.5A priority patent/EP4009184A1/en
Priority to EP21217804.0A priority patent/EP4009186A1/en
Priority to US17/278,812 priority patent/US20220035762A1/en
Priority to EP19873122.6A priority patent/EP3869352A4/en
Priority to EP21217809.9A priority patent/EP4009183A1/en
Publication of CN111210012A publication Critical patent/CN111210012A/en
Priority to JP2020206293A priority patent/JP7074832B2/en
Priority to JP2020206272A priority patent/JP7053775B2/en
Priority to JP2020206281A priority patent/JP7074831B2/en
Priority to JP2020206306A priority patent/JP7074833B2/en
Priority to US17/564,509 priority patent/US11797467B2/en
Priority to US17/564,492 priority patent/US11880330B2/en
Priority to US17/564,411 priority patent/US11809360B2/en
Priority to US17/564,398 priority patent/US11880328B2/en
Priority to US17/564,389 priority patent/US11841816B2/en
Priority to US17/564,431 priority patent/US11880329B2/en
Priority to US17/564,579 priority patent/US11960431B2/en
Priority to US17/564,366 priority patent/US11971836B2/en
Priority to US17/564,560 priority patent/US12061564B2/en
Priority to US17/564,529 priority patent/US11868299B2/en
Application granted granted Critical
Publication of CN111210012B publication Critical patent/CN111210012B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/167: Interprocessor communication using a common memory, e.g. mailbox
    • G06F12/084: Multiuser, multiprocessor or multiprocessing cache systems with a shared cache
    • G06F13/124: Program control for peripheral devices using hardware independent of the central processor, where the hardware is a sequential transfer control unit, e.g. microprocessor, peripheral processor or state-machine
    • G06F13/128: Program control for peripheral devices using a sequential transfer control unit, for dedicated transfers to a network
    • G06F13/1652: Handling requests for access to the memory bus based on arbitration in a multiprocessor architecture
    • G06F15/7825: System on chip; globally asynchronous, locally synchronous, e.g. network on chip
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/063: Physical realisation, i.e. hardware implementation, of neural networks, neurons or parts of neurons using electronic means


Abstract

A transmission circuit performs the corresponding operation on data to be operated on in a memory according to a data operation signal sent by an internal or external device, where the signal carries a type flag bit identifying the signal's type, and thereby obtains the required input data. Because the data operation signal carries its type flag bit, the transmission circuit can determine the type of the signal from the flag bit upon receiving it and then perform the corresponding operation on the data in the memory. Classifying signals by the type flag bit allows the corresponding operation to be located quickly, which simplifies the data access logic, improves data access efficiency, and greatly increases the access speed of a machine learning chip during data access.

Description

Data processing method and device and related products
Technical Field
The present application relates to the field of information processing technologies, and in particular, to a data processing method and apparatus, and a related product.
Background
With the continuous development of information technology, demand for data access and data processing keeps growing, and the requirements placed on processors that process and access data are becoming ever stricter. Taking general-purpose processors as an example, multi-core processors composed of multiple general-purpose cores (e.g., CPU cores) have become mainstream thanks to their powerful parallel computing capability.
However, with the continuous development of artificial neural networks, machine learning chips with more and more architectures have gradually emerged. During operation, these chips need to access or process data in shared storage according to instructions. When there are many data accesses or much shared data, the chips' instructions gradually become complex, which in turn affects the speed of reading shared storage through instructions and leads to low processing efficiency for neuron data.
Therefore, how to improve the access speed of a machine learning chip during data access has become an urgent technical problem for those skilled in the art.
Disclosure of Invention
Therefore, it is necessary to provide a data processing method, an apparatus, and related products that solve the above technical problem of improving the access speed of a machine learning chip during data access when there are many data accesses or much shared data.
In a first aspect, an embodiment of the present invention provides a data processing method, where the method includes:
receiving a data operation signal sent by an internal or external device, wherein the data operation signal comprises an operation code, the operation code comprises a type flag bit, and the type flag bit is used for indicating whether the data operation signal is a broadcast or multicast instruction;
and executing corresponding operation on the data to be operated in the memory according to the data operation signal to obtain the required input data.
In one embodiment, the data operation signal further comprises an operation field, the operation field comprises data receiving flag bits, and the data receiving flag bits are used for identifying the device or processing circuit that is to receive the input data.
In one embodiment, the number of data receiving flag bits corresponds to the number of devices or processing circuits that can interact with the memory.
In one embodiment, the operation field further comprises information of the data to be operated on, including a source address of the data in the memory, the length of the data, and a data return address to be used after the data is operated on. Performing the corresponding operation on the data to be operated on in the memory according to the data operation signal to obtain the required neuron data and/or weight data then includes:
reading the memory starting from the source address to acquire input data satisfying the data length;
determining, according to the data receiving flag bits, the device or processing circuit that is to receive the input data; and
returning the input data, according to the data return address, to the storage space corresponding to that address in the device or processing circuit.
In one embodiment, the apparatus includes at least one machine learning unit, each machine learning unit including a master processing circuit and a plurality of slave processing circuits.
In one embodiment, the operation field further comprises a jump sub-field, and the jump sub-field comprises a jump step size and a data length to be operated on after each jump. Reading the memory starting from the source address to acquire input data satisfying the data length then includes:
reading the memory starting from the source address, and acquiring first jump data according to the post-jump data length;
acquiring the last address of that jump data, and jumping from the last address to a target jump address according to the jump step size; and
starting from the target jump address, acquiring second jump data according to the post-jump data length, and repeating until the total length of the data acquired after the jumps satisfies the data length.
In one embodiment, the jump sub-field comprises a stride field and/or a segment field; the stride field represents the step size of each jump of the data operation signal, and the segment field represents a preset segment size for each operation of the data operation signal.
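As a non-authoritative illustration of the stride-style jump read described above, the following sketch reads fixed-size chunks and skips a fixed number of addresses between them. The function and parameter names are hypothetical, not taken from the patent:

```python
def jump_read(memory, source_addr, total_length, jump_step, jump_data_length):
    """Read `total_length` bytes from `memory` (a bytes-like object),
    taking `jump_data_length` bytes at a time and skipping `jump_step`
    addresses between chunks, in the manner of a stride-style jump
    sub-field (hypothetical sketch, not the patent's implementation)."""
    result = bytearray()
    addr = source_addr
    while len(result) < total_length:
        # Acquire the jump data of the post-jump data length.
        result.extend(memory[addr:addr + jump_data_length])
        # From the last address of this chunk, jump by the jump step size.
        addr = addr + jump_data_length + jump_step
    return bytes(result[:total_length])
```

With a 20-byte memory, a chunk length of 2 and a step of 2, the read gathers bytes 0-1, 4-5, 8-9, matching the "read, jump, read again until the data length is satisfied" sequence in the text.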
In one embodiment, the operation field further comprises a function flag bit for characterizing a processing operation performed on the read data.
In one embodiment, the method further comprises:
if the value of the type flag bit is CAST, determining that the data operation signal is a broadcast or multicast instruction.
In one embodiment, receiving a data operation signal sent by an internal or external device includes:
parsing the data operation signal to obtain its type flag bit and the information of the data to be operated on; and
executing the parsed data operation signal according to an instruction queue, where the instruction queue represents the execution order of data operation signals.
In one embodiment, prior to executing the parsed data operation signal in accordance with the instruction queue, the method further comprises:
determining the dependency relationship between adjacent parsed data operation signals to obtain a determination result, where the dependency relationship indicates whether an association exists between the s-th data operation signal and the (s-1)-th data operation signal that precedes it; and
if the determination result indicates that the s-th and (s-1)-th data operation signals have a dependency relationship, caching the s-th data operation signal and fetching it only after execution of the (s-1)-th data operation signal is completed.
In one embodiment, determining the dependency relationship between adjacent parsed data operation signals includes:
acquiring, according to the s-th data operation signal, a first storage address interval of the data that signal requires, and, according to the (s-1)-th data operation signal, a zeroth storage address interval of the data that signal requires;
if the first storage address interval and the zeroth storage address interval overlap, determining that the s-th and (s-1)-th data operation signals have a dependency relationship; and
if the two intervals do not overlap, determining that the s-th and (s-1)-th data operation signals have no dependency relationship.
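The overlap test above is a standard interval-intersection check. A minimal sketch, assuming each interval is described by a start address and a length (the function name and interval representation are assumptions, not from the patent):

```python
def intervals_overlap(first_start, first_len, zeroth_start, zeroth_len):
    """Return True if the half-open address intervals
    [first_start, first_start + first_len) and
    [zeroth_start, zeroth_start + zeroth_len) share any address,
    i.e. the s-th and (s-1)-th data operation signals would have a
    dependency relationship under the rule in the text."""
    return (first_start < zeroth_start + zeroth_len
            and zeroth_start < first_start + first_len)
```

When this returns True, the s-th signal is cached and executed only after the (s-1)-th signal completes; when it returns False, the two signals can proceed independently.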
In a second aspect, an embodiment of the present invention provides a data processing apparatus, including a processor and a memory, where the memory stores a computer program, and the processor implements the following steps when executing the computer program:
receiving a data operation signal sent by an internal or external device, wherein the data operation signal comprises an operation code, the operation code comprises a type flag bit, and the type flag bit is used for indicating whether the data operation signal is a broadcast or multicast instruction;
and executing corresponding operation on the data to be operated in the memory according to the data operation signal to obtain the required input data.
In a third aspect, an embodiment of the present invention provides a combined processing device, which includes the data processing apparatus described in the second aspect, a universal interconnection interface, and other processing devices; the data processing apparatus interacts with the other processing devices.
In one embodiment, the combined processing device further comprises a storage device, connected to the data processing apparatus and the other processing devices respectively, for storing data of the data processing apparatus and the other processing devices.
In a fourth aspect, an embodiment of the present invention provides a machine learning chip, where the machine learning chip includes the combination processing apparatus as described in the embodiment of the third aspect.
In a fifth aspect, an embodiment of the present invention provides a machine learning chip package structure, where the machine learning chip package structure includes the machine learning chip as described in the fourth aspect.
In a sixth aspect, an embodiment of the present invention provides a board, where the board includes the machine learning chip package structure described in the fifth aspect.
In a seventh aspect, an embodiment of the present invention provides an electronic device, where the electronic device includes the board described in the above sixth aspect.
According to the data processing method, the data processing apparatus, and the related products, the transmission circuit performs the corresponding operation on the data to be operated on in the memory according to the data operation signal sent by the internal or external device, which carries the signal's type flag bit, and thereby obtains the required input data. Because the data operation signal carries its type flag bit, the transmission circuit can determine the type of the signal from the flag bit upon receiving it and then perform the corresponding operation on the data in the memory. Classifying signals by the type flag bit allows the corresponding operation to be located quickly, which simplifies the data access logic, improves data access efficiency, and greatly increases the access speed of the machine learning chip during data access.
Drawings
FIG. 1 is a diagram illustrating an application environment of a data processing method according to an embodiment;
FIG. 2 is a flow diagram illustrating a data processing method, according to an embodiment;
FIG. 3 is a flowchart illustrating a data processing method according to an embodiment;
FIG. 4 is a flowchart illustrating a data processing method according to an embodiment;
FIG. 5 is a flowchart illustrating a data processing method according to an embodiment;
FIG. 6 is a flowchart illustrating a data processing method according to an embodiment;
FIG. 7 is a schematic structural diagram of a combined processing device according to an embodiment;
FIG. 8 is a schematic structural diagram of another combined processing device according to an embodiment;
FIG. 9 is a schematic structural diagram of a board card according to an embodiment.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The data processing method provided by the present application can be applied to the hardware circuit shown in fig. 1. The circuit comprises a machine learning device 11, a transmission circuit 12, and a shared memory 13, where the machine learning device 11 is connected to the transmission circuit 12, and the transmission circuit 12 to the shared memory 13, through interfaces. The machine learning device 11, the transmission circuit 12, the shared memory 13, and the interfaces can all be implemented as hardware circuits. For example, the machine learning device may be a device with computing capability formed by multiple Machine Learning Units (MLUs), the transmission circuit may be a broadcast bus, and the shared memory may be non-volatile and/or volatile memory, including but not limited to Random Access Memory (RAM), cache memory, and the like; this embodiment does not limit the specific hardware forms. The transmission circuit 12 is configured to obtain the input data required by the machine learning device 11 from the shared memory 13 according to a data operation signal sent by the machine learning device 11, and to return that input data to the machine learning device 11. The machine learning device 11 is configured to perform machine learning operations on the input data to obtain output data, and to transmit the output data, as new input data, through the transmission circuit 12 to the shared memory 13 for storage.
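The round trip described above (fetch input data from shared memory, compute, write the output back as new input data) can be sketched as a small software model. This is purely illustrative; the class and method names are assumptions, and the real components are hardware circuits:

```python
class SharedMemory:
    """Hypothetical model of the shared memory 13: a flat byte array."""
    def __init__(self, size):
        self.cells = bytearray(size)

    def read(self, addr, length):
        return bytes(self.cells[addr:addr + length])

    def write(self, addr, data):
        self.cells[addr:addr + len(data)] = data


class TransmissionCircuit:
    """Hypothetical model of the transmission circuit 12, which sits
    between the machine learning device and the shared memory."""
    def __init__(self, shared_memory):
        self.shared_memory = shared_memory

    def fetch(self, source_addr, length):
        # Obtain the input data the machine learning device requested.
        return self.shared_memory.read(source_addr, length)

    def store(self, addr, data):
        # Write the device's output back as new input data.
        self.shared_memory.write(addr, data)
```

A machine learning device would call `fetch` with the addresses carried by its data operation signal, run its operation, and then call `store` to persist the result.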
In order to make the objects, technical solutions, and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein merely illustrate the present application and are not intended to limit it. The data processing method provided by the embodiments of the present application aims to solve the technical problem of improving the access speed of a machine learning chip during data access when there are many data accesses or much shared data. The technical solutions of the present application, and how they solve the above technical problem, are described in detail below through embodiments and with reference to the drawings. The following specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. It should be noted that, in the data processing method provided by the present invention, the execution subject is a transmission circuit; the execution subject may also be a data processing apparatus, which may be implemented, by software, hardware, or a combination of the two, as part or all of a data analysis terminal.
In one embodiment, fig. 2 provides a data processing method. This embodiment relates to a specific process in which the transmission circuit determines the type of the data operation signal according to its type flag bit and obtains the required data from the memory through the operation corresponding to the determined type, thereby improving access speed. As shown in fig. 2, the method includes:
s101, receiving a data operation signal sent by an internal or external device, wherein the data operation signal comprises an operation code, the operation code comprises the type flag bit, and the type flag bit is used for representing the data operation signal broadcast or multicast instruction.
In this embodiment, the transmission circuit receives a data operation signal sent by an internal or external device, where an operation code of the data operation signal is used to indicate an operation type of the data operation signal, and includes a type flag of the data operation signal, where the internal or external device may be a machine learning device connected to the transmission circuit through an interface, and the machine learning device may be implemented in any hardware form, for example, a device with an operation function formed by multiple MLUs. The transmission circuit can determine the type of the data operation signal according to the type flag bit of the data operation signal carried by the data operation signal. For example: if the value of the type flag bit of the data operation signal is 1, the data operation signal is a broadcast or multicast instruction.
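Following the text's example, where a type flag bit value of 1 marks a broadcast or multicast instruction, decoding the flag could look like the sketch below. The function name and the treatment of other flag values are assumptions for illustration:

```python
def decode_signal_type(type_flag):
    """Hypothetical decode of the type flag bit carried in the
    operation code. Per the example in the text, a value of 1 marks a
    broadcast or multicast instruction; any other value is treated
    here as some other operation type (an assumption)."""
    return "broadcast_or_multicast" if type_flag == 1 else "other"
```

The transmission circuit would use this decision to locate the corresponding operation before touching the data in the memory.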
And S102, executing corresponding operation on the data to be operated in the memory according to the data operation signal to obtain the required input data.
Based on the data operation signal received from the internal or external device in step S101, the transmission circuit determines, according to the signal's type flag bit, the corresponding operation to perform on the data to be operated on in the memory, and thereby obtains the required input data, for example neuron data and weight data. These are data required by the internal or external device; for instance, when the internal or external device is a machine learning apparatus, the neuron data and weight data are the inputs the machine learning apparatus needs to perform a machine learning operation. The data may be stored in the memory in advance, or may be output by the machine learning device after performing a machine learning operation; this embodiment does not limit this.
In the data processing method provided by this embodiment, the transmission circuit performs the corresponding operation on the data to be operated on in the memory according to the data operation signal sent by the internal or external device, which carries the signal's type flag bit, and thereby obtains the required input data. Because the data operation signal carries its type flag bit, the transmission circuit can determine the type of the signal from the flag bit upon receiving it and then perform the corresponding operation on the data in the memory. Classifying signals by the type flag bit allows the corresponding operation to be located quickly, which simplifies the data access logic, improves data access efficiency, and greatly increases the access speed of the machine learning chip during data access.
The following embodiments describe the operation code and the operation field, and their relationship with the type flag bit of the data operation signal, the information of the data to be operated on, and the data receiving flag bits.
In one embodiment, the data operation signal further comprises an operation field, the operation field comprises data receiving flag bits, and the data receiving flag bits identify the device or processing circuit that is to receive the input data. Optionally, the number of data receiving flag bits corresponds to the number of devices or processing circuits capable of interacting with the memory. Optionally, if the value of the type flag bit is CAST, the data operation signal is determined to be a broadcast or multicast instruction.
In this embodiment, the operation code of the data operation signal indicates the operation type of the signal and includes its type flag bit; for example, a type flag bit of CAST in the operation code indicates that the data operation signal is a broadcast or multicast instruction. The operation field stores the data information the signal needs during execution, and may include data receiving flag bits identifying the devices or processing circuits in the internal or external device that can receive input data. The device may be a machine learning device or an MLU, and the processing circuit may be an arithmetic unit, or a master or slave processing circuit of an arithmetic unit; this embodiment does not limit this. For example, if three MLUs (machine learning units) are marked 1 in the data receiving flag bits of the operation field, those three MLUs can receive data, and if one MLU is marked 0, that MLU cannot receive data. Note that marking an MLU capable of receiving data with 1 is only an example; a user may instead mark it with 0 or another identifier according to actual needs, and this embodiment does not limit this.
In this embodiment, according to the type flag bit of the data operation signal, the transmission circuit can determine the signal's type and locate the corresponding operation, and can determine from the data receiving flag bits the target devices to which data is sent after the operation completes. This simplifies the data access logic, improves data access efficiency, and greatly increases the access speed of the machine learning chip during data access.
In another embodiment, the operation field further includes information of the data to be operated on: a source address of the data in the memory, the length of the data, and a data return address to be used after the data is operated on. As shown in fig. 3, a data processing method is provided; this embodiment relates to a specific process in which the transmission circuit reads data in the memory according to the data information carried by the data operation signal and then returns the read data to a device or processing circuit according to that information. S102 includes:
S201, starting to read the memory from the source address, and acquiring input data that satisfies the data length.
In this embodiment, because the information of the data to be operated on carries the source address of the data in the memory, the length of the data, and the data return address, the transmission circuit starts reading data from the source address in the memory and, according to a preset rule, reads until the required data length is satisfied. The data length is set by the user according to the actual situation, which this embodiment does not limit. The preset rule is likewise formulated by the user according to the actual situation and is not limited by this embodiment; for example, the transmission circuit may read data one unit at a time starting from the source address until the read data satisfies the data length.
And S202, determining a device or a processing circuit for receiving the input data according to the data receiving flag bit.
Based on the input data satisfying the data length acquired by the transmission circuit in step S201, the transmission circuit determines the device or processing circuit to which the data is returned according to the data receiving flag bits in the data operation signal. For example, when the device is a machine learning device, the transmission circuit determines, according to the data receiving flag bits, that the data is returned to one or more target machine learning units in the machine learning device.
And S203, returning the input data to the storage space corresponding to the data return address in the device or the processing circuit according to the data return address.
In this step, based on the device or processing circuit determined in the above step, the transmission circuit returns the input data to the storage space corresponding to the data return address in the device or the processing circuit, according to the data return address in the information of the data to be operated on. The data return address in the information of the data to be operated on may be an address in each of a plurality of target machine learning units of the machine learning device.
For example, as shown in table 1 below, on the basis of the above embodiments, the present embodiment may be exemplified as follows. The type flag bit of the data operation signal in the operation code is CAST, indicating that the data operation signal is a broadcast or multicast instruction; the information of the data to be operated on in the operation field includes a source address 0x110011, a destination address 0x000100, and a data length 0x0100, where the data length is set by the user, who may set it to one value or multiple values. In the data receiving flag bits of the operation field, three MLUs are marked as 1, indicating that those three MLUs can receive data, and one MLU is marked as 0, indicating that it cannot. Specifically, the transmission circuit reads data of length 0x0100 starting from address 0x110011 in the shared memory according to the data operation signal, and then writes the data to address 0x000100 in each of MLU3, MLU1, and MLU0 in the machine learning device.
TABLE 1

Type flag bit (operation code): CAST
Source address: 0x110011
Destination address: 0x000100
Data length: 0x0100
Data receiving flag bits: MLU3 = 1, MLU2 = 0, MLU1 = 1, MLU0 = 1
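The CAST example above can be simulated end to end. This is a minimal sketch under stated assumptions: the dict-based shared memory and per-MLU memories are illustrative stand-ins for the real hardware, and the dummy byte contents are invented:

```python
# Minimal simulation of the CAST example: read 0x0100 bytes starting at source
# address 0x110011 in a shared memory, then write them to destination address
# 0x000100 in each MLU whose receive flag is set (MLU3, MLU1, MLU0; not MLU2).
SRC, DST, LENGTH = 0x110011, 0x000100, 0x0100

shared_memory = {SRC + i: i % 256 for i in range(LENGTH)}   # dummy contents
mlus = [dict() for _ in range(4)]                            # MLU0..MLU3
recv_flags = [1, 1, 0, 1]                                    # MLU2 cannot receive

data = [shared_memory[SRC + i] for i in range(LENGTH)]       # read phase
for mlu_id, flag in enumerate(recv_flags):                   # broadcast phase
    if flag:
        for i, byte in enumerate(data):
            mlus[mlu_id][DST + i] = byte

print(len(mlus[3]))  # 256 bytes (0x0100) landed in MLU3
print(len(mlus[2]))  # 0 -- MLU2 was flagged 0
```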
In the data processing method provided by this embodiment, the transmission circuit reads the memory starting from the source address according to the data operation signal to obtain input data satisfying the data length, determines the device or processing circuit receiving the input data according to the data receiving flag bit, and then returns the input data to the storage space corresponding to the data return address in that device or processing circuit.
Optionally, in the embodiment shown in fig. 3 above, the device includes at least one machine learning unit, and each machine learning unit includes a master processing circuit and a plurality of slave processing circuits. The data signal operations performed by the at least one machine learning unit (i.e., MLU) included in the machine learning device may share one data receiving interface, and the machine learning unit may be connected to the transmission circuit through a sending interface or the shared data receiving interface. It should be noted that both the sending interface and the shared data receiving interface may be implemented by hardware circuits; their types are not limited in this embodiment. Within each machine learning unit, the master processing circuit is configured to distribute input data to the plurality of slave processing circuits; the slave processing circuits are configured to execute intermediate operations in parallel according to the input data (for example, neuron data) transmitted by the master processing circuit to obtain a plurality of intermediate results, and to transmit the plurality of intermediate results to the master processing circuit. The device can thereby distribute the neurons across the machine learning units for processing and output the corresponding output neuron data, performing parallel computation of one layer of the neural network after another; this realizes parallel processing of neural network computation and improves processing efficiency.
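The master/slave split described above can be sketched as follows. This is a toy illustration, not the patent's circuit: the "intermediate operation" (summing a slice) and the thread-pool parallelism are placeholders for whatever the slave processing circuits actually compute:

```python
# Toy sketch of the master/slave split: the master processing circuit
# distributes slices of the input neuron data to the slave circuits, each
# slave produces an intermediate result in parallel, and the master combines
# the intermediate results. Summing a slice is an illustrative placeholder.
from concurrent.futures import ThreadPoolExecutor

def slave_op(slice_):                      # intermediate operation on one slice
    return sum(slice_)

def machine_learning_unit(input_data, n_slaves=4):
    chunk = max(1, len(input_data) // n_slaves)
    slices = [input_data[i:i + chunk] for i in range(0, len(input_data), chunk)]
    with ThreadPoolExecutor(max_workers=n_slaves) as pool:   # parallel slaves
        intermediates = list(pool.map(slave_op, slices))
    return sum(intermediates)              # master aggregates intermediate results

print(machine_learning_unit(list(range(16))))  # 120
```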
On the basis of the above embodiments, the operation field further includes a jump sub-operation field, and the jump sub-operation field includes a jump step length and a jump data length to be operated on after each jump. As shown in fig. 4, a data processing method is provided; this embodiment relates to a specific process in which the transmission circuit reads data in the memory according to the jump sub-operation field in the operation field. The above S201 includes:
S301, reading the memory starting from the source address, and acquiring first jump data according to the jump data length after the current jump.
In this embodiment, the operation field of the data operation signal includes a jump sub-operation field, which instructs the transmission circuit, when reading data according to the data operation signal, to read the information of the data to be operated on according to the rule of the sub-operation field. Optionally, the jump sub-operation field includes a stride operation field and/or a segment operation field, where the stride operation field characterizes the jump step length of the data operation signal for each jump, and the segment operation field characterizes the preset segment size read by the data operation signal in each pass; the lengths and names of the stride operation field and the segment operation field are given only as examples and are not limited by the embodiments of the present application. The jump sub-operation field includes the jump step length and the jump data length to be operated on after each jump, where the jump data length may be a preset data length. Specifically, the transmission circuit reads the memory starting from the source address in the information of the data to be operated on, and after the current jump, takes the data of the read jump data length as the first jump data. The first jump data represents the data obtained after the transmission circuit jumps over data of a preset length during reading, where the preset length is set by the user according to the actual situation and is not limited by this embodiment.
S302, acquiring the last address of the first jump data, and jumping from the last address to a target jump address according to the jump step length.
Based on the first jump data read in step S301, the transmission circuit acquires the last address of the first jump data and jumps from that last address to the target jump address according to the jump step length (for example, the stride length) in the jump sub-operation field. It can be understood that the distance between the last address of the first jump data and the target jump address is the jump step length in the jump sub-operation field.
And S303, starting from the target jump address, acquiring second jump data according to the jump data length after the jump, until the length of the jump data obtained after each jump satisfies the data length.
In this step, when reading data, the transmission circuit starts from the target jump address determined in step S302 and takes the data of the preset length read after the jump as the second jump data. If the length between the address of the second jump data and the source address at which jumping started satisfies the data length required by the machine learning device, the reading of the required data is complete. If it does not, the transmission circuit continues jumping and reading from the last address of the second jump data according to the jump sequence of steps S301 to S303, until the length between the address of the latest jump data and the source address at which jumping started satisfies the required data length, which means that the machine learning device has finished reading the required data.
Illustratively, as shown in table 3 below, the process by which the transmission circuit reads data in this embodiment is as follows. If the operation field includes a stride operation field in the jump sub-operation field, the transmission circuit reads data in the shared memory starting from the source address 0x110011 in the data information, reads data of a preset length (the preset length is smaller than the data length 0x0100 in the data information in the table below), jumps over an address range of the stride length (0x0008), reads data of the preset length again, and continues in this sequence until the total length of the read data equals the data length 0x0100 in table 3 below, which indicates that the data has been completely read. If the operation field includes a segment operation field in the jump sub-operation field, the transmission circuit reads data in the shared memory starting from the source address 0x110011, reads data of the segment length (0x0010), jumps over an address range of the stride length (0x0008), reads data of the segment length (0x0010) again, and continues in this sequence until the total length of the read data equals the data length 0x0100 in table 3 below, which indicates that the data has been completely read. When the jump sub-operation field has only a segment operation field and no stride operation field, the transmission circuit reads data of the segment length (0x0010) starting from the source address 0x110011 until the total length of the read data equals the data length 0x0100 in the data information in table 3 below, which indicates that the data has been completely read.
TABLE 3

Source address: 0x110011
Data length: 0x0100
stride: 0x0008
segment: 0x0010
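The segment/stride walkthrough above follows a simple pattern: read `segment` bytes, skip `stride` bytes, repeat until `length` bytes have been collected. A minimal sketch, assuming byte-granular addressing and a dict standing in for the shared memory:

```python
# Sketch of the stride/segment read pattern in the Table 3 walkthrough: read
# `segment` bytes, jump over `stride` bytes, and repeat until `length` bytes
# have been collected. Addresses are plain integers; the memory is a dummy dict.
def strided_read(memory, src, length, segment, stride):
    addrs, addr = [], src
    while len(addrs) < length:
        take = min(segment, length - len(addrs))
        addrs.extend(range(addr, addr + take))    # read one segment
        addr += take + stride                      # jump over `stride` bytes
    return [memory[a] for a in addrs]

mem = {a: a & 0xFF for a in range(0x110011, 0x120000)}       # dummy contents
data = strided_read(mem, src=0x110011, length=0x0100,
                    segment=0x0010, stride=0x0008)
print(len(data))  # 256 (0x0100), as required by the data length field
```

With `stride = 0`, the same routine reproduces the segment-only case described last in the walkthrough.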
In the data processing method provided by this embodiment, the transmission circuit reads the shared memory starting from the source address, acquires the first jump data according to the jump data length after the current jump, jumps from the last address of the first jump data to the target jump address according to the jump step length, and then, starting from the target jump address, acquires the second jump data according to the jump data length after the jump, until the length of the jump data obtained after each jump satisfies the data length. In this way, when the operation field includes the jump sub-operation field, the transmission circuit reads data according to the jump rule of the sub-operation field, which simplifies the read logic of the transmission circuit, improves data access efficiency, and greatly increases the access speed of the machine learning chip during data access.
Since the data operation signal received by the transmission circuit is an encoded instruction, it must first be decoded and parsed before the transmission circuit can operate according to it. An embodiment of the present application therefore provides a data processing method in which, as shown in fig. 5, the receiving, by the transmission circuit in the data processing device, of the data operation signal sent by the machine learning device in the data processing device includes:
S401, parsing the data operation signal to obtain the type flag bit of the data operation signal and the information of the data to be operated on.
It should be noted that the number of data operation signals is generally large during data processing, and while one data operation signal is being processed by the transmission circuit, the other data operation signals need to be stored in the transmission circuit. The data operation information may include information such as the length of the data to be operated on, the target address, and the source address, which is not limited in this embodiment.
S402, executing the parsed data operation signal according to an instruction queue; the instruction queue is used for representing the execution order of the data operation signals.
It should be understood that the data operation signals must be completed sequentially during execution. Based on the data operation information and the type flag bit obtained after the transmission circuit parses the data operation signal in step S401 above, the transmission circuit executes the parsed data operation signals according to the instruction queue.
In the data processing method provided by this embodiment, the transmission circuit parses the data operation signal to obtain the type flag bit of the data operation signal and the information of the data to be operated on, and then executes the parsed data operation signals according to the instruction queue. Because each data operation signal is parsed first and then executed in order, the speed at which the transmission circuit executes operations according to the data operation signals is greatly increased.
Considering that the transmission circuit may need to execute data operation signals that are associated with each other when executing them in queue order, this embodiment of the present application provides another embodiment. As shown in fig. 6, before the transmission circuit executes the parsed data operation signals according to the instruction queue, the method further includes:
S501, judging the dependency relationship between adjacent parsed data operation signals to obtain a judgment result; the dependency relationship represents whether the s-1-th data operation signal preceding the s-th data operation signal is associated with the s-th data operation signal.
The transmission circuit needs to judge the dependency relationship between adjacent parsed data operation signals and determine from the judgment result whether the two adjacent data operation signals being processed are associated. Here, the s-th data operation signal represents any one of the data operation signals rather than a specific signal, and the s-1-th data operation signal represents the signal immediately preceding the s-th data operation signal.
Optionally, one way in which the transmission circuit judges the dependency relationship between adjacent parsed data operation signals may be implemented as follows: acquiring, according to the s-th data operation signal, a first storage address interval of the data required by the s-th data operation signal, and acquiring, according to the s-1-th data operation signal, a zeroth storage address interval of the data required by the s-1-th data operation signal. If the first storage address interval and the zeroth storage address interval have an overlapping region, it is determined that the s-th data operation signal and the s-1-th data operation signal have a dependency relationship; if the first storage address interval and the zeroth storage address interval have no overlapping region, it is determined that the s-th data operation signal and the s-1-th data operation signal have no dependency relationship. In other words, the transmission circuit judges the dependency relationship between adjacent parsed data operation signals according to the relationship between the first storage address interval of the s-th data operation signal and the zeroth storage address interval of the s-1-th data operation signal: no overlap means no dependency, and overlap means dependency.
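The overlap test above is a standard interval-intersection check. A minimal sketch, assuming address intervals are modelled as half-open `(start, end)` pairs:

```python
# Sketch of the dependency test described above: the s-th signal depends on
# the (s-1)-th signal exactly when their storage address intervals overlap.
# Intervals are (start, end) pairs with `end` exclusive.
def has_dependency(first_interval, zeroth_interval):
    s_start, s_end = first_interval       # addresses used by the s-th signal
    p_start, p_end = zeroth_interval      # addresses used by the (s-1)-th signal
    return s_start < p_end and p_start < s_end   # standard interval overlap

print(has_dependency((0x100, 0x200), (0x180, 0x280)))  # True  -> must wait
print(has_dependency((0x100, 0x200), (0x200, 0x300)))  # False -> may proceed
```

When `has_dependency` is true, the s-th signal is cached and executed only after the s-1-th signal completes, as step S502 below describes.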
S502, if the judgment result shows that the s-th data operation signal and the s-1-th data operation signal have a dependency relationship, caching the s-th data operation signal, and extracting the s-th data operation signal after the s-1-th data operation signal has been executed.
Based on the dependency relationship between the two adjacent data operation signals judged by the transmission circuit in the above steps, the data operation signals are executed in order. If the judgment result shows that the s-th data operation signal and the s-1-th data operation signal have a dependency relationship, the transmission circuit first caches the s-th data operation signal and extracts it only after the s-1-th data operation signal has been executed.
In the data processing method provided by this embodiment, by judging the association between two adjacent data operation signals, the transmission circuit can ensure the continuity of the data operation signals. This orderly preparation ensures that the corresponding operations are later executed smoothly according to the data operation signals, improves data access efficiency, and greatly increases the access speed of the machine learning chip during data access.
Considering that the data read by the transmission circuit according to the data operation signal may not be in the format required by the machine learning device, the transmission circuit is required to perform certain processing on the read data before transmitting the processed data to the machine learning device. The operation field of the data operation signal includes a functional flag bit indicating that the transmission circuit needs to process the read data accordingly; the number of functional flag bits included in the operation field may be one or more, which is not limited in this embodiment. For example, if the functional flag bit is a decompression flag bit and the flag is 1, then after the data is read, the transmission circuit decompresses the data and transmits it to the designated MLU in the machine learning device; if an encryption flag bit is 1, then after the data is read, the transmission circuit needs to decrypt the data before transmitting it to the designated MLU in the machine learning device. In this embodiment, because the transmission circuit can first process the read data according to the functional flag bit in the operation field of the data operation signal and then transmit the data to the machine learning device, the machine learning device can immediately recognize and operate on the data upon receiving it, which improves data processing efficiency and greatly increases the access speed of the machine learning chip during data access.
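The functional-flag dispatch above can be sketched as follows. This is an illustrative stand-in only: `zlib` substitutes for whatever codec the real hardware would use, the XOR "decryption" is a toy, and the flag and parameter names are assumptions:

```python
# Illustrative dispatch on the functional flag bits described above: a set
# decompression or encryption flag makes the transmission circuit transform
# the read data before forwarding it to the target MLU. zlib stands in for
# the real codec; the XOR cipher and all names here are assumptions.
import zlib

def process_read_data(raw, decompress_flag=0, decrypt_flag=0, key=0x5A):
    data = raw
    if decompress_flag == 1:
        data = zlib.decompress(data)               # undo compression
    if decrypt_flag == 1:
        data = bytes(b ^ key for b in data)        # toy XOR "decryption"
    return data                                    # now in the MLU's format

payload = zlib.compress(b"neuron data")
print(process_read_data(payload, decompress_flag=1))  # b'neuron data'
```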
It should be understood that although the various steps in the flow charts of fig. 2-6 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in fig. 2-6 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and the order of their performance is not necessarily sequential; they may be performed in turn or in alternation with other steps or with at least some of the sub-steps or stages of other steps.
In an embodiment, the present application further provides a data processing apparatus, including a processor and a memory, where the memory stores a computer program, and the processor implements the following steps when executing the computer program:
receiving a data operation signal sent by an internal or external device, wherein the data operation signal comprises an operation code, the operation code comprises a type flag bit, and the type flag bit is used for characterizing whether the data operation signal is a broadcast or multicast instruction;
and executing corresponding operation on the data to be operated in the memory according to the data operation signal to obtain the required input data.
The implementation principle and technical effect of the data processing apparatus provided in this embodiment are similar to those of the data processing method described above, and are not described herein again.
Referring to fig. 7, an embodiment of the present application further provides a combined processing device, which includes the above data processing device, a universal interconnect interface, and other processing devices; the data processing device interacts with the other processing devices to jointly complete the computing operation specified by the user. The other processing devices include one or more types of general-purpose/special-purpose processors such as a central processing unit (CPU), a graphics processing unit (GPU), and a neural network processor; the number of processors included in the other processing devices is not limited. The other processing devices serve as the interface between the data processing device and external data and control, including data transfer, and complete basic control of the data processing device such as starting and stopping; the other processing devices may also cooperate with the data processing device to complete computing tasks. The universal interconnect interface is used for transmitting data and control instructions between the data processing device and the other processing devices. The data processing device acquires required input data from the other processing devices and writes it into the shared memory on the data processing device chip; the machine learning device can acquire control instructions from the other processing devices and write them into the data processing device chip; and the data in the shared memory of the data processing device may also be read and transmitted to the other processing devices.
Optionally, as shown in fig. 8, the combined processing device may further include a storage device connected to the data processing device and the other processing devices respectively. The storage device is used for storing data of the data processing device and the other processing devices, and is particularly suitable for data that cannot be entirely held in the internal storage of the data processing device or the other processing devices.
The combined processing device can serve as the SoC (system on chip) of equipment such as a mobile phone, a robot, an unmanned aerial vehicle, or video monitoring equipment, effectively reducing the core area of the control part, increasing the processing speed, and reducing the overall power consumption. In this case, the universal interconnect interface of the combined processing device is connected to certain components of the equipment, such as a camera, a display, a mouse, a keyboard, a network card, or a wifi interface.
In one embodiment, the present application further provides a machine learning chip, which includes the above data processing device and/or the above combined processing device.
In an embodiment, an embodiment of the present application further provides a chip packaging structure, which includes the above chip.
In an embodiment, the embodiment of the present application further provides a board card, which includes the above chip packaging structure. Referring to fig. 9, the board card may include other components besides the chip package structure 81, including but not limited to: a memory device 82, an interface device 83, and a control device 84. The memory device 82 is connected to the machine learning chip 811 in the chip package structure 81 through a bus and is used for storing data; the memory device 82 may include a plurality of groups of memory units 821. Each group of memory units 821 is connected to the machine learning chip 811 by a bus. It can be understood that each group of memory units 821 may be DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory).
DDR can double the speed of SDRAM without increasing the clock frequency: DDR allows data to be read on both the rising and falling edges of the clock pulse, making DDR twice as fast as standard SDRAM. In one embodiment, the memory device may include 4 groups of memory units, and each group may include a plurality of DDR4 particles (chips). In one embodiment, the machine learning chip may internally include four 72-bit DDR4 controllers, where 64 bits of each 72-bit DDR4 controller are used for data transmission and 8 bits are used for ECC checking. It can be understood that when DDR4-3200 particles are adopted in each group of memory units, the theoretical bandwidth of data transmission can reach 25600 MB/s. In one embodiment, each group of memory units includes a plurality of double data rate synchronous dynamic random access memories arranged in parallel; DDR can transfer data twice in one clock cycle. A controller for controlling the DDR is arranged in the chip to control the data transmission and data storage of each memory unit.
The interface device 83 is electrically connected to the machine learning chip 811 in the chip package structure 81 and is used for data transmission between the machine learning chip 811 and an external device (such as a server or a computer). For example, in one embodiment, the interface device 83 may be a standard PCIE (peripheral component interconnect express) interface: data to be processed is transmitted by the server to the machine learning chip through the standard PCIE interface, thereby implementing data transfer. Preferably, when a PCIE 3.0 X16 interface is adopted for transmission, the theoretical bandwidth can reach 16000 MB/s. In another embodiment, the interface device 83 may also be another interface; the embodiments of the present application do not limit the concrete form of such other interfaces, as long as the interface device can implement the transfer function. In addition, the calculation result of the machine learning chip 811 is transmitted back to the external device (e.g., the server) by the interface device 83.
The control device 84 is electrically connected to the machine learning chip 811 and is used to monitor the state of the chip. Specifically, the machine learning chip 811 and the control device 84 may be electrically connected through an SPI (Serial Peripheral Interface) interface. The control device may include a single-chip microcomputer (MCU). Since the machine learning chip may include a plurality of data processing devices and/or combined processing devices, it may drive a plurality of loads and can therefore be in different working states such as multi-load and light-load. The control device 84 can be used to regulate the working states of the plurality of data processing devices and/or combined processing devices in the machine learning chip.
In some embodiments, an electronic device is provided that includes the above board card. The electronic device may be a data processing device, a robot, a computer, a printer, a scanner, a tablet computer, an intelligent terminal, a mobile phone, a vehicle data recorder, a navigator, a sensor, a camera, a server, a cloud server, a video camera, a projector, a watch, an earphone, a mobile storage, a wearable device, a vehicle, a household appliance, and/or a medical device. The vehicle includes an airplane, a ship, and/or a car; the household appliances include a television, an air conditioner, a microwave oven, a refrigerator, an electric cooker, a humidifier, a washing machine, an electric lamp, a gas stove, and a range hood; the medical device includes a nuclear magnetic resonance apparatus, a B-ultrasonic apparatus, and/or an electrocardiograph.
Those skilled in the art should also appreciate that the embodiments described in this specification are all optional embodiments and that the actions and modules involved are not necessarily required by this application. In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the part of the technical solution of the present application that contributes substantially to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other various media capable of storing program codes.
It will be understood by those skilled in the art that all or part of the processing of the above embodiments may be implemented by a program to instruct associated hardware, and the program may be stored in a computer readable memory, and the memory may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (17)

1. A method of data processing, the method comprising:
receiving a data operation signal sent by an internal or external device, wherein the data operation signal comprises an operation code, the operation code comprises a type flag bit, and the type flag bit is used for indicating whether the data operation signal is a broadcast or multicast instruction;
and executing a corresponding operation on data to be operated in a memory according to the data operation signal to obtain required input data.
2. The method of claim 1, wherein the data operation signal further comprises an operation field, the operation field comprises data reception flag bits, and the data reception flag bits are used for indicating a device or a processing circuit that receives the input data.
3. The method of claim 2, wherein the number of data reception flag bits indicates the number of devices or processing circuits that can interact with the memory.
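Claims 2-3 describe data reception flag bits whose count matches the number of devices or processing circuits that can interact with the memory. The following is an illustrative sketch only, not part of the claims: it assumes one flag bit per target unit, and all names and the bit layout are hypothetical.

```python
# Hypothetical sketch: one data reception flag bit per device/processing circuit.
# A set bit i means unit i should receive the input data (multicast);
# all bits set corresponds to a broadcast.

NUM_UNITS = 4  # assumed number of units that can interact with the memory

def targets_from_flags(recv_flags: int) -> list[int]:
    """Decode the data reception flag bits into a list of target unit ids."""
    return [i for i in range(NUM_UNITS) if recv_flags & (1 << i)]

# 0b0101 -> units 0 and 2 receive the data
assert targets_from_flags(0b0101) == [0, 2]
# all bits set -> every unit receives the data (broadcast)
assert targets_from_flags(0b1111) == [0, 1, 2, 3]
```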
4. The method according to any one of claims 1-3, wherein the operation field further comprises information of the data to be operated; the information of the data to be operated comprises a source address of the data to be operated in the memory, a length of the data to be operated, and a data return address to which the data is returned after being operated; and the executing a corresponding operation on the data to be operated in the memory according to the data operation signal to obtain the required input data comprises:
reading the memory starting from the source address, and acquiring input data satisfying the data length;
determining the device or processing circuit that receives the input data according to the data reception flag bits;
and returning the input data to a storage space corresponding to the data return address in the device or the processing circuit according to the data return address.
5. The method of claim 4, wherein the device comprises at least one machine learning unit, and each machine learning unit comprises a master processing circuit and a plurality of slave processing circuits.
6. The method of claim 5, wherein the operation field further comprises a jump sub operation field, the jump sub operation field comprising a jump step size and a jump data length operated after each jump; and the reading the memory starting from the source address to acquire the input data satisfying the data length comprises:
reading the memory starting from the source address, and acquiring first jump data according to the jump data length after a current jump;
acquiring a last address of the first jump data, and jumping from the last address to a target jump address according to the jump step size;
and starting from the target jump address, acquiring second jump data according to the jump data length after the jump, until the total length of the jump data acquired after the jumps satisfies the data length.
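The jump-read procedure recited in claim 6 can be sketched as follows. This is an illustrative model only, assuming a flat byte-addressable memory; the function and parameter names are hypothetical and not taken from the patent.

```python
def jump_read(memory: bytes, source_addr: int, data_length: int,
              jump_stride: int, jump_block_len: int) -> bytes:
    """Read `data_length` bytes starting at `source_addr`, taking
    `jump_block_len` bytes at a time and then skipping `jump_stride`
    bytes past the end of each block (claim 6's jump sub operation field)."""
    out = bytearray()
    addr = source_addr
    while len(out) < data_length:
        take = min(jump_block_len, data_length - len(out))
        out += memory[addr:addr + take]   # read one jump block
        addr += take + jump_stride        # jump from the block's last address
    return bytes(out)

mem = bytes(range(32))
# blocks of 4 bytes, skipping 4 bytes between blocks: reads 0-3, 8-11, 16-19
assert jump_read(mem, 0, 12, 4, 4) == bytes(
    [0, 1, 2, 3, 8, 9, 10, 11, 16, 17, 18, 19])
```

A stride of 0 degenerates into an ordinary contiguous read, which matches claim 4's plain read path.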
7. The method of claim 6, wherein the jump sub operation field comprises a stride operation field and/or a segment operation field; the stride operation field is used for indicating the jump step size of the data operation signal each time; and the segment operation field is used for indicating a preset segment size of the data operation signal each time.
8. The method of claim 7, wherein the operation field further comprises a function flag bit used for indicating a processing operation to be performed on the read data.
9. The method of claim 8, further comprising:
and if the value of the type flag bit is CAST, determining that the data operation signal is a broadcast or multicast instruction.
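The type-flag check of claim 9 amounts to a simple decode step. A minimal sketch follows; the string representation of the flag is an assumption (only the CAST value appears in the claims, other values are hypothetical).

```python
def is_broadcast_or_multicast(type_flag: str) -> bool:
    """Claim 9: a type flag bit with the value CAST marks the data
    operation signal as a broadcast or multicast instruction."""
    return type_flag == "CAST"

assert is_broadcast_or_multicast("CAST")
assert not is_broadcast_or_multicast("LOAD")  # hypothetical non-cast type
```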
10. The method of claim 9, wherein the receiving the data operation signal transmitted by the internal or external device comprises:
parsing the data operation signal to obtain the type flag bit of the data operation signal and the information of the data to be operated;
and executing the parsed data operation signal according to an instruction queue, wherein the instruction queue is used for representing an execution order of the data operation signals.
11. The method of claim 10, wherein, before the executing the parsed data operation signal according to the instruction queue, the method further comprises:
determining a dependency relationship between adjacent parsed data operation signals to obtain a determination result, wherein the dependency relationship represents whether an association exists between an s-th data operation signal and an (s-1)-th data operation signal preceding it;
and if the determination result indicates that a dependency relationship exists between the s-th data operation signal and the (s-1)-th data operation signal, caching the s-th data operation signal, and extracting the s-th data operation signal after execution of the (s-1)-th data operation signal is completed.
12. The method of claim 11, wherein the determining the dependency relationship between the adjacent parsed data operation signals comprises:
acquiring, according to the s-th data operation signal, a first storage address interval of the data required by the s-th data operation signal, and acquiring, according to the (s-1)-th data operation signal, a zeroth storage address interval of the data required by the (s-1)-th data operation signal, respectively;
if the first storage address interval and the zeroth storage address interval have an overlapping area, determining that a dependency relationship exists between the s-th data operation signal and the (s-1)-th data operation signal;
and if the first storage address interval and the zeroth storage address interval have no overlapping area, determining that no dependency relationship exists between the s-th data operation signal and the (s-1)-th data operation signal.
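The interval-overlap test of claim 12 can be sketched as follows. This is an illustrative model only; it assumes storage address intervals are half-open byte ranges, and the names are hypothetical.

```python
def has_dependency(first_interval: tuple[int, int],
                   zeroth_interval: tuple[int, int]) -> bool:
    """Claim 12: the s-th signal depends on the (s-1)-th signal iff the
    first storage address interval overlaps the zeroth storage address
    interval. Intervals are modeled as half-open [start, end)."""
    s_start, s_end = first_interval
    p_start, p_end = zeroth_interval
    return s_start < p_end and p_start < s_end

# overlapping reads -> dependent: cache signal s until s-1 completes
assert has_dependency((100, 200), (150, 250))
# disjoint reads -> independent: signal s may issue from the queue freely
assert not has_dependency((0, 100), (100, 200))
```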
13. A data processing apparatus comprising a processor and a memory, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 12.
14. A combined processing apparatus, characterized in that it comprises the data processing apparatus according to claim 13, a universal interconnection interface, and other processing devices; the data processing apparatus interacts with the other processing devices.
15. A machine learning chip, characterized in that it comprises a combined processing device according to claim 14.
16. A board comprising the machine learning chip of claim 15.
17. An electronic device, characterized in that it comprises the board according to claim 16.
CN201811392262.1A 2018-10-18 2018-11-21 Data processing method and device and related products Active CN111210012B (en)

Priority Applications (25)

Application Number Priority Date Filing Date Title
CN201811392262.1A CN111210012B (en) 2018-11-21 2018-11-21 Data processing method and device and related products
EP21217802.4A EP4009185A1 (en) 2018-10-18 2019-10-18 Network-on-chip data processing method and device
JP2020569113A JP7060720B2 (en) 2018-10-18 2019-10-18 Network-on-chip data processing methods and equipment
PCT/CN2019/111977 WO2020078470A1 (en) 2018-10-18 2019-10-18 Network-on-chip data processing method and device
KR1020207033053A KR20200139829A (en) 2018-10-18 2019-10-18 Network on-chip data processing method and device
KR1020207034145A KR102539574B1 (en) 2018-11-21 2019-10-18 Network-on-chip data processing method and device
EP21217811.5A EP4009184A1 (en) 2018-10-18 2019-10-18 Network-on-chip data processing method and device
EP21217804.0A EP4009186A1 (en) 2018-10-18 2019-10-18 Network-on-chip data processing method and device
US17/278,812 US20220035762A1 (en) 2018-10-18 2019-10-18 Network-on-chip data processing method and device
EP19873122.6A EP3869352A4 (en) 2018-10-18 2019-10-18 Network-on-chip data processing method and device
EP21217809.9A EP4009183A1 (en) 2018-10-18 2019-10-18 Network-on-chip data processing method and device
JP2020206272A JP7053775B2 (en) 2018-10-18 2020-12-11 Network-on-chip data processing methods and equipment
JP2020206293A JP7074832B2 (en) 2018-10-18 2020-12-11 Network-on-chip data processing methods and equipment
JP2020206281A JP7074831B2 (en) 2018-10-18 2020-12-11 Network-on-chip data processing methods and equipment
JP2020206306A JP7074833B2 (en) 2018-10-18 2020-12-11 Network-on-chip data processing methods and equipment
US17/564,431 US11880329B2 (en) 2018-10-18 2021-12-29 Arbitration based machine learning data processor
US17/564,366 US11971836B2 (en) 2018-10-18 2021-12-29 Network-on-chip data processing method and device
US17/564,411 US11809360B2 (en) 2018-10-18 2021-12-29 Network-on-chip data processing method and device
US17/564,398 US11880328B2 (en) 2018-10-18 2021-12-29 Network-on-chip data processing method and device
US17/564,389 US11841816B2 (en) 2018-10-18 2021-12-29 Network-on-chip data processing method and device
US17/564,509 US11797467B2 (en) 2018-10-18 2021-12-29 Data processing device with transmission circuit
US17/564,579 US11960431B2 (en) 2018-10-18 2021-12-29 Network-on-chip data processing method and device
US17/564,492 US11880330B2 (en) 2018-10-18 2021-12-29 Network-on-chip data processing method and device
US17/564,560 US12061564B2 (en) 2018-10-18 2021-12-29 Network-on-chip data processing based on operation field and opcode
US17/564,529 US11868299B2 (en) 2018-10-18 2021-12-29 Network-on-chip data processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811392262.1A CN111210012B (en) 2018-11-21 2018-11-21 Data processing method and device and related products

Publications (2)

Publication Number Publication Date
CN111210012A true CN111210012A (en) 2020-05-29
CN111210012B CN111210012B (en) 2022-12-09

Family

ID=70787661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811392262.1A Active CN111210012B (en) 2018-10-18 2018-11-21 Data processing method and device and related products

Country Status (2)

Country Link
KR (1) KR102539574B1 (en)
CN (1) CN111210012B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024021010A1 (en) * 2022-07-29 2024-02-01 华为技术有限公司 Control system applied to vehicle, and vehicle

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103988559A (en) * 2011-12-20 2014-08-13 Intel Corporation Multicast service using unicast subframe
CN105446888A (en) * 2014-05-30 2016-03-30 Huawei Technologies Co., Ltd. Data transferring method between storage devices, controller, and storage system
US20180121795A1 (en) * 2016-10-28 2018-05-03 Canon Kabushiki Kaisha Data processing apparatus, method for controlling the same, and storage medium storing program
CN107992329A (en) * 2017-07-20 2018-05-04 Shanghai Cambricon Information Technology Co., Ltd. Computing method and related product

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0719246B2 * 1988-01-11 1995-03-06 Sanyo Electric Co., Ltd. Digital signal processor
TW325552B (en) * 1996-09-23 1998-01-21 Advanced Risc Mach Ltd Data processing condition code flags
DE602007006215D1 (en) * 2006-09-06 2010-06-10 Silicon Hive Bv DATA PROCESSING CIRCUIT WITH SEVERAL INSTRUCTION SWITCHING AND SCHEDULING METHOD FOR SUCH DATA SWITCHING
CN100555225C * 2008-03-17 2009-10-28 Institute of Computing Technology, Chinese Academy of Sciences RISC processor device and method supporting an X86 virtual machine


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DING Ding et al., "Multi-message Broadcast Algorithms and Analysis in Multi-port Mode", Computer Science *


Also Published As

Publication number Publication date
KR102539574B1 (en) 2023-06-01
CN111210012B (en) 2022-12-09
KR20200139256A (en) 2020-12-11

Similar Documents

Publication Publication Date Title
CN111209243B (en) Data processing device, method and related product
CN111209231B (en) Data processing method and device and related products
CN110458285B (en) Data processing method, data processing device, computer equipment and storage medium
CN111210012B (en) Data processing method and device and related products
CN111079909B (en) Operation method, system and related product
CN111723920B (en) Artificial intelligence computing device and related products
CN111079916B (en) Operation method, system and related product
CN111401536A (en) Operation method, device and related product
CN111382850A (en) Operation method, device and related product
CN111382851A (en) Operation method, device and related product
CN111260045B (en) Decoder and atomic instruction analysis method
CN111381872A (en) Operation method, device and related product
CN111399905B (en) Operation method, device and related product
CN111400341B (en) Scalar lookup instruction processing method and device and related product
CN111078125B (en) Operation method, device and related product
CN112052040B (en) Processing method, processing device, computer equipment and storage medium
CN111078285B (en) Operation method, system and related product
CN111078293B (en) Operation method, device and related product
CN111079914B (en) Operation method, system and related product
WO2020192587A1 (en) Artificial intelligence computing device and related product
CN111325331B (en) Operation method, device and related product
CN111079907B (en) Operation method, device and related product
CN111723921B (en) Artificial intelligence computing device and related products
CN111079910B (en) Operation method, device and related product
CN111382390B (en) Operation method, device and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant