CN110458286B - Data processing method, data processing device, computer equipment and storage medium - Google Patents


Info

Publication number
CN110458286B
Authority
CN
China
Prior art keywords
data
processed
dynamic
label information
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910748310.4A
Other languages
Chinese (zh)
Other versions
CN110458286A
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cambricon Technologies Corp Ltd
Original Assignee
Cambricon Technologies Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cambricon Technologies Corp Ltd filed Critical Cambricon Technologies Corp Ltd
Priority to CN202110673119.5A (CN113435591B)
Priority to CN201910748310.4A (CN110458286B)
Publication of CN110458286A
Application granted
Publication of CN110458286B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons, using electronic means

Abstract

The present disclosure relates to a data processing method, apparatus, computer device, and storage medium. The disclosed board card includes a memory device, an interface device, a control device, and a data processing device, wherein the data processing device is connected to the memory device, the control device, and the interface device, respectively. The memory device is used to store data; the interface device is used to transfer data between the data processing device and external equipment; and the control device is used to monitor the state of the data processing device. According to the data processing method, apparatus, computer device, and storage medium, data in the neural network is marked with label information, which simplifies processes such as processing and storing the data, reduces the occupation of hardware resources, and increases the operation speed of the neural network.

Description

Data processing method, data processing device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a data processing method and apparatus, a computer device, and a storage medium.
Background
With the development of computer technology, neural networks have also improved significantly, and neural network operations can be performed by special-purpose or general-purpose processors. In the related art, the operation speed of a neural network is greatly limited by factors such as the large number of data types in the neural network, the heavy computational load, and hardware limitations.
Disclosure of Invention
In view of the above, it is necessary to provide a data processing method, apparatus, computer device, and storage medium to solve the above technical problems.
According to an aspect of the present disclosure, there is provided a data processing method applied to a processor, the method including:
acquiring data to be processed and label information of the data to be processed, wherein the data to be processed is used for neural network operation, and the label information comprises dynamic label information;
processing the data to be processed according to the dynamic label information to obtain processed data, and storing the processed data, wherein the data state of the processed data is consistent with the dynamic label information;
wherein the dynamic label information is used for characterizing information of the data to be processed related to a processor operating the neural network.
According to another aspect of the present disclosure, there is provided a data processing apparatus for a processor, the apparatus comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring data to be processed and label information of the data to be processed, the data to be processed is used for carrying out neural network operation, and the label information comprises dynamic label information;
the data processing module is used for processing the data to be processed according to the dynamic label information to obtain processed data and storing the processed data, wherein the data state of the processed data is consistent with the dynamic label information;
wherein the dynamic label information is used for characterizing information of the data to be processed related to a processor operating the neural network.
According to another aspect of the present disclosure, an electronic device is provided, which includes the above data processing apparatus.
According to another aspect of the present disclosure, a board card is provided, which includes the above data processing apparatus.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described data processing method.
In some embodiments, the electronic device comprises a data processing apparatus, a robot, a computer, a printer, a scanner, a tablet, a smart terminal, a cell phone, a tachograph, a navigator, a sensor, a camera, a server, a cloud server, a camcorder, a projector, a watch, a headset, a mobile storage device, a wearable device, a vehicle, a household appliance, and/or a medical device.
In some embodiments, the vehicle comprises an aircraft, a ship, and/or a car; the household appliance comprises a television, an air conditioner, a microwave oven, a refrigerator, an electric cooker, a humidifier, a washing machine, an electric lamp, a gas stove, and/or a range hood; the medical device comprises a nuclear magnetic resonance apparatus, a B-mode ultrasound apparatus, and/or an electrocardiograph.
The embodiment of the disclosure provides a data processing method, a data processing device, a computer device and a storage medium, wherein data in a neural network is marked through tag information, and the data is processed and stored based on the tag information, so that a more friendly API is provided for a user, the performance of software is improved, the occupation of hardware resources is reduced, and the operation speed of the neural network is increased.
The technical features recited in the claims can achieve the beneficial effects corresponding to the technical problems described in the background art. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a schematic diagram of a processor of a data processing method according to an embodiment of the present disclosure.
Fig. 2a shows a flow diagram of a data processing method according to an embodiment of the present disclosure.
Fig. 2b shows a flow chart of determining a data class in a data processing method according to an embodiment of the present disclosure.
Fig. 2c is a schematic diagram illustrating determining a data category in a data processing method according to an embodiment of the present disclosure.
Fig. 3a shows a schematic diagram of an association list in a data processing method according to an embodiment of the present disclosure.
Fig. 3b shows a schematic diagram of a computation graph in a data processing method according to an embodiment of the present disclosure.
Fig. 3c shows a schematic diagram of data conversion in a data processing method according to an embodiment of the present disclosure.
Fig. 4a shows a flow chart of data transfer in a data processing method according to an embodiment of the present disclosure.
Fig. 4b shows a schematic diagram of an apparatus designed for neural network operations in a data processing method according to an embodiment of the present disclosure.
Fig. 4c shows a schematic diagram of the use of the neural network arithmetic device in the data processing method according to the embodiment of the present disclosure.
Fig. 5 shows a block diagram of a data processing apparatus according to an embodiment of the present disclosure.
Fig. 6 shows a block diagram of a board card according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It should be understood that the terms "first," "second," and the like in the claims, the description, and the drawings of the present disclosure are used for distinguishing between different objects and not for describing a particular order. The terms "comprises" and "comprising," when used in the specification and claims of this disclosure, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the disclosure herein is for the purpose of describing particular embodiments only, and is not intended to be limiting of the disclosure. As used in the specification and claims of this disclosure, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the specification and claims of this disclosure refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
As used in this specification and claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination", or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
In the related art, research on neural network accelerators has achieved remarkable results and provides powerful hardware support for many deep learning algorithms. To improve the performance of a neural network accelerator, algorithm optimization and data layout (including data-related processing such as storage, deformation, and transfer of data) in a neural network development kit (NDK) are indispensable. The rich data types in neural network algorithms lead to a variety of data layout information. How to add complex data layout information to the NDK, so as to guide all aspects of the software's work without requiring user awareness, provide a user-friendly API, improve software performance, and increase the speed of neural network operations, is a problem to be solved urgently. The embodiments of the present disclosure provide a data processing method and apparatus, a computer device, and a storage medium, in which data in a neural network is marked by tag information, thereby simplifying processes such as storing, deforming, and transferring the data, providing a more friendly API for users, improving software performance, reducing the occupation of hardware resources, and increasing the speed of neural network operations.
The data processing method according to the embodiments of the present disclosure may be applied to a processor. The processor may include a general-purpose processor, such as a central processing unit (CPU); the general-purpose processor may perform operations such as preprocessing on the data and instructions received by the electronic device, and may also implement functions such as instruction compiling. The processor may also include an artificial intelligence processor (Intelligence Processing Unit, IPU) for performing artificial intelligence operations. The artificial intelligence operations may include machine learning operations, brain-like operations, and the like. The machine learning operations include neural network operations, k-means operations, support vector machine operations, and the like. The artificial intelligence processor may include, for example, one of, or a combination of, a GPU (graphics processing unit), an NPU (neural-network processing unit), a DSP (digital signal processing unit), and a field-programmable gate array (FPGA) chip. The present disclosure does not limit the specific type of the artificial intelligence processor.
In one possible implementation, the artificial intelligence processor referred to in this disclosure may include a plurality of processing units, each of which may independently run various tasks assigned thereto, such as: a convolution operation task, a pooling task, a full connection task, or the like. The present disclosure is not limited to processing units and tasks executed by processing units. The artificial intelligence processor can execute machine learning operation according to the compiled instruction transmitted by the general processor, for example, the artificial intelligence processor can execute operations such as neural network reasoning or training according to the compiled instruction so as to realize image recognition, voice recognition and the like.
Of course, in other possible embodiments, the artificial intelligence processor may also implement the above-described instruction compiling function. Further optionally, the electronic device of an embodiment of the present disclosure may include the processor described above, i.e., the electronic device may include the general-purpose processor and the artificial intelligence processor described above.
Fig. 1 shows a schematic diagram of a processor of a data processing method according to an embodiment of the present disclosure. As shown in fig. 1, the processor 100 includes a plurality of processing units 101 and a storage unit 102. The plurality of processing units 101 are used to execute instruction sequences, and the storage unit 102 is used to store data; the storage unit 102 may include a random access memory (RAM) and a register file. The plurality of processing units 101 in the processor 100 may share part of the storage space, for example sharing part of the RAM storage space and the register file, and may at the same time have their own separate storage spaces.
Fig. 2a shows a flow diagram of a data processing method according to an embodiment of the present disclosure. As shown in fig. 2a, the method is applied to a processor, and includes step S11 and step S12.
In step S11, tag information of data to be processed for performing neural network operations is acquired. The label information comprises static label information, and the static label information is used for representing information related to the participation of the data to be processed in the neural network operation.
In this embodiment, the static tag information may include information such as a data type, a dimension, and a dimension value describing the nature of the data to be processed, and further include information related to a neural network operation involved in the data to be processed. The static label information can be determined after the neural network is established, and the static label information of the data to be processed can be suitable for any processor running the neural network, namely the static label information of the data to be processed is unchanged in different processors. The static tag information of the same data to be processed in different neural networks may be different. Alternatively, the static tag information of the to-be-processed data may be automatically detected and determined by the processor in the process of acquiring the to-be-processed data (the process of inputting the to-be-processed data by the user), or may be determined according to the information input by the user, which is not limited by the present disclosure.
In step S12, the data to be processed is stored in the data storage space according to the tag information.
In this embodiment, the processor may determine the size of the data storage space required for storing the data to be processed according to the tag information, apply for the required data storage space, and then store the data to be processed into the applied data storage space. Specifically, the processor can directly apply for a data storage space according to the static tag information, and store the data to be processed. For example, the data volume of the data to be processed is determined according to the static tag information of the data to be processed, and then a data storage space for storing the data to be processed is applied according to the determined data volume, so as to store the data to be processed.
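As a minimal sketch of the step above, the required storage space can be derived from the static tag information as the product of the dimension values times the element size. The function name and the type-to-byte mapping below are illustrative assumptions, not part of the disclosure.

```python
from math import prod

# Assumed mapping from static data type names to bytes per element.
BYTES_PER_TYPE = {"Float32": 4, "Float16": 2, "Int8": 1}

def required_bytes(static_type: str, dim_values: list[int]) -> int:
    """Data volume = product of all dimension values x element size."""
    return prod(dim_values) * BYTES_PER_TYPE[static_type]

# A 10 x 4 matrix of 32-bit floats needs 160 bytes of data storage space.
print(required_bytes("Float32", [10, 4]))  # -> 160
```

The processor could then request a data storage space of this size before copying the data to be processed into it.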
Optionally, the static tag information may include at least one of: a data category, a static data type, a static data dimension order, and a dimension value corresponding to each static data dimension.
In this implementation, what kind of data the data to be processed represented by the data category belongs to in the neural network is determined based on information such as whether the user is visible or not, an operation involved in the neural network, and the like. The static data type represents the type and the number of bits of data to be processed, and the static data type can be a 32-bit floating point number or the like. The static data dimension can be one-dimensional, two-dimensional, multi-dimensional and the like, and the static data dimension sequence can represent the dimension sequence of storage and/or reading of the data to be processed. The dimension value for each static data dimension represents the length or size of the corresponding static data dimension. For example, a certain data to be processed is a matrix, the static data dimension includes rows and columns, the static data dimension order is row-first, the dimension value of a row is 10, and the dimension value of a column is 4. The data to be processed is three-dimensional data, the static data dimensions comprise a first dimension, a second dimension and a third dimension, the static data dimensions are in the order of the third dimension > the second dimension > the first dimension, the dimension value of the first dimension is 10, the dimension value of the second dimension is 4, and the dimension value of the third dimension is 8.
In a possible implementation manner, the obtaining tag information of the data to be processed may include: and determining the data category of the data to be processed according to the out-degree and in-degree of the data to be processed corresponding to the neural network and the operation in the neural network in which the data to be processed participates.
In this implementation, the in-degree represents the number of previous operation nodes in which the to-be-processed data participates as a data node (the to-be-processed data is an output of those previous operation nodes), and the out-degree represents the number of subsequent operation nodes in which the to-be-processed data participates as a data node (the to-be-processed data is an input of those subsequent operation nodes). For example, if certain to-be-processed data cc is the output of 1 previous operation node and the input of 3 subsequent operation nodes, then the out-degree and in-degree of the to-be-processed data cc are 3 and 1, respectively. Different codes may be set for different data categories to distinguish them. Table 1 below describes the characteristics and corresponding identifications of data of the different data categories.
Optionally, the data categories may include any of: Instruction, Input Neuron, Output Neuron, Hidden Neuron, Constant Neuron, Input Weight, Output Weight, Constant Weight, and Auxiliary data.
TABLE 1 data categories, corresponding identifications and data characteristics
Data category    | Out-degree | In-degree
Instruction      | 0          | 0
Input Neuron     | >= 1       | 0
Constant Neuron  | >= 1       | 0
Input Weight     | >= 1       | 0
Constant Weight  | >= 1       | 0
Auxiliary data   | >= 1       | 0
Output Neuron    | 0          | >= 1
Output Weight    | 0          | >= 1
Hidden Neuron    | >= 1       | >= 1
The out-degree and in-degree of an instruction are both zero, and the instruction is used to trigger a neural network operation. The out-degree of the input neuron, constant neuron, input weight, constant weight, and auxiliary data is greater than or equal to 1, and their in-degree is 0. The output neuron and the output weight have an out-degree of 0 and an in-degree of 1 or more. The out-degree and in-degree of the hidden neuron are both greater than or equal to 1.
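The degree rules above can be sketched as a small classifier. Note that the degrees alone only narrow a node down to a group; distinguishing, say, an input neuron from an input weight additionally depends on the operations the data participates in, which this illustrative sketch does not model.

```python
def category_group(out_degree: int, in_degree: int) -> str:
    """Map a data node's (out-degree, in-degree) to its category group."""
    if out_degree == 0 and in_degree == 0:
        return "Instruction"
    if in_degree == 0 and out_degree >= 1:
        # Input Neuron / Constant Neuron / Input Weight / Constant Weight / Auxiliary
        return "input-side data"
    if out_degree == 0 and in_degree >= 1:
        # Output Neuron / Output Weight
        return "output-side data"
    return "Hidden Neuron"  # out_degree >= 1 and in_degree >= 1

# The data cc above (out-degree 3, in-degree 1) is a hidden neuron.
print(category_group(3, 1))  # -> Hidden Neuron
```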
In one possible implementation, the static tag information of the data to be processed may be represented as:
Static:classification,type1,DIM_A1…An,{x1…xn}
wherein Static is an identifier indicating that the tag information is static tag information; classification indicates the data category; and type1 indicates the static data type. In DIM_A1…An, n represents the number of static data dimensions, and A1…An represents that the static data dimension order is A1…An. x1 is the dimension value of A1, …, and xn is the dimension value of An. In the present disclosure, "{ }" is only used to separate different parameters in the static tag information and is not a necessary part of the static tag information; in practical applications, "{ }" may be absent or may be replaced by other identifiers, which the present disclosure does not limit.
It should be understood that, those skilled in the art can set the static tag information, the identification of the data category, and the location of each parameter in the static tag information according to actual needs, and the disclosure is not limited thereto.
For example, the static tag information of certain to-be-processed data is: IW, Float32, DIM_HW, {10,4}, which indicates that the to-be-processed data is an input weight (data category), a 32-bit floating point number (static data type), two-dimensional with row-major order (static data dimensions and static data dimension order), with 10 rows (dimension value) and 4 columns (dimension value).
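A short sketch of how the textual static-tag form above ("Static:classification,type1,DIM_A1…An,{x1…xn}") might be parsed. The disclosure does not fix the serialization, so the exact string layout handled here is an assumption.

```python
def parse_static_tag(tag: str) -> dict:
    """Parse an assumed 'Static:...' tag string into its named fields."""
    body = tag.removeprefix("Static:")
    # maxsplit=3 keeps the '{x1,...,xn}' dimension-value list intact.
    classification, dtype, dims, values = body.split(",", 3)
    order = list(dims.removeprefix("DIM_"))          # e.g. "HW" -> ["H", "W"]
    dim_values = [int(v) for v in values.strip("{}").split(",")]
    return {
        "classification": classification,            # data category, e.g. IW
        "type": dtype,                                # static data type
        "dim_order": order,                           # static data dimension order
        "dim_values": dict(zip(order, dim_values)),   # length of each dimension
    }

tag = parse_static_tag("Static:IW,Float32,DIM_HW,{10,4}")
print(tag["dim_values"])  # -> {'H': 10, 'W': 4}
```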
Fig. 2b shows a flow chart of determining a data class in a data processing method according to an embodiment of the present disclosure. In one possible implementation, as shown in fig. 2b, the step of determining the data category in the static tag information includes steps S31 to S34.
In step S31, a computation graph to be marked is acquired.
The computation graph to be marked includes non-category data nodes, operation nodes, and the connection relationships between the non-category data nodes and the operation nodes. A non-category data node includes data participating in the neural network operation (including the to-be-processed data and the second data mentioned below), the remaining static label information of the data other than the data category, and the previous operation nodes and subsequent operation nodes connected to the data node. An operation node includes the corresponding parameters and the input data nodes and output data nodes (i.e., the above-mentioned non-category data nodes) connected to the operation node.
In this implementation, the computation graph to be marked may be used to store a graph structure of the neural network structure, or may be used to store a data graph structure, where the data graph structure is used only to store data, and the present disclosure is not limited thereto.
In step S32, the computation graph to be labeled is traversed to obtain all the non-category data nodes and operation nodes in the computation graph to be labeled.
In this implementation, the processor may perform traversal from a certain non-category data node (e.g., an input neuron) of the computational graph to be marked, determine a previous operation node and a subsequent operation node of the non-category data node, then acquire a new non-category data node, and continue to determine the previous operation node and the subsequent operation node of the new non-category data node until all nodes (including the non-category data node and the operation node) are traversed.
In step S33, the traversed non-category data nodes are sequentially stored in the data node queue, and the traversed operation nodes are sequentially stored in the operation node queue.
Each time the processor traverses a non-category data node, it needs to judge whether that node is already stored in the data node queue; if it is, the node is not stored again. If the non-category data node is not yet stored in the data node queue, it is stored. Similarly, before storing a traversed operation node, the processor needs to judge whether the operation node is already stored in the operation node queue; if so, the operation node is not stored again, otherwise it is stored. This saves storage space and improves traversal speed.
In step S34, during the traversal, the data category corresponding to each non-category data node in the data node queue is determined, the determined data category is stored in the corresponding data node in the computation graph to be marked (e.g., in the static label information of the data node), and the non-category data node whose data category has been determined is deleted from the data node queue. At this point, the non-category data node in the computation graph to be marked becomes a data node with complete content, because the corresponding data category has been stored. When the data node queue is empty, the data categories of all non-category data nodes in the computation graph to be marked have been determined; node scanning and data category determination are then complete, and the marked computation graph is obtained.
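Steps S31 to S34 can be condensed into the following sketch: a traversal with deduplicated data-node and operation-node queues, followed by degree-based marking. The graph encoding (dicts mapping each data node to its producer and consumer operations) is an assumption made for clarity, not the disclosure's representation.

```python
from collections import deque

def mark_categories(producers: dict, consumers: dict) -> dict:
    """producers[d]: ops that output data node d; consumers[d]: ops that read d."""
    data_queue = deque(producers)           # S32/S33: each data node stored once
    seen_ops, op_queue = set(), deque()
    categories = {}
    while data_queue:                       # S34: pop head, mark, delete
        d = data_queue.popleft()
        for op in producers[d] + consumers[d]:
            if op not in seen_ops:          # S33: store an op node only once
                seen_ops.add(op)
                op_queue.append(op)
        in_deg, out_deg = len(producers[d]), len(consumers[d])
        if in_deg == 0:
            categories[d] = "input-side"    # input neuron / weight / constant / aux
        elif out_deg == 0:
            categories[d] = "output-side"   # output neuron / output weight
        else:
            categories[d] = "hidden"
    return categories

# x --op1--> h --op2--> y
g_prod = {"x": [], "h": ["op1"], "y": ["op2"]}
g_cons = {"x": ["op1"], "h": ["op2"], "y": []}
print(mark_categories(g_prod, g_cons))
```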
For example, fig. 2c is a schematic diagram illustrating determining a data category in a data processing method according to an embodiment of the disclosure. As shown in fig. 2c, determining the data class includes a node scanning process and a data class determination process.
And (3) node scanning process:
and the processor traverses the computation graph to be marked after acquiring the computation graph. The processor may first acquire the non-category data node 1 and store it at the head (or leading bit) of the data node queue. The processor then stores the subsequent operational node 1' of the non-category data node 1 to the tail of the operational node queue. The processor then obtains the non-category data node 2, and after determining that the non-category data node 2 is not stored in the data node queue, stores the non-category data node 2 to the tail of the data node queue. Then, the processor determines that the operation node 1 'is stored by querying the operation node queue, and the processor does not store the subsequent operation node 1' of the non-category data node 2. Based on the same process, the processor stores the acquired non-category data node 3 to the tail of the data node queue. And the processor also continues to scan the computational graph to be marked, acquires the non-category data node 4, and stores the non-category data node 4 to the tail of the data node queue after determining that the non-category data node 4 is not stored in the data node queue. And judging whether the operation node queue stores the previous operation node 1 'and the subsequent operation node 2' of the non-category data node 4, and storing the operation node 2 'into the tail part of the operation node queue because the operation node queue stores the operation node 1'. And the processor also continues to scan the computational graph to be marked, acquires the non-category data node 5, and stores the non-category data node 5 to the tail of the data node queue after determining that the non-category data node 5 is not stored in the data node queue. 
And judging whether the operation node queue stores the previous operation node 2 ' of the non-category data node 5 or not, and if so, not storing the operation node 2 ' because the operation node queue stores the operation node 2 '.
Data category determination procedure:
in the process of scanning nodes, the processor may determine that the data type of the non-category data node 1 is an "input neuron" according to the out-degree and in-degree of the non-category data node 1 at the head of the data node queue, a corresponding previous operation node and a subsequent operation node, and then add the identifier of the "input neuron" to the non-category data node 1 in the computation graph to be marked, for example, to the static label information of the data node 1; at the same time, the processor needs to delete the non-category data node 1 of which the data category is determined in the data node queue. The processor continuously inquires the head of the data node queue, and if the head of the data node queue stores a data node without category, the data category determining process is executed; and if the head of the data node queue is empty, ending, and determining that the data types of all the data nodes in the computational graph to be marked are determined to be finished to form the marked computational graph.
Optionally, the processor may determine the static label information, such as the type of the static data, the dimensions of the static data, the order of the dimensions of the static data, and the dimension value corresponding to each dimension of the static data, by detecting the data received by the processor. Alternatively, the processor may determine the static tag information based on information provided by a user or a sender of the data when receiving the data.
In one possible implementation, the tag information may further include dynamic tag information, and the dynamic tag information is used for characterizing information of the data to be processed related to a processor operating the neural network. In step S11, the obtaining of the label information of the data to be processed may include: and generating dynamic label information of the data to be processed according to the processor and the static label information.
In this implementation, after the processor running the neural network is determined, the dynamic tag information is determined from the static tag information and the computing power, performance, and other parameters of that processor, so that the data to be processed carrying the dynamic tag information is suited to the operations of the processor. When the neural network is run on different processors, the dynamic tag information of the data to be processed can differ; when two processors have the same performance, computing power, and other parameters, the dynamic tag information of the data to be processed may be the same.
In one possible implementation, the dynamic tag information may include at least one of: a dynamic data type, a dynamic data dimension order, a slicing parameter, a padding parameter, and a data size.
In this implementation, the dynamic data type may be determined according to the type of data that the processor running the neural network can process. For example, if a certain processor can process 16-bit floating-point numbers, then when that processor is used to run the neural network, the dynamic data type of the data to be processed is 16-bit floating point. The dynamic data dimension order may be determined by the order in which the processor running the neural network reads or stores data. The slicing parameter may be determined according to the computing power of the processor running the neural network; for example, if a certain processor can perform 8 operations at a time, the slicing parameter may be set to 8. The padding parameter may be determined according to the dimension values of the static data dimensions of the data to be processed and the slicing parameter. The data size, alternatively referred to as the size or amount of the data, is determined from the dimension values of the static data dimensions, the slicing parameter, and the padding parameter.
In one possible implementation, the step of determining dynamic tag information includes:
the information of a target processor running the neural network is acquired, where the information of the target processor may include information related to the computing power and performance of the target processor, such as the data type of data that the target processor can process, the dimension order in which the target processor reads and stores data, the number of data bits (or the number of data items) the target processor processes each time, and the like.
And determining the dynamic label information of the data to be processed according to the information of the target processor and the static label information of the data to be processed.
The determining of the dynamic tag information of the data to be processed according to the information of the target processor and the static tag information of the data to be processed may include at least one of:
determining a dynamic data type according to the data type of the data which can be processed by the target processor;
determining a dynamic data dimension sequence according to the dimension sequence of the read and stored data of the target processor;
determining the slicing parameter according to the number of data bits the target processor processes each time;

determining the padding parameter according to the slicing parameter and the dimension values of the static data dimensions;

and determining the data size according to the dimension values of the static data dimensions, the slicing parameter, and the padding parameter.
In one possible implementation, the dynamic tag information of the data to be processed may be represented as:
dynamic:type2,DIM_B1…Bn,tiling,padding,size
wherein "dynamic" is an identifier indicating that the tag information is dynamic tag information, type2 represents the dynamic data type, DIM_B1…Bn indicates that the dynamic data dimension order is B1…Bn, tiling is the slicing parameter, padding is the padding parameter, and size is the data size. The separator "," is only used in the present disclosure to distinguish different parameters in the dynamic tag information and is not a necessary part of the dynamic tag information; in practical applications the "," may be absent or replaced by other identifiers, which the present disclosure does not limit.
For example, suppose the static tag information of some data to be processed is "Static: IW, Float32, DIM_HW, {10,4}", and the processor running the neural network in which the data to be processed participates stores data in column-major order, can process 16-bit floating-point numbers, and can compute at most 8 numbers at a time. Then the dynamic tag information of the data to be processed includes tiling = 8, padding = 16 − 10 = 6 (the dimension value 10 padded up to 2 × 8 = 16), and size = (10 + 6) × 4 × 2 (bytes occupied by a 16-bit floating-point number) = 128 Bytes, so the dynamic tag information of the data to be processed may be "dynamic: Float16, DIM_WH, 8, 6, 128Byte".
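The arithmetic of this worked example can be reproduced as follows. The function name, parameter names, and the rule of padding the sliced dimension up to the next multiple of the slicing parameter are illustrative assumptions drawn from the numbers in the example, not a definitive statement of the patent's formula.

```python
import math

def dynamic_tag(static_dims, elems_per_op=8, bytes_per_elem=2):
    """Derive (tiling, padding, size) for a DIM_HW tensor.

    static_dims: the static dimension values, e.g. (10, 4).
    elems_per_op: at most how many numbers the processor computes at a
        time (the slicing parameter source).
    bytes_per_elem: bytes of the dynamic data type (2 for Float16).
    """
    h, w = static_dims
    tiling = elems_per_op                       # slicing parameter
    # pad the sliced dimension up to a multiple of the slicing parameter
    padded = math.ceil(h / tiling) * tiling     # 10 -> 16
    padding = padded - h                        # 16 - 10 = 6
    size = (h + padding) * w * bytes_per_elem   # (10 + 6) * 4 * 2 = 128
    return tiling, padding, size
```

With the example's static shape {10,4}, this yields tiling 8, padding 6, and size 128 bytes, matching the dynamic tag above.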
In this embodiment, the positions of the parameters in the dynamic tag information and the static tag information may be adjusted according to actual needs, which is not limited by this disclosure.
Optionally, the processor may store the tag information of the data to be processed. The tag information may be stored separately from the data to be processed; that is, the data to be processed has a corresponding data storage space, and the tag information of the data to be processed has a corresponding tag storage space, and the two may not overlap. Specifically, the method may further include: storing the tag information of the data to be processed into the tag storage space. The tag storage space may further include a static tag storage space and a dynamic tag storage space, with the static tag information stored into the static tag storage space and the dynamic tag information stored into the dynamic tag storage space. Optionally, the processor may instead store the static tag information and the dynamic tag information of the data to be processed in the same tag storage space, which the present disclosure does not limit.
In this implementation manner, the static tag information and the dynamic tag information of the data to be processed may be stored in different tag storage spaces, which facilitates the processor's management of the static and dynamic tag information. In addition, because the amount of data to be processed is huge, multiple pieces of data to be processed may share the same tag information; storing the data to be processed, the static tag information, and the dynamic tag information separately therefore facilitates multiplexing of the tag information and saves storage space.
In one possible implementation, the method may further include: and correspondingly storing the tag identification of the tag information and the data identification of the data to be processed into an association list, wherein the association list is used for recording the corresponding relation between the data to be processed and the tag information. Further optionally, the association list may be stored in the memory, and the processor may determine tag information corresponding to the data to be processed by querying the association list, or determine corresponding data to be processed by using corresponding tag information.
In this implementation manner, the data identifier of the data to be processed and the tag identifier of its tag information may be identical or matched identifiers, and both may be identifiers such as numbers or symbols. The data identifier may also be the storage address of the corresponding data to be processed (for example, the first address of a physical address, a pointer indicating the storage address, or other information that can represent the storage address), and the corresponding tag identifier may likewise be the storage address at which the tag information of the data to be processed is stored (again a first physical address, a pointer indicating the storage address, or the like). Those skilled in the art can set the tag identifier and the data identifier according to actual needs, and the present disclosure does not limit this.
Fig. 3a is a schematic diagram of an association list in a data processing method according to an embodiment of the present disclosure. As shown in fig. 3a, according to correspondence 1 in the association list, it may be determined that the tag information of data A to be processed is static tag information a and dynamic tag information 1; according to correspondence 2, that the tag information of data B to be processed is static tag information b and dynamic tag information 1; according to correspondence 3, that the tag information of data C to be processed is static tag information c and dynamic tag information 2; and according to correspondence 4, that the tag information of data D to be processed is static tag information c and dynamic tag information 3. Data A and data B to be processed share the same dynamic tag information 1, and data C and data D to be processed share the same static tag information c.
In this way, the processor can accurately and quickly determine the correspondence between the data to be processed and the tag information according to the association list. In other optional implementation manners, the correspondence between the data to be processed and the tag information may also be stored in ways other than the association list.
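The association list of Fig. 3a can be modeled as a small lookup table. The record layout and function name below are illustrative assumptions; the point is that several data identifiers can reference (multiplex) the same tag identifier, which is what saves storage space.

```python
# one row per correspondence in Fig. 3a:
# (data identifier, static tag identifier, dynamic tag identifier)
association_list = [
    ("A", "static_a", "dynamic_1"),
    ("B", "static_b", "dynamic_1"),   # B shares dynamic tag 1 with A
    ("C", "static_c", "dynamic_2"),
    ("D", "static_c", "dynamic_3"),   # D shares static tag c with C
]

def tags_for(data_id):
    """Return (static tag id, dynamic tag id) recorded for a data id,
    or None if the association list has no such correspondence."""
    for did, static_id, dynamic_id in association_list:
        if did == data_id:
            return static_id, dynamic_id
    return None
```

A reverse lookup (tag identifier to data identifiers) can be built the same way, mirroring the text's note that corresponding tag information can also be used to find the data.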
In a possible implementation manner, storing the tag information of the data to be processed in the tag storage space may include: and adding a first identifier for the tag information, and storing the tag information with the first identifier into a tag storage space. Wherein, according to the tag information, storing the data to be processed into the data storage space may include: and adding a second identifier matched with the first identifier for the data to be processed, and storing the data to be processed with the second identifier into a data storage space.
In this implementation, the first identifier and the second identifier may be the same or uniquely corresponding identifiers, for example, the same numbers, codes, and the like. The first identifier may be an identifier indicating a storage address of the corresponding to-be-processed data, and the second identifier may be an identifier indicating a storage address of tag information of the to-be-processed data. In this way, the processor can quickly and accurately determine the data to be processed and the corresponding label information according to the first identification and the second identification.
Optionally, the processor may further store the neural network according to the tag information and the structural data of the neural network; for example, the processor may store the neural network according to the tag information and a structure diagram of the neural network, where the structure diagram may include data nodes and operation nodes, the data nodes including the tag information of the data.
In one possible implementation, the method may further include:
acquiring data nodes in a neural network, wherein the data nodes comprise second data participating in neural network operation, label information of the second data, the out-degree and in-degree of the second data and the connection relation between the second data and operation nodes, and the second data can comprise data to be processed;
acquiring operation nodes (also called computation nodes) in the neural network, wherein an operation node comprises its corresponding parameters, corresponding input data nodes, and corresponding output data nodes;
acquiring a calculation graph corresponding to the neural network, wherein the calculation graph comprises operation nodes contained in the neural network, corresponding data nodes and connection relations between the data nodes and the operation nodes;
the data nodes, the operation nodes and the calculation graph are stored.
In this implementation, fig. 3b shows a schematic diagram of a computation graph in the data processing method according to the embodiment of the present disclosure. As shown in fig. 3b, after the neural network is built, the corresponding data nodes, operation nodes, and computation graph may be determined from it. A data node records the structure or identification of its second data (including the tag information of the second data); from the out-degree and in-degree of the second data and its connection relations with operation nodes, the data node can determine for which preceding operation nodes the second data serves as output and for which subsequent operation nodes it serves as input. An operation node records the parameters required by the operation performed on its one or more input data nodes, and the one or more output data nodes produced after the corresponding operation is performed. For example, if a certain operation node performs a convolution operation on the input data of 3 input data nodes to obtain the data of one output data node, the parameters to be recorded in that operation node include the size, stride, padding, and so on of the convolution kernel. The computation graph represents the neural network in the form of a graph and comprises the data nodes, the operation nodes, and directed "edges" representing their connection relations.
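The node structures described above can be sketched with two record types. This is a minimal data-structure sketch under stated assumptions: the class names, tag strings, and the particular convolution parameters are illustrative, not the patent's storage layout.

```python
from dataclasses import dataclass, field

@dataclass
class DataNode:
    name: str
    tag: dict                                       # tag information of the data
    producers: list = field(default_factory=list)   # preceding operation nodes
    consumers: list = field(default_factory=list)   # subsequent operation nodes

@dataclass
class OpNode:
    name: str
    params: dict                                    # e.g. kernel size, stride, padding
    inputs: list = field(default_factory=list)      # input data nodes
    outputs: list = field(default_factory=list)     # output data nodes

# a convolution operation node with an input, its weights, and one output
x = DataNode("x", {"static": "Static: I, Float32, DIM_NCHW"})
w = DataNode("w", {"static": "Static: W, Float32, DIM_NCHW"})
y = DataNode("y", {"static": "Static: O, Float32, DIM_NCHW"})
conv = OpNode("conv1", {"kernel": (3, 3), "stride": 1, "pad": 1},
              inputs=[x, w], outputs=[y])
# the directed "edges" of the computation graph
for d in conv.inputs:
    d.consumers.append(conv)
y.producers.append(conv)
```

The `producers`/`consumers` lists directly encode the out-degree and in-degree information the text says a data node must carry.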
In one possible implementation, the processor may generate one or more corresponding instructions according to the computational graph to implement the neural network operations by executing the one or more instructions. Optionally, a general-purpose processor included in the processor may perform compiling processing on the computation graph according to the computation graph and the label information included in the computation graph, so as to obtain one or more instructions, and then, the general-purpose processor may execute the instructions to implement operations such as neural network training or reasoning. Optionally, a general-purpose processor included in the processor may compile the obtained instruction, and an artificial intelligence processor included in the processor may perform operations such as neural network training or inference according to the compiled instruction.
In a possible implementation manner, storing the data to be processed into the data storage space according to the tag information includes:
when the tag information contains dynamic tag information, applying a first data storage space for storing to-be-processed data from the data storage space according to the data size in the dynamic tag information;
and storing the data to be processed into the first data storage space.
In this implementation manner, when there is dynamic tag information in the tag information of the data to be processed, the processor may apply for the first data storage space for the data to be processed according to a request of a user or automatically according to the data size in the dynamic tag information, and store the data to be processed in the first data storage space.
For example, the application for the first data storage space may be implemented by a call such as "labelMalloc(&wp, weight)", where the size weight of the required first data storage space may be determined according to the data size in the dynamic tag information: it may be input by the user according to the data size in the dynamic tag information of the data to be processed provided by the processor, or the processor may automatically identify the data size in the dynamic tag information of the data to be processed. In this way, the first data storage space can be applied for the data to be processed from a designated data storage space wp (wp being an identifier, such as a number or code, of the data storage space).
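The allocation step can be sketched as follows. This is a toy stand-in for the `labelMalloc(&wp, weight)`-style call, assuming a simple bump allocator; the class and method names are invented for illustration.

```python
class DataSpace:
    """Toy data storage space that reserves regions by byte size."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.offset = 0          # next free byte

    def label_malloc(self, dynamic_tag):
        """Reserve a first data storage space whose size is taken from
        the `size` field of the dynamic tag information."""
        nbytes = dynamic_tag["size"]
        if self.offset + nbytes > self.capacity:
            raise MemoryError("data storage space exhausted")
        region = (self.offset, nbytes)   # (start offset, length)
        self.offset += nbytes
        return region

wp = DataSpace(capacity=1024)
# 128 bytes, the data size from the earlier Float16 worked example
region = wp.label_malloc({"size": 128})
```

Because the size comes from the dynamic tag rather than from the user-visible static shape, the reserved region already accounts for padding, as the text describes.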
Further, storing the first data in the first data storage space may include:
judging whether the current data state of the first data is consistent with the dynamic label information or not, wherein the data state comprises the data type of the first data, the sequence of data dimensionality and a dimensionality value; alternatively, the data state may be determined from static tag information of the first data.
When the current data state of the first data is inconsistent with the dynamic tag information, converting the first data according to the static tag information and the dynamic tag information to obtain converted first data, wherein the data state of the converted first data is consistent with the dynamic tag information;
and storing the converted first data into the first data storage space.
In this implementation, before the first data is stored in the data storage space, if it is determined that the current data state of the first data is consistent with its dynamic tag information, the first data may be stored directly into the first data storage space. The current data state of the first data is consistent with its dynamic tag information when the current data type is the same as the dynamic data type, the current order of the data dimensions is the same as the dynamic data dimension order, and the dimension value of each current dimension is the same as the dimension value calculated from the slicing parameter, the padding parameter, and the dimension values of the static data dimensions.
In a possible implementation manner, converting the first data according to the static tag information and the dynamic tag information to obtain the converted first data may include at least one of the following processes:
data type conversion processing: converting the data type of the data to be processed into a dynamic data type;
dimension sequence adjustment processing: adjusting the order of the data dimensions of the data to be processed;
and (3) data filling processing: filling the data to be processed according to the filling parameters;
data segmentation treatment: and segmenting the data to be processed according to the slicing parameters.
In a possible implementation manner, the processor may first judge whether the current data type of the data to be processed is consistent with the dynamic data type and, if not, first convert the data type of the data to be processed into the dynamic data type. The data dimension order of the type-converted data is then adjusted to be the same as the dynamic data dimension order. The dimension-adjusted data is padded according to the padding parameter, and the padded data is sliced according to the slicing parameter to obtain the converted data to be processed, which comprises multiple data slices whose sizes each correspond to the slicing parameter. Alternatively, the processor may first slice the dimension-adjusted data and then pad the one or more remaining slices whose sizes do not correspond to the slicing parameter, likewise obtaining the converted data to be processed.
For example, fig. 3c shows a schematic diagram of data conversion in a data processing method according to an embodiment of the present disclosure. As shown in fig. 3c, the static tag information of the data to be processed is "Static: IW, Float32, DIM_HW, {10,4}", and the current data state of the data to be processed is consistent with the static tag information. The dynamic tag information of the data to be processed is "dynamic: Float16, DIM_WH, 8, 6, 128Byte". The processor may first convert the data type of the data to be processed from 32-bit floating-point numbers to 16-bit floating-point numbers, then transpose the data to be processed so that the data dimension order is consistent with DIM_WH, then divide the data to be processed into two data slices based on the slicing parameter "8", and then, according to the padding parameter "6", pad the data slices whose sizes do not correspond to the slicing parameter, filling the vacant positions with 0. Finally, the converted data to be processed is obtained.
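The dimension adjustment, slicing, and padding steps of Fig. 3c can be sketched on a 10×4 matrix. This is an illustrative assumption-laden sketch: it uses integers instead of actually casting Float32 to Float16 (noted in a comment), and the function name and zero pad value are invented for the example.

```python
def convert(data, tiling=8, pad_value=0):
    """Convert a DIM_HW matrix toward a DIM_WH dynamic tag: transpose,
    slice each new row by the slicing parameter, zero-pad short slices.
    (A real implementation would also cast Float32 -> Float16 first.)
    """
    # dimension-order adjustment: HW -> WH (transpose)
    wh = [list(col) for col in zip(*data)]
    slices = []
    for row in wh:                                  # each row has length H = 10
        for i in range(0, len(row), tiling):        # data slicing step
            chunk = row[i:i + tiling]
            if len(chunk) < tiling:                 # data padding step
                chunk += [pad_value] * (tiling - len(chunk))
            slices.append(chunk)
    return slices

data = [[r * 4 + c for c in range(4)] for r in range(10)]  # 10 x 4, DIM_HW
out = convert(data)
```

Each of the 4 transposed rows of length 10 yields one full slice of 8 and one padded slice (2 values plus 6 zeros), so the converted data holds (10 + 6) × 4 = 64 elements, i.e. 128 bytes at 2 bytes per Float16, matching the dynamic tag's size.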
When the method is actually applied, that is, when neural network operations are performed using data to be processed that carries tag information, the method needs to be embodied in the form of software for users to use, and the API provided to the user in that software needs to provide an input interface through which the user inputs the data to be processed. The user knows, or can see, that the data to be processed input through the input interface corresponds to the data amount (or size) in the static tag information, whereas during storage and operation in actual applications the data state of the data to be processed corresponds to the dynamic tag information, which is invisible to the user. If the processor needs to apply, according to a data size input by the user, for a first data storage space for storing data to be processed whose data state corresponds to the dynamic tag information, another input interface for inputting the data size may be added to the API provided to the user, and the processor displays the data size in the dynamic tag information of the data to be processed so that the user can input it, thereby enabling the processor to apply for the first data storage space. Alternatively, the processor may apply for the first data storage space directly according to the data size in the dynamic tag information of the data to be processed, in which case no input interface needs to be added to the API provided to the user. Both modes are very simple and convenient for the user to operate, and the API provided to the user is very friendly. Moreover, applying for storage space in real time according to the dynamic tag information of the data to be processed can save storage space and reduce resource occupation.
By the above method, when the data to be processed has dynamic tag information, the data to be processed is stored according to the dynamic tag information, so that the data to be processed can match the performance of the processor. When the processor reuses the data to be processed, it can process the data directly without deformation or conversion, which simplifies the processor's use of the data, increases the operation speed of the processor, and saves operation time.

In a possible implementation manner, while the data to be processed participates in the neural network operation, it may need to be transferred between different data storage spaces (for example, access operations such as Load or Store), and the processor may implement such data access operations according to the tag information of the data to be processed.
In a possible implementation manner, fig. 4a shows a flowchart of data transfer in a data processing method according to an embodiment of the present disclosure. When the processor receives a transfer request for transferring (or migrating) the data to be processed, the processor performs the transfer of the data to be processed; as shown in fig. 4a, the data transfer process may include step S41 and step S42.
In step S41, data to be processed and tag information of the data to be processed are acquired, where the tag information includes dynamic tag information.
In step S42, the data to be processed is processed according to the dynamic label information to obtain processed data, and the processed data is stored, where the data state of the processed data is consistent with the dynamic label information.
In a possible implementation manner, processing the data to be processed according to the dynamic tag information to obtain processed data may include:
judging whether the current data state of the data to be processed is consistent with the dynamic label information;
and when the current data state of the data to be processed is inconsistent with the dynamic label information, processing the data to be processed according to the dynamic label information to obtain processed data. The processing of the data to be processed includes data type conversion processing, dimension order adjustment processing, data filling processing, data segmentation processing, and the like, which are described in detail above and are not described herein again.
In one possible implementation, storing the processed data may include:
according to the dynamic label information of the processed data, applying for a transfer data storage space for storing the processed data from the data storage space of the processor, wherein the dynamic label information of the processed data is consistent with the dynamic label information of the data to be processed;
and storing the processed data into the transfer data storage space.
In a possible implementation manner, when receiving a request for transferring data to be processed, a processor may obtain a current data storage space address, a target storage space, a data size, and a transfer direction of the data to be processed, further obtain the data to be processed from the current data storage space, and obtain dynamic tag information and static tag information of the data to be processed based on an association list or a second identifier in the data to be processed. And judging whether the current data state of the data to be processed is consistent with the dynamic label information, if not, processing the data to be processed to obtain processed data with the data state consistent with the dynamic label information. And applying a transfer data storage space for storing the processed data from the target storage space according to the data size in the dynamic label information, and storing the processed data into the transfer data storage space. If the current data state of the data to be processed is consistent with the dynamic label information, the data to be processed can be directly stored in the transfer data storage space.
The current data storage space address, the transfer data storage space, and the transfer direction may be determined according to user input or according to the received transfer request, and the data size may be determined based on the dynamic tag information of the data to be processed. The transfer direction may be from one memory to another, or within a single memory. When the processor includes a master processor and one or more slave processors, "transferring from one memory to another" includes: transferring from the memory corresponding to the master processor into the memory corresponding to a slave processor, transferring from the memory corresponding to a slave processor into the memory corresponding to the master processor, and transferring from the memory corresponding to one slave processor into the memory corresponding to another slave processor. The master processor may be a general-purpose processor such as a CPU, and the slave processors may be artificial intelligence processors.
For example, the transfer of the data to be processed may be implemented by a call such as "labelMemcpy(wp, wp_cpu, weight, HostToDevice)", which represents obtaining the data to be processed from the current data storage space wp_cpu (which may be a physical address of the current data storage space) in the memory of the host (e.g., a CPU), and storing the processed data, obtained after processing the data to be processed, into a transfer data storage space wp of size weight applied for in the device (e.g., an artificial intelligence processor). During the data transfer, and before storing the processed data into the transfer data storage space wp, the processor needs to apply for the transfer data storage space wp of size weight. When the master processor and the slave processor included in the processor are both CPUs, either the master processor or the slave processor may apply for the data storage space during the data transfer; when the master processor is a CPU and the slave processor is an IPU, the master processor may apply for the data storage space during the data transfer.
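The transfer flow can be sketched as a toy stand-in for the `labelMemcpy(wp, wp_cpu, weight, HostToDevice)` call: fetch the data and its dynamic tag from the source space, convert if the data state disagrees with the tag, and store both into the destination (transfer) space. All names and the dict layout are illustrative assumptions; the "conversion" here only rewrites the recorded state, standing in for the real type/layout conversion described earlier.

```python
def label_memcpy(dst_space, src_space, data_id, direction):
    """Copy `data_id` from src_space to dst_space, making its data
    state consistent with its dynamic tag information on the way."""
    data = src_space["data"][data_id]
    tag = src_space["tags"][data_id]
    if data["state"] != tag["state"]:          # state vs dynamic tag check
        # toy conversion: adopt the tag's state, keep the payload
        data = {"state": dict(tag["state"]), "payload": data["payload"]}
    dst_space.setdefault("data", {})[data_id] = data   # transfer space
    dst_space.setdefault("tags", {})[data_id] = tag    # tag travels along
    dst_space["direction"] = direction
    return data

# host-side weights whose current state (Float32) disagrees with the tag
host = {
    "data": {"weight": {"state": {"dtype": "Float32"}, "payload": [1.0, 2.0]}},
    "tags": {"weight": {"state": {"dtype": "Float16"}, "size": 128}},
}
device = {}
label_memcpy(device, host, "weight", "HostToDevice")
```

After the call, the device-side copy's state matches the dynamic tag, so the processor can use it without further conversion, as the text emphasizes.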
In a possible implementation manner, when receiving a transfer request for transferring (or transferring) data to be processed, if it is determined that the data to be processed only has static tag information, that is, there is no dynamic tag information, the processor may determine the data size according to the static data dimension and the corresponding dimension value. And then applying for transferring the data storage space according to the data size, and storing the data to be processed into the transferred data storage space.
In this manner, during data transfer the data state of the data to be processed is first checked against its dynamic tag information, which guarantees that the processed data transferred into the transfer data storage space matches the performance of the processor. When the processor reuses the processed data, it can perform operations on it directly without data conversion, which simplifies the processor's use of the processed data, increases the operation speed of the processor, and saves operation time.
In this embodiment, since the tag information and the data to be processed are stored in different storage spaces, it is required to use a corresponding policy to ensure that the processor can determine the corresponding tag information according to the data to be processed.
In one possible implementation, the method may further include setting up a device for neural network development and execution. The device may include a processor (which may include a general-purpose processor such as a CPU and an artificial intelligence processor), wherein the CPU is used to execute a development and creation step, a development and compilation step, and a development and operation step. The steps are as follows:
and a development and creation step, namely creating data nodes and operation nodes required by neural network operation, generating a calculation graph according to the connection relation between the data nodes and the operation nodes, and storing the data nodes, the operation nodes and the calculation graph.
And developing and compiling, namely scanning the calculation graph, determining the data type and the dynamic label information corresponding to the data in each data node, and generating one or more corresponding instructions according to the calculation graph. Optionally, the general-purpose processor may compile the computation graph according to the computation graph and the tag information therein, to obtain the instruction.
Development-run step: convert the data in the data nodes according to the dynamic label information of that data to obtain converted data, where the dynamic tag information of the converted data is consistent with the hardware information of the device that executes the instructions, such as an artificial intelligence processor.
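As an illustration only, the three development steps above can be sketched in Python as follows; all class names, tag keys, and the float16-on-IPU rule are assumptions made for this example, not the actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class DataNode:
    name: str
    static_tag: dict                       # e.g. {"dtype": "float32"}
    dynamic_tag: dict = field(default_factory=dict)

@dataclass
class OpNode:
    op: str
    inputs: list
    outputs: list

def create(data_nodes, op_nodes):
    # development-creation: store nodes and their connection relations
    return {"data": data_nodes, "ops": op_nodes}

def compile_graph(graph, processor="IPU"):
    # development-compilation: derive each node's dynamic tag from its
    # static tag plus the target processor, then emit instructions
    for node in graph["data"]:
        node.dynamic_tag = {
            "dtype": "float16" if processor == "IPU" else node.static_tag["dtype"]
        }
    return [f"{op.op} {','.join(op.inputs)} -> {','.join(op.outputs)}"
            for op in graph["ops"]]

def run(graph, instructions):
    # development-run: here data in each node would be cast/reordered to
    # match node.dynamic_tag before the instructions are executed
    return [f"executed {ins}" for ins in instructions]
```

A short usage: creating two data nodes and one operation node, then compiling for an (assumed) IPU target, rewrites each node's dynamic data type before the run step consumes the instructions.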
In one possible implementation, the development-run step may further include managing a device (e.g., a processor such as an IPU), which includes at least one of: controlling the device to start and end the neural network operation, configuring the device, setting registers, and performing processing related to the running of the device, such as checking for interrupts, which is not limited by the present disclosure.
In one possible implementation, the development-run step may further include managing the memory of a device (such as a processor of an IPU); the operations performed on the device memory include those related to its use, such as memory application and memory release. For example, the development-run step may include applying for a first data storage space for storing corresponding data according to the dynamic tag information of the data in a data node; transferring data from its current data storage space to the transfer data storage space according to the tag information of the data in the data node; and releasing idle memory resources.
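A minimal sketch of this tag-driven memory management, with byte sizes, tag keys, and function names assumed purely for illustration:

```python
DTYPE_SIZE = {"float32": 4, "float16": 2, "int8": 1}   # assumed sizes in bytes

def apply_storage(dynamic_tag):
    # apply for a first data storage space sized from the dynamic tag
    n_bytes = DTYPE_SIZE[dynamic_tag["dtype"]]
    for dim in dynamic_tag["shape"]:
        n_bytes *= dim
    return bytearray(n_bytes)      # stands in for device memory

def transfer(data, storage):
    # move data from its current storage into the transfer storage space
    storage[:len(data)] = data
    return storage

def release(storage):
    # release an idle memory resource
    storage.clear()
```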
In a possible implementation manner, fig. 4b shows a schematic diagram of an apparatus designed for neural network operation in the data processing method according to the embodiment of the disclosure, as shown in fig. 4b, the processor may include a development creation module 41, a development compiling module 42, and a development running module 43, and the development creation module 41, the development compiling module 42, and the development running module 43 may be integrated in a general-purpose processor of the processor.
The development creation module 41 includes: the data node sub-module is used for creating and storing data nodes, the operation node sub-module is used for creating and storing operation nodes, and the calculation graph sub-module is used for creating and storing a calculation graph. The development creation module 41 is configured to send the computation graph to the development compilation module 42.
The development compiling module 42 includes: the computation graph scanning submodule is used for scanning the computation graph to generate label information (including data categories and dynamic label information) of the data nodes, and the instruction generating submodule is used for generating the dynamic label information of the data nodes and generating instructions according to the computation graph. The development compiling module 42 is configured to send instructions and tag information to the development execution module 43.
The development execution module 43 includes: the device comprises a data transformation submodule for performing data transformation, a device management submodule for managing the device, and a memory management submodule for managing the memory of the device.
In this implementation, the user can simply and quickly design the apparatus for performing the neural network operation according to the above steps for developing the neural network.
In a possible implementation manner, fig. 4c shows a schematic usage diagram of a device for neural network operation in a data processing method according to an embodiment of the present disclosure. As shown in fig. 4c, the method may further include the following creation, compilation, and running steps, through which the device for neural network operation can be used to implement the neural network operation. The device may include processors such as a CPU and an IPU.
In the creation step (step S61), a general-purpose processor such as a CPU may be used to create the data nodes and operation nodes required for the neural network operation, and to generate a computation graph according to the connection relations between the data nodes and the operation nodes.
In the compilation step (step S62), a general-purpose processor such as a CPU may be used to scan the computation graph to generate the label information (including the data type and the dynamic label information) of the data nodes, and to compile the computation graph to generate the corresponding hardware instructions.
In the running step (step S63), an artificial intelligence processor such as an IPU may be used to run the hardware instructions and implement operations such as neural network training or inference: corresponding functions are called based on the hardware instructions to operate on the data, completing the neural network operation. Of course, a general-purpose processor such as a CPU may also be used to run the hardware instructions to the same end.
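A toy dispatch loop can illustrate how running hardware instructions calls the corresponding functions; the instruction format and the kernel names are illustrative assumptions, not the patent's instruction set:

```python
# illustrative kernel table; names are assumptions for this sketch
KERNELS = {
    "relu": lambda args: f"relu({args})",
    "conv": lambda args: f"conv({args})",
}

def execute(hardware_instructions):
    # call the function corresponding to each instruction's opcode
    results = []
    for ins in hardware_instructions:
        op, _, args = ins.partition(" ")
        results.append(KERNELS[op](args))
    return results
```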
In one possible implementation, the running step may further include: a processor such as an IPU or CPU applying, based on the hardware instructions, for data storage space (e.g. the first data storage space described above) and tag storage space, and moving data (e.g. transferring data to the transfer data storage space as described above).
In one possible implementation, the step of performing operations may further include: processors such as IPUs or CPUs are used to free up resources that are no longer used.
In this implementation, the user can simply and quickly perform the neural network operation using a corresponding device such as a processor according to the above steps for performing the neural network.
It is noted that, while for simplicity of explanation the foregoing method embodiments have been described as a series or combination of acts, those skilled in the art will appreciate that the present disclosure is not limited by the order of acts, as some steps may, in accordance with the present disclosure, occur in other orders or concurrently. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules referred to are not necessarily required by the disclosure.
It should be further noted that, although the steps in the flowcharts of figs. 2a, 2b and 4a are shown in sequence as indicated by the arrows, these steps are not necessarily executed in that sequence. Unless explicitly stated otherwise herein, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least a portion of the steps in figs. 2a, 2b, and 4a may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and whose execution order is not necessarily sequential: they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
Fig. 5 shows a block diagram of a data processing apparatus according to an embodiment of the present disclosure. As shown in fig. 5, the apparatus is applied to a processor, and includes an acquisition module 51 and a data processing module 52. The acquiring module 51 is configured to acquire data to be processed and tag information of the data to be processed, where the data to be processed is used for performing neural network operation, and the tag information includes dynamic tag information; the data processing module 52 is configured to process the data to be processed according to the dynamic tag information to obtain processed data, and store the processed data in the processor, where a data state of the processed data is consistent with the dynamic tag information; the dynamic label information is used for representing information related to the data to be processed and a processor for operating the neural network.
In one possible implementation, the dynamic tag information may include at least one of: dynamic data type, dynamic data dimension order, fragmentation parameter, padding parameter, and data size.
In one possible implementation, the data processing module 52 may include: the judging submodule judges whether the current data state of the data to be processed is consistent with the dynamic label information; the processing submodule is used for processing the data to be processed according to the dynamic label information when the current data state of the data to be processed is inconsistent with the dynamic label information to obtain processed data; wherein the data state includes a data type, an order of data dimensions, and a dimension value.
In one possible implementation, the processing submodule may perform at least one of the following processes: converting the data type of the data to be processed into a dynamic data type; adjusting the order of the data dimensions of the data to be processed; filling the data to be processed according to the filling parameters; and segmenting the data to be processed according to the slicing parameters.
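Using NumPy purely as an illustration, the consistency judgment and the four processes above might look like the following; the tag keys (`dtype`, `order`, `pad`, `split`) are assumed names, not the patent's format:

```python
import numpy as np

def matches(data, tag):
    # judge whether the current data state is consistent with the dynamic tag
    return (data.dtype == np.dtype(tag["dtype"])
            and data.shape == tuple(tag.get("shape", data.shape)))

def process(data, tag):
    # bring the data into the state described by the dynamic tag
    if data.dtype != np.dtype(tag["dtype"]):       # dynamic data type
        data = data.astype(tag["dtype"])
    if "order" in tag:                             # order of data dimensions
        data = np.transpose(data, tag["order"])
    if "pad" in tag:                               # filling parameter
        data = np.pad(data, tag["pad"])
    if "split" in tag:                             # slicing parameter
        data = np.array_split(data, tag["split"])
    return data
```

For example, a float32 array of shape (2, 3) processed with dtype float16, dimension order (1, 0), one column of padding on each side, and a split into two fragments yields fragments of shapes (2, 4) and (1, 4).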
In a possible implementation manner, the obtaining module 51 may include: the dynamic label generation submodule generates dynamic label information of the data to be processed according to the processor and the acquired static label information; wherein the static tag information comprises at least one of: a data category, a static data type, a static data dimension order, and a dimension value corresponding to each static data dimension.
In a possible implementation manner, the obtaining module 51 may include: the class determination submodule determines the data class of the data to be processed according to the out-degree and in-degree of the data to be processed corresponding to the neural network and the operation in the neural network in which the data to be processed participates; wherein the data category may include any of: instructions, input neurons, output neurons, hidden neurons, constant neurons, input weights, output weights, constant weights, and auxiliary data.
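One illustrative way to classify a data node by its in-degree and out-degree in the computation graph; these rules, and the handling of the remaining categories (weights, auxiliary data, etc.), are assumptions for the sketch, not the patent's exact criteria:

```python
def data_category(in_degree, out_degree, is_instruction=False):
    # in_degree: number of operations producing this datum
    # out_degree: number of operations consuming this datum
    if is_instruction:
        return "instruction"
    if in_degree == 0 and out_degree > 0:
        return "input neuron"      # only consumed by operations
    if in_degree > 0 and out_degree == 0:
        return "output neuron"     # only produced by operations
    if in_degree > 0 and out_degree > 0:
        return "hidden neuron"     # both produced and consumed
    return "constant neuron"       # no graph edges; fixed data
```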
In one possible implementation, the apparatus may further include: the tag storage module stores tag information into a tag storage space; the static label information is stored in a static label storage space in the label storage space; and storing the dynamic label information into a dynamic label storage space in the label storage space.
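A minimal sketch of a tag storage space split into static and dynamic regions; the dictionary-based layout and method names are assumptions for illustration:

```python
class TagStorage:
    # the tag storage space holds static and dynamic tags in separate regions
    def __init__(self):
        self.static_space = {}     # static tag storage space
        self.dynamic_space = {}    # dynamic tag storage space

    def store(self, name, static_tag=None, dynamic_tag=None):
        if static_tag is not None:
            self.static_space[name] = static_tag
        if dynamic_tag is not None:
            self.dynamic_space[name] = dynamic_tag

    def lookup(self, name):
        # the processor determines the tags corresponding to a given datum
        return self.static_space.get(name), self.dynamic_space.get(name)
```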
In one possible implementation, the data processing module 52 may include: the space application submodule, which is used for applying, according to the dynamic label information of the processed data, for a transfer data storage space for storing the processed data from the data storage space of the processor, where the dynamic label information of the processed data is consistent with the dynamic label information of the data to be processed;
and the data storage submodule stores the processed data into the transfer data storage space.
According to the data processing device provided by the disclosure, the data in the neural network is marked through the tag information, the data is processed and stored by taking the tag information as a basis, a more friendly API is provided for a user, the performance of software is improved, the occupation of hardware resources is reduced, and the operation speed of the neural network is increased.
Embodiments of the present disclosure also provide a non-volatile computer-readable storage medium on which computer program instructions are stored, which when executed by a processor implement the above-mentioned data processing method.
It should be understood that the above-described apparatus embodiments are merely illustrative and that the apparatus of the present disclosure may be implemented in other ways. For example, the division of the units/modules in the above embodiments is only one logical function division, and there may be another division manner in actual implementation. For example, multiple units, modules, or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented.
In addition, unless otherwise specified, each functional unit/module in each embodiment of the present disclosure may be integrated into one unit/module, each unit/module may exist alone physically, or two or more units/modules may be integrated together. The integrated units/modules may be implemented in the form of hardware or software program modules.
If the integrated unit/module is implemented in hardware, the hardware may be digital circuits, analog circuits, etc. Physical implementations of hardware structures include, but are not limited to, transistors, memristors, and the like. Unless otherwise specified, the artificial intelligence processor may be any suitable hardware processor, such as a CPU, GPU, FPGA, DSP, or ASIC. Unless otherwise specified, the memory unit may be any suitable magnetic or magneto-optical storage medium, such as Resistive Random Access Memory (RRAM), Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), Enhanced Dynamic Random Access Memory (EDRAM), High-Bandwidth Memory (HBM), or Hybrid Memory Cube (HMC).
The integrated units/modules, if implemented in the form of software program modules and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. The aforementioned memory includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
In a possible implementation manner, a board card is further disclosed, which comprises a storage device, an interface device, a control device and the data processing device; wherein the data processing device is connected with the storage device, the control device and the interface device respectively; the storage device is used for storing data; the interface device is used for realizing data transmission between the data processing device and external equipment; the control device is used for monitoring the state of the data processing device.
Fig. 6 shows a block diagram of a board according to an embodiment of the present disclosure, and referring to fig. 6, the board may include other kit components besides the data processing device 389, where the kit components include, but are not limited to: memory device 390, interface device 391 and control device 392;
the memory device 390 is connected to the data processing apparatus through a bus for storing data. The memory device may include a plurality of groups of memory cells 393. Each group of the storage units is connected with the data processing device through a bus. It is understood that each group of the memory cells may be a DDR SDRAM (Double Data Rate SDRAM).
DDR can double the speed of SDRAM without increasing the clock frequency: it allows data to be read on both the rising and falling edges of the clock pulse, making it twice as fast as standard SDRAM. In one embodiment, the storage device may include 4 groups of storage units, and each group may include a plurality of DDR4 chips. In one embodiment, the data processing device may internally include four 72-bit DDR4 controllers, where 64 of the 72 bits are used for data transmission and 8 bits for ECC checking. It can be understood that when DDR4-3200 chips are used in each group of storage units, the theoretical data-transmission bandwidth can reach 25600 MB/s.
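The quoted 25600 MB/s figure can be checked with a one-line calculation (64 data bits per transfer at 3200 mega-transfers per second):

```python
mega_transfers = 3200            # DDR4-3200 transfer rate, MT/s
bus_width_bits = 64              # data bits of each 72-bit controller (8 are ECC)
bandwidth_mb_s = mega_transfers * bus_width_bits // 8
print(bandwidth_mb_s)            # 25600 MB/s per controller
```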
In one embodiment, each group of the storage units includes a plurality of double data rate synchronous dynamic random access memories (DDR SDRAM) arranged in parallel. DDR can transfer data twice in one clock cycle. A controller for controlling the DDR is arranged in the data processing device and is used for controlling the data transmission and data storage of each storage unit.
The interface device is electrically connected with the data processing device and is used to enable data transfer between the data processing device and an external device, such as a server or a computer. For example, in one embodiment, the interface device may be a standard PCIe interface: the data to be processed is transmitted by the server to the data processing device through the standard PCIe interface, thereby implementing the data transfer. Preferably, when a PCIe 3.0 ×16 interface is used for transmission, the theoretical bandwidth can reach 16000 MB/s. In another embodiment, the interface device may be another interface; the present disclosure does not limit its specific form, as long as the interface unit can implement the bridging function. In addition, the computation results of the data processing device are transmitted back to the external device (e.g. a server) by the interface device.
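The quoted 16000 MB/s figure corresponds to the raw PCIe 3.0 ×16 rate, before 128b/130b encoding overhead is subtracted:

```python
lanes = 16
raw_gt_per_lane = 8                        # PCIe 3.0 signalling rate, GT/s
raw_mb_s = lanes * raw_gt_per_lane * 1000 // 8   # 16000 MB/s raw
effective_mb_s = raw_mb_s * 128 // 130           # after 128b/130b encoding
print(raw_mb_s, effective_mb_s)            # 16000 15753
```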
The control device is electrically connected with the data processing device and is used for monitoring its state. Specifically, the data processing device and the control device may be electrically connected through an SPI interface. The control device may include a micro controller unit (MCU). As the data processing device may comprise a plurality of processing chips, processing cores, or processing circuits, it may drive multiple loads and can therefore be in different working states such as heavy load and light load. The control device can regulate the working states of the processing chips, processing cores, and/or processing circuits in the data processing device.
In one possible implementation, an electronic device is disclosed that includes the above-described data processing apparatus. The electronic device comprises a data processing device, a robot, a computer, a printer, a scanner, a tablet computer, an intelligent terminal, a mobile phone, a vehicle data recorder, a navigator, a sensor, a camera, a server, a cloud server, a camera, a video camera, a projector, a watch, an earphone, a mobile storage, a wearable device, a vehicle, a household appliance, and/or a medical device. The vehicle comprises an airplane, a ship and/or a vehicle; the household appliances comprise a television, an air conditioner, a microwave oven, a refrigerator, an electric cooker, a humidifier, a washing machine, an electric lamp, a gas stove and a range hood; the medical equipment comprises a nuclear magnetic resonance apparatus, a B-ultrasonic apparatus and/or an electrocardiograph.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments. The technical features of the embodiments may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The foregoing may be better understood in light of the following clauses:
clause a1. a data processing method, applied to a processor, the method comprising:
acquiring data to be processed and label information of the data to be processed, wherein the data to be processed is used for neural network operation, and the label information comprises dynamic label information;
processing the data to be processed according to the dynamic label information to obtain processed data, and storing the processed data, wherein the data state of the processed data is consistent with the dynamic label information;
wherein the dynamic label information is used for characterizing information of the data to be processed related to a processor operating the neural network.
Clause a2. the method of clause a1, wherein the dynamic label information comprises at least one of: dynamic data type, dynamic data dimension order, fragmentation parameter, padding parameter, and data size.
Clause a3. according to the method of clause a2, processing the data to be processed according to the dynamic tag information to obtain processed data, including:
judging whether the current data state of the data to be processed is consistent with the dynamic label information;
when the current data state of the data to be processed is inconsistent with the dynamic label information, processing the data to be processed according to the dynamic label information to obtain processed data;
wherein the data state includes a data type, an order of data dimensions, and a dimension value.
Clause a4. according to the method of clause a3, when the current data state of the data to be processed is inconsistent with the dynamic tag information, processing the data to be processed according to the dynamic tag information to obtain processed data, including at least one of the following processes:
converting the data type of the data to be processed into the dynamic data type;
adjusting the sequence of the data dimensions of the data to be processed;
filling the data to be processed according to the filling parameters;
and segmenting the data to be processed according to the slicing parameters.
Clause a5. Obtaining data to be processed and label information of the data to be processed according to any one of clauses a1 to a4, including:
generating dynamic label information of the data to be processed according to the processor and the acquired static label information;
wherein the static tag information comprises at least one of: a data category, a static data type, a static data dimension order, and a dimension value corresponding to each static data dimension.
Clause a6. according to the method of clause a5, acquiring data to be processed and label information of the data to be processed includes:
determining the data category of the data to be processed according to the out degree and the in degree of the data to be processed corresponding to the neural network and the operation in the neural network in which the data to be processed participates;
wherein the data categories include any of: instructions, input neurons, output neurons, hidden neurons, constant neurons, input weights, output weights, constant weights, and auxiliary data.
Clause A7. the method of clause a5, the method further comprising:
storing the label information into a label storage space;
the static label information is stored in a static label storage space in the label storage space;
and the dynamic label information is stored in a dynamic label storage space in the label storage space.
Clause A8. the method of clause a1 or 6, storing the processed data, comprising:
applying, according to the dynamic label information of the processed data, for a transfer data storage space for storing the processed data from the data storage space of the processor; wherein the dynamic label information of the processed data is consistent with the dynamic label information of the data to be processed;
and storing the processed data into the transfer data storage space.
Clause A9. a data processing apparatus, applied to a processor, the apparatus comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring data to be processed and label information of the data to be processed, the data to be processed is used for carrying out neural network operation, and the label information comprises dynamic label information;
the data processing module is used for processing the data to be processed according to the dynamic label information to obtain processed data and storing the processed data to the processor, wherein the data state of the processed data is consistent with the dynamic label information;
wherein the dynamic label information is used for characterizing information of the data to be processed related to a processor operating the neural network.
Clause a10. the apparatus of clause a9, the dynamic label information comprising at least one of: dynamic data type, dynamic data dimension order, fragmentation parameter, padding parameter, and data size.
Clause a11. the apparatus of clause a10, the data processing module, comprising:
the judging submodule judges whether the current data state of the data to be processed is consistent with the dynamic label information;
the processing submodule is used for processing the data to be processed according to the dynamic label information when the current data state of the data to be processed is inconsistent with the dynamic label information to obtain processed data;
wherein the data state includes a data type, an order of data dimensions, and a dimension value.
Clause a12. the apparatus of clause a11, the processing submodule performs at least one of the following processes:
converting the data type of the data to be processed into the dynamic data type;
adjusting the sequence of the data dimensions of the data to be processed;
filling the data to be processed according to the filling parameters;
and segmenting the data to be processed according to the slicing parameters.
Clause a13. The apparatus of any of clauses a9 to a12, the obtaining module comprising:
the dynamic label generating submodule generates dynamic label information of the data to be processed according to the processor and the obtained static label information;
wherein the static tag information comprises at least one of: a data category, a static data type, a static data dimension order, and a dimension value corresponding to each static data dimension.
Clause a14. the apparatus of clause a13, the acquisition module comprising:
the class determination submodule determines the data class of the data to be processed according to the out degree and the in degree of the data to be processed corresponding to the neural network and the operation in the neural network in which the data to be processed participates;
wherein the data categories include any of: instructions, input neurons, output neurons, hidden neurons, constant neurons, input weights, output weights, constant weights, and auxiliary data.
Clause a15. the apparatus of clause a13, further comprising:
the tag storage module stores the tag information into a tag storage space;
the static label information is stored in a static label storage space in the label storage space;
and the dynamic label information is stored in a dynamic label storage space in the label storage space.
Clause a16. the apparatus of clause a9 or 14, the data processing module, comprising:
the space application submodule is used for applying, according to the dynamic label information of the processed data, for a transfer data storage space for storing the processed data from the data storage space of the processor; wherein the dynamic label information of the processed data is consistent with the dynamic label information of the data to be processed;
and the data storage submodule stores the processed data into the transfer data storage space.
Clause a17. an electronic device comprising the data processing apparatus of any of clauses a9 to clause a16.
Clause a18. A board card, the board card comprising: a storage device, an interface device, a control device, and a data processing device as in any of clauses a9 to a16;
wherein the data processing device is connected with the storage device, the control device and the interface device respectively;
the storage device is used for storing data;
the interface device is used for realizing data transmission between the data processing device and external equipment;
the control device is used for monitoring the state of the data processing device.
Clause a19. the board of clause a18,
the memory device includes: a plurality of groups of memory cells, each group of memory cells connected with the data processing device through a bus, the memory cells are: DDR SDRAM;
the data processing apparatus includes: the DDR controller is used for controlling data transmission and data storage of each memory unit;
the interface device is as follows: a standard PCIE interface.
Clause a20. a non-transitory computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the data processing method of any one of clauses a1 to A8.
The embodiments of the present disclosure have been described in detail, and the principles and embodiments of the present disclosure are explained herein using specific examples, which are provided only to help understand the method and the core idea of the present disclosure. Meanwhile, a person skilled in the art should, based on the idea of the present disclosure, change or modify the specific embodiments and application scope of the present disclosure. In view of the above, the description is not intended to limit the present disclosure.

Claims (12)

1. A data processing method, applied to a processor, the method comprising:
acquiring data to be processed and label information of the data to be processed, wherein the data to be processed is used for neural network operation, and the label information comprises dynamic label information;
processing the data to be processed according to the dynamic label information to obtain processed data, and storing the processed data according to a pre-obtained storage instruction, wherein the processed data is used as data required by a target processor when a neural network is used for operation, and the data state of the processed data is consistent with the dynamic label information;
wherein the dynamic label information is used for representing information of the data to be processed related to a target processor for operating the neural network; the dynamic label information includes at least one of: a dynamic data type, a dynamic data dimension order, a fragmentation parameter, a filling parameter, and a data size;
the pre-obtained storage instruction is an instruction corresponding to the neural network, which is obtained by compiling a computation graph of the neural network in advance according to the tag information.
2. The method of claim 1, wherein processing the data to be processed according to the dynamic tag information to obtain processed data comprises:
judging whether the current data state of the data to be processed is consistent with the dynamic label information;
when the current data state of the data to be processed is inconsistent with the dynamic label information, processing the data to be processed according to the dynamic label information to obtain processed data;
wherein the data state includes a data type, an order of data dimensions, and a dimension value.
3. The method according to claim 2, wherein when the current data state of the data to be processed is inconsistent with the dynamic label information, processing the data to be processed according to the dynamic label information to obtain the processed data comprises at least one of the following:
converting the data type of the data to be processed into the dynamic data type;
adjusting the order of the data dimensions of the data to be processed;
padding the data to be processed according to the padding parameter;
and slicing the data to be processed according to the slicing parameter.
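The transformations recited in claims 2 and 3 can be sketched as a small NumPy routine. This is an illustrative assumption of how a dynamic tag might be applied, not the patented implementation; the tag keys `dtype`, `dim_order`, `pad`, and `slice` are hypothetical names standing in for the dynamic label fields.

```python
import numpy as np

def process_data(data, tag):
    """Bring `data` into the state described by a dynamic tag (sketch).

    `tag` is a hypothetical dict standing in for the dynamic label
    information, with 'dtype', 'dim_order', 'pad', and 'slice' keys.
    """
    # Claim 2: only transform when the current state disagrees with the tag.
    if data.dtype != tag["dtype"]:
        data = data.astype(tag["dtype"])             # convert the data type
    order = tag.get("dim_order")
    if order is not None and list(order) != list(range(data.ndim)):
        data = np.transpose(data, order)             # reorder the dimensions
    if tag.get("pad") is not None:
        data = np.pad(data, tag["pad"])              # pad per padding parameter
    if tag.get("slice") is not None:
        data = data[tag["slice"]]                    # cut per slicing parameter
    return data
```

For example, a target processor that expects channel-first `float16` input would carry a tag whose `dtype` and `dim_order` fields trigger the first two branches, leaving already-consistent data untouched.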
4. The method according to any one of claims 1 to 3, wherein obtaining the data to be processed and the tag information of the data to be processed comprises:
generating dynamic label information of the data to be processed according to the processor and the acquired static label information;
wherein the static tag information comprises at least one of: a data category, a static data type, a static data dimension order, and a dimension value corresponding to each static data dimension.
5. The method of claim 4, wherein obtaining the data to be processed and the tag information of the data to be processed comprises:
determining the data category of the data to be processed according to the out-degree and the in-degree of the data to be processed in the computation graph corresponding to the neural network, and the operations in the neural network in which the data to be processed participates;
wherein the data categories include any one of: instructions, input neurons, output neurons, hidden neurons, constant neurons, input weights, output weights, constant weights, and auxiliary data.
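Claim 5's degree-based classification can be illustrated with a toy rule set. The rules below are assumptions chosen only to show how a node's in-degree and out-degree in the computation graph could determine its category; the patent does not disclose these exact rules, and the weight, instruction, and auxiliary-data categories are omitted for brevity.

```python
def classify_neuron(in_degree, out_degree):
    """Toy sketch of claim 5: infer a data category from a node's
    degrees in the neural-network computation graph (assumed rules)."""
    if in_degree == 0 and out_degree > 0:
        return "input neuron"    # produced by no operation, consumed by some
    if in_degree > 0 and out_degree == 0:
        return "output neuron"   # produced by an operation, consumed by none
    if in_degree > 0 and out_degree > 0:
        return "hidden neuron"   # both produced and consumed in the graph
    return "constant neuron"    # isolated with respect to graph operations
```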
6. The method of claim 4, further comprising:
storing the label information into a label storage space;
wherein the static label information is stored in a static label storage space in the label storage space,
and the dynamic label information is stored in a dynamic label storage space in the label storage space.
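Claim 6's split between a static and a dynamic sub-space can be pictured as two maps inside one tag store. This is a sketch under the assumption that tags are keyed by a data identifier; the class and field names are hypothetical.

```python
class TagStorage:
    """Sketch of claim 6: one label storage space containing a static
    and a dynamic label sub-space (names are assumptions)."""

    def __init__(self):
        self.static_space = {}   # data category, static type, dim order...
        self.dynamic_space = {}  # target-processor-specific tag fields

    def store(self, data_id, static_tag=None, dynamic_tag=None):
        # Route each part of the label into its own sub-space.
        if static_tag is not None:
            self.static_space[data_id] = static_tag
        if dynamic_tag is not None:
            self.dynamic_space[data_id] = dynamic_tag
```

Keeping the two sub-spaces separate matches the method's flow: static tags exist before compilation, while dynamic tags are generated later for a particular target processor.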
7. The method of claim 1 or 5, wherein storing the processed data comprises:
allocating, from a data storage space of the processor according to the dynamic label information of the processed data, a transfer data storage space for storing the processed data; wherein the dynamic label information of the processed data is consistent with the dynamic label information of the data to be processed;
and storing the processed data into the transfer data storage space.
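Claim 7's "applying for" a transfer data storage space reads as sizing a buffer from the dynamic tag's data-size field and carving it out of the processor's data storage space. The bump-pointer pool below is an illustrative assumption, not the disclosed allocator.

```python
def allocate_transfer_space(pool, dynamic_tag):
    """Sketch of claim 7: carve a transfer buffer out of the processor's
    data storage space, sized by the dynamic tag (assumed layout)."""
    size = dynamic_tag["data_size"]          # size in bytes, from the tag
    offset = pool["next_free"]
    if offset + size > len(pool["memory"]):
        raise MemoryError("data storage space exhausted")
    pool["next_free"] = offset + size        # bump the free pointer
    return memoryview(pool["memory"])[offset:offset + size]
```

The processed data would then be copied into the returned view, completing the "storing the processed data into the transfer data storage space" step.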
8. A data processing apparatus, for use with a processor, the apparatus comprising:
an acquisition module, configured to acquire data to be processed and label information of the data to be processed, wherein the data to be processed is used for a neural network operation, and the label information comprises dynamic label information; and
a data processing module, configured to process the data to be processed according to the dynamic label information to obtain processed data, and to store the processed data according to a pre-obtained storage instruction, wherein the processed data serves as the data required by a target processor when performing an operation using the neural network, and a data state of the processed data is consistent with the dynamic label information;
wherein the dynamic label information is used for representing information of the data to be processed that is related to the target processor for operating the neural network; the dynamic label information includes at least one of: a dynamic data type, a dynamic data dimension order, a slicing parameter, a padding parameter, and a data size;
the pre-obtained storage instruction is an instruction corresponding to the neural network, which is obtained by compiling a computation graph of the neural network in advance according to the tag information.
9. An electronic device, characterized in that the electronic device comprises the data processing apparatus as claimed in claim 8.
10. A board card, characterized in that the board card comprises: a storage device, an interface device, a control device, and the data processing apparatus of claim 8;
wherein the data processing apparatus is connected to the storage device, the control device, and the interface device, respectively;
the storage device is configured to store data;
the interface device is configured to implement data transmission between the data processing apparatus and an external device;
and the control device is configured to monitor a state of the data processing apparatus.
11. The board card of claim 10, wherein
the storage device comprises a plurality of groups of memory cells, each group of memory cells being connected to the data processing apparatus through a bus, and each memory cell being a DDR SDRAM;
the data processing apparatus comprises a DDR controller configured to control data transmission to, and data storage in, each memory cell;
and the interface device is a standard PCIE interface.
12. A non-transitory computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the data processing method of any one of claims 1 to 7.
CN201910748310.4A 2019-08-14 2019-08-14 Data processing method, data processing device, computer equipment and storage medium Active CN110458286B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110673119.5A CN113435591B (en) 2019-08-14 2019-08-14 Data processing method, device, computer equipment and storage medium
CN201910748310.4A CN110458286B (en) 2019-08-14 2019-08-14 Data processing method, data processing device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910748310.4A CN110458286B (en) 2019-08-14 2019-08-14 Data processing method, data processing device, computer equipment and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202110673119.5A Division CN113435591B (en) 2019-08-14 2019-08-14 Data processing method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110458286A CN110458286A (en) 2019-11-15
CN110458286B true CN110458286B (en) 2022-02-08

Family

ID=68486445

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110673119.5A Active CN113435591B (en) 2019-08-14 2019-08-14 Data processing method, device, computer equipment and storage medium
CN201910748310.4A Active CN110458286B (en) 2019-08-14 2019-08-14 Data processing method, data processing device, computer equipment and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202110673119.5A Active CN113435591B (en) 2019-08-14 2019-08-14 Data processing method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (2) CN113435591B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102981941A (en) * 2012-11-08 2013-03-20 大唐软件技术股份有限公司 Alarm handling method and alarm handling device
CN106447034A (en) * 2016-10-27 2017-02-22 中国科学院计算技术研究所 Neutral network processor based on data compression, design method and chip
CN107832842A (en) * 2017-11-28 2018-03-23 北京地平线信息技术有限公司 The method and apparatus that convolution algorithm is performed for fold characteristics data
CN109657782A (en) * 2018-12-14 2019-04-19 北京中科寒武纪科技有限公司 Operation method, device and Related product
CN109754011A (en) * 2018-12-29 2019-05-14 北京中科寒武纪科技有限公司 Data processing method, device and Related product based on Caffe

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7433860B1 (en) * 2003-08-05 2008-10-07 Sprint Communications Company L.P. Method and system for labeling communications cables
US9418331B2 (en) * 2013-10-28 2016-08-16 Qualcomm Incorporated Methods and apparatus for tagging classes using supervised learning
US9600764B1 (en) * 2014-06-17 2017-03-21 Amazon Technologies, Inc. Markov-based sequence tagging using neural networks
US10223371B2 (en) * 2014-11-21 2019-03-05 Vmware, Inc. Host-based deduplication using array generated data tags
WO2017079568A1 (en) * 2015-11-06 2017-05-11 Google Inc. Regularizing machine learning models
CN109219821B (en) * 2017-04-06 2023-03-31 上海寒武纪信息科技有限公司 Arithmetic device and method
CN111309486B (en) * 2018-08-10 2024-01-12 中科寒武纪科技股份有限公司 Conversion method, conversion device, computer equipment and storage medium
CN110096309B (en) * 2018-11-14 2020-04-14 上海寒武纪信息科技有限公司 Operation method, operation device, computer equipment and storage medium


Also Published As

Publication number Publication date
CN110458286A (en) 2019-11-15
CN113435591B (en) 2024-04-05
CN113435591A (en) 2021-09-24

Similar Documents

Publication Publication Date Title
CN110096310B (en) Operation method, operation device, computer equipment and storage medium
CN110119807B (en) Operation method, operation device, computer equipment and storage medium
CN114580606A (en) Data processing method, data processing device, computer equipment and storage medium
CN110458285B (en) Data processing method, data processing device, computer equipment and storage medium
CN110555522B (en) Data processing method, data processing device, computer equipment and storage medium
CN110647981B (en) Data processing method, data processing device, computer equipment and storage medium
CN110458286B (en) Data processing method, data processing device, computer equipment and storage medium
CN111047005A (en) Operation method, operation device, computer equipment and storage medium
WO2021018313A1 (en) Data synchronization method and apparatus, and related product
CN109542837B (en) Operation method, device and related product
CN109543835B (en) Operation method, device and related product
CN109558565B (en) Operation method, device and related product
CN111061507A (en) Operation method, operation device, computer equipment and storage medium
CN111047030A (en) Operation method, operation device, computer equipment and storage medium
CN112395008A (en) Operation method, operation device, computer equipment and storage medium
CN111275197B (en) Operation method, device, computer equipment and storage medium
CN111339060B (en) Operation method, device, computer equipment and storage medium
CN111290789B (en) Operation method, operation device, computer equipment and storage medium
CN111026440B (en) Operation method, operation device, computer equipment and storage medium
CN111290788B (en) Operation method, operation device, computer equipment and storage medium
CN111353125B (en) Operation method, operation device, computer equipment and storage medium
CN111124497B (en) Operation method, operation device, computer equipment and storage medium
CN109543836B (en) Operation method, device and related product
WO2021233187A1 (en) Method and device for allocating storage addresses for data in memory
CN109543833B (en) Operation method, device and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100190 room 644, comprehensive research building, No. 6 South Road, Haidian District Academy of Sciences, Beijing

Applicant after: Zhongke Cambrian Technology Co., Ltd

Address before: 100190 room 644, comprehensive research building, No. 6 South Road, Haidian District Academy of Sciences, Beijing

Applicant before: Beijing Zhongke Cambrian Technology Co., Ltd.

GR01 Patent grant