CN109726797A - Data processing method, device, computer system and storage medium
- Publication number: CN109726797A (application CN201811569176.3A)
- Authority: CN (China)
- Prior art keywords: neural network, node, recurrent neural network, data
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
- Classification: Image Analysis (AREA)
Abstract
This application relates to a data processing method, device, computer system and storage medium. The data processing method, device, computer system and storage medium of this application can greatly shorten the time needed to generate the offline model of a recurrent neural network node, thereby improving the processing speed and efficiency of the processor.
Description
Technical field
This application relates to the field of computer technology, and in particular to a data processing method, device, computer system and storage medium.
Background technique
With the development of artificial intelligence technology, deep learning is now ubiquitous and indispensable, and many scalable deep learning systems have emerged along with it, such as TensorFlow, MXNet, Caffe and PyTorch. These deep learning systems can provide various neural network models that run on processors such as CPUs or GPUs. In general, neural networks include recurrent neural networks, acyclic (non-recurrent) neural networks and the like.
However, the time needed to generate a recurrent neural network model is generally proportional to the number of loop iterations and the number of layers. For a single-layer recurrent neural network with on the order of 10^2 iterations, directly generating the offline model takes more than 12 hours; the offline model takes too long to generate, resulting in low processing efficiency.
Summary of the invention
In view of the above technical problems, it is necessary to provide a data processing method, device, computer system and storage medium that can improve processing efficiency.
A data processing method, the method comprising:
obtaining a recurrent neural network node, the recurrent neural network node comprising at least one recurrent neural network unit;
running a single recurrent neural network unit according to the model data set and model structure parameters of that unit in the recurrent neural network node, to obtain instruction data corresponding to the single recurrent neural network unit;
obtaining, according to the instruction data corresponding to the single recurrent neural network unit, a first offline model corresponding to the single recurrent neural network unit;
wherein the first offline model comprises the weight data and instruction data of the single recurrent neural network unit.
In one of the embodiments, the weight data and instruction data of the single recurrent neural network unit are stored in correspondence with each other, to obtain the first offline model corresponding to the single recurrent neural network unit.
In one of the embodiments, it is judged whether the first offline model is stateful; if the first offline model is stateful, the first offline model further comprises state input data, the state input data being the output data of the previous recurrent neural network unit ahead of the hidden layer.
In one of the embodiments, an original network containing the recurrent neural network node is obtained; the dependency of each node in the original network is determined according to the model structure parameters of the original network; the input node and output node of each recurrent neural network node in the original network are determined according to the dependencies of the nodes; and the connections between the recurrent neural network node and its input node and output node are disconnected, to obtain the at least one recurrent neural network node.
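A minimal sketch of this graph-splitting embodiment, with hypothetical names and the network represented simply as a list of directed edges: first find the neighbours of the RNN node, then cut every edge touching it.

```python
def io_nodes(edges, rnn_node):
    """Determine the input and output nodes of one RNN node from the
    dependency edges (src, dst) of the original network."""
    inputs = [s for (s, d) in edges if d == rnn_node]
    outputs = [d for (s, d) in edges if s == rnn_node]
    return inputs, outputs

def isolate_rnn_node(edges, rnn_node):
    """Disconnect the RNN node from its input/output nodes, returning the
    remaining edges of the original network."""
    return [(s, d) for (s, d) in edges if rnn_node not in (s, d)]
```

After the cut, the RNN node stands alone and can be handled by the single-unit compilation path, while the remaining edges describe the acyclic part of the network.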
In one of the embodiments, the execution order of the nodes in the original network is determined according to the dependency of each node in the original network; the original network is run according to that execution order, to obtain the instruction data of each acyclic neural network node in the original network; and the weight data and instruction data corresponding to each acyclic neural network node are stored in correspondence with each other, to obtain a second offline model; wherein the second offline model comprises the weight data and instruction data of each acyclic neural network node in the original network.
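An illustrative sketch of this embodiment (hypothetical names throughout): derive an execution order from the dependencies, then record one weight/instruction entry per acyclic node as the second offline model.

```python
def execution_order(nodes, deps):
    """deps[n] is the set of nodes that must run before n."""
    order, done = [], set()
    while len(order) < len(nodes):
        ready = [n for n in nodes
                 if n not in done and deps.get(n, set()) <= done]
        if not ready:
            raise ValueError("cycle detected: not an acyclic network")
        for n in sorted(ready):
            order.append(n)
            done.add(n)
    return order

def build_second_offline_model(nodes, deps, weights):
    # Running each node in order yields its instruction data; store it
    # in correspondence with that node's weight data.
    model = {}
    for n in execution_order(nodes, deps):
        model[n] = {"weights": weights.get(n), "instructions": "exec " + n}
    return model
```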
In one of the embodiments, a new original network is obtained; if a corresponding offline model exists for the new original network, the offline model corresponding to the new original network is obtained, and the new original network is run according to that offline model, wherein the offline model corresponding to the new original network comprises the first offline model and the second offline model.
A data processing device, the device comprising:
a first obtaining module, configured to obtain a recurrent neural network node, the recurrent neural network node comprising at least one recurrent neural network unit;
a running module, configured to run a single recurrent neural network unit according to the model data set and model structure parameters of that unit in the recurrent neural network node, to obtain instruction data corresponding to the single recurrent neural network unit;
a generation module, configured to obtain, according to the instruction data corresponding to the single recurrent neural network unit, a first offline model corresponding to the single recurrent neural network unit;
wherein the first offline model comprises the weight data and instruction data of the single recurrent neural network unit.
In one of the embodiments, the generation module is further configured to store the weight data and instruction data of the single recurrent neural network unit in correspondence with each other, to obtain the first offline model corresponding to the single recurrent neural network unit.
In one of the embodiments, the device further comprises a judgment module and a first execution module; the judgment module is configured to judge whether the first offline model is stateful; the first execution module is configured such that, if the first offline model is stateful, the first offline model further comprises state input data, the state input data being the output data of the previous recurrent neural network unit ahead of the hidden layer.
In one of the embodiments, the obtaining module comprises:
a first obtaining unit, configured to obtain an original network containing the recurrent neural network node;
a first determination unit, configured to determine the dependency of each node in the original network according to the model structure parameters of the original network, and further configured to determine, according to the dependencies of the nodes, the input node and output node of each recurrent neural network node in the original network;
a first execution unit, configured to disconnect the connections between the recurrent neural network node and its input node and output node, to obtain the at least one recurrent neural network node.
In one of the embodiments, the obtaining module further comprises a second determination unit and a second execution unit:
the second determination unit is configured to determine the execution order of the nodes in the original network according to the dependency of each node in the original network;
the second execution unit is configured to run the original network according to that execution order, to obtain the instruction data of each acyclic neural network node in the original network;
the generation module is configured to store the weight data and instruction data corresponding to each acyclic neural network node in correspondence with each other, to obtain the second offline model;
wherein the second offline model comprises the weight data and instruction data of each acyclic neural network node in the original network.
A computer system, comprising a processor and a memory, wherein a computer program is stored in the memory, and the processor executes the method described in any of the above embodiments when executing the computer program.
In one of the embodiments, the processor comprises an arithmetic unit and a controller unit; the arithmetic unit comprises a master processing circuit and a plurality of slave processing circuits;
the controller unit is configured to obtain input data and an instruction;
the controller unit is further configured to parse the instruction to obtain a plurality of instruction data, and to send the plurality of instruction data and the input data to the master processing circuit;
the master processing circuit is configured to perform preamble processing on the input data and to transfer data and instruction data with the plurality of slave processing circuits;
the plurality of slave processing circuits are configured to execute intermediate operations in parallel according to the data and instruction data transferred from the master processing circuit to obtain a plurality of intermediate results, and to transfer the plurality of intermediate results to the master processing circuit;
the master processing circuit is configured to perform subsequent processing on the plurality of intermediate results to obtain the result of the instruction.
A computer storage medium, wherein a computer program is stored in the computer storage medium, and when the computer program is executed by one or more first processors, the method described in any of the above embodiments is executed.
According to the above data processing method, device, computer system and storage medium, the recurrent neural network unit is run according to the model data set and model structure parameters of a single recurrent neural network unit, the instruction data of the single recurrent neural network unit is obtained, and the first offline model corresponding to the single recurrent neural network unit is then obtained; the first offline model comprises the weight data and instruction data of the single recurrent neural network unit. The data processing method of this application only needs to obtain the first offline model of a single recurrent neural network unit, without compiling and running all recurrent neural network units in the recurrent neural network node, so the time needed to generate the offline model of the recurrent neural network node can be greatly shortened, thereby improving the processing speed and efficiency of the processor.
Detailed description of the invention
Fig. 1 is the system block diagram of the computer system of an embodiment;
Fig. 2 is the system block diagram of the computer system of another embodiment;
Fig. 3 is the system block diagram of the processor of an embodiment;
Fig. 4 is the flow diagram of data processing method in one embodiment;
Fig. 5 is the structural schematic diagram of a recurrent neural network in one embodiment;
Fig. 6 is the flow diagram of step S310;
Fig. 7 is the flow diagram of step S100;
Fig. 8 is the flow diagram of the data processing method of one embodiment;
Fig. 9 is the network structure of the neural network of one embodiment;
Figure 10 is the structural block diagram of data processing equipment in one embodiment;
Figure 11 is the flow diagram of the data processing method of another embodiment;
Figure 12 is the flow diagram of data processing method in one embodiment;
Figure 13 is the flow diagram of running the equivalent network in one embodiment;
Figure 14 is the flow diagram of obtaining the equivalent network in another embodiment;
Figure 15 is the flow diagram of step S7012;
Figure 16 is the flow diagram of step S900;
Figure 17 is the structural block diagram of data processing equipment in one embodiment.
Specific embodiment
In order to keep technical solution of the present invention clearer, below in conjunction with attached drawing, to Processing with Neural Network side of the invention
Method, computer system and storage medium are described in further detail.It should be appreciated that specific embodiment described herein is only used
To explain that the present invention is not intended to limit the present invention.
Fig. 1 is the block diagram of a computer system 1000 of an embodiment, which may include a processor 110 and a memory 120 connected to the processor 110. Referring also to Fig. 2, the processor 110 is configured to provide computing and control capability, and may include an obtaining module 111, a computing module 113, a control module 112 and the like, wherein the obtaining module 111 may be a hardware module such as an IO (Input/Output) interface, and the computing module 113 and the control module 112 are hardware modules. For example, the computing module 113 and the control module 112 may be digital circuits, analog circuits or the like. Physical implementations of the above hardware circuits include but are not limited to physical devices, and physical devices include but are not limited to transistors, memristors and the like.
Optionally, the processor 110 may be a general-purpose processor, such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit) or a DSP (Digital Signal Processor); the processor 110 may also be a dedicated neural network processor such as an IPU (Intelligence Processing Unit). Of course, the processor may also be an instruction set processor, a related chipset, a special-purpose microprocessor (e.g., an application-specific integrated circuit (ASIC)), an on-board storage device for caching purposes, or the like.
Optionally, referring to Fig. 3, the processor 110 is configured to perform machine learning computation; the processor includes a controller unit 20 and an arithmetic unit 12, wherein the controller unit 20 is connected to the arithmetic unit 12, and the arithmetic unit 12 includes a master processing circuit and a plurality of slave processing circuits;
the controller unit 20 is configured to obtain input data and a computation instruction. In an optional scheme, the input data and the computation instruction may specifically be obtained through a data input/output unit, which may specifically be one or more data I/O interfaces or I/O pins.
The above computation instructions include but are not limited to: forward operation instructions, reverse training instructions, or other neural network operation instructions such as convolution operation instructions; the specific embodiments of this application do not limit the specific form of the above computation instructions.
The controller unit 20 is also configured to parse the computation instruction to obtain a plurality of operation instructions, and to send the plurality of operation instructions and the input data to the master processing circuit;
the master processing circuit 101 is configured to perform preamble processing on the input data and to transfer data and operation instructions with the plurality of slave processing circuits;
the plurality of slave processing circuits 102 are configured to execute intermediate operations in parallel according to the data and operation instructions transferred from the master processing circuit to obtain a plurality of intermediate results, and to transfer the plurality of intermediate results to the master processing circuit;
the master processing circuit 101 is configured to perform subsequent processing on the plurality of intermediate results to obtain the computation result of the computation instruction.
In the technical solution provided by this application, the arithmetic unit is arranged in a one-master, multiple-slaves structure. For the computation instruction of a forward operation, the data can be split according to that computation instruction, so that the plurality of slave processing circuits can perform the computation-intensive part of the operation in parallel, thereby improving the operation speed, saving operation time, and in turn reducing power consumption.
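The one-master, multiple-slaves split described above can be pictured with the following sketch, which models slave processing circuits as worker threads computing partial sums (the function names and the choice of partial sums as the intermediate operation are illustrative assumptions, not the patent's circuit behaviour):

```python
from concurrent.futures import ThreadPoolExecutor

def master_run(input_data, n_slaves=4):
    # Master: split the input according to the (notional) forward-operation
    # instruction, one slice per slave processing circuit.
    size = max(1, -(-len(input_data) // n_slaves))  # ceiling division
    slices = [input_data[i:i + size] for i in range(0, len(input_data), size)]
    # Slaves: execute the intermediate operations in parallel.
    with ThreadPoolExecutor(max_workers=n_slaves) as pool:
        intermediates = list(pool.map(sum, slices))
    # Master: subsequent processing combines the intermediate results.
    return sum(intermediates)
```

The design point is that only the split and the final combination are serial; the bulk of the work runs concurrently on the slaves.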
Optionally, the above machine learning computation may specifically include an artificial neural network operation, and the above input data may specifically include input neuron data and weight data. The above computation result may specifically be the result of the artificial neural network operation, i.e., output neuron data.
The operation in a neural network may be one layer of operation in the neural network. For a multilayer neural network, the realization process is as follows: in the forward operation, after the operation of the previous layer of the artificial neural network is completed, the operation instruction of the next layer takes the output neurons computed in the arithmetic unit as the input neurons of the next layer for operation (or performs certain operations on those output neurons and then uses them as the input neurons of the next layer), and at the same time the weights are replaced with the weights of the next layer; in the reverse operation, after the reverse operation of the previous layer of the artificial neural network is completed, the operation instruction of the next layer takes the input neuron gradients computed in the arithmetic unit as the output neuron gradients of the next layer for operation (or performs certain operations on those input neuron gradients and then uses them as the output neuron gradients of the next layer), while the weights are replaced with the weights of the next layer.
The above machine learning computation may also include support vector machine operations, k-nearest-neighbour (k-NN) operations, k-means operations, principal component analysis operations and the like. For convenience of description, the artificial neural network operation is taken below as an example to illustrate the specific scheme of machine learning computation.
For the artificial neural network operation, if the operation has multiple layers, the input neurons and output neurons of the multilayer operation do not mean the neurons in the input layer and the output layer of the entire neural network. Rather, for any two adjacent layers in the network, the neurons in the lower layer of the network forward operation are the input neurons, and the neurons in the upper layer of the network forward operation are the output neurons. Taking a convolutional neural network as an example, for any given layer and the layer following it, we call the former the input layer, with its neurons being the input neurons, and the latter the output layer, with its neurons being the output neurons. That is, except for the top layer, each layer can serve as an input layer, and the next layer is the corresponding output layer.
Optionally, the above computing device may further include a storage unit 10 and a direct memory access unit 50. The storage unit 10 may include a register, a cache, or any combination thereof; specifically, the cache is configured to store the computation instruction, and the register is configured to store the input data and scalars. The cache is a scratchpad cache. The direct memory access unit 50 is configured to read data from, or store data to, the storage unit 10.
Optionally, the controller unit includes an instruction storage unit 210, an instruction processing unit 211 and a storage queue unit 212;
the instruction storage unit 210 is configured to store computation instructions associated with the artificial neural network operation;
the instruction processing unit 211 is configured to parse the computation instruction to obtain a plurality of operation instructions;
the storage queue unit 212 is configured to store an instruction queue, the instruction queue comprising a plurality of operation instructions or computation instructions to be executed in the front-to-back order of the queue.
For example, in an optional technical solution, the master arithmetic processing circuit may also include a controller unit, and this controller unit may include a master instruction processing unit specifically configured to decode instructions into micro-instructions. Of course, in another optional scheme, the slave arithmetic processing circuit may also include another controller unit, which includes a slave instruction processing unit specifically configured to receive and process micro-instructions. The above micro-instruction may be the next-level instruction of an instruction; the micro-instruction can be obtained by splitting or decoding the instruction, and can be further decoded into control signals for each component, each unit or each processing circuit.
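A toy sketch of this two-stage decode (instruction → micro-instructions → control signals); the string formats and function names are invented purely for illustration and do not reflect any actual instruction encoding:

```python
def decode_to_micro(instruction):
    """Master instruction processing unit: split one high-level operation
    instruction into per-operand micro-instructions (next-level instructions)."""
    op, *operands = instruction.split()
    return ["micro %s %s" % (op, arg) for arg in operands]

def decode_to_control(micro):
    """Further decode a micro-instruction into a control signal for a
    component, unit or processing circuit."""
    return {"signal": micro.replace(" ", "_")}
```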
The memory 120 may also store a computer program, the computer program being used to realize the data processing method provided in the embodiments of this application. Specifically, the data processing method is used to generate the first offline model corresponding to the recurrent neural network node in the original network received by the processor 110. The first offline model may include the weight data and instruction data of a single recurrent neural network unit, wherein the instruction data may be used to indicate which computation function the node is used to execute. In this way, when the processor 110 runs the recurrent neural network node again, it can do so by recursively calling the first offline model corresponding to the recurrent neural network unit, without repeating the compilation and other operations on each network unit in the recurrent neural network node, greatly shortening the time needed to generate the offline model of the recurrent neural network node; this shortens the running time when the processor 110 runs the network, and in turn improves the processing speed and efficiency of the processor 110.
Optionally, still referring to Fig. 2, the memory 120 may include a first storage unit 121, a second storage unit 122 and a third storage unit 123, wherein the first storage unit 121 can be used to store the computer program, the computer program being used to realize the data processing method provided in the embodiments of this application; the second storage unit 122 can be used to store the related data during the operation of the neural network; and the third storage unit 123 is used to store the offline model. Optionally, the number of storage units included in the memory may also be greater than three, which is not specifically limited here. The memory 120 can be an internal memory, such as a volatile memory like a cache, which can be used to store the related data during the operation of the neural network, such as input data, output data, weights, instructions and the like. The memory 120 can also be a non-volatile memory such as an external memory, which can be used to store the offline model corresponding to the neural network. Thus, when the computer system 1000 needs to compile the same neural network again in order to run that network, the offline model corresponding to the network can be obtained directly from the memory, thereby improving the processing speed and efficiency of the processor.
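The reuse path described above amounts to a compile-once cache keyed by network. A minimal sketch under hypothetical names (`OfflineModelStore` and its methods are illustrative, not the patent's interface):

```python
class OfflineModelStore:
    """Keep generated offline models in (notionally non-volatile) storage
    so that a given network is compiled at most once."""

    def __init__(self):
        self._store = {}
        self.compile_count = 0  # visible only to demonstrate reuse

    def _compile(self, network_name):
        # Stand-in for the expensive compilation of the network.
        self.compile_count += 1
        return {"name": network_name, "instructions": ["exec " + network_name]}

    def run(self, network_name):
        if network_name not in self._store:
            self._store[network_name] = self._compile(network_name)
        return self._store[network_name]  # reuse the stored offline model
```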
Optionally, the number of memories 120 may be three or more. One memory 120 is used to store the computer program, the computer program being used to realize the data processing method provided in the embodiments of this application. One memory 120 is used to store the related data during the operation of the neural network; optionally, this memory may be a volatile memory. Another memory 120 can be used to store the offline model corresponding to the neural network; optionally, this memory may be a non-volatile memory.
It should be understood that running the original network in this embodiment means that the processor uses the artificial neural network model data to run a certain machine learning algorithm (such as a neural network algorithm), and realizes the target application of the algorithm (such as a speech recognition artificial intelligence application) by performing the forward operation. In this embodiment, directly running the offline model corresponding to the original network means using the offline model to run the machine learning algorithm (such as a neural network algorithm) corresponding to the original network, and realizing the target application of the algorithm (such as a speech recognition artificial intelligence application) by performing the forward operation. The original network may include a recurrent neural network and may also include an acyclic neural network.
In one embodiment, as shown in Fig. 4, this application provides a data processing method for generating and storing a first offline model according to a recurrent neural network unit, without compiling and running all recurrent neural network units in the recurrent neural network node, shortening the time needed to generate the offline model of the recurrent neural network node and in turn improving the processing speed and efficiency of the processor. Specifically, the above method includes the following steps:
S100, obtaining a recurrent neural network node.
A recurrent neural network (RNN) node is formed by connecting single recurrent neural network units in a loop; typical RNNs include the gated recurrent unit network (GRU), the long short-term memory network (LSTM) and the like. One layer of computing units in an RNN is usually called an RNN unit (RNN cell). As shown in Fig. 5, a recurrent neural network node includes at least one recurrent neural network unit; specifically, the unit may include an input layer, a hidden layer and an output layer, wherein the number of hidden layers may be more than one.
Specifically, the processor obtains the recurrent neural network node, so that the recurrent neural network unit can be obtained in a subsequent step. Further, the processor may obtain the model data set and model structure parameters of the recurrent neural network node, and thereby determine the recurrent neural network node according to its model data set and model structure parameters. The model data set corresponding to the recurrent neural network node includes the weight data corresponding to each layer in the recurrent neural network node; W1 to W3 in the recurrent neural network unit shown in Fig. 5 are used to indicate the weight data corresponding to a single recurrent neural network node. The model structure parameters corresponding to the recurrent neural network node include the dependencies between the layers in a single recurrent neural network unit or the dependencies between the recurrent neural network units.
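To make the role of the unit and its weights concrete, here is a toy RNN cell with three scalar weights. The assignment of W1/W2/W3 to input-to-hidden, hidden-to-hidden (recurrent) and hidden-to-output roles is an assumption for illustration; the patent's Fig. 5 only labels them as the unit's weight data, and real cells use weight matrices rather than scalars.

```python
import math

def rnn_cell(x, h_prev, W1, W2, W3):
    # Assumed roles: W1 input->hidden, W2 hidden->hidden (the recurrent
    # connection), W3 hidden->output.
    h = math.tanh(W1 * x + W2 * h_prev)  # new hidden state
    y = W3 * h                           # output of this unit
    return h, y
```

The same cell (and hence the same weights and, once compiled, the same instruction data) is applied at every timestep, which is exactly why compiling it once suffices.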
Optionally, the recurrent neural network node may be an independent recurrent neural network node, or the recurrent neural network node may be placed in an original network, which may include at least one recurrent neural network node and an acyclic neural network node.
Optionally, as shown in Fig. 5, the system can determine a single recurrent neural network unit according to the recurrent neural network node. Specifically, after the processor obtains the recurrent neural network node, the processor can determine a single recurrent neural network unit according to the structure of the recurrent neural network node.
S200, running a single recurrent neural network unit according to the model data set and model structure parameters of that unit in the recurrent neural network node, to obtain the instruction data corresponding to the single recurrent neural network unit.
Specifically, the processor obtains the model data set and model structure parameters of a single recurrent neural network unit, then runs the single recurrent neural network unit, and afterwards obtains the instruction data corresponding to the single recurrent neural network unit. It should be understood that running the recurrent neural network unit in the embodiments of this application means that the processor uses the artificial neural network model data to run a certain machine learning algorithm (such as a neural network algorithm), and realizes the target application of the algorithm (such as a speech recognition artificial intelligence application) by performing the forward operation.
S300, obtaining, according to the instruction data corresponding to the single recurrent neural network unit, the first offline model corresponding to the single recurrent neural network unit.
The first offline model includes the weight data and instruction data of the single recurrent neural network unit.
Specifically, the processor can obtain the first offline model corresponding to the single recurrent neural network unit according to the instruction data and weight data corresponding to that unit, without compiling and running all recurrent neural network units in the recurrent neural network node, so that the time needed to generate the offline model of the recurrent neural network node can be greatly shortened, in turn improving the processing speed and efficiency of the processor.
Further, when the recurrent neural network node needs to be rerun, the operation of the recurrent neural network node can be realized by recursively calling the first offline model, thereby reducing operations such as compiling each node in the neural network and improving operating efficiency.
According to the above data processing method, the recurrent neural network unit is run according to the model data set and model structure parameters of a single recurrent neural network unit, the instruction data of the single recurrent neural network unit is obtained, and the first offline model corresponding to the single recurrent neural network unit is then obtained; the first offline model includes the weight data and instruction data of the single recurrent neural network unit. The data processing method of this application only needs to obtain the first offline model of a single recurrent neural network unit, without compiling and running all recurrent neural network units in the recurrent neural network node, so the time needed to generate the offline model of the recurrent neural network node can be greatly shortened, in turn improving the processing speed and efficiency of the processor.
In one of the embodiments, the above step S300 may include:
S310: storing the weight data and instruction data of the single recurrent neural network unit in correspondence with each other, to obtain the first offline model corresponding to the single recurrent neural network unit.
Specifically, the processor may store the weight data and instruction data of the single recurrent neural network unit into a memory, thereby generating and storing the first offline model. For a single recurrent neural network unit, the unit's weight data and instruction data are stored in one-to-one correspondence. In this way, when the recurrent neural network node is run again, the first offline model can be read directly from the memory, and the node can be run by recursively calling the first offline model.
Optionally, the processor may store the weight data and instruction data corresponding to the single recurrent neural network unit into a non-volatile memory, thereby generating and storing the first offline model. When the recurrent neural network unit is run again, the offline model corresponding to the unit can be read directly from the non-volatile memory, and the unit can be run according to that offline model.
In this embodiment there is no need to compile and run all of the recurrent neural network units in the recurrent neural network node, which shortens the time the node takes to generate its offline model and improves the running speed and efficiency of the system.
Optionally, as shown in Fig. 6, the above step S310 may include the following steps:
S311: determining the memory allocation mode corresponding to the recurrent neural network unit according to the model dataset and model structure parameters of the recurrent neural network unit.
Specifically, the processor may obtain the execution order of the layers in the recurrent neural network unit according to the unit's model structure parameters, and determine the memory allocation mode of the current recurrent neural network unit according to that execution order, for example by saving the data related to each layer of the unit into a stack in execution order. Here the memory allocation mode determines the storage location, in a memory space such as internal memory, of the data related to each layer of the recurrent neural network unit (including input data, output data, weight data, intermediate result data, and so on). For example, a data table may be used to record the mapping between each layer's related data (input data, output data, weight data, intermediate results, etc.) and the memory space.
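A minimal sketch of that data-table idea, under the assumption that each tensor is given a fixed offset in one flat memory space; the layer names, tensor names, and sizes are invented for illustration:

```python
# Hypothetical sketch of the memory-allocation "data table" described in
# the text: tensors are assigned offsets in one buffer, walked in the
# layers' execution order. All names and sizes here are made up.

def allocate(layers):
    """layers: list of (layer_name, {tensor_name: size}) in execution order.
    Returns ({(layer, tensor): (offset, size)}, total bytes needed)."""
    table, offset = {}, 0
    for layer, tensors in layers:
        for name, size in tensors.items():
            table[(layer, name)] = (offset, size)
            offset += size
    return table, offset

table, total = allocate([
    ("layer0", {"input": 16, "weight": 64, "output": 32}),
    ("layer1", {"weight": 64, "output": 32}),
])
print(table[("layer1", "weight")], total)  # (112, 64) 208
```

The returned table plays the role of the mapping between each layer's related data and the memory space; a real allocator would also reuse freed regions, which this sketch deliberately omits.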
S312: storing the data involved in running the recurrent neural network unit into a memory, or into a storage unit of a memory, according to the memory allocation mode corresponding to the unit.
The data involved in running the recurrent neural network unit includes the weight data, instruction data, input data, intermediate calculation results, output data, and so on corresponding to each layer of the unit. For example, as shown in Fig. 5, X denotes the input data of the recurrent neural network unit and Y denotes its output data; the processor may convert the output data of the unit into a control command for a robot or for a different digital interface. W1 to W3 denote weight data. According to the determined memory allocation mode, the processor may store the data involved in running the recurrent neural network unit into a memory, or into a storage unit of a memory, such as internal memory or a cache (volatile memory).
S313: obtaining the weight data and instruction data of the recurrent neural network unit from the above memory or storage unit to obtain the first offline model, and storing the first offline model into a non-volatile memory, or into a non-volatile storage unit of a memory.
Specifically, the weight data and instruction data of the recurrent neural network unit are read from the above memory or storage unit to obtain the first offline model, and the first offline model is stored into a non-volatile memory, or into a non-volatile storage unit of a memory; what is then stored in the corresponding storage space is the first offline model corresponding to the recurrent neural network unit.
In one of the embodiments, as shown in Fig. 7, step S100 of the above data processing method may include:
S110: obtaining an original network containing a recurrent neural network node.
The original network may contain recurrent neural network nodes as well as non-recurrent neural network nodes.
Specifically, the processor can obtain the model dataset and model structure parameters of the original network, and from them obtain the network structure of the original network. The model dataset includes data such as the weight data corresponding to each node in the original network; W1 to W6 in the neural network shown in Fig. 10 denote the weight data of the nodes. The model structure parameters include the dependencies among the nodes in the original network and the computational attribute of each node, where the dependency between nodes indicates whether data is passed between them; for example, when a data stream flows between several nodes, those nodes can be said to have a dependency. Further, the dependency of each node may include input relationships, output relationships, and so on.
S120: determining the dependency of each node in the original network according to the model structure parameters of the original network.
Specifically, the model structure parameters obtained by the processor may include the dependency of each node in the original network, so once the processor has obtained the model structure parameters of the original network it can determine the dependency of each node from them.
S130: determining the input node and output node of each recurrent neural network node according to the dependencies of the nodes in the original network.
An input node is a node that supplies input data to the recurrent neural network node, and an output node is a node to which the recurrent neural network node supplies its output data.
Specifically, the processor may sort the nodes according to their dependencies to obtain a linear ordering of the nodes, and from this ordering determine the input node and output node of each recurrent neural network node.
For example, from the original network in Fig. 9(a) the processor can determine the input node and output node of each recurrent neural network node: the output nodes of non-recurrent neural network node (non-RNN) 1 are non-recurrent neural network node (non-RNN) 2 and the recurrent neural network node (RNN); the input node of non-recurrent neural network node (non-RNN) 2 is non-recurrent neural network node (non-RNN) 1, and its output nodes are non-recurrent neural network nodes (non-RNN) 3 and (non-RNN) 4; the input node of the recurrent neural network node (RNN) is determined to be non-recurrent neural network node (non-RNN) 1, and its output node is non-recurrent neural network node (non-RNN) 3.
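The input/output-node determination in this example can be sketched as follows, with the Fig. 9(a) topology reconstructed as an assumed edge list (the helper name is hypothetical):

```python
# Sketch of step S130 on the Fig. 9(a) topology: for any node, its input
# nodes are the sources of edges into it and its output nodes are the
# targets of edges out of it. Edge list reconstructed from the text.

edges = [
    ("non-RNN 1", "non-RNN 2"), ("non-RNN 1", "RNN"),
    ("non-RNN 2", "non-RNN 3"), ("non-RNN 2", "non-RNN 4"),
    ("RNN", "non-RNN 3"),
]

def io_nodes(node, edges):
    inputs = [s for s, t in edges if t == node]
    outputs = [t for s, t in edges if s == node]
    return inputs, outputs

print(io_nodes("RNN", edges))  # (['non-RNN 1'], ['non-RNN 3'])
```

This reproduces the conclusion of the example above: the RNN node's input node is non-RNN 1 and its output node is non-RNN 3.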
S140: disconnecting the connections between the recurrent neural network node and its input node and output node, to obtain at least one recurrent neural network node.
Specifically, after determining the input node and output node of each recurrent neural network node, the processor disconnects the connection between the input node and the recurrent neural network node and also disconnects the connection between the output node and the recurrent neural network node, obtaining at least one independent recurrent neural network node.
For example, as shown in Fig. 9, after the recurrent neural network node and its input and output nodes have been determined, the connection between the recurrent neural network node in graph (a) and its input node is broken, the connection between the recurrent neural network node and its output node is broken, and each recurrent neural network node is separated out, yielding at least one independent recurrent neural network node and giving graph (b).
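Step S140 can be sketched as a filter over the same assumed edge list: every connection touching an RNN node is dropped, leaving the RNN node isolated (all names are illustrative):

```python
# Sketch of step S140: once the RNN node's input and output nodes are
# known, drop every edge touching an RNN node, isolating it.
# Edge list reconstructed from the Fig. 9 example in the text.

edges = [
    ("non-RNN 1", "non-RNN 2"), ("non-RNN 1", "RNN"),
    ("non-RNN 2", "non-RNN 3"), ("non-RNN 2", "non-RNN 4"),
    ("RNN", "non-RNN 3"),
]

def detach_rnn(edges, rnn_nodes):
    """Remove all connections between RNN nodes and their neighbours."""
    return [(s, t) for s, t in edges
            if s not in rnn_nodes and t not in rnn_nodes]

remaining = detach_rnn(edges, {"RNN"})
print(remaining)  # the RNN node no longer appears in any edge
```

After the filter the RNN node has no remaining connections, which is the "independent recurrent neural network node" of graph (b).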
In one of the embodiments, as shown in Fig. 8, the method may further include the following step:
S150: determining the execution order of the nodes in the original network according to the dependencies of the nodes in the original network.
The original network may contain recurrent neural network nodes and non-recurrent neural network nodes.
Specifically, the processor can obtain the model dataset and dependencies of the original network, and from them obtain the network structure of the original network. The model dataset includes data such as the weight data corresponding to each node in the original network. The dependency between nodes indicates whether data is passed between them; for example, when a data stream flows between several nodes, those nodes can be said to have a dependency. Further, the dependency of each node may include input relationships, output relationships, and so on. The execution order of the nodes in the original network is then determined from the obtained dependencies. Optionally, the processor may determine the execution mode between nodes according to their dependencies: nodes with no dependency between them may be executed in parallel, while nodes with a dependency between them are executed sequentially.
For example, the dependencies of the nodes in the neural network shown in Fig. 9 are determined, giving the linear ordering of the nodes: non-recurrent neural network node (non-RNN) 1 - non-recurrent neural network node (non-RNN) 2 - recurrent neural network node (RNN) - non-recurrent neural network node (non-RNN) 3 - non-recurrent neural network node (non-RNN) 4. At the same time, the execution order between the nodes may also be non-recurrent neural network node (non-RNN) 1 - recurrent neural network node (RNN) - non-recurrent neural network node (non-RNN) 3 - non-recurrent neural network node (non-RNN) 4 - non-recurrent neural network node (non-RNN) 2.
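The ordering rule above — nodes with no mutual dependency may run in parallel, dependent nodes run sequentially — can be sketched with Kahn's algorithm, grouping each wave of mutually independent nodes; the edge list is an assumption reconstructed from the Fig. 9 example:

```python
# Sketch of step S150: topological sort over node dependencies, where
# each "wave" collects nodes that are ready at the same time and could
# therefore execute in parallel.
from collections import defaultdict

def execution_waves(edges, nodes):
    indeg = {n: 0 for n in nodes}
    succ = defaultdict(list)
    for s, t in edges:
        indeg[t] += 1
        succ[s].append(t)
    waves, ready = [], sorted(n for n in nodes if indeg[n] == 0)
    while ready:
        waves.append(ready)              # these can run in parallel
        nxt = []
        for n in ready:
            for m in succ[n]:
                indeg[m] -= 1
                if indeg[m] == 0:
                    nxt.append(m)
        ready = sorted(nxt)
    return waves

nodes = ["non-RNN 1", "non-RNN 2", "RNN", "non-RNN 3", "non-RNN 4"]
edges = [("non-RNN 1", "non-RNN 2"), ("non-RNN 1", "RNN"),
         ("non-RNN 2", "non-RNN 3"), ("non-RNN 2", "non-RNN 4"),
         ("RNN", "non-RNN 3")]
print(execution_waves(edges, nodes))
```

The second wave contains both the RNN node and non-RNN 2, matching the text's observation that either may be executed first, or both in parallel.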
S160: running the original network according to the execution order of the nodes, to obtain the instruction data of each non-recurrent neural network node in the original network.
Specifically, after determining the execution order of the nodes, the processor runs the original network in that order and then obtains the instruction data of each non-recurrent neural network node in the original network.
S170: storing the weight data and instruction data corresponding to each non-recurrent neural network node in correspondence with each other, to obtain a second offline model.
The second offline model includes the weight data and instruction data of each non-recurrent neural network node in the original network.
Specifically, after the processor has run the original network, the weight data and instruction data corresponding to each non-recurrent neural network node can be stored in correspondence with each other, thereby obtaining the second offline model. Optionally, the processor may store the weight data and instruction data corresponding to each non-recurrent neural network node into a non-volatile memory, thereby generating and storing the second offline model. For a non-recurrent neural network node, the node's weight data and instruction data are stored in one-to-one correspondence. In this way, when a non-recurrent neural network node is run again, the corresponding second offline model can be read directly from the non-volatile memory and the node can be run according to it, without compiling the node online to obtain instructions, which improves the running speed and efficiency of the system.
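A hedged sketch of the per-node storage behind the second offline model; the class and the dict-backed "non-volatile" store are illustrative stand-ins, not the application's actual format:

```python
# Hypothetical sketch of the second offline model: each non-RNN node's
# weight data and instruction data are kept together, keyed by node, so
# a rerun reads them back instead of recompiling. Persistence is mimicked
# with an in-process dict; names and data are invented.

class OfflineModel:
    def __init__(self):
        self._store = {}            # node -> (weights, instructions)

    def save(self, node, weights, instructions):
        self._store[node] = (weights, instructions)

    def load(self, node):
        return self._store[node]    # read back without recompiling

model2 = OfflineModel()
model2.save("non-RNN 1", weights=[0.1, 0.2], instructions=["conv", "relu"])
w, ins = model2.load("non-RNN 1")
print(w, ins)
```

The one-to-one correspondence the text requires is simply the pairing of weights and instructions under a single node key.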
Optionally, the above step S170 may include the following steps:
S171: determining the memory allocation mode corresponding to each non-recurrent neural network node in the original network according to the model dataset and model structure parameters of the original network.
Specifically, the processor may obtain the execution order of the nodes in the original network according to the model structure parameters of the original network, determine the memory allocation mode of the current network according to the execution order of the compute nodes, and thereby obtain the memory allocation mode corresponding to each non-recurrent neural network node in the original network, for example by saving the run-time data related to the non-recurrent neural network nodes into a stack in the execution order of the nodes. Here the memory allocation mode determines the storage location, in a memory space such as internal memory, of the data related to a non-recurrent neural network node (including input data, output data, weight data, intermediate result data, and so on). For example, a data table may be used to record the mapping between the data related to a non-recurrent neural network node (input data, output data, weight data, intermediate results, etc.) and the memory space.
S172: storing the data involved in running the non-recurrent neural network node into a memory, or into a storage unit of a memory, according to the memory allocation mode corresponding to the node.
The data involved in running a non-recurrent neural network node includes the node's weight data, instruction data, input data, intermediate calculation results, output data, and so on. According to the determined memory allocation mode, the processor may store the data involved in running the non-recurrent neural network node into a memory, or into a storage unit of a memory, such as internal memory or a cache (volatile memory).
S173: obtaining the weight data and instruction data of the non-recurrent neural network nodes from the above memory or storage unit to obtain the second offline model, and storing the second offline model into a non-volatile memory, or into a non-volatile storage unit of a memory.
Specifically, the weight data and instruction data of the non-recurrent neural network nodes are read from the above memory or storage unit to obtain the second offline model, and the second offline model is stored into a non-volatile memory, or into a non-volatile storage unit of a memory; what is then stored in the corresponding storage space is the second offline model corresponding to the non-recurrent neural network nodes.
In one of the embodiments, the method may further include the following step:
S300: judging whether the first offline model is stateful.
The input of the first recurrent neural network unit generally defaults to 0. The first offline model being stateless means that the output is a function of the input only, i.e., output = f(input). The first offline model being stateful means that the output is a function of the input and the history, i.e., output, history = g(input, history).
Specifically, it is judged whether the first offline model is stateful; when the first offline model is judged to be stateful, step S400 is executed: the first offline model further includes state input data, and the state input data may be the output data of the previous recurrent neural network unit ahead of the hidden layer.
The state judgment performed on the first offline model in this embodiment makes the generation of the first offline model more accurate.
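The stateless/stateful distinction can be illustrated with two toy functions (f and g here are invented examples, not the application's operators): a stateless unit computes output = f(input), while a stateful unit also threads a history through each call:

```python
# Sketch of the distinction in step S300:
#   stateless: output = f(input)
#   stateful:  output, history = g(input, history)
# The arithmetic is a made-up placeholder.

def f(x):                 # stateless: output depends on input only
    return 2 * x

def g(x, history):        # stateful: output also depends on history
    out = 2 * x + history
    return out, out       # new history carried to the next unit

out1, hist = g(3, history=0)   # first unit: history defaults to 0
out2, _ = g(4, history=hist)
print(f(3), out1, out2)        # 6 6 14
```

With history defaulting to 0, the first stateful call matches the stateless one; only subsequent calls diverge, which is exactly why the state judgment matters when generating the first offline model.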
In one of the embodiments, the method may further include the following steps:
S600: obtaining a new original network containing a new recurrent neural network node.
The new original network may contain new recurrent neural network nodes and non-recurrent neural network nodes.
Specifically, the processor obtains the new original network together with its model dataset and model structure parameters, from which the network structure diagram of the new original network can be obtained.
S700: if a corresponding offline model exists for the new original network, obtaining the offline model corresponding to the new original network and running the new original network according to it.
The offline model corresponding to the new original network includes the first offline model and the second offline model.
Specifically, when a corresponding offline model exists for the obtained new original network, that offline model is obtained, and the new original network is run according to it.
Optionally, when the new original network is run, if the current node in the new original network is a recurrent neural network node, the first offline model is called recursively to run the recurrent neural network node.
Optionally, when the new original network is run, if the current node in the new original network is a non-recurrent neural network node, the weight data and instruction data of the current node are obtained from the second offline model corresponding to the new original network, and the current node is run directly according to its weight data and instruction data.
In this embodiment, when a neural network is run, the offline model corresponding to the neural network can be obtained directly and the neural network run according to it, without compiling each node of the neural network online to obtain instructions, which improves the running speed and efficiency of the system.
In one embodiment, as shown in Fig. 10, a data processing apparatus is provided, comprising a first obtaining module 100, a running module 200, and a generating module 300, in which:
the first obtaining module 100 is configured to obtain a recurrent neural network node, the recurrent neural network node containing at least one recurrent neural network unit;
the running module 200 is configured to run a single recurrent neural network unit according to the model dataset and model structure parameters of that unit in the recurrent neural network node, to obtain the instruction data corresponding to the single recurrent neural network unit;
the generating module 300 is configured to obtain, according to the instruction data corresponding to the single recurrent neural network unit, the first offline model corresponding to the single recurrent neural network unit.
In one of the embodiments, the generating module 300 is further configured to store the weight data and instruction data of the single recurrent neural network unit in correspondence with each other, to obtain the first offline model corresponding to the single recurrent neural network unit.
In one of the embodiments, the data processing apparatus further includes a judging module and a first execution module. The judging module is configured to judge whether the first offline model is stateful. The first execution module is configured such that, if the first offline model is stateful, the first offline model further includes state input data, the state input data being the output data of the previous recurrent neural network unit ahead of the hidden layer.
In one of the embodiments, the first obtaining module 100 includes: a first obtaining unit, configured to obtain an original network containing the recurrent neural network node; a first determining unit, configured to determine the dependency of each node in the original network according to the model structure parameters of the original network, and further configured to determine, according to the dependencies of the nodes in the original network, the input node and output node of each recurrent neural network node in the original network; and a first execution unit, configured to disconnect the connections between the input node and output node of the recurrent neural network node and the recurrent neural network node, to obtain the at least one recurrent neural network node.
In one of the embodiments, the first obtaining module 100 further includes a second determining unit and a second execution unit, where: the second determining unit is configured to determine the execution order of the nodes in the original network according to the dependencies of the nodes; the second execution unit is configured to run the original network according to the execution order of the nodes, to obtain the instruction data of each non-recurrent neural network node in the original network; and the generating module is configured to store the weight data and instruction data corresponding to each non-recurrent neural network node in correspondence with each other, to obtain a second offline model, where the second offline model includes the weight data and instruction data of each non-recurrent neural network node in the original network.
For specific limitations of the data processing apparatus, reference may be made to the limitations of the data processing method above, which are not repeated here. Each module in the above data processing apparatus may be implemented wholly or partly by software, by hardware, or by a combination of the two. Each module may be embedded in hardware form in, or be independent of, a processor in a computer device, or may be stored in software form in a memory of the computer device, so that the processor can invoke it to perform the operation corresponding to that module.
In one of the embodiments, the present application further provides a computer system including a processor and a memory in which a computer program is stored; when the processor executes the computer program, it performs the method of any of the above embodiments. Specifically, when executing the above computer program, the processor performs the following steps:
obtaining a recurrent neural network node. Specifically, the processor obtains the recurrent neural network node for use in the subsequent step of obtaining a recurrent neural network unit. Further, the processor may obtain the model dataset and model structure parameters of the recurrent neural network node, so as to determine the recurrent neural network node according to them;
running a single recurrent neural network unit according to the model dataset and model structure parameters of that unit in the recurrent neural network node, to obtain the instruction data corresponding to the single unit. Specifically, the processor obtains the model dataset and model structure parameters of the single recurrent neural network unit, then runs the single unit, and thereafter obtains the instruction data corresponding to it. It should be understood that, in the embodiments of this application, running a recurrent neural network unit means that the processor runs some machine learning algorithm (such as a neural network algorithm) using artificial neural network model data, and realizes the algorithm's target application (such as an artificial intelligence application like speech recognition) by performing a forward operation;
obtaining, according to the instruction data corresponding to the single recurrent neural network unit, the first offline model corresponding to the single unit. Specifically, the processor can obtain the first offline model corresponding to the single recurrent neural network unit according to the instruction data and weight data of that unit, without compiling and running all of the recurrent neural network units in the recurrent neural network node, which greatly shortens the time taken to generate the offline model of the recurrent neural network node and in turn improves the processing speed and efficiency of the processor.
In one embodiment, a computer storage medium is further provided, in which a computer program is stored; when the computer program is executed by one or more processors, the method of any of the above embodiments is performed. The computer storage medium may include non-volatile and/or volatile memory. The non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. The volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
In one embodiment, as shown in Fig. 11, the present application further provides a data processing method, which may include the following steps:
S800: obtaining a recurrent neural network node.
Specifically, the processor obtains the recurrent neural network node and runs it.
S900: recursively calling the first offline model corresponding to the recurrent neural network node, and running the recurrent neural network node according to the first offline model.
The first offline model includes the weight data and instruction data of a single recurrent neural network unit.
Specifically, after obtaining the recurrent neural network node, the processor recursively calls the first offline model corresponding to the node and runs the node according to the first offline model.
In this embodiment, running the recurrent neural network node by recursively calling the first offline model improves the processing efficiency and speed of the computer system.
Optionally, the processor may determine the total number of recurrent neural network units contained in the recurrent neural network node, use that total number of RNN units as the call count of the first offline model, and call the first offline model recursively. Specifically, the above step S900 may include:
each time the first offline model is called to complete the operation of one RNN unit, decrementing the call count once and obtaining the current execution count, until the current execution count equals an initial value, where the initial value may be 0;
or, each time the first offline model is called to complete the operation of one RNN unit, incrementing the execution count once from the initial value, until the execution count equals the total number of RNN units in the recurrent neural network node.
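The decrementing variant can be sketched as follows, using the RNN-unit total as the call count and an invented toy unit as the first offline model (the incrementing variant is symmetric):

```python
# Sketch of step S900's call counting: the RNN-unit total serves as the
# call count, decremented once per completed unit until it reaches the
# initial value 0. The one-step "offline model" is a made-up stand-in.

def run_node_by_count(offline_model, inputs, h0=0.0):
    calls = len(inputs)          # total number of RNN units in the node
    h = h0
    while calls > 0:             # decrement until the initial value 0
        x = inputs[len(inputs) - calls]
        h = offline_model(x, h)
        calls -= 1
    return h

unit = lambda x, h: x + 0.5 * h  # stand-in single-unit offline model
print(run_node_by_count(unit, [1.0, 2.0, 3.0]))  # 4.25
```

Either counting direction terminates after exactly one call per RNN unit, which is the invariant the text relies on.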
In one of the embodiments, as shown in Fig. 12, the method further includes the following steps:
S1000: obtaining an original network containing a recurrent neural network node.
The original network may contain recurrent neural network nodes and non-recurrent neural network nodes.
Specifically, the processor obtains the original network together with its model dataset and model structure parameters, from which the network structure of the original network can be obtained.
S1200: if the current node in the original network is a non-recurrent neural network node, obtaining the weight data and instruction data of the current node from the second offline model corresponding to the original network, and running the current node directly according to the weight data and instruction data of the current node.
The second offline model includes the weight data and instruction data of each non-recurrent neural network node in the original network.
Specifically, when the original network is run, if the current node in the original network is a non-recurrent neural network node, the weight data and instruction data of the current node are obtained from the second offline model corresponding to the original network, and the current node is run directly according to them. That is, the processor judges whether the current node in the original network is a non-recurrent neural network node; if it is, the processor obtains the weight data and instruction data of the current node from the second offline model and runs the current node directly according to its weight data and instruction data.
In one of the embodiments, as shown in Fig. 13, the method may include the following steps:
S701: determining the equivalent network corresponding to the original network according to the original network.
The equivalent network includes at least one equivalent recurrent neural network node and at least one equivalent non-recurrent neural network node.
Specifically, the processor processes the obtained original network and can thereby obtain the equivalent network corresponding to the original network.
S702: determining the execution order of the equivalent nodes in the equivalent network according to the dependencies of the equivalent nodes in the equivalent network corresponding to the original network.
Specifically, the processor may sort the equivalent nodes according to their dependencies to obtain a linear ordering of the equivalent nodes, and from this ordering determine the execution order of the equivalent nodes.
For example, determining shown in Fig. 9, the dependence of each node in the neural network that figure a is indicated, and then determine each
The linear order of node is the acyclic neural network node of acyclic neural network node (non-RNN) 1- (non-RNN) 2- circulation mind
Through the acyclic neural network node of network node (RNN)-acyclic neural network node (non-RNN) 3- (non-RNN) 4, at the same it is each
Execution sequence or acyclic neural network node (non-RNN) 1- Recognition with Recurrent Neural Network node (RNN)-between a node
Acyclic neural network node (non-RNN) 3- acyclic neural network node (non-RNN) acyclic neural network node of 4- is (non-
RNN)2。
S703: if the current equivalent node is an equivalent non-recurrent neural network node, obtain the weight data and instruction data of the current equivalent node from the second offline model corresponding to the original network, and run the current equivalent node directly according to that weight data and instruction data.
Specifically, when running the equivalent network, if the current equivalent node is an equivalent non-recurrent neural network node, its weight data and instruction data are obtained from the second offline model corresponding to the original network, and the node is run directly according to them.
If the current equivalent node is an equivalent recurrent neural network node, its weight data and instruction data are obtained from the first offline model corresponding to the original network, and the first offline model is called cyclically to run the current equivalent node according to that weight data and instruction data.
By determining the execution order among the equivalent nodes from the equivalent network, this embodiment can run the offline model of all nodes of one type in a concentrated batch, complete the execution of that node type, and only then switch to calling the other type of offline model. This reduces the number of switches between model calls and improves the operating efficiency of the processor.
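Why batching same-type nodes pays off can be shown with a small hypothetical count: every change of node type along the execution order forces a switch between offline models. The sequences below are invented for illustration (N = non-recurrent, R = recurrent) and assume the reordering still respects all dependences.

```python
def count_switches(type_sequence):
    """Number of offline-model switches along an execution order."""
    return sum(1 for prev, cur in zip(type_sequence, type_sequence[1:])
               if prev != cur)

ungrouped = ["N", "R", "N", "R"]  # alternating types: 3 model switches
grouped = ["N", "N", "R", "R"]    # same-type nodes batched: 1 switch
```

When a valid order exists that groups nodes of the same type, the processor keeps one offline model loaded for the whole run of that type, which is the efficiency gain the embodiment describes.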
In one of the embodiments, as shown in Fig. 14, the step of obtaining the equivalent network corresponding to the original network may include the following steps:
S7011: obtain at least one recurrent neural network node and the connected components of the non-recurrent neural network in the original network.
Here, a connected component of the non-recurrent neural network is formed by connecting at least one non-recurrent neural network node, so each connected component includes at least one such node. As shown in panel b of Fig. 9, after each recurrent neural network node is separated out, the non-recurrent neural network nodes whose connections have not been broken remain; the non-recurrent neural network nodes still linked together in panel b are called the connected components of the non-recurrent neural network.
Specifically, the original network is processed according to the original network and its structure, obtaining at least one recurrent neural network node and the connected components of the non-recurrent neural network.
S7012: update the connection relationships of the non-recurrent neural network nodes within each connected component according to the dependences of the nodes in the original network, obtaining the updated connected components of the non-recurrent neural network.
Specifically, according to the obtained connected components and the dependences of the nodes in the original network, the processor processes the connected components, updating the connection relationships of the non-recurrent neural network nodes within them, and obtains the updated connected components of the non-recurrent neural network, as shown in panel c of Fig. 9.
S7013: treat each updated connected component of the non-recurrent neural network as one equivalent non-recurrent neural network node.
Specifically, after the updated connected components of the non-recurrent neural network are obtained, each updated connected component is made equivalent to a single equivalent non-recurrent neural network node.
As shown in panel d of Fig. 9, after the updated connected components are obtained, each updated connected component of the non-recurrent neural network is replaced by one equivalent non-recurrent neural network node.
S7014: determine the dependences among the equivalent non-recurrent neural network nodes and the equivalent recurrent neural network nodes according to the dependences of the nodes in the original network, obtaining the equivalent network corresponding to the original network.
Specifically, the processor determines these dependences from the dependences of the nodes in the original network, and connects the equivalent recurrent neural network nodes and the equivalent non-recurrent neural network nodes according to their input and output relationships, obtaining the equivalent network corresponding to the original network, as in panel d of Fig. 9.
In one of the embodiments, the above step S7011 may include the following steps:
S70111: determine the input nodes and output nodes of each recurrent neural network node according to the dependences of the nodes in the original network.
Here, an input node is a node that provides input data to the recurrent neural network node, and an output node is a node to which the recurrent neural network node provides its output data.
Specifically, the processor can sort the nodes according to their dependences, obtain a linear order among them, and thereby determine the input node and output node of each recurrent neural network node.
For example, from the original network in panel a of Fig. 9, the processor can determine the input and output nodes of each recurrent neural network node: the output nodes of non-recurrent node (non-RNN) 1 are non-recurrent node (non-RNN) 2 and the recurrent node (RNN); the input node of non-RNN 2 is non-RNN 1 and its output nodes are non-RNN 3 and non-RNN 4; and the input node of the recurrent node (RNN) is determined to be non-RNN 1, with its output node being non-RNN 3.
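The derivation of each recurrent node's input and output nodes can be sketched as below. The edge-list representation and function names are assumptions for illustration; the edges encode the Fig. 9 dependences described above.

```python
def rnn_io_nodes(edges, rnn_nodes):
    """For each recurrent node, collect its predecessors (input nodes)
    and successors (output nodes) from the dependence edges."""
    inputs, outputs = {}, {}
    for r in rnn_nodes:
        inputs[r] = sorted(src for src, dst in edges if dst == r)
        outputs[r] = sorted(dst for src, dst in edges if src == r)
    return inputs, outputs

# Dependences of the Fig. 9 network, panel a (src feeds dst):
edges = [("non-RNN1", "non-RNN2"), ("non-RNN1", "RNN"),
         ("non-RNN2", "non-RNN3"), ("non-RNN2", "non-RNN4"),
         ("RNN", "non-RNN3")]
ins, outs = rnn_io_nodes(edges, ["RNN"])
```

On this edge list the sketch recovers exactly the relations stated in the text: the RNN node's input node is non-RNN 1 and its output node is non-RNN 3.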
S70112: disconnect the connections between each recurrent neural network node and its input and output nodes, obtaining at least one recurrent neural network node and the connected components of the non-recurrent neural network.
As above, each connected component of the non-recurrent neural network is formed by connecting at least one non-recurrent neural network node; in panel b of Fig. 9, the non-recurrent neural network nodes still linked together after the recurrent neural network nodes are separated out form these connected components.
Specifically, after determining the input and output nodes of each recurrent neural network node, the processor disconnects the connections between each recurrent neural network node and its input nodes, and likewise the connections between each recurrent neural network node and its output nodes, obtaining at least one isolated recurrent neural network node and the connected components of the non-recurrent neural network.
For example, as shown in Fig. 9, after the recurrent neural network node and its input and output nodes are determined for panel b, the connection between the recurrent neural network node in panel a and its input node is broken, as is the connection between the recurrent neural network node and its output node; each recurrent neural network node is thus separated out, yielding at least one isolated recurrent neural network node, i.e., panel b.
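Step S70112 amounts to dropping every edge incident to a recurrent node and reading off the components that remain. The sketch below assumes a node/edge-list representation (not the patent's internal format) and uses a small union-find to group the surviving non-RNN nodes.

```python
def components_without_rnn(nodes, edges, rnn_nodes):
    """Remove all edges touching a recurrent node, then return the
    connected components of the remaining non-recurrent nodes."""
    kept = [(a, b) for a, b in edges
            if a not in rnn_nodes and b not in rnn_nodes]
    parent = {n: n for n in nodes if n not in rnn_nodes}

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in kept:
        parent[find(a)] = find(b)
    comps = {}
    for n in parent:
        comps.setdefault(find(n), set()).add(n)
    return sorted(sorted(c) for c in comps.values())

nodes = ["non-RNN1", "non-RNN2", "non-RNN3", "non-RNN4", "RNN"]
edges = [("non-RNN1", "non-RNN2"), ("non-RNN1", "RNN"),
         ("non-RNN2", "non-RNN3"), ("non-RNN2", "non-RNN4"),
         ("RNN", "non-RNN3")]
comps = components_without_rnn(nodes, edges, ["RNN"])
```

On the Fig. 9 edges, separating out the RNN node leaves all four non-RNN nodes in one connected component, matching panel b before the update step S7012 splits it further.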
In one of the embodiments, as shown in Fig. 15, the above step S7012 may include the following steps:
S70121: judge, according to the dependences of the nodes in the original network, whether each non-recurrent neural network node in a connected component of the non-recurrent neural network depends on the output result of a recurrent neural network node.
Specifically, the processor can determine the input/output relationships among the nodes from their dependences, and thereby judge, for each non-recurrent neural network node in a connected component, whether it depends on the output of a recurrent neural network node; that is, judge whether any node in the connected component has a recurrent neural network node as an input node.
Optionally, as shown in Fig. 9, the dependences or connection relationships among the nodes in panel a can be read off from the data flow between them, so the input/output relationships of the nodes in each connected component can be determined. The execution order is non-recurrent node (non-RNN) 1 - non-recurrent node (non-RNN) 2 - recurrent node (RNN) - non-recurrent node (non-RNN) 3 - non-recurrent node (non-RNN) 4, from which it can be seen that non-RNN 3 and non-RNN 4 depend on the output result of the recurrent neural network node.
S70122: if a non-recurrent neural network node in a connected component depends on the output result of a recurrent neural network node, disconnect the connection between that non-recurrent neural network node and its input nodes, obtaining the updated connected components of the non-recurrent neural network.
Specifically, if a non-recurrent neural network node in a connected component is judged to depend on the output of a recurrent neural network node, the processor disconnects the connection between that node and its existing input nodes, obtaining the updated connected components. That is, once a node is found to take a recurrent neural network node as an input node, the processor breaks, within the connected component, the input relationships between that node and its existing input nodes.
For example, as shown in panel c of Fig. 9, when a connected component contains a non-recurrent neural network node whose input node is a recurrent neural network node, the input relationships between that node and its existing input nodes within the component are broken, yielding the updated connected components. Concretely, non-recurrent node (non-RNN) 3 and non-recurrent node (non-RNN) 4 depend on the output of the recurrent neural network node, and the input node of each is non-recurrent node (non-RNN) 2; the connection between non-RNN 3 and non-RNN 2 is therefore broken, as is the relationship between non-RNN 4 and non-RNN 2, giving the updated connected components of the non-recurrent neural network shown in panel c.
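The edge-cutting of step S70122 can be sketched as a simple filter. The representation is assumed, and the set of RNN-dependent nodes is taken directly from the patent's Fig. 9 example rather than computed here.

```python
def cut_inputs(comp_edges, rnn_dependent):
    """Drop every edge that feeds a node depending on an RNN output,
    severing that node from its existing input nodes."""
    return [(a, b) for a, b in comp_edges if b not in rnn_dependent]

# Edges inside the Fig. 9 connected component (src feeds dst):
comp_edges = [("non-RNN1", "non-RNN2"),
              ("non-RNN2", "non-RNN3"), ("non-RNN2", "non-RNN4")]
# Per the example, non-RNN 3 and non-RNN 4 depend on the RNN output:
updated = cut_inputs(comp_edges, {"non-RNN3", "non-RNN4"})
```

After the cut, only the non-RNN 1 to non-RNN 2 edge survives, so the component splits as in panel c: one part upstream of the RNN node and separate parts downstream of it.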
In one of the embodiments, as shown in Fig. 16, the above step S900 may include the following steps:
When the first offline model is stateful, the first offline model further includes state input data, and the following steps are executed:
S902: obtain the weight data, instruction data and state input data of the single recurrent neural network unit from the first offline model.
S904: run the recurrent neural network unit according to the weight data, instruction data and state input data of the single recurrent neural network unit.
S906: store the output result of the recurrent neural network unit back into the first offline model as state input data, then return to the step of obtaining the weight data, instruction data and state input data from the first offline model, until the execution of the recurrent neural network node is complete.
When the first offline model is judged to be stateful, each call to the first offline model adds the state input data, and the current output result is saved for use in the next call.
Optionally, the state input data may also include the input data of the recurrent neural network unit.
By judging the state of the first offline model, this embodiment makes the execution of the first offline model more accurate.
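The cyclic, stateful call pattern of steps S902 through S906 can be sketched as below. The dictionary model, the stand-in arithmetic, and all names are illustrative assumptions; only the control flow (read weights, instructions and state; run the unit; write the output back as the next state input) reflects the described steps.

```python
def run_rnn_node(first_model, steps, init_state=0):
    """Cyclically call the first offline model for one RNN node.

    first_model: per-unit weight and instruction data plus state slot.
    steps: how many times the single RNN unit is invoked.
    """
    state = init_state
    for _ in range(steps):
        weights = first_model["weights"]
        instructions = first_model["instructions"]
        # "Run" the unit: stand-in arithmetic for the real execution.
        state = weights * state + instructions
        # S906: store the output as the next call's state input data.
        first_model["state_input"] = state
    return state

model = {"weights": 2, "instructions": 1}
out = run_rnn_node(model, steps=3)  # state: 0 -> 1 -> 3 -> 7
```

Because only one unit's weights and instructions are stored and reused, the offline model's size is independent of the number of loop iterations, which is the time saving the application claims.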
In one of the embodiments, the processor can determine the execution mode among the equivalent nodes according to the dependences of the equivalent nodes in the equivalent network: if there is no dependence between equivalent nodes, they are executed in parallel; if there is a dependence between equivalent nodes, they are executed sequentially.
Optionally, when running the original network, the processor can also execute each node in the manner described above.
In this embodiment, if the scenario demands low latency, the equivalent nodes in the equivalent network can be executed fully in parallel, i.e., parallelism at the model level. If the scenario demands high throughput, the equivalent nodes are executed sequentially while multiple copies of each node are run in parallel, i.e., parallelism at the data level. Various inference demands can thus be served: both low-latency requirements and high-throughput requirements can be met.
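A minimal sketch of the execution-mode choice follows. The node and dependence model is hypothetical; only the standard-library `ThreadPoolExecutor` API is real. Independent equivalent nodes run in parallel, and nodes with dependences run afterwards in sequence.

```python
from concurrent.futures import ThreadPoolExecutor

def run_equivalent_nodes(nodes, deps, run):
    """nodes: names in a valid execution order.
    deps: maps a node to the list of nodes it depends on.
    run: callable executing one node and returning its result."""
    done = {}
    independent = [n for n in nodes if not deps.get(n)]
    with ThreadPoolExecutor() as pool:
        # No mutual dependence: execute in parallel (model-level parallel).
        for n, result in zip(independent, pool.map(run, independent)):
            done[n] = result
    for n in nodes:
        # Dependent nodes: execute sequentially, after their inputs.
        if n not in done:
            done[n] = run(n)
    return done

res = run_equivalent_nodes(["a", "b", "c"], {"c": ["a", "b"]},
                           lambda n: n.upper())
```

Here "a" and "b" have no dependence and run concurrently, while "c" waits for both, mirroring the parallel/sequential rule stated above.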
In one embodiment, as shown in Fig. 17, a data processing apparatus is provided, comprising a second acquisition module 400 and a second execution module 500, in which:
the second acquisition module 400 is used for obtaining a recurrent neural network node;
the second execution module 500 is used for cyclically calling the first offline model corresponding to the recurrent neural network node, and running the recurrent neural network node according to the first offline model.
In one of the embodiments, the second acquisition module 400 is also used for obtaining the original network containing the recurrent neural network node; the second execution module 500 is also used, if the current node in the original network is a non-recurrent neural network node, for obtaining the weight data and instruction data of the current node from the second offline model corresponding to the original network, and running the current node directly according to that weight data and instruction data; here, the second offline model includes the weight data and instruction data of each non-recurrent neural network node in the original network.
In one of the embodiments, the data processing apparatus further includes an equivalence module and a determination module. The equivalence module is used for determining, according to the original network, the equivalent network corresponding to the original network, the equivalent network including at least one equivalent recurrent neural network node and at least one equivalent non-recurrent neural network node. The determination module is used for determining the execution order of the equivalent nodes in the equivalent network according to the dependences of the equivalent nodes in the equivalent network corresponding to the original network. The second execution module 500 is also used, if the current equivalent node is an equivalent non-recurrent neural network node, for obtaining the weight data and instruction data of the current equivalent node from the second offline model corresponding to the original network, and running the current equivalent node directly according to that weight data and instruction data.
In one of the embodiments, the second execution module 500 is also used for executing the equivalent nodes in parallel if no dependence exists among the equivalent nodes in the equivalent network, and for executing the equivalent nodes according to their dependences if a dependence exists among them.
In one of the embodiments, the equivalence module includes a second acquisition unit, an updating unit and an equivalence unit. The second acquisition unit is used for obtaining at least one recurrent neural network node and the connected components of the non-recurrent neural network in the original network, each connected component of the non-recurrent neural network including at least one non-recurrent neural network node. The updating unit is used for updating, according to the dependences of the nodes in the original network, the connection relationships of the non-recurrent neural network nodes in the connected components, obtaining the updated connected components of the non-recurrent neural network. The equivalence unit is used for treating each updated connected component of the non-recurrent neural network as one equivalent non-recurrent neural network node, and is also used for determining, according to the dependences of the nodes in the original network, the dependences among the equivalent non-recurrent neural network nodes and the equivalent recurrent neural network nodes, obtaining the equivalent network corresponding to the original network.
For the specific limitations of the data processing apparatus, refer to the limitations of the data processing method above; they are not repeated here. Each module in the above data processing apparatus can be realized fully or partially by software, hardware, or a combination thereof. Each module can be embedded in, or independent of, the processor of a computer device in hardware form, or stored in software form in the memory of the computer device, so that the processor can call it to execute the operations corresponding to each module.
In one of the embodiments, the present application also provides a computer system including a processor and a memory, the memory storing a computer program; when executing the computer program, the processor performs the method of any of the above embodiments. Specifically, when executing the above computer program, the processor performs the following steps:
obtain a recurrent neural network node; specifically, the processor acquires the recurrent neural network node in order to run it;
cyclically call the first offline model corresponding to the recurrent neural network node, and run the recurrent neural network node according to the first offline model; specifically, after obtaining the recurrent neural network node, the processor cyclically calls the first offline model corresponding to it and runs the node according to the first offline model.
In one embodiment, a computer storage medium is also provided, the computer storage medium storing a computer program; when the computer program is executed by one or more processors, the method of any of the above embodiments is performed. The computer storage medium may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM), etc.
The technical features of the above embodiments can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features of the above embodiments have been described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the application; their description is relatively specific and detailed, but should not therefore be construed as limiting the scope of the patent. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of this application, and these all belong to the scope of protection of this application. Therefore, the scope of protection of this patent application shall be subject to the appended claims.
Claims (14)
1. A data processing method, characterized in that the method includes:
obtaining a recurrent neural network node, the recurrent neural network node including at least one recurrent neural network unit;
running a single recurrent neural network unit according to the model data set and model structure parameters of the single recurrent neural network unit in the recurrent neural network node, obtaining the instruction data corresponding to the single recurrent neural network unit;
obtaining, according to the instruction data corresponding to the single recurrent neural network unit, a first offline model corresponding to the single recurrent neural network unit;
wherein the first offline model includes the weight data and instruction data of the single recurrent neural network unit.
2. The method according to claim 1, characterized in that the step of obtaining, according to the instruction data corresponding to the single recurrent neural network unit, the first offline model corresponding to the single recurrent neural network unit comprises:
storing the weight data and instruction data of the single recurrent neural network unit in correspondence, obtaining the first offline model corresponding to the single recurrent neural network unit.
3. The method according to claim 1 or 2, characterized in that the method also includes:
judging whether the first offline model is stateful;
if the first offline model is stateful, the first offline model further includes state input data, the state input data being the output data of the hidden layer of the previous recurrent neural network unit.
4. The method according to claim 1 or 2, characterized in that the step of obtaining the recurrent neural network node comprises:
obtaining the original network containing the recurrent neural network node;
determining the dependence of each node in the original network according to the model structure parameters of the original network;
determining the input node and output node of each recurrent neural network node in the original network according to the dependence of each node in the original network;
disconnecting the connection between the input node and output node of the recurrent neural network node and the recurrent neural network node, obtaining the at least one recurrent neural network node.
5. The method according to claim 4, characterized in that the method also includes:
determining the execution order of each node in the original network according to the dependence of each node in the original network;
running the original network according to the execution order of each node, obtaining the instruction data of each non-recurrent neural network node in the original network;
storing the weight data and instruction data corresponding to each non-recurrent neural network node in correspondence, obtaining a second offline model;
wherein the second offline model includes the weight data and instruction data of each non-recurrent neural network node in the original network.
6. The method according to claim 5, characterized in that the method also includes:
obtaining a new original network;
if a corresponding offline model exists for the new original network, obtaining the offline model corresponding to the new original network, and running the new original network according to the offline model corresponding to it, wherein the offline model corresponding to the new original network includes the first offline model and the second offline model.
7. A data processing apparatus, characterized in that the apparatus includes:
a first acquisition module for obtaining a recurrent neural network node, the recurrent neural network node including at least one recurrent neural network unit;
a running module for running a single recurrent neural network unit according to the model data set and model structure parameters of the single recurrent neural network unit in the recurrent neural network node, obtaining the instruction data corresponding to the single recurrent neural network unit;
a generation module for obtaining, according to the instruction data corresponding to the single recurrent neural network unit, a first offline model corresponding to the single recurrent neural network unit;
wherein the first offline model includes the weight data and instruction data of the single recurrent neural network unit.
8. The apparatus according to claim 7, characterized in that the generation module is also used for storing the weight data and instruction data of the single recurrent neural network unit in correspondence, obtaining the first offline model corresponding to the single recurrent neural network unit.
9. The apparatus according to claim 7, characterized in that the apparatus further includes a judgment module and a first execution module;
the judgment module is used for judging whether the first offline model is stateful;
the first execution module is used for the case in which the first offline model is stateful, the first offline model then further including state input data, the state input data being the output data of the hidden layer of the previous recurrent neural network unit.
10. The apparatus according to claim 7, characterized in that the acquisition module includes:
a first acquisition unit for obtaining the original network containing the recurrent neural network node;
a first determination unit for determining the dependence of each node in the original network according to the model structure parameters of the original network;
the first determination unit also being used for determining the input node and output node of each recurrent neural network node in the original network according to the dependence of each node in the original network;
a first execution unit for disconnecting the connection between the input node and output node of the recurrent neural network node and the recurrent neural network node, obtaining the at least one recurrent neural network node.
11. The apparatus according to claim 10, characterized in that the acquisition module further includes a second determination unit and a second execution unit:
the second determination unit is used for determining the execution order of each node in the original network according to the dependence of each node in the original network;
the second execution unit is used for running the original network according to the execution order of each node, obtaining the instruction data of each non-recurrent neural network node in the original network;
the generation module is used for storing the weight data and instruction data corresponding to each non-recurrent neural network node in correspondence, obtaining a second offline model;
wherein the second offline model includes the weight data and instruction data of each non-recurrent neural network node in the original network.
12. A computer system, characterized in that it includes a processor and a memory, the memory storing a computer program; when executing the computer program, the processor performs the method according to any one of claims 1 to 6.
13. The computer system according to claim 12, characterized in that the processor includes an arithmetic unit and a controller unit; the arithmetic unit includes a main processing circuit and multiple slave processing circuits;
the controller unit is used for obtaining input data and an instruction;
the controller unit is also used for parsing the instruction to obtain multiple instruction data, and sending the multiple instruction data and the input data to the main processing circuit;
the main processing circuit is used for performing preamble processing on the input data and for transmitting data and instruction data with the multiple slave processing circuits;
the multiple slave processing circuits are used for executing intermediate operations in parallel according to the data and instruction data transmitted from the main processing circuit, obtaining multiple intermediate results, and transmitting the multiple intermediate results to the main processing circuit;
the main processing circuit is used for performing subsequent processing on the multiple intermediate results to obtain the result of the instruction.
14. A computer storage medium, characterized in that a computer program is stored in the computer storage medium; when the computer program is executed by one or more processors, the method according to any one of claims 1 to 6 is performed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811569176.3A CN109726797B (en) | 2018-12-21 | 2018-12-21 | Data processing method, device, computer system and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109726797A true CN109726797A (en) | 2019-05-07 |
CN109726797B CN109726797B (en) | 2019-11-19 |
Family
ID=66297015
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811569176.3A Active CN109726797B (en) | 2018-12-21 | 2018-12-21 | Data processing method, device, computer system and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109726797B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN110347506A (en) * | 2019-06-28 | 2019-10-18 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Data processing method, device, storage medium and electronic equipment based on LSTM
CN110347506B (en) * | 2019-06-28 | 2023-01-06 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Data processing method and device based on LSTM, storage medium and electronic equipment
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070265841A1 (en) * | 2006-05-15 | 2007-11-15 | Jun Tani | Information processing apparatus, information processing method, and program |
US20080172349A1 (en) * | 2007-01-12 | 2008-07-17 | Toyota Engineering & Manufacturing North America, Inc. | Neural network controller with fixed long-term and adaptive short-term memory |
CN103680496A (en) * | 2013-12-19 | 2014-03-26 | Baidu Online Network Technology (Beijing) Co., Ltd. | Deep-neural-network-based acoustic model training method, hosts and system
CN106529669A (en) * | 2016-11-10 | 2017-03-22 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method and apparatus for processing data sequences
US20170187747A1 (en) * | 2015-12-28 | 2017-06-29 | Arbor Networks, Inc. | Using recurrent neural networks to defeat DNS denial of service attacks
CN107103113A (en) * | 2017-03-23 | 2017-08-29 | Institute of Computing Technology, Chinese Academy of Sciences | Automated design method and device for neural network processors, and optimization method
CN107341542A (en) * | 2016-04-29 | 2017-11-10 | Beijing Zhongke Cambricon Technology Co., Ltd. | Apparatus and method for performing recurrent neural network and LSTM operations
US20180005107A1 (en) * | 2016-06-30 | 2018-01-04 | Samsung Electronics Co., Ltd. | Hybrid memory cell unit and recurrent neural network including hybrid memory cell units
CN107766939A (en) * | 2017-11-07 | 2018-03-06 | Vivo Mobile Communication Co., Ltd. | Data processing method, device and mobile terminal
CN108122027A (en) * | 2016-11-29 | 2018-06-05 | Huawei Technologies Co., Ltd. | Neural network model training method, device and chip
CN108734288A (en) * | 2017-04-21 | 2018-11-02 | Shanghai Cambricon Information Technology Co., Ltd. | Operation method and device
CN108875920A (en) * | 2018-02-12 | 2018-11-23 | Beijing Megvii Technology Co., Ltd. | Neural network operation method, device, system and storage medium
Similar Documents
Publication | Title |
---|---|---
KR102519467B1 (en) | Data pre-processing method, device, computer equipment and storage medium | |
CN112711422A (en) | Optimization method and system for neural network compiling | |
CN114492770B (en) | Brain-like calculation chip mapping method oriented to cyclic pulse neural network | |
CN111160515B (en) | Running time prediction method, model search method and system | |
CN109993287A (en) | Processing with Neural Network method, computer system and storage medium | |
CN110889497B (en) | Learning task compiling method of artificial intelligence processor and related product | |
CN109376852A (en) | Arithmetic unit and operation method | |
CN111831355B (en) | Weight precision configuration method, device, equipment and storage medium | |
CN111831359A (en) | Weight precision configuration method, device, equipment and storage medium | |
CN111831358B (en) | Weight precision configuration method, device, equipment and storage medium | |
CN110766145A (en) | Learning task compiling method of artificial intelligence processor and related product | |
CN105843679A (en) | Adaptive many-core resource scheduling method | |
CN109726797B (en) | Data processing method, device, computer system and storage medium | |
CN111831354A (en) | Data precision configuration method, device, chip array, equipment and medium | |
CN105936047A (en) | Brain imitation robot controlling and studying system | |
CN111314171B (en) | SDN routing performance prediction and optimization method, equipment and medium | |
CN115713026A (en) | Time series data prediction model construction method, device, equipment and readable storage medium | |
CN110865950B (en) | Data preprocessing method and device, computer equipment and storage medium | |
CN115940294A (en) | Method, system, equipment and storage medium for adjusting real-time scheduling strategy of multi-stage power grid | |
CN111831356B (en) | Weight precision configuration method, device, equipment and storage medium | |
CN109449994B (en) | Power regulation and control method for active power distribution network comprising flexible interconnection device | |
CN109685203B (en) | Data processing method, device, computer system and storage medium | |
CN110766146B (en) | Learning task compiling method of artificial intelligence processor and related product | |
CN112836796B (en) | Method for super-parameter collaborative optimization of system resources and model in deep learning training | |
CN116367190A (en) | Digital twin function virtualization method for 6G mobile network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | GR01 | Patent grant |
 | CP01 | Change in the name or title of a patent holder | Address after: 100190 room 644, comprehensive research building, No. 6 South Road, Haidian District Academy of Sciences, Beijing; Patentee after: Zhongke Cambrian Technology Co., Ltd; Address before: 100190 room 644, comprehensive research building, No. 6 South Road, Haidian District Academy of Sciences, Beijing; Patentee before: Beijing Zhongke Cambrian Technology Co., Ltd.