CN110163372A - Operation method, device and related product - Google Patents

Operation method, device and related product

Info

Publication number
CN110163372A
Authority
CN
China
Prior art keywords
node
backward
operation kernel
forward
connection relationship
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910472870.1A
Other languages
Chinese (zh)
Other versions
CN110163372B (en)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cambricon Technologies Corp Ltd
Beijing Zhongke Cambrian Technology Co Ltd
Original Assignee
Beijing Zhongke Cambrian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhongke Cambrian Technology Co Ltd filed Critical Beijing Zhongke Cambrian Technology Co Ltd
Priority to CN201910472870.1A
Publication of CN110163372A
Application granted
Publication of CN110163372B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent

Abstract

This disclosure relates to an operation method, a device, and a related product. The product may include one or more of the following components: a processing component, a memory, a power supply component, a multimedia component, an input/output interface, and a communication component. The processing component controls the overall operation of the product, and may include one or more processors that execute instructions to complete all or part of the steps of the above operation method. The disclosure can improve the operation efficiency of the related product when performing operations of a neural network model.

Description

Operation method, device and related product
Technical field
This disclosure relates to the technical field of information processing, and more particularly to an operation method, a device, and a related product.
Background art
In the field of artificial intelligence, neural network algorithms have recently become popular machine learning algorithms and have achieved very good results in many fields, such as image recognition, speech recognition, and natural language processing. With the development of neural network algorithms, the complexity of the algorithms has grown higher and higher, and the scale of the models has been gradually increasing in order to improve recognition accuracy.
Summary of the invention
In view of this, the present disclosure proposes a calculation method and a computing device for a neural network.
According to one aspect of the disclosure, a calculation method of a neural network is provided. The method performs the training of the neural network based on contextual information corresponding to a computation graph of the neural network, wherein the contextual information includes connection relationships between nodes in the computation graph, and a forward operation kernel and a backward operation kernel corresponding to each node in the computation graph. The connection relationships between the nodes include forward connection relationships and backward connection relationships; the forward operation kernel includes a forward operator and a forward output operand; and the backward operation kernel includes a backward operator and a backward output operand.
The method includes:
performing forward computation on the neural network according to the forward connection relationships and the forward operation kernels;
performing backward training on the neural network according to the backward connection relationships and the backward operation kernels, or performing backward training on the neural network according to the backward connection relationships, the backward operation kernels, the forward connection relationships and the forward operation kernels.
In one possible implementation, the method further includes:
obtaining the predecessor nodes and successor nodes of each node according to the input nodes of the nodes in the computation graph, and determining the connection relationships between the nodes in the computation graph according to the predecessor nodes and the successor nodes.
In one possible implementation, the method further includes:
traversing the computation graph in forward topological order, and creating, for each node in the computation graph, a corresponding forward operator and forward output operand as the forward operation kernel corresponding to that node;
traversing the computation graph in reverse topological order, and creating, for each node in the computation graph, a corresponding backward operator and backward output operand as the backward operation kernel corresponding to that node.
In one possible implementation, performing forward computation on the neural network according to the forward connection relationships and the forward operation kernels includes performing the following operations for any node in the computation graph:
obtaining the predecessor nodes of the node from the contextual information through an interface, wherein the interface is used for reading the contextual information;
obtaining the forward operation kernel corresponding to the node, and the forward operation kernels corresponding to the predecessor nodes of the node, according to the correspondence between nodes and forward operation kernels;
performing forward computation according to the forward output operands in the forward operation kernels corresponding to the predecessor nodes of the node and the forward operator in the forward operation kernel corresponding to the node.
In one possible implementation, performing backward training on the neural network according to the backward connection relationships and the backward operation kernels includes performing the following operations for a first node in a first part of the nodes in the computation graph:
obtaining the successor nodes of the first node from the contextual information through an interface, wherein the interface is used for reading the contextual information;
obtaining the backward operation kernel corresponding to the first node, and the backward operation kernels corresponding to the successor nodes of the first node, according to the correspondence between nodes and backward operation kernels;
performing backward training according to a first backward output operand and the backward operator in the backward operation kernel corresponding to the first node,
wherein the first backward output operand is the backward output operand in the backward operation kernel corresponding to a successor node of the first node.
In one possible implementation, performing backward training on the neural network according to the backward connection relationships, the backward operation kernels, the forward connection relationships and the forward operation kernels includes performing the following operation for a second node in a second part of the nodes in the computation graph:
performing backward training according to the backward operator in the backward operation kernel corresponding to the second node and the backward output operand of a successor node of the second node, or the forward output operand of a predecessor node of the second node, or the forward output operand of the second node.
In one possible implementation, the operations performed for the second node in the second part of the nodes in the computation graph further include:
obtaining the predecessor nodes and successor nodes of the second node from the contextual information through an interface, wherein the interface is used for reading the contextual information;
obtaining the backward output operand of the successor node of the second node and the forward output operand of the predecessor node of the second node;
performing forward computation to obtain the forward output operand of the second node, according to the forward operation kernel corresponding to the second node and the forward output operand of the predecessor node of the second node.
In one possible implementation, obtaining the backward output operand of the successor node of the second node includes:
obtaining the backward operation kernel corresponding to the successor node of the second node according to the correspondence between nodes and backward operation kernels, and
taking the backward output operand in the backward operation kernel corresponding to the successor node of the second node as the backward output operand of the successor node of the second node.
In one possible implementation, obtaining the forward output operand of the predecessor node of the second node includes:
obtaining the forward operation kernel corresponding to the predecessor node of the second node according to the correspondence between nodes and forward operation kernels, and
taking the forward output operand in the forward operation kernel corresponding to the predecessor node of the second node as the forward output operand of the predecessor node of the second node.
According to another aspect of the present disclosure, a computing device of a neural network is provided. The device performs the training of the neural network based on contextual information corresponding to a computation graph of the neural network, wherein the contextual information includes connection relationships between nodes in the computation graph, and a forward operation kernel and a backward operation kernel corresponding to each node in the computation graph. The connection relationships between the nodes include forward connection relationships and backward connection relationships; the forward operation kernel includes a forward operator and a forward output operand; and the backward operation kernel includes a backward operator and a backward output operand.
The device includes:
a forward computation module, configured to perform forward computation on the neural network according to the forward connection relationships and the forward operation kernels;
a backward training module, configured to perform backward training on the neural network according to the backward connection relationships and the backward operation kernels, or to perform backward training on the neural network according to the backward connection relationships, the backward operation kernels, the forward connection relationships and the forward operation kernels.
In one possible implementation, the device further includes:
a context module, configured to obtain the predecessor nodes and successor nodes of each node according to the input nodes of the nodes in the computation graph, and to determine the connection relationships between the nodes in the computation graph according to the predecessor nodes and the successor nodes.
In one possible implementation, the device further includes:
a first creation module, configured to traverse the computation graph in forward topological order and create, for each node in the computation graph, a corresponding forward operator and forward output operand as the forward operation kernel corresponding to that node;
a second creation module, configured to traverse the computation graph in reverse topological order and create, for each node in the computation graph, a corresponding backward operator and backward output operand as the backward operation kernel corresponding to that node.
In one possible implementation, the forward computation module is further configured to perform the following operations for any node in the computation graph:
obtaining the predecessor nodes of the node from the contextual information through an interface, wherein the interface is used for reading the contextual information;
obtaining the forward operation kernel corresponding to the node, and the forward operation kernels corresponding to the predecessor nodes of the node, according to the correspondence between nodes and forward operation kernels;
performing forward computation according to the forward output operands in the forward operation kernels corresponding to the predecessor nodes of the node and the forward operator in the forward operation kernel corresponding to the node.
In one possible implementation, the backward training module is further configured to perform the following operations for a first node in a first part of the nodes in the computation graph:
obtaining the successor nodes of the first node from the contextual information through an interface, wherein the interface is used for reading the contextual information;
obtaining the backward operation kernel corresponding to the first node, and the backward operation kernels corresponding to the successor nodes of the first node, according to the correspondence between nodes and backward operation kernels;
performing backward training according to a first backward output operand and the backward operator in the backward operation kernel corresponding to the first node,
wherein the first backward output operand is the backward output operand in the backward operation kernel corresponding to a successor node of the first node.
In one possible implementation, the backward training module is further configured to perform the following operation for a second node in a second part of the nodes in the computation graph:
performing backward training according to the backward operator in the backward operation kernel corresponding to the second node and the backward output operand of a successor node of the second node, or the forward output operand of a predecessor node of the second node, or the forward output operand of the second node.
In one possible implementation, the backward training module is further configured to perform the following operations for the second node in the second part of the nodes in the computation graph:
obtaining the predecessor nodes and successor nodes of the second node from the contextual information through an interface, wherein the interface is used for reading the contextual information;
obtaining the backward output operand of the successor node of the second node and the forward output operand of the predecessor node of the second node;
performing forward computation to obtain the forward output operand of the second node, according to the forward operation kernel corresponding to the second node and the forward output operand of the predecessor node of the second node.
In one possible implementation, the backward training module is further configured to: obtain the backward operation kernel corresponding to the successor node of the second node according to the correspondence between nodes and backward operation kernels, and take the backward output operand in the backward operation kernel corresponding to the successor node of the second node as the backward output operand of the successor node of the second node.
In one possible implementation, the backward training module is further configured to: obtain the forward operation kernel corresponding to the predecessor node of the second node according to the correspondence between nodes and forward operation kernels, and take the forward output operand in the forward operation kernel corresponding to the predecessor node of the second node as the forward output operand of the predecessor node of the second node.
According to another aspect of the present disclosure, a computing device of a neural network is provided, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the above method.
According to another aspect of the present disclosure, a non-volatile computer-readable storage medium is provided, on which computer program instructions are stored, wherein the computer program instructions, when executed by a processor, implement the above method.
Forward computation is performed using the forward connection relationships and the forward operation kernels in the contextual information, and backward training is performed using the backward connection relationships and the backward operation kernels in the contextual information, or using the backward connection relationships, the backward operation kernels, the forward connection relationships and the forward operation kernels. According to the method of the disclosure, no additional computation graph needs to be provided for backward training: both the forward computation and the backward training of the neural network can be realized with a single computation graph.
Other features and aspects of the disclosure will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features and aspects of the disclosure together with the specification, and serve to explain the principles of the disclosure.
Fig. 1 shows a flow chart of a calculation method of a neural network according to an embodiment of the disclosure.
Fig. 2 shows a flow chart of a calculation method of a neural network according to an embodiment of the disclosure.
Fig. 3 shows a flow chart of the method of step S11 according to an embodiment of the disclosure.
Fig. 4 shows a flow chart of the method of step S12 according to an embodiment of the disclosure.
Fig. 5 shows a flow chart of the method of step S12 according to an embodiment of the disclosure.
Fig. 6 shows a block diagram of a computing device of a neural network according to an embodiment of the disclosure.
Fig. 7 shows a block diagram of a computing device of a neural network according to an embodiment of the disclosure.
Fig. 8 is a block diagram of a device for neural network computation according to an exemplary embodiment.
Fig. 9 is a block diagram of a device for neural network computation according to an exemplary embodiment.
Detailed description of the embodiments
Various exemplary embodiments, features and aspects of the disclosure are described in detail below with reference to the accompanying drawings. Identical reference numerals in the drawings indicate elements with identical or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless otherwise specified.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" should not be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are given in the following detailed description to better illustrate the disclosure. Those skilled in the art will understand that the disclosure can be implemented without certain specific details. In some instances, methods, means, elements and circuits well known to those skilled in the art are not described in detail, in order to highlight the gist of the disclosure.
In the related art, a forward function and a backward function are provided in a neural network to realize the forward computation and the backward training of the neural network, respectively. In order for these two functions to complete the corresponding computations correctly, corresponding computation graphs need to be provided separately; for example, a computation graph containing the forward connection relationships needs to be provided for the forward function, and a computation graph containing the backward connection relationships needs to be provided for the backward function, which is very troublesome.
In order to solve the above technical problem, the present disclosure provides a calculation method of a neural network. The method can be applied to a processor, which can be a general-purpose processor, for example a central processing unit (CPU) or a graphics processing unit (GPU). The processor can also be an artificial intelligence processor for performing artificial intelligence operations; the artificial intelligence operations may include machine learning operations, brain-like operations and so on, where the machine learning operations include neural network operations, k-means operations, support vector machine operations, etc. The artificial intelligence processor may include, for example, one of or a combination of an NPU (Neural-network Processing Unit), a DSP (Digital Signal Processor) and a field-programmable gate array (FPGA) chip. The artificial intelligence processor may include multiple arithmetic units, and the multiple arithmetic units can perform operations in parallel.
Fig. 1 shows a flow chart of a calculation method of a neural network according to an embodiment of the disclosure. The calculation method of a neural network provided by the disclosure can perform the training of the neural network based on contextual information corresponding to a computation graph of the neural network, wherein the contextual information may include connection relationships between nodes in the computation graph, and the forward operation kernel and the backward operation kernel corresponding to each node in the computation graph. The connection relationships between the nodes include forward connection relationships and backward connection relationships; the forward operation kernel includes a forward operator and a forward output operand; and the backward operation kernel includes a backward operator and a backward output operand.
As shown in Fig. 1, the method may include:
Step S11: performing forward computation on the neural network according to the forward connection relationships and the forward operation kernels;
Step S12: performing backward training on the neural network according to the backward connection relationships and the backward operation kernels, or performing backward training on the neural network according to the backward connection relationships, the backward operation kernels, the forward connection relationships and the forward operation kernels.
Forward computation is performed using the forward connection relationships and the forward operation kernels in the contextual information, and backward training is performed using the backward connection relationships and the backward operation kernels in the contextual information, or using the backward connection relationships, the backward operation kernels, the forward connection relationships and the forward operation kernels. According to the method of the disclosure, no additional computation graph needs to be provided for backward training: both the forward computation and the backward training of the neural network can be realized with a single computation graph.
In this technical solution, the connection relationships between nodes can refer to information such as the predecessor nodes and successor nodes of the nodes in the computation graph. A forward connection relationship can refer to the predecessor nodes and successor nodes of a node during forward computation, and a backward connection relationship can refer to the predecessor nodes and successor nodes of a node during backward training. It can be understood that if, during forward computation, one node is a predecessor node of another node, then during backward training that node is a successor node of the other node.
The forward operator included in a forward operation kernel can be the operator of the operation corresponding to the node; for example, if a node performs an addition operation, the forward operator included in the forward operation kernel corresponding to the node can be "add". The forward output operand can refer to the output data obtained when the forward operator of the node operates on the input data of the node. In one possible implementation, the forward operation kernel can also include the data address where the forward output operand is saved, so that subsequent nodes can conveniently obtain the forward output operand through the data address.
The backward operator included in a backward operation kernel can be the partial derivative corresponding to the forward operator included in the forward operation kernel of the same node; that is, for the same node, the operator corresponding to the operation of the node is the forward operator, and the operator corresponding to the partial derivative of the operation of the node is the backward operator. The backward output operand is the output data obtained when the backward operator operates on its input operands during backward training.
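To make the data structures concrete, the following is a minimal C++ sketch of a forward operation kernel and a backward operation kernel as described above; the type and field names (Tensor, ForwardKernel, BackwardKernel, op_name, output) are illustrative assumptions and not part of the original disclosure.

    #include <functional>
    #include <string>
    #include <vector>

    // Illustrative sketch: a tensor is reduced to a flat buffer of floats.
    struct Tensor {
        std::vector<float> data;
    };

    // Forward operation kernel: a forward operator plus the forward output operand
    // (which also plays the role of the stored output data the text calls a data address).
    struct ForwardKernel {
        std::string op_name;                                        // e.g. "add"
        std::function<Tensor(const std::vector<Tensor>&)> forward;  // forward operator
        Tensor output;                                              // forward output operand
    };

    // Backward operation kernel: the partial-derivative operator of the same node
    // plus the backward output operand (the gradient it produces).
    struct BackwardKernel {
        std::string op_name;                                        // e.g. "add_grad"
        std::function<Tensor(const std::vector<Tensor>&)> backward; // backward operator
        Tensor output;                                              // backward output operand
    };

In this sketch the operators are stored as callable objects, which is only one possible way of holding the operators named in the kernels.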
Regarding step S11: for example, during forward computation, for a node in the neural network, the predecessor nodes of the node can be determined according to the forward connection relationships. The forward output operands of the forward operation kernels corresponding to the predecessor nodes are the input operands of the node; the forward output operand of the node can be obtained by computing with the forward operator in the forward operation kernel corresponding to the node and those input operands, and is then saved to the data address of the forward output operand. When a successor node of the node performs its computation, it can obtain this forward output operand through the data address in the forward operation kernel corresponding to the node and use it as its input operand. By repeating the above process, the forward computation result of the neural network can be obtained.
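Under the assumption that nodes are visited in topological order, a simplified sketch of this forward pass could look as follows; it reuses the Tensor and ForwardKernel types from the sketch above, and the function and map names are illustrative.

    #include <unordered_map>
    #include <vector>

    // Simplified forward pass over a graph whose nodes are identified by integer ids.
    // Nodes are visited in topological order; each node consumes its predecessors'
    // forward output operands and stores its own forward output operand.
    void forward_pass(const std::vector<int>& topo_order,
                      const std::unordered_map<int, std::vector<int>>& predecessors,
                      std::unordered_map<int, ForwardKernel>& forward_kernels) {
        for (int node : topo_order) {
            const std::vector<int>& preds = predecessors.at(node);
            if (preds.empty()) continue;            // input nodes keep their preset output
            std::vector<Tensor> inputs;
            for (int pred : preds) {
                inputs.push_back(forward_kernels.at(pred).output);
            }
            ForwardKernel& kernel = forward_kernels.at(node);
            kernel.output = kernel.forward(inputs); // forward operator of the node
        }
    }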
Regarding step S12: during backward training, some nodes in the neural network do not depend on the forward computation results of their predecessor nodes (predecessor during forward computation), while the backward training of other nodes is related to the forward computation results of their predecessor nodes or of the node itself. Therefore, the backward training of the disclosure follows different processes for different nodes.
In one possible implementation, for a node that does not depend on the forward computation results of its predecessor nodes, the predecessor nodes of the node during backward training can be determined according to the backward connection relationships (a predecessor node here is a successor node of the node during forward computation). The backward output operands of those predecessor nodes can be used as the input operands of the node, and the gradient of the node can be computed according to the backward operator in the backward operation kernel of the node and the backward output operands of the predecessor nodes, thereby realizing the backward training process.
In another possible implementation, for some nodes in the neural network, the input data of the backward operator in the corresponding backward operation kernel may be the output operands of the node's predecessor nodes, may include the forward output operands obtained by the forward computation of the node's successor nodes, and may also include the forward output operand obtained by the forward computation of the node itself. Therefore, for these nodes, backward training can be performed on the neural network according to the backward connection relationships, the backward operation kernels, the forward connection relationships and the forward operation kernels.
In one possible implementation, the above forward connection relationships and backward connection relationships can be saved in advance in a certain form and looked up from the corresponding connection relationships when performing forward computation or backward training. The disclosure is not limited to such an implementation; it can also be realized in other ways.
Fig. 2 shows a flow chart of a calculation method of a neural network according to an embodiment of the disclosure. As shown in Fig. 2, the method can also include:
Step S13: obtaining the predecessor nodes and successor nodes of each node according to the input nodes of the nodes in the computation graph, and determining the connection relationships between the nodes in the computation graph according to the predecessor nodes and the successor nodes;
Step S14: traversing the computation graph in forward topological order, and creating, for each node in the computation graph, a corresponding forward operator and forward output operand as the forward operation kernel corresponding to that node;
Step S15: traversing the computation graph in reverse topological order, and creating, for each node in the computation graph, a corresponding backward operator and backward output operand as the backward operation kernel corresponding to that node.
A computation graph (computational graph) is a way of representing data functions in the language of graph theory. In a graph, nodes are connected by edges; the nodes represent things, and an edge connecting two nodes represents the relationship between those two things. In a computation graph, nodes represent the input data or operators of the neural network, and an edge connecting two nodes represents the input-output relationship between them, and so on.
In one possible implementation, the processor can obtain the input nodes of each node in the computation graph, determine the predecessor nodes and successor nodes of each node according to the input nodes of that node, and then determine the connection relationships between the nodes in the computation graph from the determined predecessor nodes and successor nodes of each node.
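A minimal sketch of this derivation is shown below, assuming each node is described only by the list of names of its input nodes; the map and function names are illustrative.

    #include <string>
    #include <unordered_map>
    #include <vector>

    // Given each node's input nodes, derive predecessor and successor lists:
    // the input nodes of a node are its predecessors, and the node is in turn
    // a successor of each of its input nodes.
    void build_connections(
        const std::unordered_map<std::string, std::vector<std::string>>& inputs_of,
        std::unordered_map<std::string, std::vector<std::string>>& predecessors,
        std::unordered_map<std::string, std::vector<std::string>>& successors) {
        for (const auto& [node, inputs] : inputs_of) {
            predecessors[node] = inputs;
            for (const std::string& in : inputs) {
                successors[in].push_back(node);
            }
        }
    }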
For example, the processor can first configure the computation graph according to a configuration file, that is, parse the computation graph according to a configuration file provided by the user. The configuration file can be a file that records the name, parameters and attributes of each node in the computation graph of the neural network, as well as the connection relationships between the nodes, and the processor can parse each node of the computation graph from the configuration file.
Specifically, when parsing the computation graph, the processor can parse the configuration file to obtain a first array. The first array includes first objects representing the nodes of the computation graph, and each first object includes node-attribute key-value pairs and node-parameter key-value pairs, where a node-attribute key-value pair is used to record an attribute of the node and a node-parameter key-value pair is used to record a parameter of the node. The first array can be a JSON (JavaScript Object Notation) array and the first objects can be JSON objects; each node in the computation graph can be represented by a JSON object in the JSON array. A JSON object can describe a node in the computation graph with key-value pairs; one JSON object may include one or more key-value pairs, and multiple key-value pairs can describe different information of the node.
The node-parameter key-value pairs may include a node-name key-value pair and an operator key-value pair. The node-name key-value pair is used to record the name of the node, and the operator key-value pair is used to record the operation type of the node. The operation type of a node is the operation performed by the node, for example reshape, transpose, etc.
For example, the key of the node-name key-value pair can be "name" and its value can be a string; the key of the operator key-value pair can be "op" and its value can be a string. The key of a node-attribute key-value pair can be "attrs" and its value can be a JSON object whose keys and values are strings; the strings can take different values to represent different attributes.
A first object (JSON object) can also include a structure key-value pair, which is used to record the input nodes of the node to which the structure key-value pair belongs. For example, the value of the structure key-value pair can record the names of the input nodes of the node, where the output data of an input node of a node is used as input data of that node. For example, the key of the structure key-value pair can be "inputs" and its value can be a string.
In one possible implementation, if the input nodes of a certain node provide multiple output data, the value of the structure key-value pair can be a third array; the third array can also be a JSON array. In this case, the value of the structure key-value pair can be a JSON array composed of strings, that is, the multiple output data can be represented by multiple strings respectively.
For example, a node can be represented as:
{
  "name": "Add",
  "op": "Add",
  "inputs": ["Input1", "Input2"]
}
where the name of the node is "Add", the operation of the node is add (addition), the input nodes of the node are Input1 and Input2, and the input data of the node are the output data of the nodes Input1 and Input2.
When parsing the computation graph, the processor can traverse each node in the configuration file, create a NodeDef class for it, convert the information of each node in the configuration file into the NodeDef class, and save the NodeDef class into a GraphDef class.
When building the computation graph, the processor can read the input nodes of each node from the NodeDef class and determine the predecessor nodes and successor nodes of each node according to the input nodes of that node. For example, in the example above, the nodes "Input1" and "Input2" are predecessor nodes of the node "Add", and the node "Add" is a successor node of the nodes "Input1" and "Input2"; "Input1" and "Input2" may also have other successor nodes, which can be determined in combination with the other nodes in the computation graph. After the predecessor nodes and successor nodes of a node are determined, a Node class is created. In addition to saving the information of the node from the NodeDef class, the Node class also saves the predecessor nodes and successor nodes of the node, for example as in_node and out_node. The Node class does not provide an interface for accessing the predecessor nodes and successor nodes; the Graph class, as a friend class of the Node class, can access the predecessor nodes and successor nodes of a node. The specific access mode will be described later.
It should be noted that the other processes of graph building are not the focus of the disclosure and will not be described in detail. The main process is to create a source node (source) and a sink node (sink), and then establish the connection relationships between the nodes according to the above process.
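A minimal C++ sketch of the Node/Graph arrangement described above is shown below, with Graph as a friend class of Node and therefore the only class able to reach the in_node and out_node lists; apart from in_node and out_node, the names are illustrative assumptions.

    #include <string>
    #include <vector>

    class Graph;  // forward declaration

    class Node {
    public:
        explicit Node(std::string name) : name_(std::move(name)) {}
        const std::string& name() const { return name_; }

    private:
        friend class Graph;              // only Graph may touch the connection lists
        std::string name_;
        std::vector<Node*> in_node;      // predecessor nodes
        std::vector<Node*> out_node;     // successor nodes
    };

    class Graph {
    public:
        // Establish a forward edge: `from` is a predecessor of `to`,
        // and `to` is a successor of `from`.
        void connect(Node& from, Node& to) {
            to.in_node.push_back(&from);
            from.out_node.push_back(&to);
        }
        const std::vector<Node*>& predecessors(const Node& n) const { return n.in_node; }
        const std::vector<Node*>& successors(const Node& n) const { return n.out_node; }
    };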
After the computation graph has been constructed, the corresponding operations can be created for the nodes in the computation graph. For example, for each node in the computation graph, the predecessor nodes of the node can be found according to the connection relationships between the nodes that have already been determined; the output tensors of the predecessor nodes are taken as the input tensors of the node; the parameters of the node are parsed; the forward operator of the node is created; the output tensor of the node (representing the forward output operand) is created according to the operation, input tensors and parameters of the node; and a data address is allocated for the output tensor. Following the above process, the forward operation kernel corresponding to each node can be obtained.
The partial derivative corresponding to the operation of a node (the backward operation) can also be looked up in a computation library according to the operation of the node; the backward operator and the backward output tensor (representing the backward output operand) are created according to the backward operation that has been found, and a data address is allocated for the backward output tensor. Following the above process, the backward operation kernel corresponding to each node can be obtained.
In one possible implementation, the correspondence between nodes and the forward operation kernels and backward operation kernels corresponding to the nodes can be saved, for example in the form of a map.
In one possible implementation, a Context class can be created to save the forward operation kernels, the backward operation kernels, and the correspondence between nodes and their corresponding forward operation kernels and backward operation kernels.
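A sketch of such a Context class is given below, keeping the node-to-kernel correspondences as maps; it reuses the Node, ForwardKernel and BackwardKernel sketches above, and all member names are assumptions rather than the actual implementation.

    #include <unordered_map>

    // Context: owns the kernel maps; nodes are keyed by pointer here for brevity.
    class Context {
    public:
        void register_forward(const Node* node, ForwardKernel kernel) {
            forward_kernels_[node] = std::move(kernel);
        }
        void register_backward(const Node* node, BackwardKernel kernel) {
            backward_kernels_[node] = std::move(kernel);
        }
        ForwardKernel& forward_kernel(const Node* node) { return forward_kernels_.at(node); }
        BackwardKernel& backward_kernel(const Node* node) { return backward_kernels_.at(node); }

    private:
        std::unordered_map<const Node*, ForwardKernel> forward_kernels_;
        std::unordered_map<const Node*, BackwardKernel> backward_kernels_;
    };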
It should be noted that the execution order of step S14 and step S15 in Fig. 2 is only an example of the disclosure, and the disclosure is not limited thereto.
By obtaining the connection relationships between the nodes, creating the corresponding forward operation kernels and backward operation kernels for the nodes, and saving the correspondence between the nodes and their corresponding forward operation kernels and backward operation kernels, both the forward computation and the backward training of the computation graph can be realized with a single computation graph, according to the connection relationships between the nodes and the operation kernels.
Fig. 3 shows a flow chart of the method of step S11 according to an embodiment of the disclosure. As shown in Fig. 3, step S11 can include performing the following operations for any node in the computation graph:
Step S111: obtaining the predecessor nodes of the node from the contextual information through an interface, wherein the interface is used for reading the contextual information;
Step S112: obtaining the forward operation kernel corresponding to the node, and the forward operation kernels corresponding to the predecessor nodes of the node, according to the correspondence between nodes and forward operation kernels;
Step S113: performing forward computation according to the forward output operands in the forward operation kernels corresponding to the predecessor nodes of the node and the forward operator in the forward operation kernel corresponding to the node.
As described above, a Context class is created to save the forward operation kernels and the correspondence between nodes and the forward operation kernels corresponding to the nodes. In one possible implementation, the Context class can be a friend class of the Node class; the Context class externally provides the interfaces for accessing the predecessor nodes and successor nodes of a node, and these interfaces are the interfaces used for reading the contextual information.
In one possible implementation, a first interface for reading the predecessor nodes of a node and a second interface for reading the successor nodes of a node can be provided. The access provided by the first interface and the second interface during forward computation is exactly the opposite of that provided during backward training, which makes it possible to realize both forward computation and backward training with a single computation graph. Here, "the access provided by the first interface and the second interface during forward computation and backward training is exactly the opposite" means the following. For the first interface: during forward computation, when the contextual information is read through the first interface, the in_node (predecessor nodes) of the node are returned; during backward training, when the contextual information is read through the first interface, the out_node (successor nodes) of the node are returned. For the second interface: during forward computation, when the contextual information is read through the second interface, the out_node (successor nodes) of the node are returned; during backward training, when the contextual information is read through the second interface, the in_node (predecessor nodes) of the node are returned. That is, during forward computation, the predecessor nodes of a node can be obtained through the first interface; during backward training, the successor nodes of a node (which act as predecessor nodes during backward training) can be obtained through the first interface, and the predecessor nodes of a node (which act as successor nodes during backward training) can be obtained through the second interface.
In one possible implementation, the first interface and the second interface can be realized through C++ polymorphism. Of course, those skilled in the art will understand that they can also be realized in other ways, and the disclosure is not limited in this respect.
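As an illustration only, one way such polymorphic interfaces could be arranged is sketched below: a forward-mode context and a backward-mode context return opposite neighbour lists through the same two virtual calls. The class and method names are assumptions, and the sketch reuses the Node/Graph classes above.

    #include <vector>

    // Abstract context: the first interface reads "predecessors for the current
    // direction", the second interface reads "successors for the current direction".
    class TraversalContext {
    public:
        virtual ~TraversalContext() = default;
        virtual const std::vector<Node*>& first_interface(const Node& n) const = 0;
        virtual const std::vector<Node*>& second_interface(const Node& n) const = 0;
    protected:
        explicit TraversalContext(const Graph& g) : graph_(g) {}
        const Graph& graph_;
    };

    // During forward computation the first interface returns in_node.
    class ForwardContext : public TraversalContext {
    public:
        explicit ForwardContext(const Graph& g) : TraversalContext(g) {}
        const std::vector<Node*>& first_interface(const Node& n) const override {
            return graph_.predecessors(n);
        }
        const std::vector<Node*>& second_interface(const Node& n) const override {
            return graph_.successors(n);
        }
    };

    // During backward training the same calls return the opposite lists.
    class BackwardContext : public TraversalContext {
    public:
        explicit BackwardContext(const Graph& g) : TraversalContext(g) {}
        const std::vector<Node*>& first_interface(const Node& n) const override {
            return graph_.successors(n);
        }
        const std::vector<Node*>& second_interface(const Node& n) const override {
            return graph_.predecessors(n);
        }
    };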
Illustratively, assume that a certain node in the forward computation process is the current node. Regarding step S111, the predecessor nodes of the current node can be obtained through the first interface described above.
Regarding step S112, the correspondence between nodes and forward operation kernels can be the correspondence between nodes and the forward operation kernels corresponding to the nodes, saved with a map as described above. After the predecessor nodes of the current node have been determined, the forward operation kernels corresponding to the current node and to its predecessor nodes can be looked up in the map.
The forward output operands of the predecessor nodes, that is, the input data of the current node, can be obtained through the data addresses in the forward operation kernels corresponding to the predecessor nodes. The obtained forward output operands of the predecessor nodes are taken as the inputs of the current node, the forward operator in the forward operation kernel corresponding to the current node is used to compute the forward output operand of the current node, and the result is stored to the data address of the forward output operand of the current node.
The forward computation process of the neural network can be realized by the above process.
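Tying the pieces together, the following sketch illustrates steps S111 to S113 for a single node, using the first interface of the forward-mode context and the kernel maps from the Context sketch above; the function and variable names are illustrative.

    #include <vector>

    // Steps S111-S113 for a single node during the forward pass:
    // read predecessors through the first interface, look up the kernels,
    // and apply the node's forward operator to the predecessors' outputs.
    void forward_step(const Node& node, const ForwardContext& traversal, Context& ctx) {
        // S111: predecessors of the current node, read through the first interface.
        const std::vector<Node*>& preds = traversal.first_interface(node);

        // S112/S113: gather the predecessors' forward output operands as inputs.
        std::vector<Tensor> inputs;
        for (const Node* pred : preds) {
            inputs.push_back(ctx.forward_kernel(pred).output);
        }

        // Apply the current node's forward operator and store its forward output operand.
        ForwardKernel& kernel = ctx.forward_kernel(&node);
        kernel.output = kernel.forward(inputs);
    }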
Fig. 4 shows a flow chart of the method of step S12 according to an embodiment of the disclosure. As shown in Fig. 4, "performing backward training on the neural network according to the backward connection relationships and the backward operation kernels" in step S12 can include performing the following operations for a first node in a first part of the nodes in the computation graph:
Step S121: obtaining the successor nodes of the first node from the contextual information through an interface, wherein the interface is used for reading the contextual information;
Step S122: obtaining the backward operation kernel corresponding to the first node, and the backward operation kernels corresponding to the successor nodes of the first node, according to the correspondence between nodes and backward operation kernels;
Step S123: performing backward training according to a first backward output operand and the backward operator in the backward operation kernel corresponding to the first node, wherein the first backward output operand is the backward output operand in the backward operation kernel corresponding to a successor node of the first node.
From the description above, the first part of the nodes in the computation graph are the nodes that, during backward training, do not depend on the forward computation results of their predecessor nodes (predecessor during forward computation). The first part of the nodes may be all of the nodes in the computation graph or only some of them; the disclosure is not limited in this respect. The first part of the nodes may include one or more first nodes; the disclosure is not limited in this respect either.
For steps S121 and S122, the detailed process can be found above and is not repeated here.
For step S123, during backward training the backward output operands of the successor nodes of a node serve as the input data of the node. Therefore, after the successor nodes of the node have been determined, backward training can be performed according to the backward output operands in the backward operation kernels corresponding to the successor nodes and the backward operator in the backward operation kernel corresponding to the node.
By providing the interfaces for reading the contextual information, with the interfaces providing opposite access during forward computation and backward training, both forward computation and backward training can be realized with a single computation graph.
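For the first kind of node, steps S121 to S123 can be sketched as follows, mirroring the forward step above but reading the node's successors through the backward-mode context; the names are illustrative, and the successors of the node in the forward graph act as the gradient sources here.

    #include <vector>

    // Steps S121-S123 for a node whose backward operator only needs the
    // gradients (backward output operands) flowing in from its successors.
    void backward_step_first(const Node& node, const BackwardContext& traversal, Context& ctx) {
        // S121: through the first interface, the backward context returns out_node,
        // i.e. the successors of the node in the forward graph.
        const std::vector<Node*>& succs = traversal.first_interface(node);

        // S122/S123: gather the successors' backward output operands (gradients)
        // and apply this node's backward operator to them.
        std::vector<Tensor> grads_in;
        for (const Node* succ : succs) {
            grads_in.push_back(ctx.backward_kernel(succ).output);
        }
        BackwardKernel& kernel = ctx.backward_kernel(&node);
        kernel.output = kernel.backward(grads_in);
    }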
Fig. 5 shows a flow chart of the method of step S12 according to another embodiment of the disclosure. As shown in Fig. 5, "performing backward training on the neural network according to the backward connection relationships, the backward operation kernels, the forward connection relationships and the forward operation kernels" in step S12 can include performing the following operation for a second node in a second part of the nodes in the computation graph:
Step S127: performing backward training according to the backward operator in the backward operation kernel corresponding to the second node and the backward output operand of a successor node of the second node, or the forward output operand of a predecessor node of the second node, or the forward output operand of the second node.
As described above, for some nodes in the neural network, the input data of the backward operator in the corresponding backward operation kernel may be the output operands of the node's predecessor nodes, may include the forward output operands obtained by the forward computation of the node's successor nodes, and may also include the forward output operand obtained by the forward computation of the node itself; the second part of the nodes in the computation graph refers to such nodes. Therefore, backward training can be performed according to the backward operator in the backward operation kernel corresponding to the second node and one or more of the following three: the backward output operand of a successor node of the second node, the forward output operand of a predecessor node of the second node, and the forward output operand of the second node.
In detail, the operations performed for the second node in the second part of the nodes in the computation graph further include:
Step S124: obtaining the predecessor nodes and successor nodes of the second node from the contextual information through an interface, wherein the interface is used for reading the contextual information;
Step S125: obtaining the backward output operand of the successor node of the second node and the forward output operand of the predecessor node of the second node;
Step S126: performing forward computation to obtain the forward output operand of the second node, according to the forward operation kernel corresponding to the second node and the forward output operand of the predecessor node of the second node.
The second part of the nodes in the computation graph may be all of the nodes in the computation graph or only some of them; the disclosure is not limited in this respect. The second part of the nodes may include one or more second nodes; the disclosure is not limited in this respect either. The process of steps S124 to S127 can be performed for each second node in the second part of the nodes. Steps S124 to S126 can be selectively executed according to the actual situation, and the disclosure is not limited in this respect.
The detailed process of step S124 can be found above and is not repeated here.
For step S125, obtaining the backward output operand of the successor node of the second node may include: obtaining the backward operation kernel corresponding to the successor node of the second node according to the correspondence between nodes and backward operation kernels, and taking the backward output operand in the backward operation kernel corresponding to the successor node of the second node as the backward output operand of the successor node of the second node.
Obtaining the forward output operand of the predecessor node of the second node may include: obtaining the forward operation kernel corresponding to the predecessor node of the second node according to the correspondence between nodes and forward operation kernels, and taking the forward output operand in the forward operation kernel corresponding to the predecessor node of the second node as the forward output operand of the predecessor node of the second node.
The correspondence between nodes and backward operation kernels can be the correspondence between nodes and the backward operation kernels corresponding to the nodes, saved with a map as described above. After the predecessor nodes and successor nodes of the second node have been determined, the forward operation kernels corresponding to the second node and to its predecessor node, and the backward operation kernel corresponding to its successor node, can be looked up in the map. The forward output operand of the predecessor node of the second node can be obtained through the data address in the forward operation kernel corresponding to the predecessor node, and the backward output operand of the successor node of the second node can be obtained through the data address in the backward operation kernel corresponding to the successor node.
After the needed input data (the forward output operand of the predecessor node of the second node) has been obtained, forward computation can be performed according to the forward operation kernel corresponding to the second node and the forward output operand of the predecessor node of the second node, to obtain the forward output operand of the second node. That is, the output operand of the predecessor node of the second node is fed as the input operand of the forward operator in the forward operation kernel corresponding to the second node, and forward computation is performed to obtain the forward output operand of the second node.
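For the second kind of node, steps S124 to S127 can be sketched as follows; which of the three possible inputs is actually consumed depends on the node's backward operator, and the names are again illustrative.

    #include <vector>

    // Steps S124-S127 for a node whose backward operator also needs forward results.
    void backward_step_second(const Node& node, const BackwardContext& traversal, Context& ctx) {
        // S124: in backward mode, first_interface returns forward successors and
        // second_interface returns forward predecessors.
        const std::vector<Node*>& fwd_succs = traversal.first_interface(node);
        const std::vector<Node*>& fwd_preds = traversal.second_interface(node);

        // S125: gradients from successors and forward outputs from predecessors.
        std::vector<Tensor> inputs;
        for (const Node* succ : fwd_succs) {
            inputs.push_back(ctx.backward_kernel(succ).output);      // backward output operands
        }
        std::vector<Tensor> pred_outputs;
        for (const Node* pred : fwd_preds) {
            pred_outputs.push_back(ctx.forward_kernel(pred).output); // forward output operands
            inputs.push_back(pred_outputs.back());
        }

        // S126: recompute this node's own forward output operand if the backward
        // operator needs it.
        ForwardKernel& fwd = ctx.forward_kernel(&node);
        fwd.output = fwd.forward(pred_outputs);
        inputs.push_back(fwd.output);

        // S127: apply the backward operator to the gathered inputs.
        BackwardKernel& bwd = ctx.backward_kernel(&node);
        bwd.output = bwd.backward(inputs);
    }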
It should be noted that the backward training process of some neural networks can include both the exemplary method shown in Fig. 4 and the exemplary method shown in Fig. 5, that is, a part of first nodes and a part of second nodes exist in the neural network at the same time; the disclosure is not limited in this respect. In practical applications, different processes are selected for backward training according to the input data needed by the backward operator.
From the above examples, forward computation is performed using the forward connection relationships and the forward operation kernels in the contextual information, and backward training is realized using the backward connection relationships, the backward operation kernels, the forward connection relationships and the forward operation kernels in the contextual information. According to the method of the disclosure, no additional computation graph needs to be provided for backward training: both the forward computation and the backward training of the neural network can be realized with a single computation graph.
Fig. 6 shows a block diagram of a computing device of a neural network according to an embodiment of the disclosure. The computing device of a neural network provided by the disclosure can perform the training of the neural network based on contextual information corresponding to a computation graph of the neural network, wherein the contextual information may include connection relationships between nodes in the computation graph, and the forward operation kernel and the backward operation kernel corresponding to each node in the computation graph. The connection relationships between the nodes include forward connection relationships and backward connection relationships; the forward operation kernel includes a forward operator and a forward output operand; and the backward operation kernel includes a backward operator and a backward output operand.
As shown in Fig. 6, the device may include:
a forward computation module 61, configured to perform forward computation on the neural network according to the forward connection relationships and the forward operation kernels;
a backward training module 62, configured to perform backward training on the neural network according to the backward connection relationships and the backward operation kernels, or to perform backward training on the neural network according to the backward connection relationships, the backward operation kernels, the forward connection relationships and the forward operation kernels.
Forward computation is performed using the forward connection relationships and the forward operation kernels in the contextual information, and backward training is performed using the backward connection relationships and the backward operation kernels in the contextual information, or using the backward connection relationships, the backward operation kernels, the forward connection relationships and the forward operation kernels. According to the method of the disclosure, no additional computation graph needs to be provided for backward training: both the forward computation and the backward training of the neural network can be realized with a single computation graph.
Fig. 7 shows a block diagram of a computing device of a neural network according to an embodiment of the disclosure. As shown in Fig. 7, in one possible implementation, the device can also include:
a context module 63, configured to obtain the predecessor nodes and successor nodes of each node according to the input nodes of the nodes in the computation graph, and to determine the connection relationships between the nodes in the computation graph according to the predecessor nodes and the successor nodes.
In one possible implementation, the device further includes:
a first creation module 64, configured to traverse the computation graph in forward topological order and create, for each node in the computation graph, a corresponding forward operator and forward output operand as the forward operation kernel corresponding to that node;
a second creation module 65, configured to traverse the computation graph in reverse topological order and create, for each node in the computation graph, a corresponding backward operator and backward output operand as the backward operation kernel corresponding to that node.
In one possible implementation, the forward computation module 61 is further configured to perform the following operations for any node in the computation graph:
obtaining the predecessor nodes of the node from the contextual information through an interface, wherein the interface is used for reading the contextual information;
obtaining the forward operation kernel corresponding to the node, and the forward operation kernels corresponding to the predecessor nodes of the node, according to the correspondence between nodes and forward operation kernels;
performing forward computation according to the forward output operands in the forward operation kernels corresponding to the predecessor nodes of the node and the forward operator in the forward operation kernel corresponding to the node.
In one possible implementation, the backward training module 62 is further configured to perform the following operations for a first node in a first part of the nodes in the computation graph:
obtaining the successor nodes of the first node from the contextual information through an interface, wherein the interface is used for reading the contextual information;
obtaining the backward operation kernel corresponding to the first node, and the backward operation kernels corresponding to the successor nodes of the first node, according to the correspondence between nodes and backward operation kernels;
performing backward training according to a first backward output operand and the backward operator in the backward operation kernel corresponding to the first node,
wherein the first backward output operand is the backward output operand in the backward operation kernel corresponding to a successor node of the first node.
In a possible implementation, the reverse training module 62 is further configured to perform the following operation for a second node among the second-part nodes of the computation graph:
perform reverse training according to the reverse operator in the reverse operation core corresponding to the second node and the reverse output operand of the successor node of the second node, or the forward output operand of the predecessor node of the second node, or the forward output operand of the second node.
In a possible implementation, the reverse training module 62 is further configured to perform the following operations for the second node among the second-part nodes of the computation graph:
obtain the predecessor node and the successor node of the second node from the context information through an interface, wherein the interface is used for reading the context information;
obtain the reverse output operand of the successor node of the second node and the forward output operand of the predecessor node of the second node;
perform forward computation according to the forward operation core corresponding to the second node and the forward output operand of the predecessor node of the second node, to obtain the forward output operand of the second node.
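A short sketch of this forward recomputation during reverse training, under the same assumed layout:

```python
def recompute_forward(second_node: str, ctx: Context) -> Any:
    """Recompute a second-part node's forward output operand from its predecessors' forward outputs."""
    preds = ctx.predecessors.get(second_node, [])
    core = ctx.forward_cores[second_node]
    core.output = core.op(*[ctx.forward_cores[p].output for p in preds])
    return core.output
```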
In a possible implementation, the reverse training module 62 is further configured to: obtain the reverse operation core corresponding to the successor node of the second node according to the correspondence between nodes and reverse operation cores, and use the reverse output operand in the reverse operation core corresponding to the successor node of the second node as the reverse output operand of the successor node of the second node.
In a possible implementation, the reverse training module is further configured to: obtain the forward operation core corresponding to the predecessor node of the second node according to the correspondence between nodes and forward operation cores, and use the forward output operand in the forward operation core corresponding to the predecessor node of the second node as the forward output operand of the predecessor node of the second node.
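Putting these pieces together, the reverse-training step for a second-part node might look as follows; the exact signature of the reverse operator (how it receives the predecessors' and the node's own forward outputs, as for a multiplication whose gradient reuses its forward inputs) is an assumption made for illustration.

```python
def backward_second_part(node: str, ctx: Context) -> Any:
    """Reverse training for a second-part node: besides the successors' reverse
    output operands, the forward outputs of the node and its predecessors are used."""
    succs = ctx.successors.get(node, [])
    preds = ctx.predecessors.get(node, [])
    r_core = ctx.reverse_cores[node]
    incoming = [ctx.reverse_cores[s].output for s in succs]    # successors' reverse output operands
    pred_fwd = [ctx.forward_cores[p].output for p in preds]    # predecessors' forward output operands
    own_fwd = ctx.forward_cores[node].output                   # this node's forward output operand
    r_core.output = r_core.op(*incoming, pred_fwd, own_fwd)    # assumed operator signature
    return r_core.output
```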
Fig. 8 is a block diagram of a device 800 for neural network computation according to an exemplary embodiment. For example, the device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to Fig. 8, the device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 typically controls the overall operation of the device 800, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 802 may include one or more processors 820 to execute instructions so as to perform all or part of the steps of the methods described above. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation of the device 800. Examples of such data include instructions for any application or method operating on the device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disc.
The power supply component 806 provides power to the various components of the device 800. The power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen providing an output interface between the device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operation mode, such as a photographing mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC) configured to receive external audio signals when the device 800 is in an operation mode, such as a call mode, a recording mode, or a speech recognition mode. The received audio signal may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects of the device 800. For example, the sensor component 814 may detect the open/closed state of the device 800 and the relative positioning of components, such as the display and keypad of the device 800; the sensor component 814 may also detect a change in position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an accelerometer, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the device 800 and other devices. The device 800 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, a non-volatile computer-readable storage medium is further provided, for example a memory 804 including computer program instructions, which can be executed by the processor 820 of the device 800 to complete the above method.
Fig. 9 is a block diagram of a device 1900 for neural network computation according to an exemplary embodiment. For example, the device 1900 may be provided as a server. Referring to Fig. 9, the device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as an application program. The application program stored in the memory 1932 may include one or more modules, each of which corresponds to a set of instructions. In addition, the processing component 1922 is configured to execute the instructions to perform the above method.
The device 1900 may also include a power supply component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-volatile computer-readable storage medium is further provided, for example a memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the device 1900 to complete the above method.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer-readable storage medium includes: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (for example, light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to respective computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk, C++, or the like, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In scenarios involving a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, for example programmable logic circuitry, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, in order to implement aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowcharts and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions that implement aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices, to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices so as to produce a computer-implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other devices implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to multiple embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical function. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two consecutive blocks may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
The embodiments of the present disclosure have been described above. The foregoing description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or technical improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A calculation method of a neural network, characterized in that the method performs neural network training based on context information of a computation graph corresponding to the neural network, wherein the context information includes connection relationships between nodes in the computation graph and the forward operation core and reverse operation core corresponding to each node in the computation graph, the connection relationships between the nodes include forward connection relationships and reverse connection relationships, the forward operation core includes a forward operator and a forward output operand, and the reverse operation core includes a reverse operator and a reverse output operand;
the method comprising:
performing forward computation on the neural network according to the forward connection relationships and the forward operation cores;
performing reverse training on the neural network according to the reverse connection relationships and the reverse operation cores, or performing reverse training on the neural network according to the reverse connection relationships, the reverse operation cores, the forward connection relationships, and the forward operation cores.
2. The method according to claim 1, characterized in that the method further comprises:
obtaining the predecessor nodes and successor nodes of each node according to the input nodes of the nodes in the computation graph, and determining the connection relationships between the nodes in the computation graph according to the predecessor nodes and the successor nodes.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
traversing the computation graph in forward topological order, and creating, for each node in the computation graph, a corresponding forward operator and forward output operand as the forward operation core corresponding to that node;
traversing the computation graph in reverse topological order, and creating, for each node in the computation graph, a corresponding reverse operator and reverse output operand as the reverse operation core corresponding to that node.
4. The method according to claim 2, characterized in that performing forward computation on the neural network according to the forward connection relationships and the forward operation cores comprises: performing the following operations for any node in the computation graph:
obtaining the predecessor node of the node from the context information through an interface, wherein the interface is used for reading the context information;
obtaining the forward operation core corresponding to the node, and the forward operation core corresponding to the predecessor node of the node, according to the correspondence between nodes and forward operation cores;
performing forward computation according to the forward output operand in the forward operation core corresponding to the predecessor node of the node and the forward operator in the forward operation core corresponding to the node.
5. The method according to claim 2, characterized in that performing reverse training on the neural network according to the reverse connection relationships and the reverse operation cores comprises:
performing the following operations for a first node among the first-part nodes of the computation graph:
obtaining the successor node of the first node from the context information through an interface, wherein the interface is used for reading the context information;
obtaining the reverse operation core corresponding to the first node, and the reverse operation core corresponding to the successor node of the first node, according to the correspondence between nodes and reverse operation cores;
performing reverse training according to a first reverse output operand and the reverse operator in the reverse operation core corresponding to the first node,
wherein the first reverse output operand is the reverse output operand in the reverse operation core corresponding to the successor node of the first node.
6. The method according to claim 2 or 5, characterized in that performing reverse training on the neural network according to the reverse connection relationships, the reverse operation cores, the forward connection relationships, and the forward operation cores comprises:
performing the following operation for a second node among the second-part nodes of the computation graph:
performing reverse training according to the reverse operator in the reverse operation core corresponding to the second node and the reverse output operand of the successor node of the second node, or the forward output operand of the predecessor node of the second node, or the forward output operand of the second node.
7. The method according to claim 6, characterized in that the operations performed for the second node among the second-part nodes of the computation graph further comprise:
obtaining the predecessor node and the successor node of the second node from the context information through an interface, wherein the interface is used for reading the context information;
obtaining the reverse output operand of the successor node of the second node and the forward output operand of the predecessor node of the second node;
performing forward computation according to the forward operation core corresponding to the second node and the forward output operand of the predecessor node of the second node, to obtain the forward output operand of the second node.
8. A computing device of a neural network, characterized in that the device performs neural network training based on context information of a computation graph corresponding to the neural network, wherein the context information includes connection relationships between nodes in the computation graph and the forward operation core and reverse operation core corresponding to each node in the computation graph, the connection relationships between the nodes include forward connection relationships and reverse connection relationships, the forward operation core includes a forward operator and a forward output operand, and the reverse operation core includes a reverse operator and a reverse output operand;
the device comprising:
a forward computing module, configured to perform forward computation on the neural network according to the forward connection relationships and the forward operation cores;
a reverse training module, configured to perform reverse training on the neural network according to the reverse connection relationships and the reverse operation cores, or to perform reverse training on the neural network according to the reverse connection relationships, the reverse operation cores, the forward connection relationships, and the forward operation cores.
9. A computing device of a neural network, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the method according to any one of claims 1 to 7 when executing the executable instructions.
10. A non-volatile computer-readable storage medium having computer program instructions stored thereon, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 7.
CN201910472870.1A 2019-05-31 2019-05-31 Operation method, device and related product Active CN110163372B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910472870.1A CN110163372B (en) 2019-05-31 2019-05-31 Operation method, device and related product

Publications (2)

Publication Number Publication Date
CN110163372A true CN110163372A (en) 2019-08-23
CN110163372B CN110163372B (en) 2020-04-21

Family

ID=67630983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910472870.1A Active CN110163372B (en) 2019-05-31 2019-05-31 Operation method, device and related product

Country Status (1)

Country Link
CN (1) CN110163372B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108268278A (en) * 2016-12-30 2018-07-10 英特尔公司 Processor, method and system with configurable space accelerator
CN108292374A (en) * 2015-11-09 2018-07-17 谷歌有限责任公司 Training is expressed as the neural network of calculating figure
US20190156192A1 (en) * 2017-11-20 2019-05-23 International Business Machines Corporation Cardinal sine as an activation function for universal classifier training data

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112988367A (en) * 2019-12-12 2021-06-18 中科寒武纪科技股份有限公司 Resource allocation method and device, computer equipment and readable storage medium
CN111126596A (en) * 2019-12-17 2020-05-08 百度在线网络技术(北京)有限公司 Information processing method, equipment and storage medium in neural network training
CN111126596B (en) * 2019-12-17 2021-03-19 百度在线网络技术(北京)有限公司 Information processing method, equipment and storage medium in neural network training

Also Published As

Publication number Publication date
CN110163372B (en) 2020-04-21

Similar Documents

Publication Publication Date Title
CN107705783A (en) A kind of phoneme synthesizing method and device
CN109800737A (en) Face recognition method and device, electronic equipment and storage medium
CN109800744A (en) Image clustering method and device, electronic equipment and storage medium
CN109089133A (en) Method for processing video frequency and device, electronic equipment and storage medium
CN104035995B (en) Group's label generating method and device
CN109145213A (en) Inquiry recommended method and device based on historical information
CN109614613A (en) The descriptive statement localization method and device of image, electronic equipment and storage medium
CN109919300A (en) Neural network training method and device and image processing method and device
CN110188871A (en) Operation method, device and Related product
CN110458102A (en) A kind of facial image recognition method and device, electronic equipment and storage medium
CN109543537A (en) Weight identification model increment training method and device, electronic equipment and storage medium
CN109165738A (en) Optimization method and device, electronic equipment and the storage medium of neural network model
CN109151548A (en) Interface alternation method and device
CN109978891A (en) Image processing method and device, electronic equipment and storage medium
CN110209877A (en) Video analysis method and device
US10762902B2 (en) Method and apparatus for synthesizing adaptive data visualizations
CN110262886A (en) Task executing method and device, electronic equipment and storage medium
CN108924644A (en) Video clip extracting method and device
CN109920016A (en) Image generating method and device, electronic equipment and storage medium
CN110532956A (en) Image processing method and device, electronic equipment and storage medium
CN109934240A (en) Feature update method and device, electronic equipment and storage medium
CN110163372A (en) Operation method, device and Related product
CN109447258A (en) Optimization method and device, electronic equipment and the storage medium of neural network model
CN109359218A (en) Multimedia resource methods of exhibiting and device
CN109784537A (en) Predictor method, device and the server and storage medium of ad click rate

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100190 room 644, comprehensive research building, No. 6 South Road, Haidian District Academy of Sciences, Beijing

Applicant after: Zhongke Cambrian Technology Co., Ltd

Address before: 100190 room 644, comprehensive research building, No. 6 South Road, Haidian District Academy of Sciences, Beijing

Applicant before: Beijing Zhongke Cambrian Technology Co., Ltd.

GR01 Patent grant