CN110378413A - Neural network model processing method, device and electronic equipment - Google Patents

Neural network model processing method, device and electronic equipment Download PDF

Info

Publication number
CN110378413A
CN110378413A (application CN201910644306.3A)
Authority
CN
China
Prior art keywords
node
neural network
network model
operator
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910644306.3A
Other languages
Chinese (zh)
Inventor
陈岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910644306.3A priority Critical patent/CN110378413A/en
Publication of CN110378413A publication Critical patent/CN110378413A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Embodiments of the present application disclose a neural network model processing method, a device, and electronic equipment. The method includes: obtaining a neural network model to be configured; mapping the neural network model to a graph structure based on the dependency relationships among the operators in the neural network model, wherein each node in the graph structure characterizes one operator in the neural network model; traversing the graph structure to obtain the execution order of the nodes; and arranging the execution order of the operators characterized by the nodes according to the execution order of the nodes. By converting the neural network model into a graph structure and then determining the execution order of the operator characterized by each node by traversing the nodes of the graph structure, the method makes it possible to determine the execution order of all operators quickly, improving the operating efficiency of the whole model.

Description

Neural network model processing method, device and electronic equipment
Technical field
This application relates to the field of computer technology, and more particularly to a neural network model processing method, a device, and electronic equipment.
Background technique
A neural network model is usually trained on equipment such as a computer. So that the trained neural network model can run on electronic equipment such as a mobile phone or tablet computer, the trained model may be optimized; however, the optimization process is likely to scramble the execution order of the operators of the neural network model.
Summary of the invention
In view of the above problems, the present application proposes a neural network model processing method, a device, and electronic equipment to address them.
In a first aspect, the present application provides a neural network model processing method applied to electronic equipment. The method includes: obtaining a neural network model to be configured; mapping the neural network model to a graph structure based on the dependency relationships among the operators in the neural network model, wherein each node in the graph structure characterizes one operator in the neural network model; traversing the graph structure to obtain the execution order of the nodes; and arranging the execution order of the operators characterized by the nodes according to the execution order of the nodes.
In a second aspect, the present application provides a neural network model processing device running on electronic equipment. The device includes: a model acquiring unit for obtaining a neural network model to be configured; a model processing unit for mapping the neural network model to a graph structure based on the dependency relationships among the operators in the neural network model, wherein each node in the graph structure characterizes one operator in the neural network model; a traversal unit for traversing the graph structure to obtain the execution order of the nodes; and an order determination unit for arranging the execution order of the operators characterized by the nodes according to the execution order of the nodes.
In a fourth aspect, the present application provides electronic equipment including a multi-core processor, a start-up controller, and a memory, the memory being configured to store data to be loaded. One or more programs are stored in the start-up controller and configured to be executed by the start-up controller to implement the method described above.
In a fifth aspect, the present application provides a computer-readable storage medium storing program code, wherein the program code, when run by a controller, performs the method described above.
With the neural network model processing method, device, and electronic equipment provided by the present application, after a neural network model to be configured is obtained, the neural network model is mapped to a graph structure based on the dependency relationships among its operators, wherein each node in the graph structure characterizes one operator in the neural network model. The graph structure is then traversed to obtain the execution order of the nodes, and the execution order of the operators characterized by the nodes is arranged according to the execution order of the nodes. By converting the neural network model into a graph structure and determining the execution order of the operator characterized by each node by traversing the nodes of the graph structure, the execution order of all operators can be determined quickly, improving the operating efficiency of the whole model.
Detailed description of the invention
To explain the technical solutions in the embodiments of the present application more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 shows a flowchart of a neural network model processing method proposed by an embodiment of the present application;
Fig. 2 shows a schematic diagram of the graph corresponding to a neural network model in a neural network model processing method proposed by an embodiment of the present application;
Fig. 3 shows a schematic diagram of the mapping relationship between processing-resource consumption and time in a neural network model processing method proposed by an embodiment of the present application;
Fig. 4 shows a flowchart of a neural network model processing method proposed by another embodiment of the present application;
Fig. 5 shows a flowchart of a neural network model processing method proposed by yet another embodiment of the present application;
Fig. 6 shows a structural block diagram of a neural network model processing device proposed by an embodiment of the present application;
Fig. 7 shows a structural block diagram of a neural network model processing device proposed by another embodiment of the present application;
Fig. 8 shows a structural block diagram of a neural network model processing device proposed by yet another embodiment of the present application;
Fig. 9 shows a structural block diagram of the electronic equipment of the present application used to execute the neural network model processing method according to the embodiments of the present application;
Fig. 10 shows a storage unit of an embodiment of the present application for saving or carrying the program code that implements the neural network model processing method according to the embodiments of the present application.
Specific embodiment
The technical solutions in the embodiments of the present application are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in this application without creative effort shall fall within the protection scope of this application.
A neural network (Neural Networks, NN) is a complex network system formed by a large number of simple processing units (called neurons) interconnected on a large scale. Neural networks offer large-scale parallelism, distributed storage and processing, self-organization, adaptivity, and self-learning. A neural network model usually contains a large number of operators. An operator can be understood as one algorithmic step in a neural network model: a mapping from functions to functions, or from one set of numbers to another.
However, the inventor found in research that when a neural network has many operators or high complexity, the execution order of all the operators cannot be determined quickly.
Furthermore, the operator order of a neural network mostly follows the order produced by the server-side neural network framework that trained it, for example the operator order of a neural network model trained by a framework such as TensorFlow. When an operator needs to be added to a trained neural network model, it can only be inserted between specific operators of the original network, so operator optimization requires regenerating the whole neural network graph, which makes model conversion very slow. The reason is that dependency relationships exist among the operators of a neural network: for an operator to execute correctly, the operators it depends on must already have executed, so the operators have an execution order, and models solidified by a PC-side framework such as TensorFlow store the operators in that order. For example, if operator B depends on operator A, then operator A comes before B in the file storing the neural network model. If an operator C needs to be inserted between A and B, the usual approach is to regenerate the network: copy the operators in order up to operator A, add operator C, copy operator B, and then copy the remaining operators. That is, C must be added between A and B to guarantee that the operators execute correctly. Adding an operator thus becomes a very complicated operation that makes the neural network model conversion time very long, which in turn makes model conversion very slow.
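The regeneration described above (copy up to operator A, add C, then copy B and everything after it) can be sketched on a serialized operator list. The list representation and operator names are illustrative assumptions, not the actual solidified file format:

```python
from typing import List

def insert_operator(ops: List[str], new_op: str, after: str) -> List[str]:
    """Regenerate the serialized operator sequence: copy operators in order,
    splicing the new operator in immediately after the operator it depends on."""
    regenerated = []
    for op in ops:
        regenerated.append(op)
        if op == after:
            regenerated.append(new_op)
    return regenerated

inserted = insert_operator(["A", "B", "D"], "C", after="A")  # → ["A", "C", "B", "D"]
```

Because every insertion rewrites the entire sequence, conversion time grows with model size, which motivates the graph-based reordering introduced below.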
In addition, to deploy a neural network model on a mobile terminal, the trained model is generally solidified into a file by a computer such as a PC; the neural network framework of the mobile terminal then parses this file, reads it into memory, and executes the operators in the neural network model in order. Because the resources of a mobile terminal are limited, it can often only run small models, and a large model needs a series of optimizations, such as operator fusion, network pruning, model quantization, and network slicing, to make it runnable on the mobile terminal. These optimizations frequently change the original operator execution order, so an effective way of re-sorting the operators is needed for the neural network model to execute normally.
Therefore, the inventor proposes the neural network model processing method, device, and electronic equipment of the present application, which can address the above problems.
The embodiments are described in detail below in conjunction with the accompanying drawings.
Referring to Fig. 1, an embodiment of the present application provides a neural network model processing method applied to electronic equipment. The method includes:
Step S110: obtain a neural network model to be configured.
As one implementation, the neural network model to be configured may be a neural network model obtained directly from the network side. Alternatively, it may be a neural network model obtained from the network side and then optimized. Optimizing a neural network model can be understood as performing operations such as operator fusion, network pruning, model quantization, and network slicing on it.
Step S120: map the neural network model to a graph structure based on the dependency relationships among the operators in the neural network model, wherein each node in the graph structure characterizes one operator in the neural network model.
A graph structure is a mathematical object that represents relationships between objects. If each edge of a graph is given a direction, the resulting graph structure is called a directed graph; in a directed graph, a node is associated with its outgoing and incoming edges. Conversely, a graph whose edges have no direction is called an undirected graph.
As one implementation, the graph structure in this embodiment is a directed acyclic graph (Directed Acyclic Graph). A directed graph without cycles is called a directed acyclic graph; like an array, a list, or a blockchain, a directed acyclic graph is a data structure. Unlike a blockchain, however, a directed acyclic graph replaces longest-chain consensus with heaviest-chain consensus. On a traditional blockchain, a newly issued block is appended to the existing longest chain, all nodes take the longest chain as authoritative, and the chain extends indefinitely. In a directed acyclic graph, each newly added unit is not merely appended as one block on a long chain but is merged with all preceding blocks.
It can be understood that a neural network model is composed of many operators, some of which depend on one another. The dependency here can be understood as a data dependency. For example, suppose a neural network model contains operators A, B, C, D, E, F, G, and H, and the model defines that the input data of operator C is the output data of operator B, the input data of operator B is the output data of operator A, and operator A only produces output data. From this it follows that the operation of operator D needs the output data of operator B, so operator D depends on operator B; similarly, the operation of operator B needs the output data of operator A, so operator B depends on operator A.
Furthermore, the output data of operator C is input data of operator F, the input data of operator E is the output data of operator A, and the input data of operator F also comes from operator B in addition to operator C. The input data of operator G comes from operators B, C, and E, and the input data of operator H comes from operators D and F. Based on the explanation of dependency above, operator F depends on both operator B and operator C, operator E depends on operator A, operator G depends on operators B, C, and E simultaneously, and operator H depends on operators D and F simultaneously.
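The dependency relationships listed above can be captured directly in code. A minimal sketch (the dictionary representation is an assumption for illustration, not the patent's actual data structure):

```python
# Operator dependencies as stated above: each operator maps to the operators it depends on
depends_on = {
    "B": ["A"], "C": ["B"], "D": ["B"], "E": ["A"],
    "F": ["B", "C"], "G": ["B", "C", "E"], "H": ["D", "F"],
}

# Invert into the graph's edge direction: an edge X -> Y means Y relies on X's output
dependents = {}
for op, inputs in depends_on.items():
    for src in inputs:
        dependents.setdefault(src, []).append(op)

# dependents["A"] → ["B", "E"]: operators B and E both consume A's output
```

The inverted `dependents` map gives, for each operator, the operators that rely on it, which is the direction the arrows point in the directed acyclic graph described next.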
Based on the above dependency relationships, the directed acyclic graph shown in Fig. 2 is obtained. The graph contains node 0, node 1, node 2, node 3, node 4, node 5, node 6, and node 7, where node 0 characterizes operator A, node 1 characterizes operator B, node 2 characterizes operator C, node 3 characterizes operator D, node 4 characterizes operator E, node 5 characterizes operator F, node 6 characterizes operator G, and node 7 characterizes operator H. The node an arrow points to depends on the node the arrow starts from.
Thus, in this implementation, the step in which the electronic equipment traverses the graph structure to obtain the execution order of the nodes may include: performing a depth-first traversal of the directed acyclic graph to obtain the execution order of the nodes.
Step S130: traverse the graph structure to obtain the execution order of the nodes.
As shown in Fig. 2, although the relative execution order of any two adjacent nodes is clear, errors may occur in the overall execution order if nodes without an adjacency relationship are not taken into account. For example, comparing node 1, node 2, and node 5 in Fig. 2, it is clear that node 1 executes before node 5 and that node 2 executes before node 5. But if node 5 were executed immediately after node 1, node 5 would run before node 2, even though the operator corresponding to node 5 may depend, during its operation, on the output data of the operator corresponding to node 2, which would cause a data error. Therefore, as one implementation, the electronic equipment can traverse every node of the graph structure to obtain the overall execution order of all the nodes.
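The whole-order constraint just described can be checked mechanically: an order is valid only if every node appears after all nodes it depends on. A small sketch, with the node 5 dependency on nodes 1 and 2 assumed from the discussion of Fig. 2:

```python
from typing import Dict, List

def is_valid_order(order: List[int], depends_on: Dict[int, List[int]]) -> bool:
    """Return True if every node appears after all of its dependencies."""
    position = {node: i for i, node in enumerate(order)}
    return all(position[dep] < position[node]
               for node, deps in depends_on.items()
               for dep in deps)

deps = {5: [1, 2]}                 # node 5 depends on nodes 1 and 2
ok = is_valid_order([1, 2, 5], deps)   # → True
bad = is_valid_order([1, 5, 2], deps)  # → False: node 5 would run before node 2
```

A check like this would catch the data error in the example above before any operator executes.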
As one implementation, a depth-first traversal is performed on the directed acyclic graph to obtain the execution order of the nodes.
It should be noted that graph-structure node traversal includes depth-first search and breadth-first search. Depth-first search (DFS, Depth First Search) is a graph algorithm; briefly, its process is to follow each possible branch path as deep as it can go, with each node traversed only once.
In the specific traversal process, because two vertices, node 1 and node 4, depend on node 0, one of them is selected (which one is selected has no effect on correctness); here node 1 is selected as the node to continue the search from. After finding node 1, the vertices depending on node 1 must be found; as can be seen from the figure, node 3, node 5, and node 6 all depend on node 1. Node 3 is selected as the next search node. The search then continues for the next node depending on node 3; because only node 7 depends on node 3, node 7 is selected. Because no node depends on node 7, node 7 becomes the first target node. Since no node depending on node 7 can be found, the next step is to return to node 3. Because the only node depending on node 3 is node 7, which has already been traversed, no untraversed node depending on node 3 can be found, so node 3 also becomes a target node. The target nodes at this point are, in order, node 7 and node 3.
The traversal then returns to node 1, the node above node 3, and looks for untraversed nodes depending on node 1; node 5 and node 6 remain, and node 5 is selected. After finding node 5, the search continues for nodes depending on node 5; only node 7 does, and it has already been traversed, so node 5 is marked as a target node. The target nodes at this point are, in order, node 7, node 3, and node 5.
Returning to node 1, the node above node 5, the search continues for nodes depending on node 1; only node 6 remains. Because no node depends on node 6, node 6 is marked as a target node. The target nodes at this point are, in order, node 7, node 3, node 5, and node 6. The traversal then returns to node 1, the node above node 6; because no untraversed node depends on node 1, node 1 is marked as a target node. The target nodes at this point are, in order, node 7, node 3, node 5, node 6, and node 1. Returning to node 0, the node above node 1, the traversal finds node 4, which depends on node 0. Because no untraversed vertex depends on node 4, node 4 is marked as a target node; the target nodes are then, in order, node 7, node 3, node 5, node 6, node 1, and node 4. Returning to node 0, the node above node 4, no node depends on node 0 any longer, so node 0 is marked as a target node; the target nodes are then, in order, node 7, node 3, node 5, node 6, node 1, node 4, and node 0. Because the traversal has returned to the input node, node 0, it must now begin from the other input node, node 2. Because the nodes depending on node 2 have all been traversed, node 2 is directly marked as a target node.
The execution order of the nodes can then be obtained from the order in which the target nodes were marked: the execution order is node 2, node 0, node 4, node 1, node 6, node 5, node 3, node 7. To let the electronic equipment quickly identify the execution order of the marked target nodes, a stack-structured storage space can be used to store them. In this implementation, during the traversal the electronic equipment performs a depth-first traversal of the directed acyclic graph and sequentially stores the target nodes obtained during the traversal into the stack-structured storage space, a target node being a node with no dependent nodes or a node all of whose dependent nodes have been traversed; the pop order of the nodes in the storage space is taken as the execution order of the nodes. Based on the example shown in Fig. 2, the target nodes are stored in the stack-structured storage space in the order node 7, node 3, node 5, node 6, node 1, node 4, node 0, node 2, so the corresponding pop order is node 2, node 0, node 4, node 1, node 6, node 5, node 3, node 7.
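The traversal and stack behavior walked through above can be sketched as follows. The adjacency is reconstructed from the walkthrough (Fig. 2 itself is not reproduced here), so the exact edge set is an assumption:

```python
from typing import Dict, List

def execution_order(dependents: Dict[int, List[int]], inputs: List[int]) -> List[int]:
    """Depth-first traversal that pushes each target node onto a stack;
    popping the stack yields the node execution order."""
    stack: List[int] = []   # target nodes in push order
    visited = set()

    def dfs(node: int) -> None:
        visited.add(node)
        for nxt in dependents.get(node, []):
            if nxt not in visited:
                dfs(nxt)
        stack.append(node)  # no untraversed dependents remain: mark as target node

    for start in inputs:
        if start not in visited:
            dfs(start)
    return stack[::-1]      # pop order of the stack

# dependents[n] = nodes relying on n, as assumed from the walkthrough of Fig. 2
deps = {0: [1, 4], 1: [3, 5, 6], 2: [5, 6], 3: [7], 5: [7]}
order = execution_order(deps, [0, 2])  # → [2, 0, 4, 1, 6, 5, 3, 7]
```

With these edges the push order is 7, 3, 5, 6, 1, 4, 0, 2 and the pop order matches the execution order stated in the text.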
It can be understood that the electronic equipment can identify nodes that do not depend on any other node as input vertices. When the electronic equipment has recognized multiple input vertices, it can perform the depth-first traversal of the directed acyclic graph starting from a specified input vertex to obtain the execution order of the nodes.
It can be understood that in this kind of depth-first traversal, the input vertex traversed first is executed later. For example, in the previous example the traversal begins from input vertex 0 (node 0) and then from input vertex 2 (node 2), so node 0, which was traversed first, is executed later. Similarly, among nodes depended on by multiple nodes, the node traversed first is also executed later. For example, node 3, node 5, and node 6 in Fig. 2 all depend on node 1; if the traversal starts from node 3 first, then correspondingly node 3 is executed only after node 5 and node 6. The operators characterized by different nodes may consume different processing resources when executed, and the total processing resources of the electronic equipment are limited. If executing the operator corresponding to some node consumes a large amount of processing resources while other programs on the electronic equipment also need to consume a large amount of processing resources, the electronic equipment will stutter or crash.
Therefore, as a way to improve on this, the electronic equipment can, during node traversal, traverse each branch alternative once and thereby obtain multiple execution orders covering all the nodes. For example, for nodes 0 through 7, one obtainable execution order is node 2, node 0, node 4, node 1, node 6, node 5, node 3, node 7; another is node 2, node 0, node 4, node 1, node 6, node 3, node 5, node 7; and yet another is node 2, node 0, node 4, node 1, node 3, node 5, node 6, node 7. Because executing the operators corresponding to different nodes consumes different amounts of processing resources, after determining the execution orders of all the nodes the electronic equipment can obtain, for each order, the relative amount of processing resources the neural network model requires at each stage of its execution, where one stage characterizes the execution of one operator.
Correspondingly, the electronic equipment can also estimate the processing-resource consumption of other programs over the whole execution of the neural network model, match that consumption state against the resource-consumption profiles of the multiple execution orders described above, and take as the final execution order the one among the multiple execution orders whose conflict with the processing-resource consumption of the other programs is smallest.
As one implementation, the electronic equipment can associate the processing-resource consumption states of the multiple execution orders with processing time and thereby establish a two-dimensional mapping between processing-resource consumption and time. As shown in Fig. 3, the abscissa represents time, the ordinate represents the degree of resource consumption, and each bar characterizes the execution of one node; for example, t1 to t2 characterizes the execution of the operator corresponding to one node. It can be understood that the execution times of the operators characterized by different nodes may differ; the figure is only illustrative. Also, for ease of calculation, the resource-consumption degree shown for each node is the average resource-consumption degree during the operation of the corresponding operator.
Based on the scheme shown in Fig. 3, the mapping between resource-consumption degree and time can be obtained for each of the multiple execution orders. The mapping between the resource consumption of the electronic equipment's other programs and time over the corresponding period is then obtained, and matching it segment by segment against the resource-consumption-versus-time mappings of the multiple execution orders yields the execution order with the smallest resource-consumption conflict.
It can be understood that each segment in the segment-by-segment matching characterizes the execution of the operator characterized by one node; for example, t1 to t2 shown in Fig. 3 is one segment, and t2 to t3 is also one segment. In the matching process, the electronic equipment can calculate the processing-resource consumption of its other programs within the period t1 to t2 and match it against the processing resources needed to execute the operator of the node corresponding to that period, and then complete the matching for each stage in turn to obtain an overall degree of match.
In this embodiment, as one implementation, the matching can be completed by scoring. Optionally, the electronic equipment establishes a mapping between the difference in resource-consumption degree and a score, so that a score is obtained when the resource-consumption degree of each stage is matched, for example with a smaller difference giving a lower score. The electronic equipment can then obtain the score of each stage, and thus the overall score of each execution order, and select the execution order with the minimum score as the execution order to use. Specifically, taking the stages shown in Fig. 3 as an example, one execution stage of a node is the period t1 to t2, with a corresponding processing-resource-consumption degree a1; the processing-resource-consumption degree of the electronic equipment's other applications within the period t1 to t2 is obtained as a2; the score associated with the absolute value of a1 minus a2 is taken as the score of this stage. Similarly, the scores of the periods t2 to t3, t3 to t4, t4 to t5, t5 to t6, t6 to t7, and t7 to t8 can be obtained, giving the overall score.
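One plausible reading of this scoring scheme can be sketched as follows, under the assumption that each stage's score is simply the absolute difference between the model's resource demand and the other programs' estimated consumption, and that the minimum overall score selects the final order. All numeric values below are made up for illustration:

```python
from typing import List

def overall_score(model_usage: List[int], other_usage: List[int]) -> int:
    """Sum of per-stage scores; each stage scores the absolute difference between
    the model's resource demand and the other programs' estimated consumption."""
    return sum(abs(a - b) for a, b in zip(model_usage, other_usage))

# Estimated other-program consumption per stage (t1-t2, t2-t3, ...), hypothetical
other = [30, 50, 20, 40]

# Candidate execution orders mapped to their per-stage resource demand, hypothetical
candidates = {
    "order_a": [40, 45, 25, 35],
    "order_b": [70, 10, 60, 15],
}
best = min(candidates, key=lambda name: overall_score(candidates[name], other))
# best → "order_a" (overall score 25 versus 145)
```

The mapping from difference to score could of course be nonlinear; the direct absolute difference here is only the simplest instance consistent with the description.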
It should be noted that the resource-consumption degree of the electronic equipment's other applications within each period may also be an average consumption degree. Furthermore, it can be understood that the selection among the multiple execution orders described above is carried out before the neural network model actually executes, so the time consumed and the processing-resource occupancy of the operator corresponding to each stage's node are estimates calculated in advance; similarly, the processing-resource occupancy of the electronic equipment's other applications within each stage is also an estimate.
As one implementation, the time consumed and the processing-resource occupancy of the operator corresponding to each stage's node can be calculated in advance during the training of the neural network model. In this mode, while receiving the file storing the neural network model from the network, the electronic equipment can also receive a file identifying the running time and resource occupancy of each node in the neural network model. The processing-resource occupancy of the electronic equipment's other applications within each stage can be estimated from the electronic equipment's application-program running history. It can be understood that the electronic equipment can record which applications run in which periods of each day, along with the corresponding processing-resource consumption during their operation; the electronic equipment can thus derive certain patterns from statistics on the processing-resource consumption over a period of time and use them to estimate the processing-resource consumption of each subsequent period.
It should be noted that each of the aforementioned stages is a period within the day, for example the period from 12:10 to 12:12, or the period from 12:01:30 to 12:02:00, or even a shorter or longer period. The processing resource herein can be understood as processor occupancy.
Step S140: arranging, based on the execution order of the nodes, the execution order of the operators characterized by the nodes.
In the neural network model processing method provided by the present application, after the neural network model to be configured is obtained, the neural network model is mapped to a graph structure based on the dependencies among the operators in the neural network model, wherein one node in the graph structure characterizes one operator in the neural network model. The graph structure is then traversed to obtain the execution order of the nodes, and the execution order of the operators characterized by the nodes is arranged based on the execution order of the nodes. Thus, by converting the neural network model into a graph structure and then traversing the nodes of the graph structure, the execution order of the operator characterized by each node is determined, so that the execution order of all operators can be determined quickly and the running efficiency of the entire model is improved.
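The stack-based traversal this application describes (push a node once it has no successor nodes, or all its successors have been traversed; the pop order is the execution order) can be sketched as follows. The dict-based adjacency list and the node names are assumptions for illustration:

```python
# Depth-first traversal of the operator DAG. A node is pushed onto the
# stack only after every successor has been traversed; popping the stack
# then yields a valid execution order (sources before their successors).

def execution_order(graph, start):
    """graph: {node: [successor, ...]}; start: the starting input vertex.
    Returns the node execution order (a reverse postorder)."""
    stack, visited = [], set()

    def dfs(node):
        visited.add(node)
        for succ in graph.get(node, []):
            if succ not in visited:
                dfs(succ)
        stack.append(node)  # all successors traversed (or none exist)

    dfs(start)
    # LIFO popping reverses the push order, giving sources first.
    return [stack.pop() for _ in range(len(stack))]

# Example DAG: conv feeds bn and relu; bn also feeds relu.
g = {"conv": ["bn", "relu"], "bn": ["relu"], "relu": []}
order = execution_order(g, "conv")
```

In this example the pop order is conv, bn, relu, i.e. every operator appears only after all operators it depends on, which is exactly the property the execution order needs.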
Referring to Fig. 4, a neural network model processing method provided by an embodiment of the present application is applied to an electronic device, and the method includes:
Step S210: receiving a neural network model obtained by training.
Step S220: optimizing the neural network model obtained by training to obtain the neural network model to be configured, wherein the neural network model to be configured is a neural network model adapted to the data computing capability of the electronic device.
The step of optimizing the neural network model obtained by training includes at least one of the following: performing operator fusion on the neural network model obtained by training; performing network pruning on the neural network model obtained by training; performing model quantization on the neural network model obtained by training; and performing network cutting on the neural network model obtained by training.
Operator fusion can be understood as merging certain operators to reduce computation or memory copies (for example, merging Conv2D and BatchNorm). Network pruning can be understood as removing unnecessary operators to simplify the network (for example, removing redundant operators that are only needed during training). Furthermore, the internal computation of a typical neural network model is performed with floating-point numbers, and floating-point computation consumes more computing resources (memory and CPU/GPU time). If, without affecting the accuracy of the neural network model, simpler numeric types can be used for the internal computation, the computation speed can be greatly improved and the computing resources consumed can be greatly reduced, which is especially important for mobile devices. Quantization compresses the original neural network model by reducing the number of bits required to represent each weight.
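The Conv2D + BatchNorm fusion mentioned above works because, at inference time, BatchNorm is an affine transform per output channel and can be folded into the convolution's weights and bias. A minimal sketch, with assumed data layout (one flat tap list per output channel) and function name:

```python
# Fold BatchNorm parameters into the preceding convolution's weights and
# bias, so that bn(conv(x)) == conv'(x) and the BatchNorm operator can be
# removed from the network.
import math

def fold_batchnorm(w, b, gamma, beta, mean, var, eps=1e-5):
    """w: per-output-channel filter taps (list of lists); b: per-channel
    bias; gamma/beta/mean/var: per-channel BatchNorm parameters.
    Returns the fused (w', b')."""
    w_fused, b_fused = [], []
    for c, taps in enumerate(w):
        scale = gamma[c] / math.sqrt(var[c] + eps)
        w_fused.append([t * scale for t in taps])          # scale filter
        b_fused.append((b[c] - mean[c]) * scale + beta[c])  # shift bias
    return w_fused, b_fused
```

Because the fused convolution produces the BatchNorm output directly, one operator and one intermediate tensor disappear, which is precisely the "reduce computation or memory copies" benefit described above.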
Step S230: mapping the neural network model to a graph structure based on the dependencies among the operators in the neural network model, wherein one node in the graph structure characterizes one operator in the neural network model.
Step S240: traversing the graph structure to obtain the execution order of the nodes.
Step S250: arranging, based on the execution order of the nodes, the execution order of the operators characterized by the nodes.
In the neural network model processing method provided by the present application, after the neural network model to be configured is obtained, the neural network model is mapped to a graph structure based on the dependencies among the operators in the neural network model, wherein one node in the graph structure characterizes one operator in the neural network model. The graph structure is then traversed to obtain the execution order of the nodes, and the execution order of the operators characterized by the nodes is arranged based on the execution order of the nodes. Thus, by converting the neural network model into a graph structure and then traversing the nodes of the graph structure, the execution order of the operator characterized by each node is determined, so that the execution order of all operators can be determined quickly and the running efficiency of the entire model is improved. Moreover, the method provided in this embodiment can operate on the optimized neural network model, so that an accurate operator execution order can be obtained quickly after the neural network model has been optimized.
Referring to Fig. 5, a neural network model processing method provided by an embodiment of the present application is applied to an electronic device, and the method includes:
Step S310: receiving a neural network model obtained by training.
Step S320: adding a target operator after a specified operator in the neural network model obtained by training, with reference to the execution order of the specified operator, to obtain the neural network model to be configured, wherein the target operator is used to convert the storage format of the data obtained by executing the specified operator into a target format.
It should be noted that after the storage format is converted, the electronic device can perform data computation faster. For example, in electronic devices such as smartphones and tablets, operators computed on the GPU generally have two memory layouts: Buffer and Image. The former does not require processing the model, but its computation speed is inferior to the latter, so the Image layout is generally used. However, the weights in the network model need to be converted from the plain memory layout into the Image format, and it is therefore necessary to add operators to the network to perform the memory conversion.
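The graph rewrite in step S320 can be sketched as follows. This is an illustrative sketch of the rewiring only, under the assumption that the network is held as a dict-based adjacency list; the node names ("buffer_to_image" etc.) are hypothetical, and a real implementation would also create the operator's parameters:

```python
# Insert a target operator (e.g. a Buffer-to-Image layout conversion) right
# after a specified operator: the converter takes over the specified
# operator's successors, and the specified operator now feeds the converter.

def insert_format_converter(graph, specified, converter):
    """graph: {node: [successor, ...]} adjacency list of operators."""
    graph[converter] = graph[specified]   # converter feeds the old successors
    graph[specified] = [converter]        # specified now feeds the converter
    return graph

g = {"conv": ["relu", "add"], "relu": [], "add": []}
g = insert_format_converter(g, "conv", "buffer_to_image")
```

After the rewrite, every consumer of "conv" reads the converted output, so downstream GPU operators can use the faster Image layout; the execution order is then recomputed on the modified graph as described in steps S330-S350.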
Step S330: mapping the neural network model to a graph structure based on the dependencies among the operators in the neural network model, wherein one node in the graph structure characterizes one operator in the neural network model.
Step S340: traversing the graph structure to obtain the execution order of the nodes.
Step S350: arranging, based on the execution order of the nodes, the execution order of the operators characterized by the nodes.
In the neural network model processing method provided by the present application, after the neural network model to be configured is obtained, the neural network model is mapped to a graph structure based on the dependencies among the operators in the neural network model, wherein one node in the graph structure characterizes one operator in the neural network model. The graph structure is then traversed to obtain the execution order of the nodes, and the execution order of the operators characterized by the nodes is arranged based on the execution order of the nodes. Thus, by converting the neural network model into a graph structure and then traversing the nodes of the graph structure, the execution order of the operator characterized by each node is determined, so that the execution order of all operators can be determined quickly and the running efficiency of the entire model is improved. Moreover, the method provided in this embodiment can operate on the neural network model after a new operator has been inserted, so that an accurate operator execution order can be obtained quickly after a new operator is inserted into the neural network model.
Referring to Fig. 6, a neural network model processing apparatus 400 provided by an embodiment of the present application runs on an electronic device, and the apparatus 400 includes:
a model acquiring unit 410, configured to obtain the neural network model to be configured.
In one implementation, as shown in Fig. 7, the model acquiring unit 410 includes:
a model receiving subunit 411, configured to receive the neural network model obtained by training;
a model optimization subunit 412, configured to optimize the neural network model obtained by training to obtain the neural network model to be configured, wherein the neural network model to be configured is a neural network model adapted to the data computing capability of the electronic device.
In another implementation, as shown in Fig. 8, the model acquiring unit 410 includes:
a model receiving subunit 413, configured to receive the neural network model obtained by training;
an operator adding subunit 414, configured to add a target operator after a specified operator in the neural network model obtained by training, with reference to the execution order of the specified operator, to obtain the neural network model to be configured, wherein the target operator is used to convert the storage format of the data obtained by executing the specified operator into a target format.
Optionally, the step of optimizing the neural network model obtained by training includes at least one of the following: performing operator fusion on the neural network model obtained by training; performing network pruning on the neural network model obtained by training; performing model quantization on the neural network model obtained by training; and performing network cutting on the neural network model obtained by training.
The apparatus 400 further includes: a model processing unit 420, configured to map the neural network model to a graph structure based on the dependencies among the operators in the neural network model, wherein one node in the graph structure characterizes one operator in the neural network model;
a traversal unit 430, configured to traverse the graph structure to obtain the execution order of the nodes; and
an order determination unit 440, configured to arrange, based on the execution order of the nodes, the execution order of the operators characterized by the nodes.
In one implementation, the graph structure is a directed acyclic graph. In this case, the traversal unit 430 is specifically configured to perform a depth-first traversal of the directed acyclic graph to obtain the execution order of the nodes; more specifically, it sequentially stores the target nodes obtained during the traversal into the storage space of a stack structure, wherein a target node is a node that has no successor nodes or whose successor nodes have all been traversed, and takes the order in which the nodes are popped from the storage space as the execution order of the nodes.
In one implementation, the traversal unit 430 is specifically configured to perform a depth-first traversal of the directed acyclic graph starting from a specified starting input vertex, to obtain the execution order of the nodes.
It should be noted that the apparatus embodiments in this application correspond to the foregoing method embodiments; for the specific principles of the apparatus embodiments, reference may be made to the content of the foregoing method embodiments, which is not repeated here.
An electronic device provided by the present application is described below with reference to Fig. 9.
Referring to Fig. 9, based on the above neural network model processing method and apparatus, an embodiment of the present application further provides an electronic device 200 capable of performing the foregoing neural network model processing method. The electronic device 200 includes one or more (only one is shown in the figure) processors 102, a memory 104 and a network module 106 coupled to one another. The memory 104 stores a program capable of executing the content of the foregoing embodiments, and the processor 102 can execute the program stored in the memory 104.
The processor 102 may include one or more processing cores. The processor 102 connects the various parts of the entire electronic device 200 through various interfaces, and performs the various functions of the electronic device 200 and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 104 and calling the data stored in the memory 104. Optionally, the processor 102 may be implemented in hardware in at least one of the forms of digital signal processing (DSP), field-programmable gate array (FPGA) and programmable logic array (PLA). The processor 102 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, applications and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 102 and may instead be implemented separately by a communication chip.
The memory 104 may include random access memory (RAM) or read-only memory (ROM). The memory 104 may be used to store instructions, programs, code, code sets or instruction sets. The memory 104 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playing function or an image playing function), instructions for implementing the following method embodiments, and the like. The data storage area may also store data created by the terminal 100 during use (such as a phone book, audio and video data, and chat records).
The network module 106 is configured to receive and transmit electromagnetic waves and to realize mutual conversion between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices, for example with an audio playback device. The network module 106 may include various existing circuit elements for performing these functions, such as an antenna, a radio-frequency transceiver, a digital signal processor, an encryption/decryption chip, a subscriber identity module (SIM) card and memory. The network module 106 can communicate with various networks such as the Internet, an intranet and a wireless network, or communicate with other devices through a wireless network. The aforementioned wireless network may include a cellular telephone network, a wireless local area network or a metropolitan area network. For example, the network module 106 can exchange information with a base station.
Referring to Fig. 10, which shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application, the computer-readable medium 1100 stores program code, and the program code can be called by a processor to execute the method described in the above method embodiments.
The computer-readable storage medium 1100 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk or a ROM. Optionally, the computer-readable storage medium 1100 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 1100 has storage space for program code 1110 that performs any of the method steps in the above methods. The program code can be read from or written into one or more computer program products. The program code 1110 may, for example, be compressed in an appropriate form.
In the neural network model processing method, apparatus and electronic device provided by the present application, after the neural network model to be configured is obtained, the neural network model is mapped to a graph structure based on the dependencies among the operators in the neural network model, wherein one node in the graph structure characterizes one operator in the neural network model. The graph structure is then traversed to obtain the execution order of the nodes, and the execution order of the operators characterized by the nodes is arranged based on the execution order of the nodes. Thus, by converting the neural network model into a graph structure and then traversing the nodes of the graph structure, the execution order of the operator characterized by each node is determined, so that the execution order of all operators can be determined quickly and the running efficiency of the entire model is improved.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the application, not to limit them. Although the application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the application.

Claims (10)

1. A neural network model processing method, applied to an electronic device, the method comprising:
obtaining a neural network model to be configured;
mapping the neural network model to a graph structure based on the dependencies among the operators in the neural network model, wherein one node in the graph structure characterizes one operator in the neural network model;
traversing the graph structure to obtain an execution order of the nodes; and
arranging, based on the execution order of the nodes, the execution order of the operators characterized by the nodes.
2. The method according to claim 1, wherein the graph structure is a directed acyclic graph, and the step of traversing the graph structure to obtain the execution order of the nodes comprises:
performing a depth-first traversal of the directed acyclic graph to obtain the execution order of the nodes.
3. The method according to claim 2, wherein the step of performing a depth-first traversal of the directed acyclic graph to obtain the execution order of the nodes comprises:
performing a depth-first traversal of the directed acyclic graph, and sequentially storing the target nodes obtained during the traversal into the storage space of a stack structure, wherein a target node is a node that has no successor nodes or whose successor nodes have all been traversed; and
taking the order in which the nodes are popped from the storage space as the execution order of the nodes.
4. The method according to claim 1, wherein the directed acyclic graph has a plurality of input vertices, and the step of performing a depth-first traversal of the directed acyclic graph to obtain the execution order of the nodes comprises:
performing a depth-first traversal of the directed acyclic graph starting from a specified starting input vertex, to obtain the execution order of the nodes.
5. The method according to claim 1, wherein the step of obtaining the neural network model to be configured comprises:
receiving a neural network model obtained by training; and
optimizing the neural network model obtained by training to obtain the neural network model to be configured, wherein the neural network model to be configured is a neural network model adapted to the data computing capability of the electronic device.
6. The method according to claim 5, wherein the step of optimizing the neural network model obtained by training comprises at least one of the following:
performing operator fusion on the neural network model obtained by training;
performing network pruning on the neural network model obtained by training;
performing model quantization on the neural network model obtained by training; and
performing network cutting on the neural network model obtained by training.
7. The method according to claim 5, wherein the step of optimizing the neural network model obtained by training to obtain the neural network model to be configured comprises:
adding a target operator after a specified operator in the neural network model obtained by training, with reference to the execution order of the specified operator, to obtain the neural network model to be configured, wherein the target operator is used to convert the storage format of the data obtained by executing the specified operator into a target format.
8. A neural network model processing apparatus, running on an electronic device, the apparatus comprising:
a model acquiring unit, configured to obtain a neural network model to be configured;
a model processing unit, configured to map the neural network model to a graph structure based on the dependencies among the operators in the neural network model, wherein one node in the graph structure characterizes one operator in the neural network model;
a traversal unit, configured to traverse the graph structure to obtain an execution order of the nodes; and
an order determination unit, configured to arrange, based on the execution order of the nodes, the execution order of the operators characterized by the nodes.
9. An electronic device, comprising a processor and a memory;
wherein one or more programs are stored in the memory and configured to be executed by the processor to implement the method of any one of claims 1-7.
10. A computer-readable storage medium, wherein program code is stored in the computer-readable storage medium, and when the program code is run by a processor, the method of any one of claims 1-7 is performed.
CN201910644306.3A 2019-07-17 2019-07-17 Neural network model processing method, device and electronic equipment Pending CN110378413A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910644306.3A CN110378413A (en) 2019-07-17 2019-07-17 Neural network model processing method, device and electronic equipment


Publications (1)

Publication Number Publication Date
CN110378413A true CN110378413A (en) 2019-10-25

Family

ID=68253598

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910644306.3A Pending CN110378413A (en) 2019-07-17 2019-07-17 Neural network model processing method, device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110378413A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447900A (en) * 2014-07-04 2016-03-30 北京新媒传信科技有限公司 Animation recording method and device
CN109614975A (en) * 2018-10-26 2019-04-12 桂林电子科技大学 A kind of figure embedding grammar, device and storage medium
CN109740751A (en) * 2018-12-24 2019-05-10 北京中科寒武纪科技有限公司 The framework fusion method and relevant apparatus of neural network model
CN109828089A (en) * 2019-02-13 2019-05-31 仲恺农业工程学院 DBN-BP-based water quality parameter nitrous acid nitrogen online prediction method
CN109902819A (en) * 2019-02-12 2019-06-18 Oppo广东移动通信有限公司 Neural computing method, apparatus, mobile terminal and storage medium


Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112990461B (en) * 2019-12-16 2023-09-19 杭州海康威视数字技术股份有限公司 Method, device, computer equipment and storage medium for constructing neural network model
CN112990461A (en) * 2019-12-16 2021-06-18 杭州海康威视数字技术股份有限公司 Method and device for constructing neural network model, computer equipment and storage medium
CN111104214A (en) * 2019-12-26 2020-05-05 北京九章云极科技有限公司 Workflow application method and device
CN113811897B (en) * 2019-12-30 2022-05-31 深圳元戎启行科技有限公司 Inference method and apparatus of neural network model, computer device, and storage medium
CN113811897A (en) * 2019-12-30 2021-12-17 深圳元戎启行科技有限公司 Inference method and apparatus of neural network model, computer device, and storage medium
CN111340237A (en) * 2020-03-05 2020-06-26 腾讯科技(深圳)有限公司 Data processing and model operation method, device and computer equipment
CN111340237B (en) * 2020-03-05 2024-04-26 腾讯科技(深圳)有限公司 Data processing and model running method, device and computer equipment
CN113469351A (en) * 2020-03-30 2021-10-01 嘉楠明芯(北京)科技有限公司 Data processing method, device and storage medium
CN113469360B (en) * 2020-03-31 2023-10-20 杭州海康威视数字技术股份有限公司 Reasoning method and device
CN113469360A (en) * 2020-03-31 2021-10-01 杭州海康威视数字技术股份有限公司 Inference method and device
CN113760380A (en) * 2020-05-27 2021-12-07 杭州海康威视数字技术股份有限公司 Method, device, equipment and storage medium for determining running code of network model
CN114257701A (en) * 2020-09-23 2022-03-29 北京字节跳动网络技术有限公司 Access configuration method, device and storage medium of video processing algorithm
WO2022105187A1 (en) * 2020-11-18 2022-05-27 华为技术有限公司 Memory management method, device, and system
CN112633502A (en) * 2020-12-29 2021-04-09 北京百度网讯科技有限公司 Cross-platform execution method and device of deep learning model and electronic equipment
CN112819153A (en) * 2020-12-31 2021-05-18 杭州海康威视数字技术股份有限公司 Model transformation method and device
CN112819153B (en) * 2020-12-31 2023-02-07 杭州海康威视数字技术股份有限公司 Model transformation method and device
CN112346877A (en) * 2021-01-11 2021-02-09 瀚博半导体(上海)有限公司 Memory allocation method and system for effectively accelerating deep learning calculation
CN113657584A (en) * 2021-08-31 2021-11-16 安谋科技(中国)有限公司 Neural network model calculation method, data processing method, electronic device, and medium
CN113657584B (en) * 2021-08-31 2024-04-09 安谋科技(中国)有限公司 Neural network model calculation method, data processing method, electronic device and medium
CN113918126A (en) * 2021-09-14 2022-01-11 威讯柏睿数据科技(北京)有限公司 AI modeling flow arrangement method and system based on graph algorithm
WO2023040372A1 (en) * 2021-09-14 2023-03-23 北京柏睿数据技术股份有限公司 Ai modeling process choreography method and system based on graph algorithm
CN113935427A (en) * 2021-10-22 2022-01-14 北京达佳互联信息技术有限公司 Training task execution method and device, electronic equipment and storage medium
CN114117206A (en) * 2021-11-09 2022-03-01 北京达佳互联信息技术有限公司 Recommendation model processing method and device, electronic equipment and storage medium
CN114282661A (en) * 2021-12-23 2022-04-05 安谋科技(中国)有限公司 Method for operating neural network model, readable medium and electronic device
CN114429211A (en) * 2022-02-07 2022-05-03 北京百度网讯科技有限公司 Method, apparatus, device, medium and product for generating information
CN114968395B (en) * 2022-05-10 2023-09-26 上海淇玥信息技术有限公司 Starting optimization method and device based on Spring framework and computer equipment
CN114968395A (en) * 2022-05-10 2022-08-30 上海淇玥信息技术有限公司 Starting optimization method and device based on Spring framework and computer equipment
CN115358379B (en) * 2022-10-20 2023-01-10 腾讯科技(深圳)有限公司 Neural network processing method, neural network processing device, information processing method, information processing device and computer equipment
CN115358379A (en) * 2022-10-20 2022-11-18 腾讯科技(深圳)有限公司 Neural network processing method, neural network processing device, information processing method, information processing device and computer equipment
WO2024156284A1 (en) * 2023-01-29 2024-08-02 维沃移动通信有限公司 Model conversion method and apparatus, electronic device and storage medium

Similar Documents

Publication Publication Date Title
CN110378413A (en) Neural network model processing method, device and electronic equipment
CN110705709B (en) Method and device for training neural network model of graph
EP3370188A1 (en) Facial verification method, device, and computer storage medium
CN103942308A (en) Method and device for detecting large-scale social network communities
CN110166344B (en) Identity identification method, device and related equipment
CN109120431B (en) Method and device for selecting propagation source in complex network and terminal equipment
CN111260220A (en) Group control equipment identification method and device, electronic equipment and storage medium
CN117271101B (en) Operator fusion method and device, electronic equipment and storage medium
CN113469353A (en) Neural network model optimization method, data processing method and device
CN110503180A (en) Model treatment method, apparatus and electronic equipment
CN106021296B (en) Method and device for detecting batch operation paths of core bank system
CN114647790A (en) Big data mining method and cloud AI (Artificial Intelligence) service system applied to behavior intention analysis
CN117014057A (en) Network slice resource allocation method, device and storage medium
CN116366603A (en) Method and device for determining active IPv6 address
CN109101507B (en) Data processing method, device, computer equipment and storage medium
CN115577363A (en) Detection method and device for deserialization utilization chain of malicious code
CN109558521A (en) Large scale key word multi-mode matching method, device and equipment
CN113347268B (en) Networking method and device based on distributed network, storage medium and computer equipment
CN114218500A (en) User mining method, system, device and storage medium
CN111914945A (en) Data processing method and device, image generation method and electronic equipment
CN111352932B (en) Method and device for improving data processing efficiency based on bitmap tree algorithm
CN105933260A (en) Spectrum access method and spectrum access system
CN106600250B (en) User identification method and device from block chain decentralized to centralized
CN115913991B (en) Service data prediction method and device, electronic equipment and storage medium
CN111208980B (en) Data analysis processing method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination