CN106709565A - Neural network optimization method and device - Google Patents

Neural network optimization method and device

Info

Publication number
CN106709565A
CN106709565A (application number CN201611022209.3A)
Authority
CN
China
Prior art keywords
training sample
network
value
parameter value
second neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611022209.3A
Other languages
Chinese (zh)
Inventor
张玉兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Original Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority to CN201611022209.3A
Publication of CN106709565A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiments of the invention disclose a neural network optimization method and device. The method comprises the following steps: acquiring a first neural network that meets a set precision condition, and processing a training sample set with the first neural network to obtain a first feature vector for each training sample in the set; constructing a second neural network to be trained based on set network construction conditions; training the second neural network according to the first feature vectors and the training sample set, and determining a second neural network that meets the set precision condition; and determining that second neural network as the target neural network of the first neural network. With this method, a newly constructed small-scale neural network can, after training and learning under the optimization conditions, be directly adopted as the target optimization network of the neural network to be optimized. When feature recognition is then performed with the optimized network, recognition is faster, recognition time is shorter, and less storage, running memory, and video memory are occupied.

Description

Neural network optimization method and device
Technical field
Embodiments of the present invention relate to the field of artificial neural networks, and in particular to a neural network optimization method and device.
Background art
At present, face recognition is typically performed with a trained neural network model (for example, a deep convolutional neural network model). Using such a model for face recognition raises the following problems: 1. the computational complexity of processing image data is high, which affects run time (for example, processing a face image on an electronic device with an Intel Core i7 processor often takes more than one second); 2. the processing occupies a large amount of main memory or graphics card video memory; and 3. a large amount of storage is needed to hold the entire neural network model.
Existing optimization methods for neural network models cannot fully solve the above problems. For example, optimization based on Huffman coding preserves the computational precision of the optimized neural network model and effectively reduces the storage footprint of a deep neural network model, but it neither reduces the computational complexity of processing and shortens run time, nor lowers the memory or video-memory footprint during processing.
Summary of the invention
Embodiments of the present invention provide a neural network optimization method and device, which can optimize a neural network so as to shorten run time and reduce the system resource footprint.
In one aspect, an embodiment of the present invention provides a neural network optimization method, including:
acquiring a first neural network that meets a set precision condition, and processing a set training sample set with the first neural network to obtain a first feature vector for each training sample in the training sample set;
constructing a second neural network to be trained based on set network construction conditions, wherein the number of nodes and/or the number of network layers of the second neural network is smaller than that of the first neural network;
training the second neural network according to the first feature vectors and the training sample set, and determining a second neural network that meets the set precision condition;
determining the second neural network that meets the set precision condition as the target neural network of the first neural network.
In another aspect, an embodiment of the present invention provides a neural network optimization device, including:
an initial information acquisition module, configured to acquire a first neural network that meets a set precision condition, and to process a set training sample set with the first neural network to obtain a first feature vector for each training sample in the training sample set;
a network model construction module, configured to construct a second neural network to be trained based on set network construction conditions, wherein the number of nodes and/or the number of network layers of the second neural network is smaller than that of the first neural network;
a network model optimization module, configured to train the second neural network according to the first feature vectors and the training sample set, and to determine a second neural network that meets the set precision condition;
a target network determination module, configured to determine the second neural network that meets the set precision condition as the target neural network of the first neural network.
Embodiments of the present invention provide a neural network optimization method and device. The method first acquires a trained first neural network and processes a training sample set with it to obtain a first feature vector for each training sample in the set; it then constructs a second neural network to be trained; next it trains the second neural network according to the first feature vectors and the training sample set; and finally it determines the second neural network that meets the set precision condition as the target neural network of the first neural network. With this method, a newly constructed small-scale neural network can, after training and learning under the optimization conditions, be directly adopted as the target optimization network of the neural network to be optimized, thereby optimizing that network. When feature recognition is then performed with the optimized neural network, recognition is faster, recognition time is shorter, and less storage, running memory, and video memory are occupied.
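The four-step flow above can be sketched end to end. The snippet below is a minimal illustration under strong assumptions: the "first neural network" is stood in for by a fixed low-rank linear feature extractor, the "second neural network" by a smaller two-factor linear model, and training is plain gradient descent on the feature-matching error; all names and sizes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the trained first neural network: a fixed low-rank linear
# feature extractor (the patent's first network would be a deep CNN).
U = rng.normal(scale=0.4, size=(16, 2))
V = rng.normal(scale=0.4, size=(2, 4))
W_teacher = U @ V                      # 16-dim input -> 4-dim feature

# Step 1: process the training sample set with the first network.
X = rng.normal(size=(64, 16))          # training sample set
F1 = X @ W_teacher                     # first feature vector of each sample

# Step 2: construct a smaller second network (fewer parameters:
# a rank-2 factorisation instead of a full 16x4 map).
A = rng.normal(scale=0.1, size=(16, 2))
B = rng.normal(scale=0.1, size=(2, 4))

def rel_error(A, B):
    return np.linalg.norm(X @ A @ B - F1) / np.linalg.norm(F1)

# Step 3: train the second network to reproduce the first network's
# features (full-batch gradient descent on 0.5 * ||X A B - F1||^2).
err_before = rel_error(A, B)
lr = 0.02
for _ in range(1500):
    G = X @ A @ B - F1
    gA = X.T @ G @ B.T / len(X)
    gB = (X @ A).T @ G / len(X)
    A -= lr * gA
    B -= lr * gB
err_after = rel_error(A, B)

# Step 4: if the trained second network meets the set precision
# condition, it becomes the target network and replaces the first.
```

The second network here has 40 parameters against the stand-in first network's 64, mirroring the patent's requirement that the new network be smaller in nodes and/or layers.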
Brief description of the drawings
Fig. 1 is a flow diagram of a neural network optimization method provided by Embodiment 1 of the present invention;
Fig. 2 is a flow diagram of a neural network optimization method provided by Embodiment 2 of the present invention;
Fig. 3 is a structural block diagram of a neural network optimization device provided by Embodiment 3 of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
Embodiment 1
Fig. 1 is a flow diagram of a neural network optimization method provided by Embodiment 1 of the present invention. The method is applicable to optimizing a neural network that, after training and learning, meets a set precision condition. It can be performed by a neural network optimization device, which may be implemented in software and/or hardware and is typically integrated into the terminal device or server platform hosting the neural network model.
As shown in Fig. 1, the neural network optimization method provided by Embodiment 1 of the present invention includes the following operations:
S101: acquire a first neural network that meets a set precision condition, and process a set training sample set with the first neural network to obtain a first feature vector for each training sample in the training sample set.
In this embodiment, the set precision condition can be understood as the range of processing precision a neural network must reach before it is actually applied, once its training and learning are complete. In general, the set precision condition may be a system default range or a manually set range. In this embodiment, the processing precision currently reached by the trained first neural network can be determined by having it process the sample data in a standard test set; when that precision meets the set precision condition, the first neural network is considered ready for practical application.
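As a concrete illustration, the set precision condition can be modelled as a minimum accuracy on a standard test set. The sketch below assumes that form; the function name, the threshold value, and the toy data are all invented for the example.

```python
def meets_precision_condition(predict, test_set, threshold=0.95):
    """Return True if the network's accuracy on the standard test set
    reaches the set precision condition (modelled as a threshold)."""
    correct = sum(1 for x, y in test_set if predict(x) == y)
    return correct / len(test_set) >= threshold

# A perfect classifier on a toy test set meets the condition; a constant
# classifier does not.
toy_test_set = [(0, "a"), (1, "b"), (2, "a")]
lookup = {0: "a", 1: "b", 2: "a"}
ready = meets_precision_condition(lookup.get, toy_test_set)
```

The same check is reused later for the trained second network, since the patent requires it to reach the same set precision condition as the first.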
In this embodiment, after the first neural network that reaches the set precision condition is obtained, the selected training sample set can be processed by the first neural network to obtain a feature vector for each training sample in the set. To facilitate the subsequent neural network optimization, this embodiment refers to each training sample's resulting feature vector as its first feature vector.
S102: construct a second neural network to be trained based on set network construction conditions, wherein the number of nodes and/or the number of network layers of the second neural network is smaller than that of the first neural network.
In this embodiment, the second neural network can be understood as a newly constructed neural network that needs to reach, through training and learning, the same set precision condition as the first neural network. In this embodiment, the set network construction conditions include the number of nodes, the number of network layers, the connection relationships between nodes of adjacent layers, and so on; a neural network to be trained can be constructed from these conditions.
In this embodiment, because the number of nodes and/or the number of network layers of the constructed second neural network is smaller than that of the first neural network, the second neural network can be regarded as the target optimization network to be reached by optimizing the first neural network; it is smaller in scale, faster in processing, and more compact in its arrangement of network nodes.
S103: train the second neural network according to the first feature vectors and the training sample set, and determine a second neural network that meets the set precision condition.
In this embodiment, to obtain the target optimization network of the first neural network, the second neural network must be trained; this embodiment trains the second neural network on the training sample set selected above.
In this embodiment, the second neural network can be trained in two ways. In overview, the training process is as follows. The weight parameter values of the connecting nodes between adjacent layers of the second neural network are adjusted continuously. First, in the first training mode, the training sample set is processed by the second neural network with the current weight parameter values, and the resulting second feature vector of each training sample in the set is taken as the processing result of the first training mode. Then, in the second training mode, an arbitrary pair of training samples is chosen from the training sample set; if the two samples of the pair contain the same information, the labels of the pair are set to one label value, and if the information they contain differs, the labels of the pair are set to another label value. The second feature vectors corresponding to this pair are then taken from the processing result of the first training mode, and the label value together with the pair's second feature vectors is taken as the processing result of the second training mode.
In this embodiment, if the processing results of the first and second training modes meet the set optimization condition, the corresponding weight parameter values at the time those results were obtained can be determined as target weight parameters, and the second neural network with those target weight parameters can be determined as one candidate neural network of the first neural network. Finally, the second neural network that meets the set precision condition can be determined from among the candidate neural networks and used as the target optimization network of the first neural network.
In this embodiment, the set optimization condition above is the minimum value of an optimization function. The variables of this optimization function are the continuously adjusted weight parameter values of the connecting nodes between adjacent layers, and the optimization function value corresponding to a set of weight parameter values is determined mainly from the processing results of the first and second training modes. Specifically, the weight parameter values corresponding to the minimum optimization function value can be determined as the target weight parameter values.
S104: determine the second neural network that meets the set precision condition as the target neural network of the first neural network.
In this embodiment, the first neural network can be regarded as a large-scale neural network to be optimized that occupies substantial memory and running space. Therefore, once the second neural network has been trained to meet the set precision condition, it can be regarded as the target optimization network of the first neural network, and can thereafter replace the first neural network in practical applications, thereby saving memory and running space to a greater degree.
With the neural network optimization method provided by Embodiment 1 of the present invention, a newly constructed small-scale neural network can, after training and learning under the optimization conditions, be directly determined as the target optimization network of the neural network to be optimized, thereby optimizing that network. When feature recognition is then performed with the optimized neural network, recognition is faster, recognition time is shorter, and less storage, running memory, and video memory are occupied.
Embodiment 2
Fig. 2 is a flow diagram of a neural network optimization method provided by Embodiment 2 of the present invention. This embodiment is optimized on the basis of the above embodiment. In this embodiment, "training the second neural network according to the first feature vectors and the training sample set, and determining a second neural network that meets the set precision condition" is further refined as: initializing the parameter value of a feature extraction parameter; determining, according to the parameter value, the first feature vectors, the training sample set, and a set optimization function, the target weight parameter values of the connecting nodes between adjacent layers of the second neural network, and determining the second neural network with those target weight parameters as a candidate neural network; updating the parameter value of the feature extraction parameter according to a set rule; if the parameter value does not meet the loop end condition, taking the candidate neural network as a new second neural network and returning to the target-weight-parameter determination operation; otherwise, determining the candidate neural network as the second neural network that meets the set precision condition.
On the basis of the above, "determining, according to the parameter value, the first feature vectors, the training sample set, and the set optimization function, the target weight parameter values of the connecting nodes between adjacent layers of the second neural network" is further embodied as: initializing a preset iteration value, and determining the current weight parameter values of the connecting nodes between adjacent layers of the second neural network; processing the training sample set with the second neural network having the current weight parameter values, and obtaining the current processing result; determining, according to the parameter value, the first feature vectors, the processing result, and the set optimization function, the optimization function value corresponding to the current weight parameter values, and depositing the current weight parameter values and the corresponding optimization function value into a candidate parameter set; if the iteration value does not meet the iteration end condition, incrementing the iteration value, adjusting the current weight parameter values by stochastic gradient descent, and returning to the processing operation on the training sample set; otherwise, determining the minimum optimization function value in the candidate parameter set, determining the current weight parameter values corresponding to that minimum as the target weight parameter values, and determining the second neural network with the target weight parameter values as the candidate neural network.
As shown in Fig. 2, the neural network optimization method provided by Embodiment 2 of the present invention specifically includes the following operations:
S201: acquire a first neural network that meets a set precision condition, and process a set training sample set with the first neural network to obtain a first feature vector for each training sample in the training sample set.
S202: construct a second neural network to be trained based on set network construction conditions, wherein the number of nodes and/or the number of network layers of the second neural network is smaller than that of the first neural network.
In this embodiment, steps S201 and S202 were described in detail in the above embodiment and are not repeated here. Steps S203 to S210 give the detailed process of training the second neural network on the basis of the first feature vectors and the training sample set. It should be understood that training the second neural network can in fact be regarded as the process of continuously adjusting the weight parameter values of the connecting nodes between adjacent layers of the second neural network until the optimization condition is reached.
S203: initialize the parameter value of the feature extraction parameter.
In this embodiment, the training of the second neural network can be summarized as two looping processes, with one outer loop containing one inner loop; the outer loop iterates mainly over the feature extraction parameter. Specifically, the feature extraction parameter can be used to adjust, in stages, the difficulty of training the second neural network on the training sample set. In general, the parameter value of the feature extraction parameter can be set manually, based on the historical experience of the technician.
S204: initialize a preset iteration value, and determine the current weight parameter values of the connecting nodes between adjacent layers of the second neural network.
In this embodiment, the inner loop iterates mainly over the preset iteration value. Specifically, the iteration value can be used to limit the number of times the current weight parameters are adjusted within one outer-loop round; it is generally an integer greater than 0. Preferably, the iteration value can be initialized to 1.
In this embodiment, before the inner loop runs, the current weight parameter values of the connecting nodes between adjacent layers of the second neural network must also be determined, so that the second neural network can process the training samples on the basis of those values. It should be understood that, for a newly constructed second neural network to be trained, the current weight parameter values of each pair of connected units in adjacent layers can be initially set to 0.
S205: process the training sample set with the second neural network having the current weight parameter values, and obtain the current processing result.
In this embodiment, when the training sample set is processed by the second neural network with the current weight parameter values, the current second feature vector of each training sample in the set can be obtained; at the same time, the label value of a selected pair of training samples and the second feature vectors corresponding to that pair under the current weight parameters can also be determined. It should be understood that the obtained processing result includes the current second feature vector of each training sample, the label value set for the pair of training samples, and the second feature vectors obtained for that pair.
Further, processing the training sample set with the second neural network having the current weight parameter values and obtaining the current processing result includes:
processing the training sample set with the second neural network having the current weight parameter values, and obtaining the current second feature vector of each training sample in the training sample set; acquiring a pair of training samples from the training sample set; if the pair of training samples contains identical information, taking a first definite value as the label value of each training sample in the pair, and otherwise taking a second definite value as the label value of each training sample in the pair; and obtaining the current second feature vector of each training sample in the pair.
In this embodiment, the above operations specifically describe the process of obtaining the current second feature vector of each training sample, as well as the process of determining the label value of a pair of training samples.
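The pair-selection and labelling rule can be sketched as follows. The patent leaves the two definite values open; the choice of 1 and 0, and the use of a class id to decide whether two samples "contain identical information", are assumptions made for this example.

```python
import random

SAME_LABEL = 1    # assumed "first definite value"
DIFF_LABEL = 0    # assumed "second definite value"

def label_pair(sample_a, sample_b):
    """Label a pair of training samples: SAME_LABEL if they contain
    identical information (here: the same class id), else DIFF_LABEL."""
    if sample_a["class_id"] == sample_b["class_id"]:
        return SAME_LABEL
    return DIFF_LABEL

def draw_labelled_pair(training_set, rng=random):
    """Choose any pair from the training sample set and label it."""
    a, b = rng.sample(training_set, 2)
    return a, b, label_pair(a, b)

samples = [{"id": 0, "class_id": 7}, {"id": 1, "class_id": 7},
           {"id": 2, "class_id": 3}]
```

In a face recognition setting, "identical information" would correspond to two images of the same person.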
S206: determine, according to the parameter value, the first feature vectors, the processing result, and the set optimization function, the optimization function value corresponding to the current weight parameter values, and deposit the current weight parameter values and the corresponding optimization function value into the candidate parameter set.
In this embodiment, the optimization function is a function whose variables are the current weight parameter values. When the current weight parameter values change, the processing result obtained in step S205 changes accordingly, and the optimization function value is determined by the parameter value, the first feature vectors, and the processing result. The optimization function value corresponding to the current weight parameter values can therefore be determined in step S206. To facilitate the subsequent determination of the minimum optimization function value, the determined value and the corresponding current weight parameter values are also deposited into the set candidate parameter set.
Further, the optimization function is set as op(g) = η·loss1(g) + (1−η)·loss2(g), where op(g) denotes the optimization function value corresponding to the current weight parameter values g, η is an empirical parameter, loss1(g) denotes the first loss function value corresponding to g, and loss2(g) denotes the second loss function value corresponding to g. The first loss function is set as loss1(g) = Σ_i ||h_i(g) − f_i*||, where i ≥ 1 indexes the i-th training sample in the training sample set; ||h_i(g) − f_i*|| denotes the norm of the difference between h_i(g) and f_i*; h_i(g) denotes the second feature vector of the i-th training sample under the current weight parameter values g; and f_i* denotes the extraction feature vector of the i-th training sample, determined from the first feature vector of the i-th training sample and the feature extraction parameter. The second loss function is set over the selected pair of training samples in terms of the following quantities: j takes the value 1 or 2 and indexes either training sample of the pair; y denotes the label value of the training samples in the pair; d_j = ||h_j(g) − f_j*|| denotes the norm of the difference between h_j(g) and f_j*, where h_j(g) denotes the second feature vector of the j-th training sample under the current weight parameter values g and f_j* denotes the extraction feature vector of the j-th training sample; and α is an empirical parameter.
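The optimization function can be written out directly. In the sketch below, loss1 follows the definition above; the exact expression for loss2 does not survive in this text, so a standard contrastive form built from the quantities that are defined (the label y, the distances d_j = ||h_j(g) − f_j*||, and the margin α) is assumed.

```python
import numpy as np

def loss1(H, F_star):
    """First loss: sum over training samples of ||h_i(g) - f_i*||."""
    return sum(np.linalg.norm(h - f) for h, f in zip(H, F_star))

def loss2(H_pair, F_pair, y, alpha=1.0):
    """Second loss, assumed contrastive form: y * d_j pulls a matching
    pair's features onto their extraction targets, while
    (1 - y) * max(alpha - d_j, 0) pushes a non-matching pair's features
    at least alpha away from them."""
    d = [np.linalg.norm(h - f) for h, f in zip(H_pair, F_pair)]
    return sum(y * dj + (1 - y) * max(alpha - dj, 0.0) for dj in d)

def op(H, F_star, H_pair, F_pair, y, eta=0.5, alpha=1.0):
    """op(g) = eta * loss1(g) + (1 - eta) * loss2(g)."""
    return eta * loss1(H, F_star) + (1 - eta) * loss2(H_pair, F_pair, y, alpha)
```

Here H and F_star are the lists of second feature vectors and extraction targets over the whole training set, and H_pair, F_pair are the corresponding entries for the selected pair.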
In this embodiment, when the optimization function value is determined from the above optimization function, the focus is the determination of the extraction feature vector. It should be understood that the extraction feature vector is determined mainly from the first feature vector and the current parameter value t of the feature extraction parameter in the outer loop.
Specifically, because the multiple feature vector values contained in a first feature vector, obtained by processing a training sample with the first neural network, fluctuate over a large range, the first feature vector value f_i of training sample i first needs to be normalized to obtain f_i' (0 &lt; f_i' &lt; 1). The current extraction feature vector f_i* of training sample i can then be determined as the t-th power of f_i', that is, f_i* = (f_i')^t. For example, if the parameter value t is currently equal to 10, then the extraction feature vector is f_i* = (f_i')^10. It should be understood that because f_i' takes values greater than 0 and less than 1, the value of f_i* will be smaller than f_i'. The second feature vector obtained when the second neural network processes training sample i therefore reaches f_i* easily, which in turn makes it easy to determine, on the basis of the above optimization function, the candidate neural network for the current parameter value t.
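A small numeric illustration of f_i* = (f_i')^t follows. The normalization scheme is not fixed by the text, so min-max scaling with a small clipping margin is an assumption made for the example.

```python
import numpy as np

f = np.array([0.8, 2.5, -1.3])        # example raw first feature vector

# Normalise into (0, 1); min-max scaling with clipping is an assumed scheme.
eps = 1e-3
f_norm = np.clip((f - f.min()) / (f.max() - f.min()), eps, 1 - eps)

def extraction_target(f_norm, t):
    """f* = (f')^t: an easy (near-zero) target for large t, the
    normalised feature itself at t = 1."""
    return f_norm ** t

easy_target = extraction_target(f_norm, 10)   # elementwise smaller than f_norm
final_target = extraction_target(f_norm, 1)   # equals f_norm
```

Because every component of f_norm lies strictly between 0 and 1, raising it to a power t ≥ 1 only shrinks it, which is exactly the property the paragraph above relies on.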
S207: judge whether the iteration value meets the iteration end condition; if not, perform step S208; if so, perform step S209.
In this embodiment, step S207 is the loop judgment of the inner loop, realized mainly by testing the iteration value. For example, if the set iteration end condition is that the iteration value equals 8, the inner loop can be ended when the iteration value reaches 8 and step S209 performed; otherwise step S208 must be performed.
S208: increment the iteration value, adjust the current weight parameter values by stochastic gradient descent, and return to step S205.
In this embodiment, when the iteration value does not meet the end condition of the inner loop, the current iteration value must be incremented, and the current weight parameter values are then adjusted, according to the stochastic gradient descent method commonly used in neural network training, to serve as the new current weight parameter values; step S205 must then be performed again so that the second neural network with the new current weight parameter values determines the processing result of the training sample set.
S209: determine the minimum optimization function value in the candidate parameter set, determine the current weight parameter values corresponding to the minimum optimization function value as the target weight parameter values, and determine the second neural network with the target weight parameter values as the candidate neural network.
In this embodiment, when the iteration value reaches the end condition of the inner loop, the inner loop can be terminated; the minimum optimization function value is then determined in the candidate parameter set, thereby determining the target weight parameter values, and hence the candidate neural network, corresponding to the current parameter value of the feature extraction parameter.
S210: update the parameter value of the feature extraction parameter based on the set strategy; if the parameter value does not meet the loop termination condition, take the candidate neural network as the new second neural network and return to step S204; otherwise, determine the candidate neural network as a second neural network that meets the set accuracy condition.
In the present embodiment, step S210 serves as the loop criterion of the outer loop and is implemented mainly by judging whether the parameter value of the feature extraction parameter meets the termination condition of the outer loop. The present embodiment updates the parameter value of the feature extraction parameter based on the set strategy; preferably, the set rule is to decrement the parameter value, so the loop termination condition can preferably be set as the parameter value reaching 0.
In the present embodiment, if the parameter value does not meet the loop termination condition, the inner loop needs to be performed again; in this case, the candidate neural network determined at the end of the current inner loop is taken as the initial second neural network of the next inner loop, and execution returns to step S204 to start a new round of candidate neural network determination.
Specifically, if the parameter value meets the loop termination condition, the candidate neural network determined by the inner loop that has just ended can be determined as the second neural network that meets the set accuracy condition; meanwhile, the target weight parameter values determined during that inner loop are exactly the weight parameter values of the connecting nodes of adjacent layers in this second neural network.
It can be understood that, during the outer loop, the parameter value t is gradually reduced; accordingly, the extracted feature vector fi* of training sample i gradually grows as t decreases, until finally, when the parameter value t is 1, the extracted feature vector fi* equals the first feature vector of that training sample. It can thus be seen that the present embodiment is an incremental training process for the second neural network, which can finally be trained, like the first neural network, into a neural network that meets the set accuracy condition and can be used in practical applications.
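The relation between the parameter value t and the extracted feature vector fi* can be illustrated as follows. The text only states that fi* grows as t decreases and equals the first feature vector when t is 1; the concrete mapping fi* = fi / t used here is purely an assumption for illustration.

```python
import numpy as np

def extraction_target(first_feature, t):
    # Assumed mapping (not disclosed in the text): the extraction target
    # grows as the feature extraction parameter t shrinks, and equals the
    # first feature vector exactly when t == 1.
    return np.asarray(first_feature, dtype=float) / float(t)

f_i = np.array([4.0, 8.0])  # first feature vector of training sample i
targets = [extraction_target(f_i, t) for t in (4, 2, 1)]
# the norm of the target grows monotonically; at t == 1 it equals f_i
```

Under this assumption, each outer-loop decrement of t moves the training target of the second neural network a step closer to the full first feature vector, which is what makes the training incremental.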
S211: determine the second neural network that meets the set accuracy condition as the target neural network of the first neural network.
The neural network optimization method provided by Embodiment 2 of the present invention details the training process of the second neural network: with the set feature extraction parameter, the obtained first feature vectors, the training sample set and the set optimization function, iterative loops realize the training of the second neural network and thereby determine its target weight parameter values, so that the second neural network can meet the set accuracy condition and serve as the target optimized network of the first neural network. With this method, a newly constructed small-scale neural network can, after training, directly serve as the target optimized network of the neural network to be optimized; when feature recognition is subsequently performed based on the optimized network, the aims of accelerating recognition processing, shortening the recognition time, and reducing the occupation of storage, running memory, video memory and other space can be achieved. Moreover, compared with the existing approach of obtaining the target optimized network by deleting nodes from the network to be optimized, the method applies not only to the optimization of a single neural network to be optimized but also to the optimization of clusters composed of multiple neural network models, which greatly increases the optimization speed of neural networks and makes the method better suited to large-scale use.
Embodiment three
Fig. 3 is a structural block diagram of a neural network optimization device provided by Embodiment 3 of the present invention. The device is applicable to the situation of optimizing a neural network that, after training, has reached a set accuracy condition; the device may be implemented in software and/or hardware, and is typically integrated on the terminal device or server platform where the neural network model resides. As shown in Fig. 3, the device includes: an initial information acquisition module 31, a network model construction module 32, a network model optimization module 33 and a target network determining module 34.
Wherein, the initial information acquisition module 31 is configured to obtain a first neural network that meets a set accuracy condition, process a set training sample set based on the first neural network, and obtain a first feature vector of each training sample in the training sample set;
the network model construction module 32 is configured to construct, based on a set network construction condition, a second neural network to be trained, wherein the node number and/or the number of network layers of the second neural network is smaller than that of the first neural network;
the network model optimization module 33 is configured to train the second neural network according to the first feature vectors and the training sample set, and determine a second neural network that meets the set accuracy condition;
the target network determining module 34 is configured to determine the second neural network that meets the set accuracy condition as the target neural network of the first neural network.
In the present embodiment, the device obtains, through the initial information acquisition module 31, a first neural network that meets the set accuracy condition, processes the set training sample set based on the first neural network, and obtains the first feature vector of each training sample in the training sample set; the network model construction module 32 then constructs, based on the set network construction condition, a second neural network to be trained, wherein the node number and/or the number of network layers of the second neural network is smaller than that of the first neural network; afterwards the network model optimization module 33 trains the second neural network according to the first feature vectors and the training sample set and determines a second neural network that meets the set accuracy condition; finally, the target network determining module 34 determines the second neural network that meets the set accuracy condition as the target neural network of the first neural network.
With the neural network optimization device provided by Embodiment 3 of the present invention, a newly constructed small-scale neural network can, after being trained according to the optimization conditions, directly be determined as the target optimized network of the neural network to be optimized, thereby realizing the optimization of the network to be optimized; when feature recognition is subsequently performed based on the optimized neural network, the aims of accelerating recognition processing, shortening the recognition time, and reducing the occupation of storage, running memory, video memory and other space can be achieved.
Further, the network model optimization module 33 specifically includes:
a parameter initialization unit, configured to initialize a parameter value of a feature extraction parameter; a candidate network determining unit, configured to determine, according to the parameter value, the first feature vectors, the training sample set and a set optimization function, the target weight parameter values of the connecting nodes of adjacent layers in the second neural network, and to determine the second neural network with the target weight parameter values as a candidate neural network; and a loop optimization unit, configured to update the parameter value of the feature extraction parameter based on the set strategy, to take the candidate neural network as a new second neural network and return to the candidate network determining unit when the parameter value does not meet the loop termination condition, and otherwise to determine the candidate neural network as the second neural network that meets the set accuracy condition.
The candidate network determining unit specifically includes:
an initialization subelement, configured to initialize a preset iteration value and determine the current weight parameter values of the connecting nodes of adjacent layers in the second neural network;
a sample processing subelement, configured to process the training sample set according to the second neural network with the current weight parameter values, and obtain a current processing result;
a numerical determination subelement, configured to determine, according to the parameter value, the first feature vectors, the processing result and the set optimization function, the optimization function value corresponding to the current weight parameter values, and to store the current weight parameter values and the corresponding optimization function value in the candidate parameter set;
an iteration criterion subelement, configured to, when the iteration value does not meet the iteration termination condition, increment the iteration value, adjust the current weight parameter values based on stochastic gradient descent, and return to the processing operation on the training sample set; and otherwise to determine the minimum optimization function value in the candidate parameter set, determine the current weight parameter values corresponding to the minimum optimization function value as the target weight parameter values, and determine the second neural network with the target weight parameter values as the candidate neural network.
On the basis of the above embodiment, the sample processing subelement is specifically configured to:
process the training sample set according to the second neural network with the current weight parameter values, and obtain the current second feature vector of each training sample in the training sample set; and
obtain a pair of training samples from the training sample set; if the pair of training samples contains identical information, take a first set value as the label value of each training sample in the pair of training samples; otherwise, take a second set value as the label value of each training sample in the pair of training samples; and obtain the current second feature vector of each training sample in the pair of training samples.
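The pair-labelling rule above can be sketched as a one-line predicate. The concrete set values 1 and 0, and the use of an identity string as the "identical information", are assumptions; the text only speaks of a first and a second set value.

```python
def pair_label(info_a, info_b, first_value=1, second_value=0):
    # First set value when the two samples of the pair carry identical
    # information (e.g. the same identity), second set value otherwise.
    return first_value if info_a == info_b else second_value

labels = [pair_label("idA", "idA"), pair_label("idA", "idB")]
```

The resulting label value y of the pair is what the second loss function consumes below.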
Further, the optimization function is set as:
op(g) = η·loss1(g) + (1−η)·loss2(g), wherein op(g) represents the optimization function value corresponding to current weight parameter values g; η is an empirical parameter; loss1(g) represents the first loss function value corresponding to g; and loss2(g) represents the second loss function value corresponding to g. The first loss function is set as loss1(g) = Σi ||hi(g) − fi*||, wherein i ≥ 1 indexes the i-th training sample in the training sample set; ||hi(g) − fi*|| represents the norm of the difference between hi(g) and fi*; hi(g) represents the second feature vector of the i-th training sample under current weight parameter values g; and fi* represents the extracted feature vector of the i-th training sample, determined based on the first feature vector of the i-th training sample and the feature extraction parameter. The second loss function is set as loss2(g) = Σj [y·dj² + (1−y)·max(α − dj, 0)²], wherein j takes 1 or 2 and denotes either training sample in the pair of training samples; y represents the label value of each training sample in the pair of training samples; dj = ||hj(g) − fj*|| represents the norm of the difference between hj(g) and fj*; hj(g) represents the second feature vector of the j-th training sample under current weight parameter values g; fj* represents the extracted feature vector of the j-th training sample; and α is an empirical parameter.
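A numeric sketch of the optimization function op(g) = η·loss1(g) + (1−η)·loss2(g) may help. loss1 sums the norms ||hi(g) − fi*|| over the training samples; for loss2, whose closed form is not fully legible in the translated text, the standard contrastive form over a sample pair is assumed, as are the example values of η and α.

```python
import numpy as np

def loss1(h, f_star):
    # Sum of norms ||h_i(g) - f_i*|| over all training samples.
    return sum(np.linalg.norm(np.subtract(hi, fi)) for hi, fi in zip(h, f_star))

def loss2(h_pair, f_pair, y, alpha=1.0):
    # Assumed contrastive form: similar pairs (y = 1) pull d_j toward 0,
    # dissimilar pairs (y = 0) push d_j beyond the margin alpha.
    total = 0.0
    for hj, fj in zip(h_pair, f_pair):
        d = np.linalg.norm(np.subtract(hj, fj))
        total += y * d ** 2 + (1 - y) * max(alpha - d, 0.0) ** 2
    return total

def op(h, f_star, h_pair, f_pair, y, eta=0.5, alpha=1.0):
    # Weighted combination of the two loss terms.
    return eta * loss1(h, f_star) + (1 - eta) * loss2(h_pair, f_pair, y, alpha)

value = op(h=[[1.0, 0.0]], f_star=[[0.0, 0.0]],
           h_pair=[[1.0, 0.0], [0.0, 0.0]], f_pair=[[0.0, 0.0], [0.0, 0.0]], y=1)
```

Here loss1 = 1 and loss2 = 1, so with η = 0.5 the combined value is 1.0; the inner loop stores such values per candidate weight setting and keeps the minimum.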
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the invention is not limited to the specific embodiments described here, and that various obvious changes, readjustments and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, it is not limited to them; without departing from the inventive concept, it may also include more other equivalent embodiments, and its scope is determined by the scope of the appended claims.

Claims (10)

1. A neural network optimization method, characterized by comprising:
obtaining a first neural network that meets a set accuracy condition, processing a set training sample set based on the first neural network, and obtaining a first feature vector of each training sample in the training sample set;
constructing, based on a set network construction condition, a second neural network to be trained, wherein the node number and/or the number of network layers of the second neural network is smaller than that of the first neural network;
training the second neural network according to the first feature vectors and the training sample set, and determining a second neural network that meets the set accuracy condition;
determining the second neural network that meets the set accuracy condition as a target neural network of the first neural network.
2. The method according to claim 1, characterized in that training the second neural network according to the first feature vectors and the training sample set and determining a second neural network that meets the set accuracy condition comprises:
initializing a parameter value of a feature extraction parameter;
determining, according to the parameter value, the first feature vectors, the training sample set and a set optimization function, target weight parameter values of connecting nodes of adjacent layers in the second neural network, and determining the second neural network with the target weight parameter values as a candidate neural network;
updating the parameter value of the feature extraction parameter based on a set strategy; if the parameter value does not meet a loop termination condition, taking the candidate neural network as a new second neural network and returning to the determination of the target weight parameter values; otherwise, determining the candidate neural network as the second neural network that meets the set accuracy condition.
3. The method according to claim 2, characterized in that determining, according to the parameter value, the first feature vectors, the training sample set and the set optimization function, the target weight parameter values of the connecting nodes of adjacent layers in the second neural network comprises:
initializing a preset iteration value, and determining current weight parameter values of the connecting nodes of adjacent layers in the second neural network;
processing the training sample set according to the second neural network with the current weight parameter values, and obtaining a current processing result;
determining, according to the parameter value, the first feature vectors, the processing result and the set optimization function, an optimization function value corresponding to the current weight parameter values, and storing the current weight parameter values and the corresponding optimization function value in a candidate parameter set;
if the iteration value does not meet an iteration termination condition, incrementing the iteration value, adjusting the current weight parameter values based on stochastic gradient descent, and returning to the processing operation on the training sample set; otherwise, determining the minimum optimization function value in the candidate parameter set, determining the current weight parameter values corresponding to the minimum optimization function value as the target weight parameter values, and determining the second neural network with the target weight parameter values as the candidate neural network.
4. The method according to claim 3, characterized in that processing the training sample set according to the second neural network with the current weight parameter values and obtaining the current processing result comprises:
processing the training sample set according to the second neural network with the current weight parameter values, and obtaining a current second feature vector of each training sample in the training sample set; and
obtaining a pair of training samples from the training sample set; if the pair of training samples contains identical information, taking a first set value as the label value of each training sample in the pair of training samples; otherwise, taking a second set value as the label value of each training sample in the pair of training samples;
obtaining the current second feature vector of each training sample in the pair of training samples.
5. The method according to claim 4, characterized in that the optimization function is set as:
op(g) = η·loss1(g) + (1−η)·loss2(g), wherein op(g) represents the optimization function value corresponding to current weight parameter values g; η is an empirical parameter; loss1(g) represents the first loss function value corresponding to g; and loss2(g) represents the second loss function value corresponding to g;
the first loss function is set as loss1(g) = Σi ||hi(g) − fi*||, wherein i ≥ 1 indexes the i-th training sample in the training sample set; ||hi(g) − fi*|| represents the norm of the difference between hi(g) and fi*; hi(g) represents the second feature vector of the i-th training sample under current weight parameter values g; and fi* represents the extracted feature vector of the i-th training sample, determined based on the first feature vector of the i-th training sample and the feature extraction parameter;
the second loss function is set as loss2(g) = Σj [y·dj² + (1−y)·max(α − dj, 0)²], wherein j takes 1 or 2 and denotes either training sample in the pair of training samples; y represents the label value of each training sample in the pair of training samples; dj = ||hj(g) − fj*|| represents the norm of the difference between hj(g) and fj*; hj(g) represents the second feature vector of the j-th training sample under current weight parameter values g; fj* represents the extracted feature vector of the j-th training sample; and α is an empirical parameter.
6. A neural network optimization device, characterized by comprising:
an initial information acquisition module, configured to obtain a first neural network that meets a set accuracy condition, process a set training sample set based on the first neural network, and obtain a first feature vector of each training sample in the training sample set;
a network model construction module, configured to construct, based on a set network construction condition, a second neural network to be trained, wherein the node number and/or the number of network layers of the second neural network is smaller than that of the first neural network;
a network model optimization module, configured to train the second neural network according to the first feature vectors and the training sample set, and determine a second neural network that meets the set accuracy condition;
a target network determining module, configured to determine the second neural network that meets the set accuracy condition as a target neural network of the first neural network.
7. The device according to claim 6, characterized in that the network model optimization module specifically includes:
a parameter initialization unit, configured to initialize a parameter value of a feature extraction parameter;
a candidate network determining unit, configured to determine, according to the parameter value, the first feature vectors, the training sample set and a set optimization function, target weight parameter values of connecting nodes of adjacent layers in the second neural network, and to determine the second neural network with the target weight parameter values as a candidate neural network;
a loop optimization unit, configured to update the parameter value of the feature extraction parameter based on a set strategy, to take the candidate neural network as a new second neural network and return to the candidate network determining unit when the parameter value does not meet a loop termination condition, and otherwise to determine the candidate neural network as the second neural network that meets the set accuracy condition.
8. The device according to claim 7, characterized in that the candidate network determining unit specifically includes:
an initialization subelement, configured to initialize a preset iteration value and determine current weight parameter values of the connecting nodes of adjacent layers in the second neural network;
a sample processing subelement, configured to process the training sample set according to the second neural network with the current weight parameter values, and obtain a current processing result;
a numerical determination subelement, configured to determine, according to the parameter value, the first feature vectors, the processing result and the set optimization function, the optimization function value corresponding to the current weight parameter values, and to store the current weight parameter values and the corresponding optimization function value in a candidate parameter set;
an iteration criterion subelement, configured to, when the iteration value does not meet an iteration termination condition, increment the iteration value, adjust the current weight parameter values based on stochastic gradient descent, and return to the processing operation on the training sample set; and otherwise,
to determine the minimum optimization function value in the candidate parameter set, determine the current weight parameter values corresponding to the minimum optimization function value as the target weight parameter values, and determine the second neural network with the target weight parameter values as the candidate neural network.
9. The device according to claim 8, characterized in that the sample processing subelement is specifically configured to:
process the training sample set according to the second neural network with the current weight parameter values, and obtain a current second feature vector of each training sample in the training sample set; and
obtain a pair of training samples from the training sample set; if the pair of training samples contains identical information, take a first set value as the label value of each training sample in the pair of training samples; otherwise, take a second set value as the label value of each training sample in the pair of training samples;
obtain the current second feature vector of each training sample in the pair of training samples.
10. The device according to claim 9, characterized in that the optimization function is set as:
op(g) = η·loss1(g) + (1−η)·loss2(g), wherein op(g) represents the optimization function value corresponding to current weight parameter values g; η is an empirical parameter; loss1(g) represents the first loss function value corresponding to g; and loss2(g) represents the second loss function value corresponding to g;
the first loss function is set as loss1(g) = Σi ||hi(g) − fi*||, wherein i ≥ 1 indexes the i-th training sample in the training sample set; ||hi(g) − fi*|| represents the norm of the difference between hi(g) and fi*; hi(g) represents the second feature vector of the i-th training sample under current weight parameter values g; and fi* represents the extracted feature vector of the i-th training sample, determined based on the first feature vector of the i-th training sample and the feature extraction parameter;
the second loss function is set as loss2(g) = Σj [y·dj² + (1−y)·max(α − dj, 0)²], wherein j takes 1 or 2 and denotes either training sample in the pair of training samples; y represents the label value of each training sample in the pair of training samples; dj = ||hj(g) − fj*|| represents the norm of the difference between hj(g) and fj*; hj(g) represents the second feature vector of the j-th training sample under current weight parameter values g; fj* represents the extracted feature vector of the j-th training sample; and α is an empirical parameter.
CN201611022209.3A 2016-11-16 2016-11-16 Neural network optimization method and device Pending CN106709565A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611022209.3A CN106709565A (en) 2016-11-16 2016-11-16 Neural network optimization method and device

Publications (1)

Publication Number Publication Date
CN106709565A true CN106709565A (en) 2017-05-24

Family

ID=58940987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611022209.3A Pending CN106709565A (en) 2016-11-16 2016-11-16 Neural network optimization method and device

Country Status (1)

Country Link
CN (1) CN106709565A (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108108811A (en) * 2017-12-18 2018-06-01 北京地平线信息技术有限公司 Convolutional calculation method and electronic equipment in neutral net
CN108229652A (en) * 2017-11-28 2018-06-29 北京市商汤科技开发有限公司 Neural network model moving method and system, electronic equipment, program and medium
CN108268936A (en) * 2018-01-17 2018-07-10 百度在线网络技术(北京)有限公司 For storing the method and apparatus of convolutional neural networks
CN108288090A (en) * 2018-01-08 2018-07-17 福州瑞芯微电子股份有限公司 A kind of optimization method and device of parallel Competitive ANN chip
CN108776834A (en) * 2018-05-07 2018-11-09 上海商汤智能科技有限公司 System enhances learning method and device, electronic equipment, computer storage media
CN109032630A (en) * 2018-06-29 2018-12-18 电子科技大学 The update method of global parameter in a kind of parameter server
CN109165738A (en) * 2018-09-19 2019-01-08 北京市商汤科技开发有限公司 Optimization method and device, electronic equipment and the storage medium of neural network model
CN109389216A (en) * 2017-08-03 2019-02-26 珠海全志科技股份有限公司 The dynamic tailor method, apparatus and storage medium of neural network
CN109949304A (en) * 2018-03-29 2019-06-28 北京昆仑医云科技有限公司 The training and acquisition methods of image detection learning network, image detection device and medium
CN109993300A (en) * 2017-12-29 2019-07-09 华为技术有限公司 A kind of training method and device of neural network model
WO2020062262A1 (en) * 2018-09-30 2020-04-02 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for generating a neural network model for image processing
WO2020062004A1 (en) * 2018-09-28 2020-04-02 深圳百诺国际生命科技有限公司 Neural network and training method therefor
CN111079691A (en) * 2019-12-27 2020-04-28 中国科学院重庆绿色智能技术研究院 Pruning method based on double-flow network
CN111407279A (en) * 2019-01-07 2020-07-14 四川锦江电子科技有限公司 Magnetoelectricity combined positioning and tracking method and device based on neural network
CN112149797A (en) * 2020-08-18 2020-12-29 Oppo(重庆)智能科技有限公司 Neural network structure optimization method and device and electronic equipment
CN112446462A (en) * 2019-08-30 2021-03-05 华为技术有限公司 Generation method and device of target neural network model
CN113189879A (en) * 2021-05-10 2021-07-30 中国科学技术大学 Control strategy determination method and device, storage medium and electronic equipment
WO2022100607A1 (en) * 2020-11-13 2022-05-19 华为技术有限公司 Method for determining neural network structure and apparatus thereof
WO2022120741A1 (en) * 2020-12-10 2022-06-16 Baidu.Com Times Technology (Beijing) Co., Ltd. Training of deployed neural networks
WO2023040740A1 (en) * 2021-09-18 2023-03-23 华为技术有限公司 Method for optimizing neural network model, and related device

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109389216A (en) * 2017-08-03 2019-02-26 珠海全志科技股份有限公司 The dynamic tailor method, apparatus and storage medium of neural network
CN108229652A (en) * 2017-11-28 2018-06-29 北京市商汤科技开发有限公司 Neural network model moving method and system, electronic equipment, program and medium
CN108229652B (en) * 2017-11-28 2021-05-04 北京市商汤科技开发有限公司 Neural network model migration method and system, electronic device, program, and medium
CN108108811A (en) * 2017-12-18 2018-06-01 北京地平线信息技术有限公司 Convolutional calculation method and electronic equipment in neutral net
US11966844B2 (en) 2017-12-29 2024-04-23 Huawei Technologies Co., Ltd. Method for training neural network model and apparatus
CN109993300B (en) * 2017-12-29 2021-01-29 华为技术有限公司 Training method and device of neural network model
US11521012B2 (en) 2017-12-29 2022-12-06 Huawei Technologies Co., Ltd. Method for training neural network model and apparatus
CN109993300A (en) * 2017-12-29 2019-07-09 华为技术有限公司 A kind of training method and device of neural network model
CN108288090A (en) * 2018-01-08 2018-07-17 福州瑞芯微电子股份有限公司 A kind of optimization method and device of parallel Competitive ANN chip
CN108268936A (en) * 2018-01-17 2018-07-10 百度在线网络技术(北京)有限公司 For storing the method and apparatus of convolutional neural networks
CN108268936B (en) * 2018-01-17 2022-10-28 百度在线网络技术(北京)有限公司 Method and apparatus for storing convolutional neural networks
CN109949304A (en) * 2018-03-29 2019-06-28 北京昆仑医云科技有限公司 The training and acquisition methods of image detection learning network, image detection device and medium
CN109949304B (en) * 2018-03-29 2021-08-10 科亚医疗科技股份有限公司 Training and acquiring method of image detection learning network, image detection device and medium
US11669711B2 (en) 2018-05-07 2023-06-06 Shanghai Sensetime Intelligent Technology Co., Ltd System reinforcement learning method and apparatus, and computer storage medium
CN108776834B (en) * 2018-05-07 2021-08-06 上海商汤智能科技有限公司 System reinforcement learning method and device, electronic equipment and computer storage medium
CN108776834A (en) * 2018-05-07 2018-11-09 上海商汤智能科技有限公司 System enhances learning method and device, electronic equipment, computer storage media
CN109032630B (en) * 2018-06-29 2021-05-14 电子科技大学 Method for updating global parameters in parameter server
CN109032630A (en) * 2018-06-29 2018-12-18 电子科技大学 The update method of global parameter in a kind of parameter server
CN109165738A (en) * 2018-09-19 2019-01-08 北京市商汤科技开发有限公司 Optimization method and device, electronic equipment and the storage medium of neural network model
WO2020062004A1 (en) * 2018-09-28 2020-04-02 深圳百诺国际生命科技有限公司 Neural network and training method therefor
US11907852B2 (en) 2018-09-30 2024-02-20 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for generating a neural network model for image processing
WO2020062262A1 (en) * 2018-09-30 2020-04-02 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for generating a neural network model for image processing
US11599796B2 (en) 2018-09-30 2023-03-07 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for generating a neural network model for image processing
CN111407279A (en) * 2019-01-07 2020-07-14 四川锦江电子科技有限公司 Magnetoelectricity combined positioning and tracking method and device based on neural network
CN112446462A (en) * 2019-08-30 2021-03-05 华为技术有限公司 Generation method and device of target neural network model
CN112446462B (en) * 2019-08-30 2024-06-18 华为技术有限公司 Method and device for generating target neural network model
CN111079691A (en) * 2019-12-27 2020-04-28 中国科学院重庆绿色智能技术研究院 Pruning method based on double-flow network
CN112149797A (en) * 2020-08-18 2020-12-29 Oppo(重庆)智能科技有限公司 Neural network structure optimization method and device and electronic equipment
WO2022100607A1 (en) * 2020-11-13 2022-05-19 华为技术有限公司 Method for determining neural network structure and apparatus thereof
WO2022120741A1 (en) * 2020-12-10 2022-06-16 Baidu.Com Times Technology (Beijing) Co., Ltd. Training of deployed neural networks
CN113189879B (en) * 2021-05-10 2022-07-15 中国科学技术大学 Control strategy determination method and device, storage medium and electronic equipment
CN113189879A (en) * 2021-05-10 2021-07-30 中国科学技术大学 Control strategy determination method and device, storage medium and electronic equipment
WO2023040740A1 (en) * 2021-09-18 2023-03-23 华为技术有限公司 Method for optimizing neural network model, and related device

Similar Documents

Publication Publication Date Title
CN106709565A (en) Neural network optimization method and device
US20210065058A1 (en) Method, apparatus, device and readable medium for transfer learning in machine learning
CN111259738B (en) Face recognition model construction method, face recognition method and related device
Valdez et al. Modular neural networks architecture optimization with a new nature inspired method using a fuzzy combination of particle swarm optimization and genetic algorithms
CN104751842B (en) Optimization method and system for deep neural networks
CN109902546A (en) Face identification method, device and computer-readable medium
CN106779068A (en) Method and apparatus for adjusting an artificial neural network
CN107995428A (en) Image processing method and device, storage medium, and mobile terminal
CN108304489A (en) Goal-directed personalized dialogue method and system based on a reinforcement learning network
CN109840477A (en) Occluded face recognition method and device based on feature transformation
CN111598213B (en) Network training method, data identification method, device, equipment and medium
JP6908302B2 (en) Learning device, identification device and program
EP3502978A1 (en) Meta-learning system
CN111612125A (en) Novel HTM time pool method and system for online learning
CN110135582A (en) Neural network training and image processing method and device, and storage medium
US12124964B2 (en) Method for updating a node model that resists discrimination propagation in federated learning
CN115249315B (en) Heterogeneous computing device-oriented deep learning image classification method and device
CN106897744A (en) Method and system for adaptively setting deep belief network parameters
CN110457470A (en) Text classification model learning method and device
CN106991999A (en) Speech recognition method and device
CN112272074A (en) Information transmission rate control method and system based on neural network
CN107798384B (en) Iris florida classification method and device based on an evolvable spiking neural network
CN112667912B (en) Task volume prediction method for edge servers
CN112686306B (en) ICD operation classification automatic matching method and system based on graph neural network
CN111445024B (en) Medical image recognition training method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170524