CN106875002A - Complex-valued neural network training method based on gradient descent and generalized inverse - Google Patents

Complex-valued neural network training method based on gradient descent and generalized inverse

Info

Publication number
CN106875002A
CN106875002A
Authority
CN
China
Prior art keywords
hidden layer
complex value
sample
output
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710091587.5A
Other languages
Chinese (zh)
Inventor
桑兆阳
刘芹
龚晓玲
张华清
陈华
王健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Petroleum East China
Original Assignee
China University of Petroleum East China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Petroleum East China filed Critical China University of Petroleum East China
Priority to CN201710091587.5A priority Critical patent/CN106875002A/en
Publication of CN106875002A publication Critical patent/CN106875002A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Complex Calculations (AREA)

Abstract

The present invention relates to a complex-valued neural network training method based on gradient descent and generalized inverse. Step 1: select a single-hidden-layer complex-valued neural network model. Step 2: compute the weight matrix and the weight vector of the single-hidden-layer complex-valued neural network using gradient descent and the generalized inverse. Step 3: obtain the network parameters of the complex-valued neural network from the weight matrix and the weight vector, and compute the mean square error; add 1 to the iteration count and return to Step 2. In the invention, the hidden-layer input weights are produced iteratively by gradient descent, while the output weights are always solved by the generalized inverse. The method needs few iterations, a correspondingly short training time, fast convergence and high learning efficiency, and requires few hidden nodes. The present invention can therefore reflect the performance of the complex-valued neural network model more accurately than the BSCBP and CELM methods.

Description

Complex-valued neural network training method based on gradient descent and generalized inverse
Technical field
The invention belongs to the technical fields of image processing, pattern recognition and communications, and in particular relates to a complex-valued neural network training method based on gradient descent and generalized inverse.
Background technology
In fields such as image processing, pattern recognition and communications, sample training and testing with neural network modeling methods are widely used. When a neural network model is trained on samples, the signals of the neural network (input signals, output signals and weight parameters) may be real-valued or complex-valued, so neural networks are divided into real-valued neural networks and complex-valued neural networks. Most existing neural network modeling methods build real-valued neural network models, but with the rapid development of electronic information science, complex-valued signals appear increasingly often in engineering practice; considering only real-valued computation cannot solve such practical problems well, whereas complex-valued neural networks can solve problems that real-valued neural networks cannot. A complex-valued neural network is a neural network that processes complex information through complex parameters and variables (that is, the inputs, outputs and network weights are complex numbers). Accordingly, a series of complex-valued neural network models have been proposed and studied in depth.
The document "Batch Split-Complex Backpropagation Algorithm" proposed the BSCBP method for training complex-valued neural networks. A split real-imaginary activation function is chosen, activating the real part and the imaginary part of the hidden-layer input separately, which avoids the appearance of singular points. The BSCBP method first assigns random values to the input weight matrix and the output weight matrix, then updates them by gradient descent, and finally computes the accuracy on the test samples. However, the gradient-descent-based BSCBP model needs many iterations to train, so it consumes a long time and its learning efficiency is relatively low.
The document "Fully Complex Extreme Learning Machine" proposed the CELM method, which extends the ELM method from the real domain to the complex domain and applies it to nonlinear channel equalization. CELM only needs a suitable number of hidden nodes: random values are assigned to the input weights of the network, and the optimal solution of the output-layer weights is obtained by least squares. The activation function can be a sigmoid function, an (inverse) trigonometric function or an (inverse) hyperbolic function; unlike BSCBP, the activation function is applied directly to the complex input matrix of the hidden layer. The whole process is completed in one pass without iteration, so parameter selection is easy and the learning speed is extremely fast. However, to compensate for the randomness of the hidden-node parameters, the CELM method usually requires a larger number of hidden nodes, and its training accuracy needs further improvement.
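For illustration, a minimal sketch of the CELM scheme just described (not the cited paper's own implementation): random complex input weights, a fully complex activation applied directly to the hidden-layer input (tanh is assumed here; the paper also allows other choices), and output weights solved in a single least-squares step via the generalized inverse.

```python
import numpy as np

def celm_train(Z, D, M, seed=0):
    """CELM sketch. Z: (L, Q) complex samples; D: (Q,) complex targets;
    M: number of hidden nodes."""
    rng = np.random.default_rng(seed)
    L, Q = Z.shape
    # Random complex input weights, never updated (the ELM idea).
    W = rng.uniform(-1, 1, (M, L)) + 1j * rng.uniform(-1, 1, (M, L))
    H = np.tanh(W @ Z)              # fully complex activation, applied directly
    V = np.linalg.pinv(H.T) @ D     # one-step least-squares output weights
    return W, V
```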
In summary, BSCBP is slow in training and its accuracy is relatively low; although the CELM method is fast, the number of hidden nodes it requires is excessive and its accuracy also leaves room for improvement. The prior art still lacks an effective solution to the problem of simultaneously overcoming slow training, low accuracy and an excessive number of hidden nodes in complex-valued neural network training methods.
Summary of the invention
To solve the above problems and overcome the inability of traditional complex-valued neural network training methods to simultaneously address slow training, low accuracy and an excessive number of hidden nodes, the present invention provides a complex-valued neural network training method based on gradient descent and generalized inverse (Gradient based Generalized Complex Neural Networks, GGCNN for short).
To achieve these goals, the present invention adopts the following technical scheme:
A complex-valued neural network training method based on gradient descent and generalized inverse, comprising the following steps:
(1) selecting a single-hidden-layer complex-valued neural network model to model a sample data set;
(2) according to the single-hidden-layer complex-valued neural network model selected in step (1), computing the weights from the hidden layer to the output layer of the single-hidden-layer complex-valued neural network using the generalized inverse, setting the initial iteration count to 1, and computing the weights from the input layer to the hidden layer using gradient descent;
(3) obtaining the network parameters of the complex-valued neural network according to the weight matrix and the weight vector computed in step (2), and computing the mean square error on the current sample data; judging whether the current iteration count equals the maximum number of iterations; if so, terminating training; if not, adding 1 to the current iteration count and returning to step (2).
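As a concrete illustration of steps (1) to (3), the following is a minimal self-contained sketch of the training loop in Python/NumPy. It assumes a split tanh hidden activation (the invention only requires g_c: C → C) and the mean square error E = (1/Q) Σ_q |o_q - d_q|^2; the gradient expressions are re-derived for this model, since the patent's own gradient formulas are not reproduced in this text.

```python
import numpy as np

def ggcnn_train(Z, D, M, eta=0.1, K=50, seed=0):
    """GGCNN sketch. Z: (L, Q) complex sample matrix; D: (Q,) complex ideal
    outputs; M: number of hidden nodes; eta: learning rate; K: max iterations."""
    rng = np.random.default_rng(seed)
    L, Q = Z.shape
    # Step (2-1): random initial input weights W0 within a given interval.
    W = rng.uniform(-1, 1, (M, L)) + 1j * rng.uniform(-1, 1, (M, L))

    def g(x):                    # split activation, applied to Re and Im parts
        return np.tanh(x)

    def dg(x):                   # its derivative
        return 1.0 - np.tanh(x) ** 2

    for k in range(1, K + 1):
        # Step (2-2a): split activation, then V by the generalized inverse.
        U = W @ Z                            # hidden-layer input, (M, Q)
        H = g(U.real) + 1j * g(U.imag)       # H = g_c(U_R) + i g_c(U_I)
        V = np.linalg.pinv(H.T) @ D          # minimum-norm least-squares V
        # Step (3): actual output O = H^T V and mean square error.
        e = H.T @ V - D
        E = np.mean(np.abs(e) ** 2)
        if k == K:                           # stop at the maximum iteration
            break
        # Step (2-2b): gradient of E w.r.t. W_R and W_I (re-derived here),
        # then the update W^{n+1} = W^n + dW^n with dW^n = -eta * gradient.
        A, B = dg(U.real), dg(U.imag)        # g'(U_R), g'(U_I)
        S = np.conj(V)[:, None] * e[None, :] # s_mq = conj(v_m) * e_q
        gWR = (2.0 / Q) * ((S.real * A) @ Z.real.T + (S.imag * B) @ Z.imag.T)
        gWI = (2.0 / Q) * (-(S.real * A) @ Z.imag.T + (S.imag * B) @ Z.real.T)
        W = W - eta * (gWR + 1j * gWI)
    return W, V, E
```

Note how the output weights V are re-solved by the generalized inverse at every iteration, while only the input weights W are updated by gradient descent, which is the defining feature of the method.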
Preferably, the sample data set in step (1) comprises a training sample data set or a test data set.
Preferably, the single-hidden-layer complex-valued neural network model in step (1) is as follows:
The numbers of neurons in the input layer, the hidden layer and the output layer of the single-hidden-layer complex-valued neural network model are L, M and 1, respectively;
Q input samples are given, with sample matrix Z = (z_{ij})_{L×Q} = Z_R + iZ_I, where Z_R is the real part of Z and Z_I is the imaginary part of Z;
the input of the q-th sample is z_q = (z_{1q}, z_{2q}, ..., z_{Lq})^T ∈ C^L, where i = 1, 2, ..., L indexes its components;
the ideal output matrix corresponding to the input samples is D = (d_1, d_2, ..., d_Q)^T = D_R + iD_I, where D_R is the real part of D and D_I is the imaginary part of D;
the ideal output of the q-th sample is d_q ∈ C.
Preferably, the activation function of the hidden layer in the single-hidden-layer complex-valued neural network model is g_c: C → C;
the weight matrix connecting the input layer and the hidden layer is W = (w_{ij})_{M×L} = W_R + iW_I, where W_R is the real part of W and W_I is the imaginary part of W;
the connection weights between the input layer and the i-th hidden node are denoted w_i = (w_{i1}, w_{i2}, ..., w_{iL}) ∈ C^L, where i = 1, 2, ..., M;
the weight vector connecting the hidden layer and the output layer is V = (v_1, v_2, ..., v_M)^T = V_R + iV_I, where V_R is the real part of V and V_I is the imaginary part of V;
the connection weight between the k-th hidden node and the output layer is denoted v_k ∈ C, where k = 1, 2, ..., M.
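The quantities just defined map directly onto complex NumPy arrays; the toy sizes below are placeholders chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
L, M, Q = 3, 5, 4    # input neurons, hidden nodes, samples (toy sizes)

Z = rng.standard_normal((L, Q)) + 1j * rng.standard_normal((L, Q))  # Z = Z_R + i Z_I
D = rng.standard_normal(Q) + 1j * rng.standard_normal(Q)            # ideal outputs d_q in C
W = rng.uniform(-1, 1, (M, L)) + 1j * rng.uniform(-1, 1, (M, L))    # W = W_R + i W_I
V = rng.uniform(-1, 1, M) + 1j * rng.uniform(-1, 1, M)              # V = V_R + i V_I

z_q = Z[:, 0]              # input of one sample, an L-vector in C^L
Z_R, Z_I = Z.real, Z.imag  # real and imaginary parts, as in the text
```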
Preferably, step (2) specifically comprises:
(2-1) initializing the weight matrix from the input layer to the hidden layer to obtain an initial weight matrix W^0, with W^0 assigned randomly within a given interval;
(2-2) computing the weight matrix and the weight vector of the single-hidden-layer complex-valued neural network using gradient descent and the generalized inverse.
Preferably, in step (2-2), computing the weight matrix V from the hidden layer to the output layer by the generalized inverse specifically comprises:
(2-2a-1) computing the input matrix of the hidden layer U = (u_{ij})_{M×Q} from the initial weight matrix W^0 of step (2-1) and the sample matrix Z of step (1), U = W^0 Z;
(2-2a-2) activating the real part and the imaginary part of the input matrix U of step (2-2a-1) separately to obtain the output matrix of the hidden layer H = (h_{ij})_{M×Q}, H = g_c(U_R) + i g_c(U_I) = H_R + iH_I, where H_R is the real part of H and H_I is the imaginary part of H;
(2-2a-3) computing the weight matrix V from the hidden layer to the output layer by the generalized inverse, V = (H^T)^† D,
where H is the output matrix of the hidden layer in step (2-2a-2), D is the ideal output matrix of step (1), and (H^T)^† denotes the Moore-Penrose generalized inverse of H^T.
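A sketch of steps (2-2a-1) to (2-2a-3) under the same assumptions as above (split tanh activation). Reading the generalized-inverse step as V = pinv(H^T) @ D is consistent with the network output O = H^T V used in step (3).

```python
import numpy as np

def solve_output_weights(W, Z, D, g=np.tanh):
    """Steps (2-2a-1)..(2-2a-3): hidden input, split activation, pinv solve."""
    U = W @ Z                         # (2-2a-1): U = W Z, shape (M, Q)
    H = g(U.real) + 1j * g(U.imag)    # (2-2a-2): H = g_c(U_R) + i g_c(U_I)
    V = np.linalg.pinv(H.T) @ D       # (2-2a-3): minimum-norm least squares
    return H, V
```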
Preferably, optimizing the initial weight matrix W^0 in step (2-2) specifically comprises:
(2-2b-1) setting the initial iteration count k = 1 and the maximum number of iterations K;
(2-2b-2) computing the gradient of the mean square error E with respect to the hidden-layer weights W;
(2-2b-3) updating the weights by W^{n+1} = W^n + ΔW^n with ΔW^n = -η ∇_W E(W^n), where n = 1, 2, ... and η is the learning rate.
Preferably, in step (2-2b-2), the gradient of the mean square error E with respect to the hidden-layer weights W is computed in two parts: first the gradient of E with respect to W_R, then the gradient of E with respect to W_I, where W_R is the real part of W and W_I is the imaginary part of W;
in the gradient formulas, z_q is the input vector of the q-th sample, z_{q,R} is its real part, z_{q,I} is its imaginary part, g_c is the activation function of the hidden layer, u_m^q is the input of the q-th sample at the m-th hidden node, u_{m,R}^q is the real part of that input, and u_{m,I}^q is its imaginary part.
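The explicit gradient formulas are not reproduced in this text, so the following sketch re-derives them for the stated model (split tanh activation, E = (1/Q) Σ_q |o_q - d_q|^2), applying the chain rule through u_{m,R}^q = w_{m,R}·x_q - w_{m,I}·y_q and u_{m,I}^q = w_{m,R}·y_q + w_{m,I}·x_q, and checks one entry against a central finite difference.

```python
import numpy as np

def loss(W, V, Z, D):
    # E = (1/Q) * sum_q |o_q - d_q|^2 with split tanh activation (assumed).
    U = W @ Z
    H = np.tanh(U.real) + 1j * np.tanh(U.imag)
    return np.mean(np.abs(H.T @ V - D) ** 2)

def grad_W(W, V, Z, D):
    """Two-part gradient (dE/dW_R, dE/dW_I) for the model above."""
    Q = Z.shape[1]
    U = W @ Z
    A = 1.0 - np.tanh(U.real) ** 2           # g'(U_R)
    B = 1.0 - np.tanh(U.imag) ** 2           # g'(U_I)
    H = np.tanh(U.real) + 1j * np.tanh(U.imag)
    e = H.T @ V - D                          # per-sample output error, (Q,)
    S = np.conj(V)[:, None] * e[None, :]     # s_mq = conj(v_m) * e_q, (M, Q)
    gWR = (2.0 / Q) * ((S.real * A) @ Z.real.T + (S.imag * B) @ Z.imag.T)
    gWI = (2.0 / Q) * (-(S.real * A) @ Z.imag.T + (S.imag * B) @ Z.real.T)
    return gWR, gWI

# Central finite-difference check of one entry of dE/dW_R:
rng = np.random.default_rng(0)
L, M, Q = 3, 5, 50
Z = rng.standard_normal((L, Q)) + 1j * rng.standard_normal((L, Q))
D = rng.standard_normal(Q) + 1j * rng.standard_normal(Q)
W = rng.uniform(-1, 1, (M, L)) + 1j * rng.uniform(-1, 1, (M, L))
V = rng.uniform(-1, 1, M) + 1j * rng.uniform(-1, 1, M)
gWR, _ = grad_W(W, V, Z, D)
h = 1e-6
dW = np.zeros_like(W)
dW[0, 0] = h                                 # perturb the real part of w_11
num = (loss(W + dW, V, Z, D) - loss(W - dW, V, Z, D)) / (2 * h)
assert np.isclose(gWR[0, 0], num, atol=1e-6)
```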
Preferably, step (3) specifically comprises:
(3-1) choosing a linear function as the activation function of the output layer, so that the output of the output layer equals its input; the actual output of the network is O = (o_1, o_2, ..., o_Q)^T, the actual output of the q-th sample is o_q ∈ C, q = 1, 2, ..., Q; the matrix O is split into the real part O_R and the imaginary part O_I, with O = H^T V = O_R + iO_I; the actual output of the q-th sample is o_q = h_q^T V = (h_{q,R}^T v_R - h_{q,I}^T v_I) + i(h_{q,R}^T v_I + h_{q,I}^T v_R),
where v_R is the real part of the hidden-to-output weight vector, v_I is its imaginary part, h_{q,R} is the real part of the hidden-layer output vector of the q-th sample, and h_{q,I} is its imaginary part;
(3-2) computing the mean square error on the current sample data, and judging whether the current iteration count k equals the maximum number of iterations K; if so, terminating training; if not, adding 1 to the iteration count and returning to step (2-2b-2).
Preferably, the mean square error of the current sample data in step (3) is computed as E = (1/Q) Σ_{q=1}^{Q} |o_q - d_q|^2,
where o_q is the actual output of the q-th sample and d_q is the ideal output of the q-th sample.
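In code, with O the vector of actual outputs and D the vector of ideal outputs, this error is a one-liner (the 1/Q normalization follows the term "mean" square error):

```python
import numpy as np

def mean_square_error(O, D):
    # E = (1/Q) * sum_q |o_q - d_q|^2; |.| sums squared real and imaginary errors
    return np.mean(np.abs(O - D) ** 2)
```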
Beneficial effects of the present invention:
1. In the complex-valued neural network training method based on gradient descent and generalized inverse of the present invention, the hidden-layer input weights are produced iteratively by gradient descent, while the output weights are always solved by the generalized inverse. Compared with BSCBP, the method therefore needs fewer iterations, a correspondingly shorter training time, faster convergence and higher learning efficiency. Compared with CELM, it needs fewer hidden nodes and learns more efficiently. The present invention is thus more accurate than the BSCBP and CELM methods and reflects the performance of the complex-valued neural network model more accurately.
2. In the complex-valued neural network training method based on gradient descent and generalized inverse of the present invention, the generalized inverse is used when solving for the weights from the hidden layer to the output layer: without iteration, the minimum-norm least-squares solution of the output weights is obtained in a single step, which trains faster than related methods based purely on gradient descent (such as the CBP method).
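The one-step minimum-norm least-squares property invoked above can be checked numerically: for a complex linear system, NumPy's pseudoinverse and its least-squares solver return the same output weights.

```python
import numpy as np

rng = np.random.default_rng(0)
Ht = rng.standard_normal((40, 25)) + 1j * rng.standard_normal((40, 25))  # plays H^T (Q x M)
d = rng.standard_normal(40) + 1j * rng.standard_normal(40)               # ideal outputs
v_pinv = np.linalg.pinv(Ht) @ d                   # generalized-inverse solution
v_lstsq, *_ = np.linalg.lstsq(Ht, d, rcond=None)  # least-squares solution
assert np.allclose(v_pinv, v_lstsq)               # same minimum-norm LS vector
```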
Brief description of the drawings
Fig. 1 is a schematic flow chart of the method of the present invention;
Fig. 2 is a graph comparing the present invention with the BSCBP and CELM modeling methods.
Specific embodiment:
It is noted that the following detailed description is exemplary and is intended to provide further explanation of the present application. Unless otherwise indicated, all technical and scientific terms used herein have the same meanings as commonly understood by a person of ordinary skill in the technical field to which the application belongs.
It should be noted that the terms used herein are merely for describing specific embodiments and are not intended to limit the exemplary embodiments of the application. As used herein, unless the context clearly indicates otherwise, singular forms are also intended to include plural forms; in addition, it should be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of the stated features, steps, operations, devices, components and/or combinations thereof.
Where no conflict arises, the embodiments of the application and the features in the embodiments may be combined with each other. The invention is further described below with reference to the accompanying drawings and embodiments.
Embodiment 1:
This embodiment adopts the equalizer model for the nonlinear distortion of 4-QAM signals cited from "Channel Equalization Using Adaptive Complex Radial Basis Function Networks"; the input of the equalizer is constructed as in that reference.
The ideal outputs of the equalizer are 0.7+0.7i, 0.7-0.7i, -0.7+0.7i and -0.7-0.7i.
In this embodiment, the training data set and the test data set take 70% and 30% of the whole sample data set, respectively.
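The exact channel model of the cited reference is not reproduced in this text, so the data-generation sketch below is purely illustrative: the nonlinearity and noise level are hypothetical stand-ins, and only the 4-QAM ideal outputs and the 70%/30% split follow the embodiment.

```python
import numpy as np

rng = np.random.default_rng(0)
symbols = np.array([0.7 + 0.7j, 0.7 - 0.7j, -0.7 + 0.7j, -0.7 - 0.7j])

Q, L = 1000, 3                              # samples; taps per equalizer input
x = rng.choice(symbols, size=Q + L - 1)     # transmitted 4-QAM symbols
# Hypothetical nonlinear distortion plus noise; NOT the cited paper's channel:
r = x + 0.2 * x**2 + 0.05 * (rng.standard_normal(x.shape)
                             + 1j * rng.standard_normal(x.shape))
Z = np.stack([r[i:i + Q] for i in range(L)])   # (L, Q) sample matrix
D = x[L - 1:L - 1 + Q]                         # (Q,) ideal equalizer outputs

n_train = int(0.7 * Q)                         # 70% train / 30% test split
Z_train, D_train = Z[:, :n_train], D[:n_train]
Z_test,  D_test  = Z[:, n_train:], D[n_train:]
```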
First, the data set is modeled by the complex-valued neural network training method based on gradient descent and generalized inverse of the present invention.
The flow chart of the complex-valued neural network training method based on gradient descent and generalized inverse is shown in Fig. 1; the method comprises the following steps:
(1) selecting a single-hidden-layer complex-valued neural network model to model the training sample data set or the test sample data set;
The single-hidden-layer complex-valued neural network model in step (1) is:
the numbers of neurons in the input layer, the hidden layer and the output layer of the single-hidden-layer complex-valued neural network model are L, M and 1, respectively;
Q input samples are given, with sample matrix Z = (z_{ij})_{L×Q} = Z_R + iZ_I, where Z_R is the real part of Z and Z_I is the imaginary part of Z;
the input of the q-th sample is z_q = (z_{1q}, z_{2q}, ..., z_{Lq})^T ∈ C^L, where i = 1, 2, ..., L indexes its components;
the ideal output matrix corresponding to the input samples is D = (d_1, d_2, ..., d_Q)^T = D_R + iD_I, where D_R is the real part of D and D_I is the imaginary part of D;
the ideal output of the q-th sample is d_q ∈ C.
The activation function of the hidden layer in the single-hidden-layer complex-valued neural network model is g_c: C → C;
the weight matrix connecting the input layer and the hidden layer is W = (w_{ij})_{M×L} = W_R + iW_I, where W_R is the real part of W and W_I is the imaginary part of W;
the connection weights between the input layer and the i-th hidden node are denoted w_i = (w_{i1}, w_{i2}, ..., w_{iL}) ∈ C^L, where i = 1, 2, ..., M;
the weight vector connecting the hidden layer and the output layer is V = (v_1, v_2, ..., v_M)^T = V_R + iV_I, where V_R is the real part of V and V_I is the imaginary part of V;
the connection weight between the k-th hidden node and the output layer is denoted v_k ∈ C, where k = 1, 2, ..., M.
(2) according to the single-hidden-layer complex-valued neural network model selected in step (1), computing the weights from the hidden layer to the output layer using the generalized inverse, setting the initial iteration count to 1, and computing the weights from the input layer to the hidden layer using gradient descent;
Step S21: initializing the weight matrix from the input layer to the hidden layer to obtain the initial weight matrix W^0, with W^0 assigned randomly within a given interval;
Step S22: the input matrix of the hidden layer is U = (u_{ij})_{M×Q}; the matrix U is split into the real part U_R and the imaginary part U_I, U = WZ = U_R + iU_I; the input of the q-th sample at the m-th hidden node is u_m^q = w_m z_q = (w_{m,R}^T x_q - w_{m,I}^T y_q) + i(w_{m,R}^T y_q + w_{m,I}^T x_q),
where x_q is the real part of the q-th input sample, y_q is the imaginary part of the q-th input sample, w_{m,R} is the real part of the input weight vector of the m-th hidden node, and w_{m,I} is the imaginary part of the input weight vector of the m-th hidden node;
Step S23: activating the real part and the imaginary part of the matrix U separately to obtain the output matrix of the hidden layer H = (h_{ij})_{M×Q}; the matrix H is split into the real part H_R and the imaginary part H_I, H = g_c(U_R) + i g_c(U_I) = H_R + iH_I;
Step S24: computing the weight matrix V from the hidden layer to the output layer by the generalized inverse, V = (H^T)^† D;
Step S25: optimizing the initial weight matrix W^0; step S25 comprises the following sub-steps:
Step S251: setting the initial iteration count k = 1 (the maximum number of iterations is K);
Step S252: computing the gradient of the mean square error E with respect to the hidden-layer weights W, in two parts: first the gradient of E with respect to W_R, then the gradient of E with respect to W_I;
Step S27: the weight update formula is W^{n+1} = W^n + ΔW^n, where n = 1, 2, ... is the iteration count, ΔW_m^n = -η ∇_{w_m} E(W^n) is the update of the m-th hidden node based on the gradient at the n-th iteration, and the learning rate η is a constant;
(3) obtaining the network parameters of the complex-valued neural network according to the weight matrix and the weight vector computed in step (2), and computing the mean square error on the current sample data; judging whether the current iteration count equals the maximum number of iterations; if so, terminating training; if not, adding 1 to the current iteration count and returning to step (2).
Step S31: choosing a linear function as the activation function of the output layer, so that the output of the output layer equals its input; the actual output of the network is O = (o_1, o_2, ..., o_Q)^T, the actual output of the q-th sample is o_q ∈ C, q = 1, 2, ..., Q; the matrix O is split into the real part O_R and the imaginary part O_I, O = H^T V = O_R + iO_I; the actual output of the q-th sample is o_q = h_q^T V = (h_{q,R}^T v_R - h_{q,I}^T v_I) + i(h_{q,R}^T v_I + h_{q,I}^T v_R),
where v_R is the real part of the hidden-to-output weight vector, v_I is its imaginary part, h_{q,R} is the real part of the hidden-layer output vector of the q-th sample, and h_{q,I} is its imaginary part;
Step S32: computing the error function of the training samples, E = (1/Q) Σ_{q=1}^{Q} |o_q - d_q|^2;
setting k = k + 1 and returning to step S22 (the weight matrix V from the hidden layer to the output layer is always solved by the generalized inverse).
This embodiment includes two comparison modeling methods: the BSCBP method and the CELM method. The CELM method is the method of the document "Fully complex extreme learning machine", in which the input weight matrix is assigned randomly and the output weight matrix is solved by the generalized inverse. The data set (the equalizer model for the nonlinear distortion of 4-QAM signals in "Channel Equalization Using Adaptive Complex Radial Basis Function Networks", with ideal equalizer outputs 0.7+0.7i, 0.7-0.7i, -0.7+0.7i and -0.7-0.7i) is modeled by the BSCBP method and the CELM method, respectively. The experimental results are shown in Fig. 2.
As can be seen from Fig. 2, under the same network structure the training error of this method is lower than those of the BSCBP and CELM methods, which shows that the method of the invention effectively optimizes the training weights and achieves very high training accuracy.
In the method of the invention, the input weight matrix of the hidden layer is updated by gradient descent. Therefore, compared with CELM, under the same initial weights the GGCNN model needs far fewer hidden nodes than CELM and produces a smaller mean square error than the CELM model.
In the modeling method of the invention based on gradient descent and the generalized inverse for complex-valued neural networks (Gradient based Generalized Complex Neural Networks, GGCNN for short), the output weight matrix of the hidden layer is solved by the generalized inverse: without iteration, the minimum-norm least-squares solution of the output weight matrix is obtained in one step. Therefore, under the same initial weights and the same number of hidden nodes, the method of the invention not only trains faster than BSCBP but also greatly reduces the training error and the test error; the detailed comparison results are shown in Table 1.
Table 1
The above are only preferred embodiments of the present application and are not intended to limit the application; for those skilled in the art, various modifications and variations of the application are possible. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the application shall fall within the scope of protection of the application.

Claims (10)

1. A complex-valued neural network training method based on gradient descent and generalized inverse, characterized in that the method comprises the following steps:
(1) selecting a single-hidden-layer complex-valued neural network model to model a sample data set;
(2) according to the single-hidden-layer complex-valued neural network model selected in step (1), computing the weights from the hidden layer to the output layer of the single-hidden-layer complex-valued neural network using the generalized inverse, setting the initial iteration count to 1, and computing the weights from the input layer to the hidden layer using gradient descent;
(3) obtaining the network parameters of the complex-valued neural network according to the weight matrix and the weight vector computed in step (2), and computing the mean square error on the current sample data; judging whether the current iteration count equals the maximum number of iterations; if so, terminating training; if not, adding 1 to the current iteration count and returning to step (2).
2. The complex-valued neural network training method based on gradient descent and generalized inverse according to claim 1, characterized in that the single-hidden-layer complex-valued neural network model in step (1) is:
the numbers of neurons in the input layer, the hidden layer and the output layer of the single-hidden-layer complex-valued neural network model are L, M and 1, respectively; Q input samples are given; the sample matrix is Z = (z_{ij})_{L×Q}; the corresponding ideal output matrix is D = (d_1, d_2, ..., d_Q)^T, both complex; the input of the q-th sample is z_q = (z_{1q}, z_{2q}, ..., z_{Lq})^T, where i = 1, 2, ..., L indexes its components; the ideal output of the q-th sample is d_q ∈ C.
3. The complex-valued neural network training method based on gradient descent and generalized inverse according to claim 2, characterized in that: the activation function of the hidden layer in the single-hidden-layer complex-valued neural network model in step (1) is g_c: C → C; the weight matrix connecting the input layer and the hidden layer is W = (w_{ij})_{M×L} = W_R + iW_I, where W_R is the real part of W and W_I is the imaginary part of W; the connection weights between the input layer and the i-th hidden node are denoted w_i = (w_{i1}, w_{i2}, ..., w_{iL}) ∈ C^L, where i = 1, 2, ..., M; the weight vector connecting the hidden layer and the output layer is V = (v_1, v_2, ..., v_M)^T = V_R + iV_I, where V_R is the real part of V and V_I is the imaginary part of V; the connection weight between the k-th hidden node and the output layer is denoted v_k ∈ C, where k = 1, 2, ..., M.
4. The complex-valued neural network training method based on gradient descent and generalized inverse according to claim 3, characterized in that step (2) specifically comprises:
(2-1) initializing the weight matrix from the input layer to the hidden layer to obtain an initial weight matrix W^0, with W^0 assigned randomly within a given interval;
(2-2) computing the weight matrix and the weight vector of the single-hidden-layer complex-valued neural network using gradient descent and the generalized inverse.
5. The complex-valued neural network training method based on gradient descent and generalized inverse according to claim 4, characterized in that, in step (2-2), computing the weight matrix V from the hidden layer to the output layer by the generalized inverse specifically comprises:
(2-2a-1) computing the input matrix of the hidden layer U = (u_{ij})_{M×Q} from the initial weight matrix W^0 of step (2-1) and the sample matrix Z of step (1), U = W^0 Z;
(2-2a-2) activating the real part and the imaginary part of the input matrix U of step (2-2a-1) separately to obtain the output matrix of the hidden layer H = (h_{ij})_{M×Q}, H = g_c(U_R) + i g_c(U_I) = H_R + iH_I, where H_R is the real part of H and H_I is the imaginary part of H;
(2-2a-3) computing the weight matrix V from the hidden layer to the output layer by the generalized inverse, V = (H^T)^† D,
where H is the output matrix of the hidden layer in step (2-2a-2) and D is the ideal output matrix of step (1).
6. The complex-valued neural network training method based on gradient descent and generalized inverse according to claim 4, characterized in that optimizing the initial weight matrix W^0 in step (2-2) specifically comprises:
(2-2b-1) setting the initial iteration count k = 1 and the maximum number of iterations K;
(2-2b-2) computing the gradient of the mean square error E with respect to the hidden-layer weights W;
(2-2b-3) updating the weights by W^{n+1} = W^n + ΔW^n with ΔW^n = -η ∇_W E(W^n), where n = 1, 2, ... and η is the learning rate.
7. The complex-valued neural network training method based on gradient descent and generalized inverse according to claim 6, characterized in that, in step (2-2b-2), the gradient of the mean square error E with respect to the hidden-layer weights W is computed in two parts: first the gradient of E with respect to W_R, then the gradient of E with respect to W_I, where W_R is the real part of W and W_I is the imaginary part of W.
8. The complex-valued neural network training method based on gradient descent and generalized inverse according to claim 7, characterized in that:
in the gradient formulas, z_q is the input vector of the q-th sample, z_{q,R} is the real part of the input vector of the q-th sample, z_{q,I} is the imaginary part of the input vector of the q-th sample, g_c is the activation function of the hidden layer, u_m^q is the input of the q-th sample at the m-th hidden node, u_{m,R}^q is the real part of the input of the q-th sample at the m-th hidden node, and u_{m,I}^q is the imaginary part of the input of the q-th sample at the m-th hidden node.
9. The complex-valued neural network training method based on gradient descent and generalized inverse according to claim 1, characterized in that step (3) specifically comprises:
(3-1) choosing a linear function as the activation function of the output layer, so that the output of the output layer equals its input; the actual output of the network is O = (o_1, o_2, ..., o_Q)^T, the actual output of the q-th sample is o_q ∈ C, q = 1, 2, ..., Q; the matrix O is split into the real part O_R and the imaginary part O_I, O = H^T V = O_R + iO_I; the actual output of the q-th sample is o_q = h_q^T V = (h_{q,R}^T v_R - h_{q,I}^T v_I) + i(h_{q,R}^T v_I + h_{q,I}^T v_R),
where v_R is the real part of the hidden-to-output weight vector, v_I is its imaginary part, h_{q,R} is the real part of the hidden-layer output vector of the q-th sample, and h_{q,I} is its imaginary part;
(3-2) computing the mean square error on the current sample data, and judging whether the current iteration count k equals the maximum number of iterations K; if so, terminating training; if not, adding 1 to the current iteration count and returning to step (2-2b-2).
10. The complex-valued neural network training method based on gradient descent and generalized inverse according to claim 9, characterized in that the mean square error of the current sample data in step (3) is computed as E = (1/Q) Σ_{q=1}^{Q} |o_q - d_q|^2,
where o_q is the actual output of the q-th sample and d_q is the ideal output of the q-th sample.
CN201710091587.5A 2017-02-20 2017-02-20 Complex-valued neural network training method based on gradient descent and generalized inverse Pending CN106875002A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710091587.5A CN106875002A (en) 2017-02-20 2017-02-20 Complex-valued neural network training method based on gradient descent and generalized inverse

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710091587.5A CN106875002A (en) 2017-02-20 2017-02-20 Complex-valued neural network training method based on gradient descent and generalized inverse

Publications (1)

Publication Number Publication Date
CN106875002A true CN106875002A (en) 2017-06-20

Family

ID=59166995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710091587.5A Pending CN106875002A (en) 2017-02-20 2017-02-20 Complex value neural network training method based on gradient descent method Yu generalized inverse

Country Status (1)

Country Link
CN (1) CN106875002A (en)


Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108334947A (en) * 2018-01-17 2018-07-27 上海爱优威软件开发有限公司 A kind of the SGD training methods and system of intelligent optimization
US11120333B2 (en) 2018-04-30 2021-09-14 International Business Machines Corporation Optimization of model generation in deep learning neural networks using smarter gradient descent calibration
CN109255308A (en) * 2018-11-02 2019-01-22 陕西理工大学 There are the neural network angle-of- arrival estimation methods of array error
CN109255308B (en) * 2018-11-02 2023-07-21 陕西理工大学 Neural network arrival angle estimation method with array error
CN109274624B (en) * 2018-11-07 2021-04-27 中国电子科技集团公司第三十六研究所 Carrier frequency offset estimation method based on convolutional neural network
CN109274624A (en) * 2018-11-07 2019-01-25 中国电子科技集团公司第三十六研究所 A kind of carrier frequency bias estimation based on convolutional neural networks
CN110034827A (en) * 2019-03-25 2019-07-19 华中科技大学 A kind of depolarization multiplexing method and system based on reverse observation error
CN110011733A (en) * 2019-03-25 2019-07-12 华中科技大学 A kind of depolarization multiplexing method and system based on factor of momentum
CN110824922A (en) * 2019-11-22 2020-02-21 电子科技大学 Smith estimation compensation method based on six-order B-spline wavelet neural network
CN112148730A (en) * 2020-06-30 2020-12-29 网络通信与安全紫金山实验室 Method for extracting product data characteristics in batches by using matrix generalized inverse
US11863221B1 (en) * 2020-07-14 2024-01-02 Hrl Laboratories, Llc Low size, weight and power (swap) efficient hardware implementation of a wide instantaneous bandwidth neuromorphic adaptive core (NeurACore)
US12057989B1 (en) 2020-07-14 2024-08-06 Hrl Laboratories, Llc Ultra-wide instantaneous bandwidth complex neuromorphic adaptive core processor
CN111950711A (en) * 2020-08-14 2020-11-17 苏州大学 Second-order hybrid construction method and system of complex-valued forward neural network
CN112770013A (en) * 2021-01-15 2021-05-07 电子科技大学 Heterogeneous information network embedding method based on side sampling
CN113158582A (en) * 2021-05-24 2021-07-23 苏州大学 Wind speed prediction method based on complex value forward neural network
WO2022247049A1 (en) * 2021-05-24 2022-12-01 苏州大学 Method for predicting wind speed based on complex-valued forward neural network
CN114091327A (en) * 2021-11-10 2022-02-25 中国航发沈阳发动机研究所 Method for determining radar scattering characteristics of engine cavity
CN114091327B (en) * 2021-11-10 2022-09-20 中国航发沈阳发动机研究所 Method for determining radar scattering characteristics of engine cavity
WO2023216383A1 (en) * 2022-05-13 2023-11-16 苏州大学 Complex-valued timing signal prediction method based on complex-valued neural network

Similar Documents

Publication Publication Date Title
CN106875002A (en) Complex-valued neural network training method based on gradient descent and generalized inverse
CN108154228B (en) Artificial neural network computing device and method
WO2023019601A1 (en) Signal modulation recognition method for complex-valued neural network based on structure optimization algorithm
CN108416755A (en) A kind of image de-noising method and system based on deep learning
CN106991440B (en) Image classification method of convolutional neural network based on spatial pyramid
CN110472779A (en) A kind of power-system short-term load forecasting method based on time convolutional network
CN107766794A (en) The image, semantic dividing method that a kind of Fusion Features coefficient can learn
CN109639710A (en) A kind of network attack defence method based on dual training
CN111324990A (en) Porosity prediction method based on multilayer long-short term memory neural network model
CN109255340A (en) It is a kind of to merge a variety of face identification methods for improving VGG network
CN107132516A (en) A kind of Radar range profile's target identification method based on depth confidence network
CN110807544B (en) Oil field residual oil saturation distribution prediction method based on machine learning
CN106022465A (en) Extreme learning machine method for improving artificial bee colony optimization
CN108182260A (en) A kind of Multivariate Time Series sorting technique based on semantic selection
CN108596078A (en) A kind of seanoise signal recognition method based on deep neural network
CN113190688A (en) Complex network link prediction method and system based on logical reasoning and graph convolution
CN111191785A (en) Structure searching method based on expanded search space
CN110276441A (en) A kind of trapezoidal overlap kernel impulse response estimation method based on deep learning
CN109800517B (en) Improved reverse modeling method for magnetorheological damper
CN111950711A (en) Second-order hybrid construction method and system of complex-valued forward neural network
CN113807040B (en) Optimized design method for microwave circuit
CN115688908A (en) Efficient neural network searching and training method based on pruning technology
CN111058840A (en) Organic carbon content (TOC) evaluation method based on high-order neural network
CN117407802A (en) Runoff prediction method based on improved depth forest model
CN106407932B (en) Handwritten digit recognition method based on fractional calculus and generalized inverse neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170620