CN112542161B - BP neural network voice recognition method based on double-layer PID optimization - Google Patents

BP neural network voice recognition method based on double-layer PID optimization

Info

Publication number
CN112542161B
CN112542161B (application CN202011455918.7A)
Authority
CN
China
Prior art keywords
layer
neural network
output
error
pid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011455918.7A
Other languages
Chinese (zh)
Other versions
CN112542161A (en)
Inventor
和思铭
李伟觐
曾文钰
范晨奥
刘世新
汪雨琦
吴英然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun Institute of Applied Chemistry of CAS
Original Assignee
Changchun Institute of Applied Chemistry of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun Institute of Applied Chemistry of CAS filed Critical Changchun Institute of Applied Chemistry of CAS
Priority to CN202011455918.7A priority Critical patent/CN112542161B/en
Publication of CN112542161A publication Critical patent/CN112542161A/en
Application granted granted Critical
Publication of CN112542161B publication Critical patent/CN112542161B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G10L15/16 — Speech classification or search using artificial neural networks
    • G10L15/063 — Training of speech recognition systems (creation of reference templates; adaptation to the characteristics of the speaker's voice)
    • G06N3/044 — Recurrent networks, e.g. Hopfield networks
    • G06N3/045 — Combinations of networks
    • G06N3/049 — Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08 — Learning methods
    • Y02T10/40 — Engine management systems (climate change mitigation technologies related to transportation)

Abstract

The invention relates to a BP neural network voice recognition method based on double-layer PID optimization, which takes an FPGA as the platform for voice signal input and adjusts the weight thresholds and the learning rate with a PID algorithm. The three parameters K_P, K_I and K_D of the double-layer PID algorithm are adjusted automatically according to the system and training-result error E_g(k), so that the weight-threshold convergence of the hidden layer and the output layer is more stable and the data fluctuation of the system is reduced. The outer-layer PID algorithm synchronizes the updating of the learning rate with the training process of the neural network: it provides a larger update intensity in the early stage of training so that the network iterates quickly, and reduces the update intensity in the later stage to prevent the data from deviating from the correct value. The method achieves a higher voice recognition accuracy.

Description

BP neural network voice recognition method based on double-layer PID optimization
Technical field:
The invention relates to artificial intelligence algorithms, in particular to a BP neural network speech recognition method optimized by a double-layer PID.
Background art:
As an important application direction in the field of artificial intelligence, speech recognition has been studied by many researchers, for example speech recognition under deep learning and speech recognition with support vector machines. Among learning algorithms, the BP algorithm is well known for its strong nonlinear mapping ability and simple structure, and has long been used for speech recognition, but it also has defects: the weight thresholds and the learning rate cannot be determined at network initialization, convergence fluctuates heavily if they are set too large, and convergence is slow if they are set too small. Existing weight-threshold update formulas rely almost entirely on the negative-gradient algorithm; although the negative gradient accelerates convergence of the weight thresholds to some degree, the numerical fluctuation it causes is too large and interferes with normal convergence. Meanwhile, the learning rate is basically tuned by hand in repeated experiments. Existing variable-learning-rate algorithms merely decrease the learning rate linearly; this reduces its influence in the later stage of the algorithm and avoids the output deviating from the correct result through fluctuation caused by an overly strong update, but it does not actually improve the accuracy, and a plain BP neural network structure can show low recognition accuracy on music data.
CN103639211A discloses a roll-gap control method and system with BP-neural-network-optimized PID parameters; a neural network is used to optimize the PID structure, making the PID parameters, and the algorithm as a whole, more stable.
CN110488600A discloses an LQR-optimized neural-network PID controller for brushless DC motor speed regulation; an LQR algorithm optimizes the neural-network-tuned PID, making DC motor control more stable.
CN104834215A discloses a BP-neural-network PID control algorithm optimized by a mutated particle swarm, which makes the PID algorithm output more stable.
Summary of the invention:
The object of the invention is to provide a BP neural network speech recognition method based on double-layer PID optimization that addresses the defects of the prior art.
The idea of the invention is as follows: an FPGA with a voice recognition function serves as the platform for voice signal input. The double-layer PID is divided into an inner layer and an outer layer. The inner-layer PID algorithm reduces the data fluctuation caused by the negative-gradient algorithm, so that the weight-threshold update process converges smoothly and the oscillation produced during convergence is reduced. Meanwhile, the outer-layer PID algorithm synchronizes the updating of the learning rate with the training process of the neural network: it provides a larger update intensity in the early stage of training so that the network iterates rapidly, and reduces the update intensity in the later stage to prevent data from deviating from the correct value. The resulting voice recognition accuracy is higher.
The purpose of the invention is achieved by the following technical scheme:
First, a three-layer BP neural network is built, consisting of an input layer, a hidden layer and an output layer. The weighting coefficients W_ij(k) (input layer to hidden layer) and W_jg(k) (hidden layer to output layer) and the activation-function parameters a_j(k) and b_g(k) between the layers are generated randomly, a learning rate η(k) is selected, and k is set to 1.
Second, the FPGA platform extracts voice data from the speech to be recognized, and the BP neural network analyzes the extracted data X_i(k) and computes the output O_g(k) of the BP neural network output layer. With the expected output Y_g(k), the error E_g(k) is calculated; the error E_g(k) is then used, together with the proportional parameter K_P, integral parameter K_I and differential parameter K_D of a PID (proportional-integral-derivative) algorithm, to adjust η(k).
Then, the error E_g(k) and the adjusted η(k) are used to correct the weighting coefficients W_ij(k) and W_jg(k) of the BP neural network and the activation-function parameters a_j(k) and b_g(k) between the layers, until the output error of the BP neural network output layer meets the requirement; finally it is judged whether the input voice is the set voice signal.
the method comprises the following steps:
A. Prepare for training initialization: initialize the neural network structure and acquire a sample set for training;
B. Perform feature extraction on the sample set to obtain a feature set;
C. Train the neural network with the feature set as the training set. During training, obtain the error E_g(k) from the expected output Y_g(k) and the actual output O_g(k); use the error E_g(k) as a parameter of the outer-layer PID algorithm to adjust the learning rate η(k); then use the error E_g(k) and the adjusted learning rate η(k+1) as parameters of the inner-layer PID algorithm to adjust the input-layer weights W_ij(k), the hidden-layer thresholds a_j(k), the output-layer weights W_jg(k) and the output-layer thresholds b_g(k);
D. Test the adjusted neural network: extract features of the test sample as in step B and recognize it with the neural network of step C to obtain the error E_g(k); when the error E_g(k) is below a certain threshold, training is finished, and it is judged whether the input voice signal is the set voice signal.
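The four steps A–D amount to a feedback training loop. The following is a rough, runnable sketch of that control flow only — `extract_features`, the linear `forward` model and the weight update below are toy stand-ins, not the patent's MFCC features or double-layer PID formulas:

```python
# Toy skeleton of steps A-D. extract_features and forward are stand-ins;
# a real implementation would use MFCC features and the BP network with
# the double-layer PID updates described in the patent.

def extract_features(sample):
    # step B stand-in: a real system would compute MFCC features here
    return sample

def forward(weights, x):
    # toy "network": one weight per output neuron, used only to run the loop
    return [w * xi for w, xi in zip(weights, x)]

def train(samples, targets, weights, eta=0.5, tol=1e-3, max_iter=200):
    feats = [extract_features(s) for s in samples]          # step B
    for k in range(1, max_iter + 1):                        # step C
        # E_g(k) = Y_g(k) - O_g(k) for every sample and output neuron
        errs = [[y - o for y, o in zip(t, forward(weights, f))]
                for f, t in zip(feats, targets)]
        mean_err = (sum(abs(e) for row in errs for e in row)
                    / sum(len(row) for row in errs))
        if mean_err < tol:                                  # step D: stop rule
            return weights, k
        # stand-in update; the patent instead adjusts eta with the outer PID
        # and the weight thresholds with the inner PID
        for row, f in zip(errs, feats):
            for g, e in enumerate(row):
                weights[g] += eta * e * f[g]
    return weights, max_iter
```

With one training pair the toy loop converges geometrically toward the target outputs, which is enough to illustrate where the two PID updates would plug in.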
In said step C, the error E_g(k) is used as the outer-layer PID algorithm parameter to adjust the learning rate. The outer-layer PID adjustment formula is:

η(k+1) = η(k) + (1/N₃) · Σ_{g=1}^{N₃} [ K_P·E_g(k) + K_I·Σ_{s=1}^{k} E_g(s) + K_D·E_gD(k) ]

where s = 1, 2, …, k, k is the current iteration number, N₃ is the number of output-layer neurons (N₃ = 3), E_g(k) = Y_g(k) − O_g(k), and E_gD(k) = E_g(k) − E_g(k−1).
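The outer-layer adjustment can be coded directly. This is a hedged sketch: `update_learning_rate` is a hypothetical name, and the formula follows the proportional/integral/differential terms as defined in the surrounding text (the original equation is rendered as an image), not a verified transcription:

```python
# Hypothetical sketch of the outer-layer PID learning-rate update.
# P term: current error E_g(k); I term: accumulated error sum over s;
# D term: E_gD(k) = E_g(k) - E_g(k-1). Averaged over the N3 outputs.

def update_learning_rate(eta, error_history, Kp, Ki, Kd):
    """error_history: list of per-iteration error vectors E(s), s = 1..k,
    each of length N3 (number of output-layer neurons)."""
    n3 = len(error_history[-1])
    e_k = error_history[-1]                              # E_g(k)
    e_prev = error_history[-2] if len(error_history) > 1 else [0.0] * n3
    adjustment = 0.0
    for g in range(n3):
        p = Kp * e_k[g]                                  # proportional term
        i = Ki * sum(e[g] for e in error_history)        # integral term
        d = Kd * (e_k[g] - e_prev[g])                    # differential term
        adjustment += p + i + d
    return eta + adjustment / n3                         # average over outputs
```

Early in training the errors (and hence the adjustment) are large, so η grows and iteration is fast; as the errors shrink, the adjustment shrinks with them, matching the behaviour the text describes.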
In said step C, the error E_g(k) and the adjusted learning rate η(k+1) are used as inner-layer PID algorithm parameters to adjust the weights W_ij(k) between the input layer and the hidden layer, the hidden-layer thresholds a_j(k), the weights W_jg(k) between the hidden layer and the output layer, and the output-layer thresholds b_g(k). The inner-layer PID adjustment formulas are as follows.
The update formula for the weights between the input layer and the hidden layer:
[equation rendered as an image in the original; not recoverable from this extraction]
The update formula for the hidden-layer thresholds:
[equation image]
N₃ is the number of output-layer neurons, N₃ = 3.
The update formula for the weights between the hidden layer and the output layer:
[equation image]
The update formula for the output-layer thresholds:
[equation image]
where s = 1, 2, …, k, k is the current iteration number, N₃ = 3 is the number of output-layer neurons, and E_gD(k) = E_g(k) − E_g(k−1).
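Because the inner-layer update equations are images in the original, the following is only a plausible sketch, under the assumption that a PID combination of the output errors replaces the negative-gradient delta; `pid_delta` and `update_output_layer` are hypothetical names and the sign conventions are assumptions, not the patent's verified formulas:

```python
def pid_delta(err_hist_g, Kp, Ki, Kd):
    """PID combination of one output neuron's error history E_g(1..k).
    Assumed form; the patent's exact inner-layer equations are not
    reproduced in the extracted text."""
    e_k = err_hist_g[-1]
    e_prev = err_hist_g[-2] if len(err_hist_g) > 1 else 0.0
    return Kp * e_k + Ki * sum(err_hist_g) + Kd * (e_k - e_prev)

def update_output_layer(W_jg, b_g, H, err_hist, eta, Kp, Ki, Kd):
    """W_jg[j][g]: hidden-to-output weights; b_g: output thresholds;
    H: hidden-layer outputs; err_hist[s][g]: E_g(s) for s = 1..k."""
    n2, n3 = len(W_jg), len(b_g)
    for g in range(n3):
        hist_g = [row[g] for row in err_hist]
        delta = eta * pid_delta(hist_g, Kp, Ki, Kd)
        for j in range(n2):
            W_jg[j][g] += delta * H[j]   # weight moves with the hidden output
        b_g[g] -= delta                  # threshold moves against the error
    return W_jg, b_g
```

The integral term is what smooths the iteration relative to a pure negative-gradient step: the accumulated error damps sign flips in the per-iteration delta.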
Beneficial effects: the weight thresholds and the learning rate are adjusted with a PID algorithm. The three parameters K_P, K_I and K_D of the double-layer PID algorithm are adjusted automatically according to the system and training-result error E_g(k), so that the weight-threshold convergence of the hidden layer and the output layer is more stable and the data fluctuation of the system is reduced. The learning rate provides a larger update intensity in the early stage to help the system update quickly, and a reduced update intensity in the later stage to prevent the data from deviating from the correct value; the resulting voice recognition accuracy is higher.
Description of the drawings:
FIG. 1 is a running diagram of a speech recognition method based on an FPGA platform
FIG. 2 is a flowchart of the double-layer PID algorithm for updating the weight thresholds and the learning rate
FIG. 3 is a diagram of a BP neural network structure
FIG. 4 is a flow chart of neural network training
FIG. 5 is a simulation display diagram of four music characteristic signals
FIG. 6 is a graph of convergence weight threshold for training of general neural network structure
FIG. 7 is a graph of convergence of training weight threshold of neural network structure after double-layer PID optimization
FIG. 8 is a graph of the convergence of learning rate in training of general neural network structure
FIG. 9 is a neural network structure training learning rate convergence diagram after double-layer PID optimization
FIG. 10 is a graph of the recognition accuracy of the general neural network structure on the four music characteristic signals
FIG. 11 is a graph of the recognition accuracy of the neural network structure optimized by the double-layer PID on the four music characteristic signals.
Detailed description of the embodiments:
The invention is described in further detail below with reference to the figures and examples:
A BP neural network voice recognition method with double-layer PID optimization.
First, a three-layer BP neural network is built, consisting of an input layer, a hidden layer and an output layer. The weighting coefficients W_ij(k) (input layer to hidden layer) and W_jg(k) (hidden layer to output layer) and the activation-function parameters a_j(k) and b_g(k) between the layers are generated randomly, a learning rate η(k) is selected, and k is set to 1.
Second, the FPGA platform extracts voice data from the speech to be recognized, and the BP neural network analyzes the extracted data X_i(k) and computes the output O_g(k) of the output layer. With the expected output Y_g(k), the error E_g(k) is calculated and used, together with the proportional parameter K_P, integral parameter K_I and differential parameter K_D of the PID algorithm, to adjust η(k).
Then, the error E_g(k) and the adjusted η(k) are used to correct the weighting coefficients W_ij(k) and W_jg(k) of the BP neural network and the activation-function parameters a_j(k) and b_g(k) between the layers, until the output error of the output layer meets the requirement; finally it is judged whether the input voice is the set voice signal.
the method comprises the following steps:
A: Initialize the FPGA platform of the double-layer-PID-optimized BP neural network. As shown in FIG. 3, take three input-layer neurons, six hidden-layer neurons and three output-layer neurons. Denote the input data of the i-th input-layer neuron as X_i(k); the weight between the i-th input-layer neuron and the j-th hidden-layer neuron as W_ij(k); the threshold of the j-th hidden-layer neuron as a_j(k); the weight between the j-th hidden-layer neuron and the g-th output-layer neuron as W_jg(k); the threshold of the g-th output-layer neuron as b_g(k); the output value of the g-th output-layer neuron as O_g(k); and the expected output value and the output error as Y_g(k) and E_g(k). The learning rate is denoted η(k). Here i indexes the input-layer neurons, j the hidden-layer neurons, g the output-layer neurons, and k is the current iteration number. The proportional parameter K_P, integral parameter K_I and differential parameter K_D of the inner- and outer-layer PID algorithms are generated at the same time.
B: The FPGA platform extracts voice data from the speech signal by A/D conversion, then performs feature extraction on the voice data with the MFCC method to obtain a feature-vector set. The input-layer vector is denoted Z(k) = [X_1(k), X_2(k), X_3(k)], where X_i (i = 1, 2, 3) is the input of the i-th input-layer neuron; the first iteration takes Z(1) = [X_1(1), X_2(1), X_3(1)] as input, so the first input of the first neuron is X_1(1). After Z(k) is input, the hidden layer first performs weighted processing of the data, the result being:

net_j = Σ_{i=1}^{N_1} W_ij(k)·X_i(k)

where N_1 = 3 is the number of input-layer neurons.
The weighted result net_j is then processed with the hidden-layer threshold, recorded as H_j = f(net_j − a_j), where f is the activation function, a_j the hidden-layer threshold, and H_j the input value to the output layer.
The output-layer inputs H_j (j = 1, 2, 3, 4, 5, 6) are weighted at the output layer, the result being:

net_g = Σ_{j=1}^{N_2} W_jg(k)·H_j

where N_2 = 6 is the number of hidden-layer neurons.
The output-layer weighted result is then thresholded, recorded as O_g = G(net_g − b_g), where G is the activation function, b_g the output-layer threshold, and O_g the output value of the output layer, i.e. the final output of the neural network. The error is E_g(k) = Y_g(k) − O_g(k); for example, the first output neuron's error is E_1(1) = Y_1(1) − O_1(1), the second E_2(1) = Y_2(1) − O_2(1), and the third E_3(1) = Y_3(1) − O_3(1).
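The forward pass of the 3-6-3 network described above can be sketched as follows. The logistic sigmoid for both f and G is an assumption (the extracted text does not name the activation functions), and `forward_pass`/`output_errors` are hypothetical names:

```python
import math

def forward_pass(X, W_ij, a, W_jg, b):
    """One forward pass of the 3-6-3 network. f and G are taken to be the
    logistic sigmoid, which is an assumption, not stated in the patent."""
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    # hidden layer: net_j = sum_i W_ij * X_i, then H_j = f(net_j - a_j)
    H = [sigmoid(sum(W_ij[i][j] * X[i] for i in range(len(X))) - a[j])
         for j in range(len(a))]
    # output layer: net_g = sum_j W_jg * H_j, then O_g = G(net_g - b_g)
    O = [sigmoid(sum(W_jg[j][g] * H[j] for j in range(len(H))) - b[g])
         for g in range(len(b))]
    return O

def output_errors(Y, O):
    # E_g = Y_g - O_g, one error per output neuron
    return [y - o for y, o in zip(Y, O)]
```

With all weights and thresholds at zero, every sigmoid sees a net input of 0 and every output is 0.5, a convenient sanity check for the wiring.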
C: The BP neural network performs backward error propagation and updates the learning rate η(k), the hidden-layer weights W_ij(k) and thresholds a_j(k), and the output-layer weights W_jg(k) and thresholds b_g(k). The outer-layer PID update formula for the learning rate η(k) is:

η(k+1) = η(k) + (1/N₃) · Σ_{g=1}^{N₃} [ K_P·E_g(k) + K_I·Σ_{s=1}^{k} E_g(s) + K_D·E_gD(k) ]

where s = 1, 2, …, k, k is the current iteration number, N₃ = 3 is the number of output-layer neurons, and E_g(k) = Y_g(k) − O_g(k).
The purpose of the summation and averaging in the formula is to merge the several per-neuron adjustment values into one, so that η responds to the change of the whole neural network. The proportional term lets η be adjusted rapidly; taking the second iteration of the first output neuron as an example, it is E_1(2). The integral term accumulates the error over time, maintains a steady adjustment, reduces inertia during parameter updates and reduces system oscillation; for the second iteration of the first output neuron it is E_1(2) + E_1(1). With E_gD(k) = E_g(k) − E_g(k−1), the differential term reflects the change of the network's output error in advance and introduces an effective early correction signal into the system, which speeds up the system's response and shortens the adjustment time; for the second iteration of the first output neuron, E_1D(2) = E_1(2) − E_1(1).
Taking the second iteration of the first output neuron as an example, the outer-layer PID update of the learning rate is:
[equation rendered as an image in the original; not recoverable from this extraction]
The learning rate updated by the outer-layer PID algorithm provides a larger update intensity in the early stage of network operation, so that the system iterates and corrects its parameters quickly, and a reduced update intensity in the later stage, preventing the result from deviating from the correct value through data oscillation.
After the learning rate is updated, the updated learning rate η(k+1) and the output error E_g(k) are used as parameters of the inner-layer PID update formulas, and the weights between the input layer and the hidden layer, the hidden-layer thresholds, the weights between the hidden layer and the output layer, and the output-layer thresholds are then updated.
The update formula for the weights between the input layer and the hidden layer:
[equation rendered as an image in the original; not recoverable from this extraction]
and, for the second iteration as an example:
[equation image]
The update formula for the hidden-layer thresholds:
[equation image]
N₃ is the number of output-layer neurons, N₃ = 3. For the second iteration as an example:
[equation image]
The update formula for the weights between the hidden layer and the output layer:
[equation image]
and for the second iteration as an example:
[equation image]
The update formula for the output-layer thresholds:
[equation image]
and for the second iteration as an example:
[equation image]
the negative gradient algorithm only considers the current state of the neural network, but does not consider the past state and the future state, and the weight threshold value adjustment data in the iteration process has large fluctuation and is easy to deviate from correct output. And updating the weight threshold value by using an inner-layer PID algorithm, and then updating the weight threshold value according to the error return value, so that the weight threshold value can be stably iterated, and the purposes of stronger system stability and higher output result accuracy are achieved.
D: Test the adjusted neural network algorithm. Extract features of the test sample as in step B, then run recognition as in step C; if the error is below a certain threshold, training is finished and the result is output.
According to the simulation results, the same data were processed with an ordinary neural network and with the neural network optimized by the double-layer PID structure. The four classes of characteristic signals are shown in FIG. 5; the second and third music characteristic signals are very similar, so the final recognition accuracy for the second or third class may be less than ideal.
The hidden-layer and output-layer weight thresholds of the ordinary neural network are shown in FIG. 6, and the convergence of the network with the double-layer PID structure in FIG. 7; the weight thresholds optimized by the double-layer PID converge more stably.
The learning rate after training of the ordinary network structure is shown in FIG. 8, and the learning rate of the network trained under double-layer PID optimization in FIG. 9; the convergence is evident, and compared with a fixed learning rate, continuously converging the learning rate according to the feedback results is clearly more principled.
The results of the ordinary network structure are shown in FIG. 10, and the output of the network optimized by the double-layer PID structure in FIG. 11; the optimized network's accuracy on the third class of music is far higher than the ordinary network's, while recognition of the other three classes is almost unaffected, so the overall accuracy is greatly improved.

Claims (3)

1. A BP neural network speech recognition method with double-layer PID optimization, characterized in that:
first, a three-layer BP neural network is built, consisting of an input layer, a hidden layer and an output layer; the weighting coefficients W_ij(k) and W_jg(k) between the layers and the activation-function parameters a_j(k) and b_g(k) are generated randomly, a learning rate η(k) is selected, and k is set to 1;
second, the FPGA platform extracts voice data from the speech to be recognized, the BP neural network analyzes the extracted data X_i(k) and computes the output O_g(k) of the output layer; with the expected output Y_g(k), the error E_g(k) is calculated and used, together with the proportional parameter K_P, integral parameter K_I and differential parameter K_D, for PID adjustment of η(k);
after the learning rate is updated, the updated learning rate η(k) and the output error E_g(k) are used as parameters of the inner-layer PID update formulas, and the weights between the input layer and the hidden layer, the hidden-layer thresholds, the weights between the hidden layer and the output layer, and the output-layer thresholds are updated, until the output error of the BP neural network output layer meets the requirement; finally it is judged whether the input voice is the set voice signal;
the method comprises the following steps:
A. Prepare for training initialization: initialize the neural network structure and acquire a sample set for training;
B. Perform feature extraction on the sample set to obtain a feature set;
C. Train the neural network with the feature set as the training set. During training, obtain the error E_g(k) from the expected output Y_g(k) and the actual output O_g(k); use the error E_g(k) as a parameter of the outer-layer PID algorithm to adjust the learning rate η(k); then use the error E_g(k) and the adjusted learning rate η(k+1) as parameters of the inner-layer PID algorithm to adjust the input-layer weights W_ij(k), the hidden-layer thresholds a_j(k), the output-layer weights W_jg(k) and the output-layer thresholds b_g(k);
D. Test the adjusted neural network: extract features of the test sample as in step B and recognize it with the neural network of step C to obtain the error E_g(k); when the error E_g(k) is below a certain threshold, training is finished, and it is judged whether the input voice signal is the set voice signal.
2. The method according to claim 1, characterized in that in said step C the error E_g(k) is used as the outer-layer PID algorithm parameter to adjust the learning rate, the outer-layer PID adjustment formula being:

η(k+1) = η(k) + (1/N₃) · Σ_{g=1}^{N₃} [ K_P·E_g(k) + K_I·Σ_{s=1}^{k} E_g(s) + K_D·E_gD(k) ]

where s = 1, 2, …, k, k is the current iteration number, N₃ is the number of output-layer neurons (N₃ = 3), E_g(k) = Y_g(k) − O_g(k), and E_gD(k) = E_g(k) − E_g(k−1).
3. The method according to claim 1, characterized in that in said step C the error E_g(k) and the adjusted learning rate η(k+1) are used as inner-layer PID algorithm parameters to adjust the weights W_ij(k) between the input layer and the hidden layer, the hidden-layer thresholds a_j(k), the weights W_jg(k) between the hidden layer and the output layer, and the output-layer thresholds b_g(k), the inner-layer PID adjustment formulas being as follows:
the update formula for the weights between the input layer and the hidden layer:
[equation rendered as an image in the original; not recoverable from this extraction]
the update formula for the hidden-layer thresholds:
[equation image]
N₃ being the number of output-layer neurons, N₃ = 3;
the update formula for the weights between the hidden layer and the output layer:
[equation image]
the update formula for the output-layer thresholds:
[equation image]
where s = 1, 2, …, k, k is the current iteration number, N₃ = 3 is the number of output-layer neurons, and E_gD(k) = E_g(k) − E_g(k−1).
CN202011455918.7A 2020-12-10 2020-12-10 BP neural network voice recognition method based on double-layer PID optimization Active CN112542161B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011455918.7A CN112542161B (en) 2020-12-10 2020-12-10 BP neural network voice recognition method based on double-layer PID optimization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011455918.7A CN112542161B (en) 2020-12-10 2020-12-10 BP neural network voice recognition method based on double-layer PID optimization

Publications (2)

Publication Number Publication Date
CN112542161A CN112542161A (en) 2021-03-23
CN112542161B true CN112542161B (en) 2022-08-12

Family

ID=75018429

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011455918.7A Active CN112542161B (en) 2020-12-10 2020-12-10 BP neural network voice recognition method based on double-layer PID optimization

Country Status (1)

Country Link
CN (1) CN112542161B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113267993A (en) * 2021-04-22 2021-08-17 上海大学 Network training method and device based on collaborative learning
CN113411456B (en) * 2021-06-29 2023-05-02 中国人民解放军63892部队 Voice quality assessment method and device based on voice recognition

Citations (6)

Publication number Priority date Publication date Assignee Title
CN101667012A (en) * 2008-09-03 2010-03-10 长春工程学院 Method for controlling reinforcement learning adaptive proportion integration differentiation-based distribution static synchronous compensator
US9053431B1 (en) * 2010-10-26 2015-06-09 Michael Lamport Commons Intelligent control with hierarchical stacked neural networks
CN108073985A (en) * 2016-11-14 2018-05-25 张素菁 An artificial-intelligence ultra-deep learning method for voice recognition
CN108445742A (en) * 2018-02-07 2018-08-24 广东工业大学 An intelligent PID control method for a gas suspension platform
CN109034390A (en) * 2018-08-07 2018-12-18 河北工业大学 A phase-angle-amplitude PID adaptive method for three-dimensional magnetic feature measurement based on a BP neural network
CN109991842A (en) * 2019-03-14 2019-07-09 合肥工业大学 Neural-network-based piano tuning method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109492883A (en) * 2018-10-18 2019-03-19 山东工业职业学院 An artificial-intelligence online efficiency analysis system and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on the Application of PID Control with Neural Network and Parameter Adjustment Method of PID Controller; Jiayu Liu, et al.; Association for Computing Machinery; 2018; pp. 72-76 *
Multivariable adaptive PID-type neural network controller and its design method; Cong Shuang, et al.; Information and Control; Oct. 2006; Vol. 35, No. 5, pp. 568-573 *

Also Published As

Publication number Publication date
CN112542161A (en) 2021-03-23

Similar Documents

Publication Publication Date Title
CN112542161B (en) BP neural network voice recognition method based on double-layer PID optimization
US20180046915A1 (en) Compression of deep neural networks with proper use of mask
CN109671423B (en) Non-parallel text-to-speech conversion method under limited training data
CN107729999A (en) Consider the deep neural network compression method of matrix correlation
CN111477247B (en) Speech countermeasure sample generation method based on GAN
CN107689224A (en) The deep neural network compression method of reasonable employment mask
CN112766399B (en) Self-adaptive neural network training method for image recognition
CN112330487A (en) Photovoltaic power generation short-term power prediction method
CN114403486A (en) Intelligent control method of airflow type cut-tobacco drier based on local peak value coding circulation network
CN107273971B (en) Feed-forward neural network structure self-organization method based on neuron significance
CN113177355A (en) Power load prediction method
CN108734116B (en) Face recognition method based on variable speed learning deep self-coding network
CN108446506B (en) Uncertain system modeling method based on interval feedback neural network
CN114596567A (en) Handwritten digit recognition method based on dynamic feedforward neural network structure and growth rate function
CN111444787B (en) Fully intelligent facial expression recognition method and system with gender constraint
CN112069876A (en) Handwriting recognition method based on adaptive differential gradient optimization
CN113807005A (en) Bearing residual life prediction method based on improved FPA-DBN
Seman et al. The optimization of artificial neural networks connection weights using genetic algorithms for isolated spoken Malay parliamentary speeches
CN111144052A (en) CNN-ARX model-based linear primary inverted pendulum system modeling method and model
Sevinov et al. Algorithms for Synthesis of Adaptive Control Systems Based on the Neural Network Approach
CN104134091A (en) Neural network training method
JP3039408B2 (en) Sound classification method
Gouvea et al. Diversity-based model reference for genetic algorithms in dynamic environment
Toprak et al. Searching Optimal Values of Identification and Controller Design Horizon Lengths, and Regularization Parameters in NARMA Based Online Learning Controller Design
Ghule Implementation of Optimal Hidden Neuronsusing a fuzzy Controller

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant