CN116074697A - Vehicle-mounted acoustic equalizer compensation method and system based on deep neural network - Google Patents

Vehicle-mounted acoustic equalizer compensation method and system based on deep neural network

Info

Publication number
CN116074697A
CN116074697A
Authority
CN
China
Prior art keywords
neural network
deep neural
hidden layer
equalizer
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310341219.7A
Other languages
Chinese (zh)
Other versions
CN116074697B (en)
Inventor
秦先清
肖浩
何志辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Chs Electronic Technology Co ltd
Original Assignee
Guangzhou Chs Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Chs Electronic Technology Co ltd filed Critical Guangzhou Chs Electronic Technology Co ltd
Priority to CN202310341219.7A priority Critical patent/CN116074697B/en
Publication of CN116074697A publication Critical patent/CN116074697A/en
Application granted granted Critical
Publication of CN116074697B publication Critical patent/CN116074697B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00 Signal processing covered by H04R, not provided for in its groups
    • H04R2430/03 Synergistic effects of band splitting and sub-band processing
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The invention discloses a vehicle-mounted acoustic equalizer compensation method and system based on a deep neural network. The method comprises the following steps: acquiring an in-vehicle test signal through a microphone, obtaining a secondary test signal by least-squares filtering, and calculating a compensation gain signal from the secondary test signal; training a deep neural network, setting up a deep neural network graphic equalizer, and inputting a source audio signal into the deep neural network graphic equalizer to obtain a fitted response curve. By training the deep neural network, the invention avoids the complex computations of a traditional audio equalizer, such as Fourier transforms and matrix inversion; when facing the complex environmental changes that occur while the automobile is driving, the audio equalizer can correct quickly, which effectively improves the in-car listening effect and hearing comfort and thereby the driving experience of users in the car.

Description

Vehicle-mounted acoustic equalizer compensation method and system based on deep neural network
Technical Field
The invention relates to the technical field of vehicle-mounted electronic equipment, in particular to a vehicle-mounted acoustic equalizer compensation method and system based on a deep neural network.
Background
With the rapid and steady growth of the domestic automobile market and the continuous development of electronic acoustics, consumers have ever higher requirements on the sound quality of in-car audio systems and continuously pursue better in-car listening effects, auditory comfort and driving experience. As one of the most important electronic components in a car, the in-car audio system is becoming increasingly popular and important in the market along with the deep integration of the electronic information industry and the automobile industry. To meet demands for intelligence, comfort and differentiation, the audio system plays an increasingly important role in automobiles and has gradually become standard equipment in mid- to high-end cars and even entry-level cars. Improving the sound quality of car audio is usually achieved by adjusting an equalizer, which is a core component of an audio processing system. The equalizer directly corrects and compensates the frequency response curve of the audio signal at the sound source, so that the frequency response heard by the human ear is flatter and more uniform, and the driver and passengers hear more vivid and realistic sound.
At present, design methods for vehicle-mounted audio equalizers almost always require complex computations such as Fourier transforms and matrix inversion, which greatly increases the computational cost. Moreover, the equalizer parameters are usually adjusted manually, relying on the experience of a tuning engineer, which is inefficient and leaves the accuracy subject to human error. It is therefore necessary to improve the existing methods and provide a correction method for the vehicle-mounted audio equalizer that is efficient, has low computational cost and is highly accurate.
Disclosure of Invention
In order to solve the technical problems, the invention provides a vehicle-mounted acoustic equalizer compensation method and system based on a deep neural network.
The first aspect of the invention provides a vehicle-mounted acoustic equalizer compensation method based on a deep neural network, which comprises the following steps:
the method comprises the steps of playing a white noise audio signal through a loudspeaker, obtaining an in-vehicle test signal through a microphone, and obtaining a secondary test signal through filtering by a least square method;
calculating a compensation gain signal from the secondary test signal and the white noise audio signal;
setting a frequency adjustment point on the compensation gain signal, and obtaining a secondary compensation gain signal by using an interpolation method;
training a deep neural network to obtain a deep neural network graphic equalizer, and fitting the superposition response with a secondary compensation gain signal;
and inputting a source audio signal into the deep neural network graphic equalizer to obtain a fitted response curve, the superposition response of the neural network graphic equalizer canceling the loss of the audio signal as it propagates in the vehicle.
In the scheme, frequency adjustment points are set at the center frequencies of a 1/3-octave graphic equalizer, and a smoother secondary compensation gain signal is obtained by interpolation at these adjustment points.

The 1/3-octave graphic equalizer divides the full audio band into a number of bands at 1/3-octave spacing and boosts or attenuates each band independently, without affecting the other frequency points, so that the frequency characteristic is finely adjusted to obtain the required frequency response curve. The second-order filter transfer function of a band in the graphic equalizer (Figure SMS_1) is given by (Figure SMS_2), where (Figure SMS_3) is the number of filter terms and the scaling factor (Figure SMS_4) is defined by (Figure SMS_5), in which (Figure SMS_6) is the linear peak gain. The numerator coefficients (Figure SMS_7) and (Figure SMS_8) are given by (Figure SMS_9) and (Figure SMS_10), where (Figure SMS_11) is the normalized center frequency in radians, (Figure SMS_12) is the center frequency and (Figure SMS_13) is the sampling frequency; the sampling rate used throughout is 192 kHz. The denominator coefficients (Figure SMS_14) and (Figure SMS_15) are given by (Figure SMS_16) and (Figure SMS_17), where (Figure SMS_18) is defined by (Figure SMS_19), in which (Figure SMS_20) is the filter bandwidth of each filter, defined as the frequency difference between adjacent bands, and (Figure SMS_21) is the linear gain at the bandwidth, given by (Figure SMS_22) in terms of (Figure SMS_23).

The 1/3-octave graphic equalizer has 31 bands and controls the signal gain in narrow bands over the whole audio range from 20 Hz to 20000 Hz; each band uses one second-order IIR filter, and all 31 filters are cascaded to form the overall transfer function of the graphic equalizer (Figure SMS_24).
In this solution, the graphic equalizer is provided with a gain factor (Figure SMS_25) before the filters; the gain factor (Figure SMS_26) is the product of the scaling coefficients of the filters (Figure SMS_27), specifically (Figure SMS_28).
In the scheme, 31 nodes are arranged on the input layer of the deep neural network, one for the gain of each of the 31 frequency bands of the 1/3-octave graphic equalizer, and the 1/3-octave graphic equalizer is realized with second-order IIR filters to obtain 31 optimized graphic equalizer gain values;
1500 input-output pairs with random input gains are selected as the training data set, where the input values are the command gains set by the user and the outputs are the optimized filter gains, between -12 dB and 12 dB, used in the filter design;
the numbers of nodes of the input layer, hidden layers and output layer in the network structure are determined, the input layer and output layer both having size 31 and the first and second hidden layers having sizes J=62 and K=31 respectively; the training data set is divided into a training set and a test set, and a function-fitting neural network is trained with a Bayesian regularized back-propagation algorithm to obtain the deep neural network graphic equalizer.
In this scheme, the deep neural network has two hidden layers. The first hidden layer has (Figure SMS_29) neurons, and its input is the scaled user-set command gains (Figure SMS_30), (Figure SMS_31), ..., (Figure SMS_32); the values of the input data lie between -1 and 1 and are scaled automatically during training using the mapminmax function. The (Figure SMS_33)-th neuron scales and sums the inputs with the weights (Figure SMS_34), (Figure SMS_35), ..., (Figure SMS_36), adds the bias term (Figure SMS_37) to the summation, and then computes the neuron output (Figure SMS_39) with the nonlinear function (Figure SMS_38), specifically as (Figure SMS_40), where (Figure SMS_45) are the scaled user-set command gains (Figure SMS_46), (Figure SMS_41), ..., (Figure SMS_43), (Figure SMS_47) is the index of the input-layer neuron node, (Figure SMS_48) is the input layer size, and (Figure SMS_42) is equal to (Figure SMS_44).

The input of each neuron in the second hidden layer is the output of every neuron in the first hidden layer; the output of the k-th neuron of the second hidden layer is computed as (Figure SMS_49), where (Figure SMS_50) is the index of the first-hidden-layer neuron node, (Figure SMS_51) is the first hidden layer size, and (Figure SMS_52) are the weights and bias terms in the second hidden layer, specifically (Figure SMS_53).

The m-th neuron of the output layer outputs the optimized gain of the m-th filter, computed as (Figure SMS_54), where (Figure SMS_55) is the index of the first-hidden-layer neuron node, (Figure SMS_56) is the first hidden layer size, and (Figure SMS_57) are the weights and bias terms in the output layer, specifically (Figure SMS_58).
In the scheme, the parameters of the deep neural network are rewritten in matrix form as follows:
(Figure SMS_59). This expression maps the user-set dB gain values (Figure SMS_60) to the scaled dB gain values (Figure SMS_61), where (Figure SMS_62) and (Figure SMS_63). Equation (Figure SMS_64) uses the weights (Figure SMS_65), the bias values (Figure SMS_66) and the nonlinear transfer function tanh to compute, from (Figure SMS_67), the output (Figure SMS_68) of the first hidden layer. Equation (Figure SMS_69) takes all the outputs (Figure SMS_70) of the first hidden layer and, using weights (Figure SMS_71) different from those of the first hidden layer, bias values (Figure SMS_72) and the nonlinear sigmoid function, computes the output (Figure SMS_73) of the second hidden layer, as in (Figure SMS_74) and (Figure SMS_75). The output (Figure SMS_76) of the second hidden layer is taken as the input of the output layer, weighted with the weights (Figure SMS_77), and the bias values defined by (Figure SMS_78) are added.
The output layer of the deep neural network outputs the optimized gain vector (Figure SMS_79), whose values lie between (Figure SMS_80); based on the training data targets (Figure SMS_81) and (Figure SMS_82), it is mapped back to dB values.
The second aspect of the present invention also provides a vehicle-mounted acoustic equalizer compensation system based on a deep neural network, the system comprising: the system comprises a memory, a processor, a loudspeaker module, an audio equalization evaluation module, a loudspeaker frequency response range identification module, an audio equalization algorithm module and a neural network module, wherein the memory comprises a vehicle-mounted acoustic equalizer compensation method program based on a deep neural network;
the loudspeaker module plays white noise audio signals on the vehicle-mounted sound equipment, and a microphone collects the audio signals;
the audio equalization evaluation module performs preliminary processing on the collected audio signals and evaluates whether the collected audio signals reach an expected equalization effect or not;
the speaker frequency response range identification module analyzes the frequency response range of the speaker, which is classified into high-pass, low-pass, mid-low-pass and full-band types;
the audio equalization algorithm module calculates the optimal filter bank parameters according to the set parameters, and the PC software transmits the equalization filter bank parameters to the neural network module of the vehicle-mounted loudspeaker system through a communication protocol;
the neural network module is used for receiving parameters of the equalizer and designing a neural network graphic equalizer.
Compared with the prior art, the invention has the following advantages:
the design of the vehicle-mounted audio equalizer with high quality requirement on tone quality almost all needs complex computation such as Fourier transformation and inversion matrix, and the like, so that the vehicle-mounted audio equalizer has low efficiency, high cost, low precision and long computation time; and the graphic equalizer designed by using the neural network has high efficiency, high precision and low calculation cost.
Through training the graphic equalizer that the degree of depth neural network designed when facing the complex environmental change in the car driving process, the audio equalizer can correct fast, improves the in-car listening effect and the hearing comfort level effectively to improve the experience of in-car user's driving.
Drawings
FIG. 1 shows a flow chart of a method for compensating a vehicle acoustic equalizer based on a deep neural network of the present invention;
FIG. 2 shows a block diagram of a single second-order filter of the present invention;
FIG. 3 shows a schematic diagram of the series composition of a graphic equalizer of the present invention;
FIG. 4 shows a network structure of a deep neural network in the present invention;
fig. 5 shows the structure of a single neuron in the deep neural network structure in the present invention.
Fig. 6 shows an overall block diagram of a vehicle audio equalizer compensation system based on a deep neural network of the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, in the case of no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those described herein, and therefore the scope of the present invention is not limited to the specific embodiments disclosed below.
Fig. 1 shows a flowchart of a vehicle-mounted acoustic equalizer compensation method based on a deep neural network.
As shown in fig. 1, a first aspect of the present invention provides a vehicle acoustic equalizer compensation method based on a deep neural network, including:
s102, playing a white noise audio signal through a loudspeaker, acquiring an in-vehicle test signal through a microphone, and filtering through a least square method to acquire a secondary test signal;
s104, calculating a compensation gain signal through the secondary test signal and the white noise audio signal;
s106, setting a frequency adjustment point on the compensation gain signal, and obtaining a second-level compensation gain signal by using an interpolation method;
s108, training the deep neural network to obtain a deep neural network graphic equalizer, and fitting the superposition response with the second-level compensation gain signal;
s110, inputting a source audio signal into the deep neural network graphic equalizer to obtain a fitted response curve, and canceling the loss of the audio signal propagating in the vehicle by the superposition response of the neural network graphic equalizer.
The white noise audio signal is played by the vehicle-mounted sound system; as it travels to the human ear inside the vehicle it suffers losses due to tire noise, road noise and the air medium along the propagation path. The lossy signal is then collected by the microphone and smoothed by least squares and similar processing to obtain a smoothed test signal; this smoothed test signal is compared with the white noise audio signal, and the gain difference is calculated to obtain the compensation gain signal.
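As an illustration of this measurement step, the following minimal Python sketch computes a compensation gain curve from a reference white-noise excitation and the microphone capture; the Welch-style spectrum estimate, the polynomial least-squares smoothing and all function names are illustrative assumptions rather than the exact processing of this disclosure.

import numpy as np

def band_magnitude_db(x, fs, n_fft=8192):
    """Averaged magnitude spectrum of a signal in dB (simple Welch-style estimate)."""
    x = np.asarray(x, dtype=float)
    n_seg = max(len(x) // n_fft, 1)
    win = np.hanning(n_fft)
    spec = np.zeros(n_fft // 2 + 1)
    for i in range(n_seg):
        seg = x[i * n_fft:(i + 1) * n_fft]
        if len(seg) < n_fft:
            seg = np.pad(seg, (0, n_fft - len(seg)))
        spec += np.abs(np.fft.rfft(seg * win))
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    return freqs, 20.0 * np.log10(spec / n_seg + 1e-12)

def least_squares_smooth(y, order=8):
    """Least-squares polynomial smoothing of a response curve (illustrative stand-in
    for the least-squares filtering named in the text)."""
    x = np.linspace(-1.0, 1.0, len(y))
    return np.polyval(np.polyfit(x, y, order), x)

def compensation_gain(white_noise, mic_capture, fs=192000):
    """Compensation gain in dB: reference white-noise level minus the smoothed
    in-cabin measurement (the 'secondary test signal')."""
    freqs, ref_db = band_magnitude_db(white_noise, fs)
    _, test_db = band_magnitude_db(mic_capture, fs)
    secondary_test_db = least_squares_smooth(test_db)
    return freqs, ref_db - secondary_test_db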
A frequency adjustment point is set at each center frequency of the 1/3-octave graphic equalizer, and a smoother secondary compensation gain signal is obtained by interpolation at these adjustment points. The 1/3-octave graphic equalizer divides the full audio band into a number of bands at 1/3-octave spacing and boosts or attenuates each band independently, without affecting the other frequency points, so that the frequency characteristic is finely adjusted to obtain the required frequency response curve. The second-order filter transfer function of a band in the graphic equalizer (Figure SMS_83) is given by (Figure SMS_84), where (Figure SMS_85) is the number of filter terms and the scaling factor (Figure SMS_86) is defined by (Figure SMS_87), in which (Figure SMS_88) is the linear peak gain. The numerator coefficients (Figure SMS_89) and (Figure SMS_90) are given by (Figure SMS_91) and (Figure SMS_92), where (Figure SMS_93) is the normalized center frequency in radians, (Figure SMS_94) is the center frequency and (Figure SMS_95) is the sampling frequency; the sampling rate used throughout is 192 kHz. The denominator coefficients (Figure SMS_96) and (Figure SMS_97) are given by (Figure SMS_98) and (Figure SMS_99), where (Figure SMS_100) is defined by (Figure SMS_101), in which (Figure SMS_102) is the filter bandwidth of each filter, defined as the frequency difference between adjacent bands, and (Figure SMS_103) is the linear gain at the bandwidth, given by (Figure SMS_104) in terms of (Figure SMS_105).

The 1/3-octave graphic equalizer has 31 bands and controls the signal gain in narrow bands over the whole audio range from 20 Hz to 20000 Hz. A third-octave table of bandwidths and center frequencies is obtained according to the design procedure of the 1/3-octave graphic equalizer, and the frequency adjustment points are set at the center frequencies of this table.
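Continuing the illustrative Python, the sketch below samples the compensation gain at 31 frequency adjustment points and interpolates between them to form the secondary compensation gain signal. The listed center frequencies are the usual ISO nominal 1/3-octave values (the text only says the centers come from the third-octave table), and the log-frequency linear interpolation is an assumption, since the text only specifies an interpolation method.

import numpy as np

# Nominal 1/3-octave band center frequencies in Hz, 31 bands from 20 Hz to 20 kHz.
CENTER_FREQS = np.array([
    20, 25, 31.5, 40, 50, 63, 80, 100, 125, 160, 200, 250, 315, 400, 500,
    630, 800, 1000, 1250, 1600, 2000, 2500, 3150, 4000, 5000, 6300, 8000,
    10000, 12500, 16000, 20000,
], dtype=float)

def secondary_compensation_gain(freqs, gain_db, eval_freqs=None):
    """Sample the compensation gain at the 31 frequency adjustment points and
    interpolate between them on a log-frequency axis to obtain the smoother
    secondary compensation gain signal."""
    if eval_freqs is None:
        eval_freqs = freqs
    adj_gain = np.interp(CENTER_FREQS, freqs, gain_db)     # gain at the adjustment points
    f = np.clip(eval_freqs, CENTER_FREQS[0], CENTER_FREQS[-1])
    return np.interp(np.log10(f), np.log10(CENTER_FREQS), adj_gain)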
Each frequency band uses one second-order IIR filter, with the structure of a single second-order filter shown in Fig. 2; all 31 filters are cascaded to form the overall transfer function of the graphic equalizer (Figure SMS_106). A gain factor (Figure SMS_107) is placed before the filters in the graphic equalizer; this gain factor (Figure SMS_108) is the product of the scaling coefficients of the filters (Figure SMS_109), specifically (Figure SMS_110).
In addition, a filter is inserted between adjacent filters to compensate for the insufficient compensation between two band-pass filters with adjacent center frequencies.
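As a hedged illustration of the band filters and their cascade, the sketch below uses the standard RBJ audio-EQ-cookbook second-order peaking filter as a stand-in for the coefficient formulas referenced above (which are given only as figures) and cascades 31 such sections; the Q value derived from a relative 1/3-octave bandwidth is likewise an assumption.

import numpy as np
from scipy.signal import sosfilt

FS = 192000.0  # sampling rate stated in the description

def peaking_band_sos(fc, gain_db, fs=FS):
    """One second-order IIR peaking section (RBJ cookbook form, used here as a
    stand-in for the coefficient formulas that the text gives only as figures)."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * fc / fs
    bw_ratio = 2.0 ** (1.0 / 6.0) - 2.0 ** (-1.0 / 6.0)   # relative 1/3-octave bandwidth
    Q = 1.0 / bw_ratio
    alpha = np.sin(w0) / (2.0 * Q)
    b = np.array([1.0 + alpha * A, -2.0 * np.cos(w0), 1.0 - alpha * A])
    a = np.array([1.0 + alpha / A, -2.0 * np.cos(w0), 1.0 - alpha / A])
    return np.hstack([b / a[0], a / a[0]])   # one normalized second-order section

def graphic_eq(x, band_gains_db, center_freqs, fs=FS):
    """Cascade of 31 second-order sections: the overall graphic-equalizer response
    is the product of the individual band responses."""
    sos = np.vstack([peaking_band_sos(fc, g, fs)
                     for fc, g in zip(center_freqs, band_gains_db)])
    return sosfilt(sos, x)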
It should be noted that the 1/3-octave graphic equalizer has 31 frequency bands, which means it has 31 user-adjustable command gains; 31 nodes are therefore arranged on the input layer of the deep neural network, one node for the gain of each frequency band, and the 1/3-octave graphic equalizer is realized with second-order IIR filters to obtain 31 optimized graphic equalizer gain values.
1500 input-output pairs with random input gains are selected as the training data set, where the input values are the command gains set by the user and the outputs are the optimized filter gains, between -12 dB and 12 dB, used in the filter design. The training data set is divided into two sets, a training set (70% of the whole data set) and a test set (the remaining 30%). The test set is not used for training itself; it is only used to monitor the performance of the model on unseen data during training and to set the stopping condition for training to convergence.
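A small data-preparation sketch in the same illustrative Python follows; the optimize_band_gains placeholder is hypothetical, since the text does not specify how the optimized target gains are produced.

import numpy as np

rng = np.random.default_rng(0)
N_BANDS, N_PAIRS = 31, 1500

def optimize_band_gains(command_gains_db):
    """Hypothetical placeholder for the offline optimization that turns user command
    gains into the optimized filter gains used as training targets; the text does not
    spell this step out. Returning the command gains unchanged only keeps the sketch
    runnable; a real pipeline substitutes its own optimizer here."""
    return np.asarray(command_gains_db, dtype=float)

# 1500 input/output pairs with random command gains between -12 dB and 12 dB.
X = rng.uniform(-12.0, 12.0, size=(N_PAIRS, N_BANDS))
Y = np.stack([optimize_band_gains(x) for x in X])

# 70 % training / 30 % test split; the test set only monitors performance on
# unseen data and sets the stopping condition.
n_train = int(0.7 * N_PAIRS)
X_train, Y_train = X[:n_train], Y[:n_train]
X_test, Y_test = X[n_train:], Y[n_train:]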
through initial training tests on the deep neural network, the node quantity of an input layer, a hidden layer and an output layer in the network structure is determined, the sizes of the input layer and the output layer are 31, the sizes of a first hidden layer and a second hidden layer are J=62 and K=31 respectively, the training data set is divided into a training set and a testing set, the training data set is trained by using a function fitting neural network through a Bayesian regularized back propagation algorithm, and a deep neural network graph equalizer is obtained. Figure 4 shows a deep neural network architecture,wherein the method comprises the steps of
in which (Figure SMS_111), (Figure SMS_112), ..., (Figure SMS_113) are the command gains set by the user and (Figure SMS_114), (Figure SMS_115), ..., (Figure SMS_116) are the optimized filter gains in dB.
the deep neural network is trained by using a Matlab fitting function, wherein the fitting function is a function fitting neural network capable of forming generalization on the input-output relationship of training data. The training algorithm selects a Bayesian regularization back propagation algorithm, and updates the weight and the bias value according to the Levenberg-Marquardt (LM) optimization, and the Bayesian regularization ensures that the obtained network has good generalization effect by minimizing the combination of the square error and the network weight.
Fig. 5 shows the structure of individual neurons in a deep neural network structure.
It should be noted that the deep neural network has two hidden layers. The first hidden layer has (Figure SMS_117) neurons, and its input is the scaled user-set command gains (Figure SMS_118), (Figure SMS_119), ..., (Figure SMS_120); the values of the input data lie between -1 and 1 and are scaled automatically during training using the mapminmax function. The (Figure SMS_121)-th neuron scales and sums the inputs with the weights (Figure SMS_122), (Figure SMS_123), ..., (Figure SMS_124), adds the bias term (Figure SMS_125) to the summation, and then computes the neuron output (Figure SMS_127) with the nonlinear function (Figure SMS_126), specifically as (Figure SMS_130), where (Figure SMS_132) are the scaled user-set command gains (Figure SMS_135), (Figure SMS_128), ..., (Figure SMS_131), (Figure SMS_134) is the index of the input-layer neuron node, (Figure SMS_136) is the input layer size, and (Figure SMS_129) is equal to (Figure SMS_133).

The input of each neuron in the second hidden layer is the output of every neuron in the first hidden layer; the output of the k-th neuron of the second hidden layer is computed as (Figure SMS_137), where (Figure SMS_138) is the index of the first-hidden-layer neuron node, (Figure SMS_139) is the first hidden layer size, and (Figure SMS_140) are the weights and bias terms in the second hidden layer, specifically (Figure SMS_141).

The m-th neuron of the output layer outputs the optimized gain of the m-th filter, computed as (Figure SMS_142), where (Figure SMS_143) is the index of the first-hidden-layer neuron node, (Figure SMS_144) is the first hidden layer size, and (Figure SMS_145) are the weights and bias terms in the output layer, specifically (Figure SMS_146).
The parameters of the deep neural network are rewritten in matrix form as follows:
(Figure SMS_147). This expression maps the user-set dB gain values (Figure SMS_148) to the scaled dB gain values (Figure SMS_149), where (Figure SMS_150) and (Figure SMS_151). Equation (Figure SMS_152) uses the weights (Figure SMS_153), the bias values (Figure SMS_154) and the nonlinear transfer function tanh to compute, from (Figure SMS_155), the output (Figure SMS_156) of the first hidden layer. Equation (Figure SMS_157) takes all the outputs (Figure SMS_158) of the first hidden layer and, using weights (Figure SMS_159) different from those of the first hidden layer, bias values (Figure SMS_160) and the nonlinear sigmoid function, computes the output (Figure SMS_161) of the second hidden layer, as in (Figure SMS_162) and (Figure SMS_163). The output (Figure SMS_164) of the second hidden layer is taken as the input of the output layer, weighted with the weights (Figure SMS_165), and the bias values defined by (Figure SMS_166) are added.
The output layer of the deep neural network outputs the optimized gain vector (Figure SMS_167), whose values lie between (Figure SMS_168); based on the training data targets (Figure SMS_169) and (Figure SMS_170), it is mapped back to dB values.
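The matrix-form forward pass described above can be sketched as follows. The weight matrices and bias vectors are the trained parameters of the network, and the fixed +/-12 dB scaling range used for the mapminmax-style mapping is an assumption.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dnn_geq_gains(user_gains_db, W1, b1, W2, b2, W3, b3, gain_range_db=12.0):
    """Matrix-form forward pass of the two-hidden-layer network: scaled input,
    tanh layer (J = 62), sigmoid layer (K = 31), linear output layer (31),
    then mapping back to dB."""
    x = np.asarray(user_gains_db, dtype=float) / gain_range_db   # scale to about [-1, 1]
    z1 = np.tanh(W1 @ x + b1)       # first hidden layer output
    z2 = sigmoid(W2 @ z1 + b2)      # second hidden layer output
    y = W3 @ z2 + b3                # scaled optimized gains from the output layer
    return y * gain_range_db        # map back to dB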
The white noise signal is input into the configured deep neural network graphic equalizer; the output signal of the deep neural network graphic equalizer contains the fitted superposition response, and after the output signal propagates in the vehicle, the superposition response of the neural network graphic equalizer cancels the loss along the propagation path, so that the audio signal reaching the human ear is not distorted compared with the input white noise audio signal.
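Purely as illustrative glue between the sketches above, and assuming the measurement arrays (white_noise, mic_capture), the source audio and the trained weights W1..b3 exist, the whole chain can be exercised as:

freqs, comp_db = compensation_gain(white_noise, mic_capture, fs=192000)
target_db = secondary_compensation_gain(freqs, comp_db, eval_freqs=CENTER_FREQS)
band_gains_db = dnn_geq_gains(target_db, W1, b1, W2, b2, W3, b3)
equalized = graphic_eq(source_audio, band_gains_db, CENTER_FREQS, fs=192000)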
Fig. 6 shows an overall block diagram of a vehicle audio equalizer compensation system based on a deep neural network of the present invention.
The second aspect of the present invention also provides a vehicle-mounted acoustic equalizer compensation system based on a deep neural network, the system comprising: the system comprises a memory, a processor, a loudspeaker module, an audio equalization evaluation module, a loudspeaker frequency response range identification module, an audio equalization algorithm module and a neural network module, wherein the memory comprises a vehicle-mounted acoustic equalizer compensation method program based on a deep neural network;
the loudspeaker module plays white noise audio signals on the vehicle-mounted sound equipment, and a microphone collects the audio signals;
the audio equalization evaluation module performs preliminary processing on the collected audio signals and evaluates whether the collected audio signals reach an expected equalization effect or not;
the speaker frequency response range identification module analyzes the frequency response range of the speaker, which is classified into high-pass, low-pass, mid-low-pass and full-band types; for example, the frequency response range of a tweeter lies above 1 kHz, so the frequency response range of a speaker is identified on the basis of the known speaker type. Notably, a vehicle audio system has different requirements for high and low frequencies: (1) frequency components below 30 Hz are negligible; (2) the 20 kHz portion must not be attenuated excessively;
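As a rough illustration of this classification, the following heuristic assigns a speaker type from its usable frequency range; the corner-frequency thresholds are illustrative assumptions, not values taken from this text.

def classify_speaker(f_low_hz, f_high_hz):
    """Rough speaker-type classification from the usable frequency response range."""
    if f_low_hz >= 1000.0:
        return "high-pass (tweeter)"
    if f_high_hz <= 500.0:
        return "low-pass (woofer/subwoofer)"
    if f_low_hz <= 40.0 and f_high_hz >= 16000.0:
        return "full-band"
    return "mid-low-pass (band-pass)"

print(classify_speaker(1000.0, 20000.0))   # a tweeter responding from 1 kHz upward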
the audio equalization algorithm module calculates the optimal filter bank parameters according to the set parameters, and the PC software transmits the equalization filter bank parameters to the neural network module of the vehicle-mounted loudspeaker system through a communication protocol;
the neural network module is used for receiving parameters of the equalizer and designing a neural network graphic equalizer.
The third aspect of the present invention also provides a computer readable storage medium, where the computer readable storage medium includes a vehicle-mounted acoustic equalizer compensation method program based on a deep neural network, where the method program is executed by a processor, to implement a method for compensating a vehicle-mounted acoustic equalizer based on a deep neural network as described in any one of the above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above described device embodiments are only illustrative, e.g. the division of the units is only one logical function division, and there may be other divisions in practice, such as: multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. In addition, the various components shown or discussed may be coupled or directly coupled or communicatively coupled to each other via some interface, whether indirectly coupled or communicatively coupled to devices or units, whether electrically, mechanically, or otherwise.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; can be located in one place or distributed to a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present invention may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the above method embodiments may be implemented by hardware related to program instructions, and the foregoing program may be stored in a computer readable storage medium, where the program, when executed, performs steps including the above method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk or an optical disk, or the like, which can store program codes.
Alternatively, the above-described integrated units of the present invention may be stored in a computer-readable storage medium if implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, the technical solutions of the embodiments of the present invention may be embodied in essence or a part contributing to the prior art in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, ROM, RAM, magnetic or optical disk, or other medium capable of storing program code.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. The vehicle-mounted acoustic equalizer compensation method based on the deep neural network is characterized by comprising the following steps of:
the method comprises the steps of playing a white noise audio signal through a loudspeaker, obtaining an in-vehicle test signal through a microphone, and obtaining a secondary test signal through filtering by a least square method;
calculating a compensation gain signal by the secondary test signal and the white noise audio signal;
setting a frequency adjustment point on the compensation gain signal, and obtaining a secondary compensation gain signal by using an interpolation method;
training a deep neural network to obtain a deep neural network graphic equalizer, and fitting the superposition response with a secondary compensation gain signal;
and inputting a source audio signal into the deep neural network graphic equalizer to obtain a fitted response curve, the superposition response of the neural network graphic equalizer canceling the loss of the audio signal as it propagates in the vehicle.
2. The vehicle-mounted acoustic equalizer compensation method based on the deep neural network according to claim 1, wherein frequency adjustment points are set at the center frequencies of the third-octave table of a 1/3-octave graphic equalizer, and a smoother secondary compensation gain signal is obtained by interpolation at these adjustment points;
the 1/3-octave graphic equalizer divides the full audio band into a number of bands at 1/3-octave spacing and boosts or attenuates each band independently, without affecting the other frequency points, so that the frequency characteristic is finely adjusted; the second-order filter transfer function of a band in the 1/3-octave graphic equalizer (Figure QLYQS_15) is given by (Figure QLYQS_19), where (Figure QLYQS_22) is the number of filter terms and the scaling factor (Figure QLYQS_4) is defined by (Figure QLYQS_7), in which (Figure QLYQS_13) is the linear peak gain; the numerator coefficients (Figure QLYQS_17) and (Figure QLYQS_2) are given by (Figure QLYQS_8), where (Figure QLYQS_12) is the normalized center frequency in radians, (Figure QLYQS_16) is the center frequency and (Figure QLYQS_3) is the sampling frequency, the sampling rate used throughout being 192 kHz; the denominator coefficients (Figure QLYQS_5) and (Figure QLYQS_9) are given by (Figure QLYQS_11), where (Figure QLYQS_18) is defined by (Figure QLYQS_21), in which (Figure QLYQS_24) is the filter bandwidth of each filter, defined as the frequency difference between adjacent bands, and (Figure QLYQS_26) is the linear gain at the bandwidth, given by (Figure QLYQS_1) in terms of (Figure QLYQS_6);
the 1/3-octave graphic equalizer has 31 bands and controls the signal gain in narrow bands over the audio range from 20 Hz to 20000 Hz; each frequency band uses one second-order IIR filter, and all 31 filters are cascaded to form the overall transfer function of the graphic equalizer (Figure QLYQS_10); the 1/3-octave graphic equalizer is provided with a gain factor (Figure QLYQS_14), and this gain factor (Figure QLYQS_20) is the product of the scaling coefficients of the filters (Figure QLYQS_23), specifically (Figure QLYQS_25).
3. The vehicle-mounted acoustic equalizer compensation method based on the deep neural network according to claim 1, wherein 31 nodes are arranged on the input layer of the deep neural network, one for the gain of each of the 31 frequency bands of the 1/3-octave graphic equalizer, and the 1/3-octave graphic equalizer is realized with second-order IIR filters to obtain 31 optimized graphic equalizer gain values;
1500 input-output pairs with random input gains are selected as the training data set, where the input values are the command gains set by the user and the outputs are the optimized filter gains, between -12 dB and 12 dB, used in the filter design;
the numbers of nodes of the input layer, hidden layers and output layer in the network structure are determined, the input layer and output layer both having size 31 and the first and second hidden layers having sizes J=62 and K=31 respectively; the training data set is divided into a training set and a test set, and a function-fitting neural network is trained with a Bayesian regularized back-propagation algorithm to obtain the deep neural network graphic equalizer.
4. The vehicle-mounted acoustic equalizer compensation method based on the deep neural network according to claim 1, wherein the deep neural network has two hidden layers, the first hidden layer having (Figure QLYQS_27) neurons whose input is the scaled user-set command gains (Figure QLYQS_28), (Figure QLYQS_29), ..., (Figure QLYQS_30), the values of the input data lying between -1 and 1;
the input data are scaled automatically during training using the mapminmax function; the (Figure QLYQS_39)-th neuron scales and sums the inputs with the weights (Figure QLYQS_32), (Figure QLYQS_35), ..., (Figure QLYQS_38), adds the bias term (Figure QLYQS_42) to the summation, and then computes the neuron output (Figure QLYQS_46) with the nonlinear sigmoid function (Figure QLYQS_43), specifically as (Figure QLYQS_41), where (Figure QLYQS_45) are the scaled user-set command gains (Figure QLYQS_31), (Figure QLYQS_37), ..., (Figure QLYQS_34), (Figure QLYQS_36) is the index of the input-layer neuron node, (Figure QLYQS_40) is the input layer size, and (Figure QLYQS_44) is equal to (Figure QLYQS_33);
the input of each neuron in the second hidden layer is the output of every neuron in the first hidden layer; the output of the k-th neuron of the second hidden layer is computed as (Figure QLYQS_48), where (Figure QLYQS_51) is the index of the first-hidden-layer neuron node, (Figure QLYQS_54) is the first hidden layer size, and (Figure QLYQS_49) are the weights and bias terms in the second hidden layer, specifically (Figure QLYQS_50);
the m-th neuron of the output layer outputs the optimized gain of the m-th filter, computed as (Figure QLYQS_53), where (Figure QLYQS_56) is the index of the first-hidden-layer neuron node, (Figure QLYQS_47) is the first hidden layer size, and (Figure QLYQS_52) are the weights and bias terms in the output layer, specifically (Figure QLYQS_55).
5. The vehicle-mounted acoustic equalizer compensation method based on the deep neural network according to claim 4, wherein the parameters of the deep neural network are rewritten in matrix form as (Figure QLYQS_65); this expression maps the user-set dB gain values (Figure QLYQS_59) to the scaled dB gain values (Figure QLYQS_61), where (Figure QLYQS_60) and (Figure QLYQS_64); equation (Figure QLYQS_68) uses the weights (Figure QLYQS_72), the bias values (Figure QLYQS_66) and the nonlinear transfer function tanh to compute, from (Figure QLYQS_70), the output (Figure QLYQS_57) of the first hidden layer; equation (Figure QLYQS_62) takes all the outputs (Figure QLYQS_69) of the first hidden layer and, using weights (Figure QLYQS_73) different from those of the first hidden layer, bias values (Figure QLYQS_74) and the nonlinear sigmoid function, computes the output (Figure QLYQS_75) of the second hidden layer, as in (Figure QLYQS_58); the output (Figure QLYQS_63) of the second hidden layer is taken as the input of the output layer, weighted with the weights (Figure QLYQS_67), and the bias values defined by (Figure QLYQS_71) are added;
the output layer of the deep neural network outputs the optimized gain vector (Figure QLYQS_76), whose values lie between (Figure QLYQS_77); based on the training data targets (Figure QLYQS_78) and (Figure QLYQS_79), it is mapped back to dB values.
6. A vehicle acoustic equalizer compensation system based on a deep neural network, the system comprising: the system comprises a memory, a processor, a loudspeaker module, an audio equalization evaluation module, a loudspeaker frequency response range identification module, an audio equalization algorithm module and a neural network module, wherein the memory comprises a vehicle-mounted acoustic equalizer compensation method program based on a deep neural network;
the loudspeaker module plays white noise audio signals on the vehicle-mounted sound equipment, and a microphone collects the audio signals;
the audio equalization evaluation module performs preliminary processing on the collected audio signals and evaluates whether the collected audio signals reach an expected equalization effect or not;
the speaker frequency response range identification module analyzes the frequency response range of the speaker, which is classified into high-pass, low-pass, mid-low-pass and full-band types;
the audio equalization algorithm module calculates the optimal filter bank parameters according to the set parameters, and the PC software transmits the equalization filter bank parameters to the neural network module of the vehicle-mounted loudspeaker system through a communication protocol;
the neural network module is used for receiving parameters of the equalizer and designing a neural network graphic equalizer.
7. The vehicle-mounted acoustic equalizer compensation system based on the deep neural network according to claim 6, wherein 31 nodes are arranged on the input layer of the deep neural network, one for the gain of each of the 31 frequency bands of the 1/3-octave graphic equalizer, and the 1/3-octave graphic equalizer is realized with second-order IIR filters to obtain 31 optimized graphic equalizer gain values;
1500 input-output pairs with random input gains are selected as the training data set, where the input values are the command gains set by the user and the outputs are the optimized filter gains, between -12 dB and 12 dB, used in the filter design;
the numbers of nodes of the input layer, hidden layers and output layer in the network structure are determined, the input layer and output layer both having size 31 and the first and second hidden layers having sizes J=62 and K=31 respectively; the training data set is divided into a training set and a test set, and a function-fitting neural network is trained with a Bayesian regularized back-propagation algorithm to obtain the deep neural network graphic equalizer.
8. The vehicle-mounted acoustic equalizer compensation system based on the deep neural network according to claim 6, wherein the parameters of the deep neural network are rewritten in matrix form as (Figure QLYQS_90); this expression maps the user-set dB gain values (Figure QLYQS_81) to the scaled dB gain values (Figure QLYQS_86), where (Figure QLYQS_83) and (Figure QLYQS_84); equation (Figure QLYQS_88) uses the weights (Figure QLYQS_92), the bias values (Figure QLYQS_89) and the nonlinear transfer function tanh to compute, from (Figure QLYQS_93), the output (Figure QLYQS_80) of the first hidden layer; equation (Figure QLYQS_85) takes all the outputs (Figure QLYQS_94) of the first hidden layer and, using weights (Figure QLYQS_97) different from those of the first hidden layer, bias values (Figure QLYQS_96) and the nonlinear sigmoid function, computes the output (Figure QLYQS_98) of the second hidden layer, as in (Figure QLYQS_82); the output (Figure QLYQS_87) of the second hidden layer is taken as the input of the output layer, weighted with the weights (Figure QLYQS_91), and the bias values defined by (Figure QLYQS_95) are added;
the output layer of the deep neural network outputs the optimized gain vector (Figure QLYQS_99), whose values lie between (Figure QLYQS_100); based on the training data targets (Figure QLYQS_101) and (Figure QLYQS_102), it is mapped back to dB values.
CN202310341219.7A 2023-04-03 2023-04-03 Vehicle-mounted acoustic equalizer compensation method and system based on deep neural network Active CN116074697B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310341219.7A CN116074697B (en) 2023-04-03 2023-04-03 Vehicle-mounted acoustic equalizer compensation method and system based on deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310341219.7A CN116074697B (en) 2023-04-03 2023-04-03 Vehicle-mounted acoustic equalizer compensation method and system based on deep neural network

Publications (2)

Publication Number Publication Date
CN116074697A (en) 2023-05-05
CN116074697B (en) 2023-07-18

Family

ID=86171794

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310341219.7A Active CN116074697B (en) 2023-04-03 2023-04-03 Vehicle-mounted acoustic equalizer compensation method and system based on deep neural network

Country Status (1)

Country Link
CN (1) CN116074697B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5737485A (en) * 1995-03-07 1998-04-07 Rutgers The State University Of New Jersey Method and apparatus including microphone arrays and neural networks for speech/speaker recognition systems
US20050041731A1 (en) * 2001-09-07 2005-02-24 Azizi Seyed Ali Equalizer system
CN110325929A (en) * 2016-12-07 2019-10-11 阿瑞路资讯安全科技股份有限公司 System and method for detecting the waveform analysis of cable network variation
CN110913305A (en) * 2019-12-05 2020-03-24 广东技术师范大学 Self-adaptive equalizer compensation method for vehicle-mounted sound equipment
CN112767964A (en) * 2019-10-21 2021-05-07 索尼公司 Electronic apparatus, method and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5737485A (en) * 1995-03-07 1998-04-07 Rutgers The State University Of New Jersey Method and apparatus including microphone arrays and neural networks for speech/speaker recognition systems
US20050041731A1 (en) * 2001-09-07 2005-02-24 Azizi Seyed Ali Equalizer system
CN110325929A (en) * 2016-12-07 2019-10-11 阿瑞路资讯安全科技股份有限公司 System and method for detecting the waveform analysis of cable network variation
CN112767964A (en) * 2019-10-21 2021-05-07 索尼公司 Electronic apparatus, method and storage medium
JP2021076831A (en) * 2019-10-21 2021-05-20 ソニーグループ株式会社 Electronic apparatus, method and computer program
CN110913305A (en) * 2019-12-05 2020-03-24 广东技术师范大学 Self-adaptive equalizer compensation method for vehicle-mounted sound equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杨俊杰 (Yang Junjie): "自适应车载音频均衡算法研究及实现" (Research and Implementation of an Adaptive Vehicle-Mounted Audio Equalization Algorithm), Master's thesis, 广东师范大学, pages 2-4 *

Also Published As

Publication number Publication date
CN116074697B (en) 2023-07-18

Similar Documents

Publication Publication Date Title
US20200251119A1 (en) Method and device for processing audio signal using audio filter having non-linear characterstics
CN101483414B (en) Voice intelligibility enhancement system and voice intelligibility enhancement method
CN112562627B (en) Feedforward filter design method, active noise reduction method, system and electronic equipment
CN112581973B (en) Voice enhancement method and system
CN108900943A (en) A kind of scene adaptive active denoising method and earphone
US20160066087A1 (en) Joint noise suppression and acoustic echo cancellation
US20230186892A1 (en) Managing Characteristics of Active Noise Reduction
US8761410B1 (en) Systems and methods for multi-channel dereverberation
CN106063293B (en) The method and system of automatic sound equilibrium
US10542346B2 (en) Noise estimation for dynamic sound adjustment
CN108540895B (en) Intelligent equalization device design method and noise cancelling headphone with intelligent equalization device
CN111971975B (en) Active noise reduction method, system, electronic equipment and chip
US20200396539A1 (en) Speaker emulation of a microphone for wind detection
CN108810746A (en) A kind of sound quality optimization method, feedback noise reduction system, earphone and storage medium
CN112562624B (en) Active noise reduction filter design method, noise reduction method, system and electronic equipment
CN114677997A (en) Real vehicle active noise reduction method and system based on acceleration working condition
KR20240007168A (en) Optimizing speech in noisy environments
CN101727907A (en) Audio-frequency processing circuit, audio-frequency processing device and method
CN116074697B (en) Vehicle-mounted acoustic equalizer compensation method and system based on deep neural network
CN110708651B (en) Hearing aid squeal detection and suppression method and device based on segmented trapped wave
CN113763984A (en) Parameterized noise elimination system for distributed multiple speakers
CN117292698B (en) Processing method and device for vehicle-mounted audio data and electronic equipment
CN112687285B (en) Echo cancellation method and device
EP4319192A1 (en) Echo suppressing device, echo suppressing method, and echo suppressing program
US11330376B1 (en) Hearing device with multiple delay paths

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant