CN112298031A - Electric vehicle active sounding method and system based on gear shifting strategy migration (Google Patents)
- Publication number: CN112298031A
- Application number: CN202011179860.8A
- Authority: CN (China)
- Prior art keywords: vehicle, GRU, active, sound, RNN model
- Legal status: Granted
Classifications

- B60Q5/00 Arrangement or adaptation of acoustic signal devices
  - B60Q5/005 Arrangement or adaptation of acoustic signal devices automatically actuated
- B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
  - B60W50/08 Interaction between the driver and the control system
  - B60W2050/0001 Details of the control system
    - B60W2050/0043 Signal treatments, identification of variables or parameters, parameter estimation or state estimation
Abstract
The invention discloses an electric vehicle active sounding method and system based on shift strategy migration. The method comprises: collecting vehicle body parameters of the electric vehicle; feeding the vehicle body parameters into a pre-trained GRU-RNN model, which predicts the number of revolutions and the sound pressure level; and synthesizing a corresponding active sound source from the number of revolutions, adjusting its amplitude according to the sound pressure level, and playing it through an in-vehicle loudspeaker. The invention can reproduce inside the electric vehicle the realistic cabin sound field of a traditional internal-combustion-engine vehicle model and thereby provide greater driving pleasure.
Description
Technical Field
The invention belongs to the field of active sound generation for electric vehicles, and relates to an electric vehicle active sounding method and system based on shift strategy migration.
Background
With increasingly strict environmental protection standards, new energy electric vehicles are gradually becoming mainstream, and with the development of acoustic technology the acoustic environment inside electric vehicles keeps getting quieter. At the same time, more and more owners want the vehicle to produce sound that matches its motion state, so that they receive sound feedback and a better driving experience.
Because the full, deep sound of the traditional internal combustion engine has long been widely accepted by the public, current mainstream research starts from the sound quality of the traditional internal combustion engine and synthesizes sound of a similar style on the electric vehicle. However, besides sound quality, the gear shifting strategy is also important: with a well-designed shifting strategy, a more engaging sound can be produced.
Unlike a conventional fuel vehicle, an electric vehicle has no gear shifting tied to vehicle speed. In view of this, there is a need for an electric vehicle active sounding method and system that take a virtual gear shifting strategy into account.
Disclosure of Invention
The invention aims to provide an electric vehicle active sounding method and system based on shift strategy migration that can reproduce in the vehicle the realistic sound field of a traditional internal-combustion-engine vehicle model and provide greater driving pleasure.
To achieve this purpose, the invention adopts the following technical scheme:
An electric vehicle active sounding method based on shift strategy migration comprises the following steps:
collecting vehicle body parameters of the electric vehicle;
feeding the vehicle body parameters into a pre-trained GRU-RNN model, and predicting the number of revolutions and the sound pressure level with the GRU-RNN model; and
synthesizing a corresponding active sound source from the number of revolutions, adjusting the amplitude of the active sound source according to the sound pressure level, and playing the result through an in-vehicle loudspeaker.
In one embodiment, the GRU-RNN model is pre-trained by:
a. selecting a traditional internal-combustion-engine vehicle model to be learned;
b. acquiring, for the selected internal-combustion-engine vehicle model, vehicle body data under various working conditions and the sound pressure level at the occupants' ears in the vehicle, wherein the vehicle body data include the number of revolutions;
c. using the vehicle body data other than the number of revolutions, together with a mode that identifies the traditional vehicle model, as input signals, and using the number of revolutions and the sound pressure level as output signals, to train the GRU-RNN model.
In one embodiment, the GRU-RNN model has five input layer units, corresponding respectively to vehicle speed, throttle, torque, gear and mode. The mode is used to distinguish the simulated gear shifting strategies of different vehicle models, or different shifting styles of the same vehicle model, and is a parameter set by the user of the electric vehicle active sounding system.
In one embodiment, the GRU-RNN model has at least three hidden layers, the activation function of which is relu.
In one embodiment, the GRU-RNN model has two output layer units corresponding to the rotation speed and the sound pressure level, respectively, and the activation function of the output layer is sigmoid.
In one embodiment, the vehicle body parameters comprise a vehicle speed, an accelerator, a torque and a gear of the electric vehicle, and the vehicle speed, the accelerator and the torque are respectively normalized and then sent to a pre-trained GRU-RNN model.
In an embodiment, the vehicle body data further includes a vehicle speed, an accelerator, a torque, and a gear of a conventional internal combustion engine vehicle, and the vehicle speed, the accelerator, the torque, the gear, the number of revolutions, and the sound pressure level are respectively normalized and then sent to the GRU-RNN model for training.
In an embodiment, in the step c, the GRU-RNN model is trained by a BP algorithm until convergence.
In an embodiment, the step of synthesizing the corresponding active sound source from the number of revolutions specifically comprises (a sketch of steps S2 to S5 follows this list):
S1. obtaining the amplitudes and phases of the 1st, 2nd, ..., Rth order harmonics at a plurality of frequency points of a sound source segment, and storing them as a frequency-amplitude-phase parameter table;
S2. converting the number of revolutions into the corresponding fundamental frequency f;
S3. searching for the position of the fundamental frequency f in the frequency-amplitude-phase parameter table;
S4. interpolating, according to the position of the fundamental frequency f, the amplitudes and phases of all order harmonics at that fundamental frequency point;
S5. synthesizing the in-vehicle active sounding signal from the fundamental frequency f and the amplitude and phase of each order harmonic.
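As an illustration of steps S2 to S5, a minimal Python sketch is given below. The assumption that the fundamental frequency equals the first engine order (f = rpm / 60), the table layout, the linear interpolation and the function name are illustrative choices, not requirements of the patent.

```python
import numpy as np

def synthesize_active_sound(rpm, table_f, table_amp, table_phase, fs=44100, dur=0.05):
    """Steps S2-S5: look up the fundamental in the parameter table,
    interpolate the harmonic amplitudes/phases, and synthesize the signal.

    table_f:     (M,)   fundamental frequencies stored in the table (Hz, ascending)
    table_amp:   (M, R) amplitudes of harmonics 1..R at each table frequency
    table_phase: (M, R) phases of harmonics 1..R at each table frequency
    """
    # S2: convert the number of revolutions into a fundamental frequency (assumed first order)
    f0 = rpm / 60.0

    # S3 + S4: locate f0 between table entries and interpolate each harmonic's amplitude/phase
    R = table_amp.shape[1]
    amps = np.array([np.interp(f0, table_f, table_amp[:, k]) for k in range(R)])
    phases = np.array([np.interp(f0, table_f, table_phase[:, k]) for k in range(R)])

    # S5: additive synthesis of the harmonics of f0
    t = np.arange(int(fs * dur)) / fs
    k = np.arange(1, R + 1)[:, None]                     # harmonic orders 1..R
    return (amps[:, None] * np.sin(2 * np.pi * k * f0 * t + phases[:, None])).sum(axis=0)
```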
Preferably, the step S1 specifically includes:
s11, acquiring a section of sound of pure acceleration or pure deceleration of the engine as a sound source, and converting the sound source into a spectrogram;
s12, acquiring a candidate frequency point set of a frame of data in the spectrogram and a corresponding spectrum amplitude;
s13, selecting a candidate frequency point set of next frame data and a corresponding frequency spectrum amplitude according to the acquired candidate frequency point of the previous frame data;
repeating the step S13 until the candidate frequency point sets of all frames and the corresponding spectrum amplitudes are obtained;
S14. connecting, for each index j, the frequency points of all frames that share the same j value across different i values into a line, multiplying each resulting line by the corresponding spectral amplitudes, and selecting the line with the largest product as the fundamental-frequency line, wherein j is the index of a frequency point within its candidate set and i is the frame number;
S15. obtaining, from the spectrogram, the amplitudes and phases of the 1st, 2nd, ..., Rth order harmonics of the fundamental-frequency line.
Preferably, the step S12 further includes:
S12-1. intercepting the first frame of data x(n), where n is the discrete time index, n = 0, ..., L_1 - 1, and L_1 is the preset length of the first frame;
S12-2. computing the autocorrelation function R(m) of the intercepted data according to formula (1), where m is the lag index in the autocorrelation domain;
selecting, within the specified preset search interval of R(m), the lags corresponding to the N largest peaks, where N is the preset number of candidate frequency points, converting the peak lags into the corresponding frequency points, and computing the spectral amplitudes of the candidate frequency points by discrete Fourier transform;
S12-3. computing the SHC function of the intercepted data according to formula (2),
where f is frequency, X(f) is the discrete Fourier transform of x(n), r = 1, ..., H, H is the preset total number of harmonics, f' = -L_f, ..., L_f, and L_f is a preset frequency range; selecting, within the specified preset search interval of SHC(f), the frequencies corresponding to the N largest peaks, and obtaining the spectral amplitudes of these candidate frequency points by discrete Fourier transform;
S12-4. shortening the first frame of data x(n) to half its length and, on that basis, obtaining the corresponding candidate frequency points and spectral amplitudes according to steps S12-2 and S12-3;
S12-5. merging the candidate frequency points obtained in steps S12-2 to S12-4: if the ratio of two candidate frequency points is less than a preset value delta, they are merged, the new candidate frequency point being the mean of the two and its spectral amplitude the larger of their two amplitudes; sorting the merged candidate frequency points by spectral amplitude in descending order and, if more than 2N remain, removing the candidates with the smallest spectral amplitude until exactly 2N remain; and assigning each frequency point a counter initialized to a preset integer C_max.
Preferably, the step S13 further includes:
S13-1. on the basis of the previous frame, advancing by FRAME_LEN samples to the start position of the current frame, where FRAME_LEN is a preset parameter, and setting the current frame length to L_i = alpha * T_max according to the candidate frequency points of the previous frame, where i is the current frame index, T_max is the period corresponding to the lowest candidate frequency point of the previous frame, and alpha is a preset multiple;
S13-2. acquiring the frequency point set F_c of the current frame and the corresponding spectral amplitudes;
S13-3. for each frequency in the previous frame's candidate set F_{i-1} in turn, selecting the closest point in F_c whose frequency ratio to it is less than the preset value delta, and using these points as the current frame's candidate frequency point set F_i;
S13-4. if F_c contains no frequency point corresponding to some frequency point f_{i-1,j} of F_{i-1}, where j is the index within the set, searching the SHC values within a preset interval around f_{i-1,j}, selecting the frequency with the maximum SHC value as the new frequency point f_{i,j}, recording the corresponding spectral amplitude, and decrementing the counter of that point by 1;
S13-5. checking the counter of each candidate frequency and, if a counter has reached 0, replacing that frequency; after replacement, resetting the counter to the preset initial value C_max;
S13-6. acquiring the current frame's candidate frequency point set F_i, the corresponding spectral amplitudes and the corresponding counter values.
Another technical scheme adopted by the invention is as follows:
an electric vehicle active sound production system based on shift strategy migration, comprising:
a vehicle body parameter acquisition module, used for acquiring vehicle body parameters of the electric vehicle;
a prediction model, used for predicting the number of revolutions and the sound pressure level from the vehicle body parameters;
an active sound source synthesis module, used for synthesizing a corresponding active sound source from the number of revolutions and adjusting the amplitude of the active sound source according to the sound pressure level; and
a loudspeaker, used for playing the active sound source generated by the active sound source synthesis module.
In one embodiment, the prediction model is a GRU-RNN model, which is pre-trained by the following steps:
a. selecting a traditional internal combustion engine model to be learned;
b. acquiring, for the selected internal-combustion-engine vehicle model, vehicle body data under various working conditions and the sound pressure level at the occupants' ears in the vehicle, wherein the vehicle body data include the number of revolutions;
c. using the vehicle body data other than the number of revolutions, together with a mode that identifies the traditional vehicle model, as input signals, and using the number of revolutions and the sound pressure level as output signals, to train the GRU-RNN model.
In one embodiment, the body parameter acquisition module includes a CAN bus in communication with the electric vehicle.
Compared with the prior art, the scheme adopted by the invention has the following advantages:
The invention uses an RNN deep learning network based on the GRU structure, which can accurately estimate the current number of revolutions from the vehicle body parameters. Because the deep network is highly nonlinear, the corresponding gear shifting strategy can be learned directly from data, so no approximate model needs to be built by hand, which reduces the engineering effort. The gear shifting strategy is decoupled from the sound style, so the two can be designed independently with a clear division of labour; as a result, the same electric vehicle can combine the sound style of traditional internal-combustion-engine model A with the gear shifting strategy of traditional internal-combustion-engine model B.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on the drawings without creative efforts.
FIG. 1 is a schematic diagram of a GRU-RNN model employed in accordance with an embodiment of the present invention;
FIG. 2 is a schematic diagram of a GRU structure of the GRU-RNN model shown in FIG. 1;
FIG. 3 is a short-time Fourier transform spectrogram;
FIG. 4 is a graph of the tracked multiline frequencies;
fig. 5 is a block diagram of an active sound system of an electric vehicle according to an embodiment of the present invention.
Wherein:
1. vehicle body parameter acquisition module; 2. prediction model; 3. active sound source synthesis module; 31. frequency-phase parameter module; 32. interpolation module; 33. synthesis module; 4. sound field control module; 5. drive power amplifier; 6. loudspeaker.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings so that the advantages and features of the invention may be more readily understood by those skilled in the art. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Training GRU-RNN model
1. Selecting a benchmark traditional internal combustion engine vehicle model.
2. Collecting, through the CAN bus, vehicle body data under various working conditions: vehicle speed, throttle, torque, gear and number of revolutions, while recording the sound pressure level at the driver's ear with an acoustic measurement device.
3. Normalizing the data: the vehicle speed is normalized by the vehicle's designed maximum speed, the throttle by the maximum throttle value, the torque by the maximum torque, and the number of revolutions by the maximum number of revolutions; the sound pressure level (in dB) is normalized by the maximum sound pressure level. For gear, only P, N, R and D are recorded (sub-divided D gears are all treated as D), encoded as P = 0, N = 1, R = 2, D = 3. (A data-preparation sketch follows this list.)
4. Taking vehicle speed, throttle, torque and gear as input signals, plus an additional mode input. The mode signal allows several benchmark vehicle models to share one set of parameters: different non-negative integer mode codes represent different vehicle models, and if there is only one benchmark model the mode input is a constant 0. The number of revolutions and the sound pressure level are the output signals. The collected data are fed into the prediction model, which is trained with the BP algorithm until convergence.
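A minimal data-preparation sketch for steps 3 and 4, in Python. The maximum values and the P/N/R/D encoding follow the description above; the variable names and array layout are illustrative assumptions.

```python
import numpy as np

GEAR_CODE = {"P": 0, "N": 1, "R": 2, "D": 3}   # sub-divided D gears are all treated as D

def normalize_sample(speed, throttle, torque, gear, mode, rpm, spl_db,
                     max_speed, max_throttle, max_torque, max_rpm, max_spl_db):
    """Build one normalized (input, target) pair for GRU-RNN training."""
    x = np.array([
        speed / max_speed,        # vehicle speed, normalized by the designed maximum speed
        throttle / max_throttle,  # throttle, normalized by the maximum throttle value
        torque / max_torque,      # torque, normalized by the maximum torque
        GEAR_CODE[gear],          # gear encoded as P=0, N=1, R=2, D=3
        mode,                     # non-negative integer identifying the benchmark vehicle model
    ], dtype=np.float32)
    y = np.array([
        rpm / max_rpm,            # number of revolutions, normalized by its maximum
        spl_db / max_spl_db,      # sound pressure level in dB, normalized by its maximum
    ], dtype=np.float32)
    return x, y
```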
Specifically, the prediction model is an RNN model based on the GRU structure (a GRU-RNN model). In practice it is observed that the shift strategy depends not only on the current input parameters but also on a temporal state, so the GRU-RNN model is used to fit this nonlinear process.
As shown in fig. 1, the RNN has 5 input layer units, corresponding respectively to vehicle speed, throttle, torque, gear and mode. There are three hidden layers, each with N units, where N is a preset value, and the hidden layer activation function is relu. There are 2 output layer units, corresponding to the number of revolutions and the sound pressure level, and the output layer activation function is sigmoid. The activation functions are defined as relu: y = max(0, x), and sigmoid: y = 1 / (1 + e^(-x)).
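A minimal sketch of this architecture with tf.keras is shown below. Only the layer counts and activation functions come from the description; the value of N, the optimizer, the loss and the sequence handling are illustrative assumptions.

```python
import tensorflow as tf

N = 64  # preset number of units per hidden layer (assumed value)

def build_gru_rnn():
    """5 inputs -> 3 GRU hidden layers (relu) -> 2 sigmoid outputs (revolutions, SPL)."""
    model = tf.keras.Sequential([
        tf.keras.layers.GRU(N, activation="relu", return_sequences=True,
                            input_shape=(None, 5)),       # [speed, throttle, torque, gear, mode] per step
        tf.keras.layers.GRU(N, activation="relu", return_sequences=True),
        tf.keras.layers.GRU(N, activation="relu", return_sequences=True),
        tf.keras.layers.Dense(2, activation="sigmoid"),   # normalized [number of revolutions, SPL]
    ])
    model.compile(optimizer="adam", loss="mse")           # trained by backpropagation until convergence
    return model

# model = build_gru_rnn()
# model.fit(x_train, y_train, epochs=100, batch_size=32)  # x: (batch, T, 5), y: (batch, T, 2)
```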
The GRU structure is shown in fig. 2, where:
z_t = sigmoid(W_z · [h_{t-1}, x_t])
r_t = sigmoid(W_r · [h_{t-1}, x_t])
in which W_z and W_r are the corresponding weight matrices, x_t is the hidden-layer input, h_t is the hidden-layer output, and h_{t-1} is the hidden-layer output at the previous time step.
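For completeness, the candidate state and the hidden-state update of a standard GRU cell, which are not reproduced in the text above, take the usual form (following Cho et al., 2014); fig. 2 is assumed to follow this convention:

```latex
\tilde{h}_t = \tanh\bigl(W \cdot [\, r_t \odot h_{t-1},\; x_t \,]\bigr), \qquad
h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t
```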
Prediction by GRU-RNN model
On the new-energy electric vehicle, vehicle body parameters such as vehicle speed, throttle, torque and gear are collected through the Controller Area Network (CAN) bus. The normalized data are fed into the trained prediction model (the GRU-RNN model), which predicts the corresponding normalized number of revolutions and sound pressure level. A corresponding active sound source is then synthesized from the number of revolutions (for example by sine-wave synthesis or waveform splicing), its amplitude is adjusted according to the sound pressure level, and the result is played through the in-vehicle loudspeaker.
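A sketch of one cycle of this runtime loop in Python is given below. Reading the CAN frame, the de-normalization by the training maxima, the dB-to-linear gain mapping and the reuse of synthesize_active_sound from the earlier sketch are illustrative assumptions; a real implementation would also carry the GRU state across calls rather than call predict statelessly.

```python
import numpy as np

def active_sound_step(model, can_frame, maxima, mode, table):
    """One prediction/synthesis cycle: CAN parameters -> GRU-RNN -> audio buffer."""
    speed, throttle, torque, gear = can_frame             # values read from the CAN bus
    x = np.array([[[speed / maxima["speed"],
                    throttle / maxima["throttle"],
                    torque / maxima["torque"],
                    gear, mode]]], dtype=np.float32)       # shape (1, 1, 5)

    rpm_norm, spl_norm = model.predict(x, verbose=0)[0, -1]
    rpm = rpm_norm * maxima["rpm"]                         # de-normalize the predicted revolutions
    spl_db = spl_norm * maxima["spl"]                      # de-normalize the predicted sound pressure level

    sig = synthesize_active_sound(rpm, *table)             # S2-S5 from the earlier sketch
    gain = 10 ** ((spl_db - maxima["spl"]) / 20)           # assumed dB-to-linear amplitude scaling
    return gain * sig                                      # handed to the speaker / power amplifier
```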
Active sounding method of electric automobile
The embodiment also provides a specific active sounding method of the electric vehicle, which comprises the following steps:
S1. obtaining in advance, with a multi-scale multi-line fundamental frequency analysis algorithm, the amplitudes and phases of the 1st, 2nd, ..., Rth order harmonics at a plurality of frequency points of a sound source segment, and storing them as a frequency-amplitude-phase parameter table;
S2. obtaining the vehicle body parameters of the electric vehicle and converting them into the corresponding fundamental frequency f;
S3. searching for the position of the fundamental frequency f in the frequency-amplitude-phase parameter table (for example f_n < f < f_{n+1});
S4. interpolating, according to the position of the fundamental frequency f, the amplitude A_k and phase phi_k of every order harmonic at that fundamental frequency point;
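The synthesis step (S5 in the embodiment above) uses a formula that is not reproduced here. A plausible form, consistent with the symbol definitions in the following sentence and with additive harmonic synthesis, is:

```latex
s(t) = \sum_{k=1}^{K} A_k \sin\bigl(2\pi k f t + \varphi_k\bigr)
```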
In the above formula, k is the harmonic order, k = 1, ..., K, K is the number of effective harmonics, and t is time.
Step S1 is specifically as follows:
S11. obtaining a pure-acceleration or pure-deceleration sound source, which can be downloaded from the internet or newly recorded; fig. 3 shows the short-time Fourier transform spectrogram of the sound source selected in this embodiment.
S12, initialization phase (first frame):
S12.0. intercepting the first frame of data x(n), where n is the discrete time index, n = 0, ..., L_1 - 1, and L_1 is the preset length of the first frame;
S12.1. computing the autocorrelation function R(m) of the intercepted data, where m is the lag index in the autocorrelation domain:
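The autocorrelation formula itself (formula (1)) is not reproduced in the text; the standard definition over the L_1-sample frame, consistent with the surrounding description, would be:

```latex
R(m) = \sum_{n=0}^{L_1 - 1 - m} x(n)\, x(n+m)
```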
From R(m), within the specified preset search interval, the lags corresponding to the N largest peaks are selected (if fewer than N peaks exist, the actual peaks are used; if there is no peak at all, the position of the maximum value is used), where N is the preset number of candidate frequency points. The peak lags are converted into the corresponding frequency points, and the spectral amplitudes of the candidate frequency points are computed by discrete Fourier transform.
S12.2. computing the SHC (spectral harmonics correlation) function of the intercepted data:
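The SHC formula (formula (2)) is likewise not reproduced. A form consistent with the variable definitions in the next sentence, following the usual spectral-harmonics-correlation construction used in pitch trackers such as YAAPT, would be:

```latex
\mathrm{SHC}(f) = \sum_{f' = -L_f}^{L_f} \;\prod_{r=1}^{H} \bigl| X(r f + f') \bigr|
```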
Here f is frequency, X(f) is the discrete Fourier transform of x(n), r = 1, ..., H is the harmonic index, H is the preset total number of harmonics, f' = -L_f, ..., L_f is the frequency offset, and L_f is a preset frequency range. From SHC(f), within the specified preset search interval, the frequencies corresponding to the N largest peaks are selected (if fewer than N peaks exist, the actual peaks are used; if there is no peak at all, the frequency of the maximum value is used), and the spectral amplitudes of these candidate frequency points are obtained by discrete Fourier transform.
S12.3. shortening the first frame of data x(n) to half its length and, on that basis, obtaining the corresponding candidate frequency points and spectral amplitudes according to steps S12.1 and S12.2. In intervals where the number of revolutions changes rapidly, a frame that is too long may mask some fundamental frequency points; shortening the frame makes them recoverable on a different time scale.
S12.4. merging the candidate frequency points obtained in steps S12.1 to S12.3: if the ratio of two candidate frequency points is less than a preset value delta, they are merged, the new candidate frequency point being the mean of the two and its spectral amplitude the larger of their two amplitudes. The merged candidate frequency points are sorted by spectral amplitude in descending order; if more than 2N remain, the candidates with the smallest spectral amplitude are removed until exactly 2N remain. Each frequency point is assigned a counter initialized to a preset integer C_max.
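The autocorrelation-based candidate extraction of step S12.1 can be sketched in Python as below; the SHC pass, the half-length re-analysis and the merging of S12.2 to S12.4 follow the same pattern and are omitted. The search interval, the peak-picking details and the function name are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def autocorr_candidates(x, fs, n_cand, f_search=(40.0, 400.0)):
    """S12.1: pick candidate fundamental frequencies from the autocorrelation of one frame."""
    L = len(x)
    r = np.correlate(x, x, mode="full")[L - 1:]            # autocorrelation R(m) for m >= 0
    lo, hi = int(fs / f_search[1]), int(fs / f_search[0])  # lag interval for the preset search range
    peaks, props = find_peaks(r[lo:hi], height=0)
    top = np.argsort(props["peak_heights"])[::-1][:n_cand] # lags of the N largest peaks
    freqs = fs / (peaks[top] + lo)                         # convert peak lags to frequency points
    spec = np.abs(np.fft.rfft(x))                          # DFT magnitudes for the spectral amplitudes
    amps = spec[np.round(freqs * L / fs).astype(int)]
    return freqs, amps
```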
S13, tracking:
S13.1. on the basis of the previous frame, advancing by FRAME_LEN samples to the start position of the current frame, where FRAME_LEN is a preset parameter, and setting the current frame length to L_i = alpha * T_max according to the candidate frequency points of the previous frame, where i is the current frame index, T_max is the period corresponding to the lowest candidate frequency point of the previous frame, and alpha is a preset multiple.
S13.2. acquiring the frequency point set F_c of the current frame and the corresponding spectral amplitudes according to steps S12.1 to S12.4.
S13.3. for each frequency in the previous frame's candidate set F_{i-1} in turn, selecting the closest point in F_c whose frequency ratio to it is less than the preset value delta, and taking these points as the current frame's candidate frequency point set F_i.
S13.4. operating as in S13.3, if F_c contains no frequency point corresponding to some frequency point f_{i-1,j} of F_{i-1}, where j is the index within the set, the SHC values within a preset interval around f_{i-1,j} are searched, the frequency with the maximum SHC value is taken as the new frequency point f_{i,j}, the corresponding spectral amplitude is recorded, and the counter of that point is decremented by 1.
At certain numbers of revolutions a fundamental frequency component may be temporarily weak or absent, so its disappearance cannot be decided immediately; the counter provides this tolerance.
S13.5. on the basis of S13.4, checking the counter of each candidate frequency; if a counter has reached 0, the frequency is replaced according to the following rule: A. if the frequency point with the largest spectral amplitude in F_c has not been selected in steps S13.3 to S13.4, that point is chosen; B. if condition A is not satisfied, the point in F_c closest to the current frequency is chosen. After replacement, the counter is reset to the preset initial value C_max.
S13.6. acquiring, according to the above steps, the current frame's candidate frequency point set F_i, the corresponding spectral amplitudes and the corresponding counter values.
S14, after calculating the candidate frequency points of all frames, fi,jConnecting different i values corresponding to the same j value into a line, multiplying the line by the corresponding spectral amplitude value, and selecting the line with the largest multiplication result as the finally obtained fundamental frequency line, as shown in fig. 4.
Since a plurality of consecutive lines may be searched, the fundamental frequency line may not be dominant in some local regions but may be dominant in the whole region, and thus the result of picking in a global manner may be more reliable.
S15. obtaining, from the short-time Fourier transform spectrum, the amplitudes and phases of the 1st, 2nd, ..., Rth order harmonics along the selected fundamental-frequency line.
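A sketch of the line selection in step S14, assuming the spectral amplitudes along each tracked line are multiplied together as described; the product is accumulated in the log domain here for numerical stability, which selects the same maximizing line.

```python
import numpy as np

def pick_fundamental_line(freq_lines, amp_lines):
    """freq_lines, amp_lines: shape (num_lines, num_frames); row j holds the
    tracked frequency f_{i,j} and its spectral amplitude for every frame i."""
    scores = np.sum(np.log(amp_lines + 1e-12), axis=1)  # log of the amplitude product along each line
    best = int(np.argmax(scores))                       # line with the largest product
    return freq_lines[best]                             # fundamental-frequency trajectory over frames
```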
In step S2, the collected electric vehicle parameters (accelerator pedal, vehicle speed and so on) are normalized and then input into the pre-trained GRU-RNN model to predict the number of revolutions and the sound pressure level, and the corresponding fundamental frequency f is obtained from the predicted rotation speed.
In step S5, the weighting coefficients of the harmonics of each order are further adjusted according to the sound pressure level obtained in step S2, so as to obtain the desired amplitude and phase of the total harmonic content.
Active sound production system of electric automobile
This embodiment also provides a specific electric vehicle active sounding system. As shown in fig. 5, the system comprises a vehicle body parameter acquisition module 1, a prediction model 2, an active sound source synthesis module 3, a sound field control module 4, a drive power amplifier 5 and an in-vehicle loudspeaker 6.
The vehicle body parameter obtaining module 1 is used for obtaining vehicle body parameters of the electric vehicle. In this embodiment, a CAN bus communicating with an electric vehicle is specifically used. The body parameters are vehicle speed, throttle, torque, and gear.
The prediction model 2 is used to predict the number of revolutions and the sound pressure level from the body parameters. In this embodiment, the above-mentioned training method is specifically adopted to pre-train the GRU-RNN model, and the normalized vehicle speed, accelerator, torque, and gear are input into the trained GRU-RNN model to predict and output the number of revolutions and the sound pressure level.
The active sound source synthesis module 3 is used to synthesize the corresponding active sound source from the number of revolutions and to adjust its amplitude according to the sound pressure level. Specifically, the active sound source synthesis module 3 comprises a frequency-phase parameter module 31, an interpolation module 32 and a synthesis module 33. The frequency-phase parameter module 31 receives a pure-acceleration or pure-deceleration sound source segment, analyses the amplitudes and phases of the 1st, 2nd, ..., Rth order harmonics at a plurality of frequency points with the multi-scale multi-line fundamental frequency analysis algorithm described above, and stores them as a frequency-amplitude-phase parameter table. One input of the interpolation module 32 is electrically connected to the output of the frequency-phase parameter module 31 to obtain the frequency-amplitude-phase parameter table, and its other input is electrically connected to the output of the prediction model 2 to obtain the predicted number of revolutions and sound pressure level; the module locates the fundamental frequency corresponding to the rotation speed in the frequency-amplitude-phase parameter table and obtains the amplitudes and phases of all order harmonics at that fundamental frequency point by interpolation. The input of the synthesis module 33 is electrically connected to the output of the interpolation module 32; using the amplitudes and phases of all order harmonics at the fundamental frequency point, it synthesizes the corresponding engine signal according to the synthesis formula given above, adjusts the weighting coefficient of each order harmonic according to the sound pressure level to obtain the desired total harmonic amplitude and phase, and outputs the active sound source signal.
The input of the sound field control module 4 is electrically connected to the output of the synthesis module 33 and is used to tune the in-vehicle sound field with sound field control techniques.
The input of the drive power amplifier 5 is electrically connected to the output of the sound field control module 4, so that the synthesized active sound source signal is converted into an analog signal and fed to the in-vehicle loudspeaker 6 for playback. When the in-vehicle loudspeaker 6 is a digital loudspeaker, its input is electrically connected directly to the synthesis module and the drive power amplifier 5 is not required.
The electric vehicle active sounding method and system described above preserve the sound DNA of a traditional internal-combustion-engine vehicle brand. Although an active sound source could be designed entirely from scratch, such a sound is strongly subjective and individual and is therefore hard to get accepted by users; the engine sound of long-established models, by contrast, has already been accepted by the public, and the present method retains those original sound characteristics, which makes market acceptance easier. Only a single continuously accelerating or decelerating sound source is needed, because such a recording already contains all the rotation speed information, which avoids the cost of extensive recording time and manpower. Acceleration or deceleration sound sources of different internal-combustion-engine models are available on the internet; with the high-precision analysis algorithm, the corresponding fundamental frequency can be estimated well without acquiring a rotation speed signal over the Controller Area Network (CAN), and the corresponding sound source parameters can then be obtained, so re-recording of sound sources is avoided and a large amount of time and labour is saved.
The above embodiments are merely illustrative of the technical ideas and features of the present invention, and are preferred embodiments, which are intended to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the scope of the present invention. All equivalent changes or modifications made according to the spirit of the present invention should be covered within the protection scope of the present invention.
Claims (10)
1. An active sounding method of an electric automobile based on shift strategy migration is characterized by comprising the following steps:
collecting body parameters of the electric automobile;
sending the vehicle body parameters into a pre-trained GRU-RNN model, and predicting the revolution and sound pressure level through the GRU-RNN model; and
and synthesizing a corresponding active sound emitting source according to the revolution, adjusting the amplitude of the active sound emitting source according to the sound pressure level, and playing through a loudspeaker in the vehicle.
2. The active sounding method of the electric vehicle as claimed in claim 1, wherein the GRU-RNN model is pre-trained by:
a. selecting a traditional internal combustion engine model to be learned;
b. acquiring, for the selected internal-combustion-engine vehicle model, vehicle body data under various working conditions and the sound pressure level at the occupants' ears in the vehicle, wherein the vehicle body data comprise the number of revolutions;
c. using the vehicle body data other than the number of revolutions, together with a mode that identifies the conventional vehicle model, as input signals, and using the number of revolutions and the sound pressure level as output signals, to train the GRU-RNN model.
3. The active sounding method of the electric vehicle according to claim 1 or 2, characterized in that: the GRU-RNN model is provided with five input layer units which respectively correspond to a vehicle speed, an accelerator, a torque, a gear and a mode.
4. The active sounding method of the electric vehicle according to claim 1 or 2, characterized in that: the GRU-RNN model has at least three hidden layers, and the activation function of the hidden layers is relu.
5. The active sounding method of the electric vehicle according to claim 1 or 2, characterized in that: the GRU-RNN model is provided with two output layer units respectively corresponding to the rotating speed and the sound pressure level, and the activation function of the output layer is sigmoid.
6. The active sounding method of the electric vehicle according to claim 2, characterized in that: the vehicle body parameters comprise the speed, the accelerator, the torque and the gear of the electric vehicle, and the speed, the accelerator and the torque are respectively normalized and then sent into a pre-trained GRU-RNN model; and/or
The vehicle body data further comprise the vehicle speed, the accelerator, the torque and the gear of a traditional internal combustion engine vehicle type, and the vehicle speed, the accelerator, the torque, the gear, the revolution and the sound pressure level are respectively normalized and then sent to the GRU-RNN model for training.
7. The active sounding method of the electric vehicle according to claim 2, characterized in that: in the step c, the GRU-RNN model is trained through a BP algorithm until convergence.
8. An electric vehicle active sound production system based on shift strategy migration, comprising:
the vehicle body parameter acquisition module is used for acquiring vehicle body parameters of the electric vehicle;
a prediction model for predicting the number of revolutions and the sound pressure level from the body parameters;
the active sound emitting source synthesizing module is used for synthesizing a corresponding active sound emitting source through the revolution, and adjusting the amplitude of the active sound emitting source according to the sound pressure level;
and the loudspeaker is used for playing sound according to the active sound source generated by the active sound source synthesis module.
9. The active sounding system of claim 8, wherein the prediction model is a GRU-RNN model pre-trained by the following steps:
a. selecting a traditional internal combustion engine model to be learned;
b. acquiring, for the selected internal-combustion-engine vehicle model, vehicle body data under various working conditions and the sound pressure level at the occupants' ears in the vehicle, wherein the vehicle body data comprise the number of revolutions;
c. the vehicle body data other than the revolution number and the mode depending on the conventional internal combustion engine model are input signals, and the revolution number and the sound pressure level are output signals to the GRU-RNN model for training.
10. The electric vehicle sound production system according to claim 8, wherein: the vehicle body parameter acquisition module comprises a CAN bus communicated with the electric vehicle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011179860.8A CN112298031B (en) | 2020-10-29 | 2020-10-29 | Active sounding method and system for electric automobile based on shift strategy migration |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112298031A (en) | 2021-02-02
CN112298031B CN112298031B (en) | 2024-09-06 |
Family
ID=74331948
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011179860.8A Active CN112298031B (en) | 2020-10-29 | 2020-10-29 | Active sounding method and system for electric automobile based on shift strategy migration |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112298031B (en) |
Patent Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013218079A (en) * | 2012-04-06 | 2013-10-24 | Toyota Motor Corp | Simulated sound generating device, simulated sound generating method, program and medium |
US20170123754A1 (en) * | 2015-11-03 | 2017-05-04 | Daesung Electric Co., Ltd | Vehicle sound generator apparatus and method for controlling the same |
CN110050302A (en) * | 2016-10-04 | 2019-07-23 | 纽昂斯通讯有限公司 | Speech synthesis |
CN106671875A (en) * | 2017-01-25 | 2017-05-17 | 哈尔滨工业大学(威海) | Electric car driving system sound acquisition sounding method and device |
US20210118421A1 (en) * | 2018-06-12 | 2021-04-22 | Harman International Industries, Incorporated | System and method for adaptive magnitude vehicle sound synthesis |
CN109591693A (en) * | 2018-10-31 | 2019-04-09 | 清华大学苏州汽车研究院(相城) | A kind of electric vehicle motion sound quality active sonification system |
US20200324697A1 (en) * | 2019-04-15 | 2020-10-15 | Hyundai Motor Company | Vehicle engine sound control system and control method based on driver propensity using artificial intelligence |
US20200013225A1 (en) * | 2019-08-14 | 2020-01-09 | Lg Electronics Inc. | Vehicle external information output method using augmented reality and apparatus therefor |
CN110481470A (en) * | 2019-08-15 | 2019-11-22 | 中国第一汽车股份有限公司 | A kind of electric car active sonification system design method |
CN110718206A (en) * | 2019-09-02 | 2020-01-21 | 中国第一汽车股份有限公司 | Sound target setting method of active sound production system and active sound production system |
CN110718207A (en) * | 2019-09-06 | 2020-01-21 | 中国第一汽车股份有限公司 | Sound synthesis precision verification method for active sound production system and active sound production system |
CN111048099A (en) * | 2019-12-16 | 2020-04-21 | 随手(北京)信息技术有限公司 | Sound source identification method, device, server and storage medium |
CN111145763A (en) * | 2019-12-17 | 2020-05-12 | 厦门快商通科技股份有限公司 | GRU-based voice recognition method and system in audio |
CN111354371A (en) * | 2020-02-26 | 2020-06-30 | Oppo广东移动通信有限公司 | Method, device, terminal and storage medium for predicting running state of vehicle |
CN111462735A (en) * | 2020-04-10 | 2020-07-28 | 网易(杭州)网络有限公司 | Voice detection method and device, electronic equipment and storage medium |
CN111731185A (en) * | 2020-06-16 | 2020-10-02 | 东风汽车集团有限公司 | Method and system for simulating engine sounding during automobile acceleration and deceleration |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112466274A (en) * | 2020-10-29 | 2021-03-09 | 中科上声(苏州)电子有限公司 | In-vehicle active sounding method and system of electric automobile |
CN112466274B (en) * | 2020-10-29 | 2024-02-27 | 中科上声(苏州)电子有限公司 | In-vehicle active sounding method and system of electric vehicle |
CN114446267A (en) * | 2022-02-28 | 2022-05-06 | 重庆长安汽车股份有限公司 | Active sound synthesis method for vehicle |
CN114973846A (en) * | 2022-06-30 | 2022-08-30 | 延锋国际汽车技术有限公司 | Driving somatosensory and sound wave simulation system |
CN116206624A (en) * | 2023-05-04 | 2023-06-02 | 科大讯飞(苏州)科技有限公司 | Vehicle sound wave synthesizing method, device, storage medium and equipment |
CN116206624B (en) * | 2023-05-04 | 2023-08-29 | 科大讯飞(苏州)科技有限公司 | Vehicle sound wave synthesizing method, device, storage medium and equipment |
Also Published As
Publication number | Publication date |
---|---|
CN112298031B (en) | 2024-09-06 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |