CN108305603A - Sound effect processing method and device, storage medium, server, and audio terminal - Google Patents
- Publication number
- CN108305603A CN108305603A CN201710999163.9A CN201710999163A CN108305603A CN 108305603 A CN108305603 A CN 108305603A CN 201710999163 A CN201710999163 A CN 201710999163A CN 108305603 A CN108305603 A CN 108305603A
- Authority
- CN
- China
- Prior art keywords
- audio
- sample
- target
- audio data
- data packet
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0091—Means for obtaining special acoustic effects
Abstract
The embodiments of the present invention disclose a sound effect processing method and a related device, storage medium, server, and audio terminal. The method includes the following steps: when a play instruction for a target audio is received, obtaining audio data of the target audio; obtaining, from a sound effect parameter set, a target sound effect data packet corresponding to the target audio, the target sound effect data packet including digital equalization parameters of the target audio and reverberation parameters of the target audio; performing synthesis processing on the audio data using the target sound effect data packet; and outputting the synthesized audio data. With the present invention, an adaptive sound effect can be provided based on information about the audio, the sound effect best suited to the audio content can be constructed, and the intelligence of sound effect processing is improved.
Description
Technical field
The present invention relates to the field of Internet technology, and in particular to a sound effect processing method and a related device, storage medium, server, and audio terminal.
Background
A sound effect is an effect created by sound to enhance the realism, atmosphere, or dramatic message of a scene: noise or sound added to a soundtrack, artificially created or reinforced, used to enhance the sound of films, electronic games, music, or other media content.
With the popularization of smart devices, users' expectations for the listening experience have risen accordingly. To improve the multimedia listening experience, smart devices typically provide a variety of sound effect settings, such as digital equalization, reverberation effects, and channel expansion. These settings offer many options to meet different user needs, but once configured, every piece of music is played with the same sound effect. As a result, the most suitable sound effect cannot be provided according to the user's listening habits and the genre of the song, which reduces the intelligence of sound effect processing.
Summary of the invention
Embodiments of the present invention provide a sound effect processing method and a related device, storage medium, and terminal, which can provide an adaptive sound effect based on information about the audio, construct the sound effect best suited to the audio content, and improve the intelligence of sound effect processing.
In one aspect, an embodiment of the present invention provides a sound effect processing method, which may include:
when a play instruction for a target audio is received, obtaining audio data of the target audio;
obtaining, from a sound effect parameter set, a target sound effect data packet corresponding to the target audio, the target sound effect data packet including digital equalization parameters of the target audio and reverberation parameters of the target audio;
performing synthesis processing on the audio data using the target sound effect data packet; and
outputting the synthesized audio data.
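The four claimed steps can be sketched as a minimal pipeline. Everything here is an illustrative assumption (the dictionary-based parameter set, the field names, and the placeholder gain in `synthesize`), not the patented implementation:

```python
# Sketch of the claimed method: look up a sound effect data packet for the
# target audio, synthesize, and output. All structures are hypothetical.
SOUND_EFFECT_PARAMETER_SET = {
    # audio tag -> sound effect data packet (EQ gains in dB, reverb parameters)
    "metal": {"eq_params": [3.0, 1.5, 0.0, -1.0, 0.5, 2.0, 3.5, 4.0],
              "reverb_params": {"rt60_s": 0.8, "intensity": 0.3}},
}

def get_audio_data(target_audio):
    """Step S101: obtain the audio data (and tag) of the target audio."""
    return target_audio["audio_data"], target_audio["tag"]

def get_target_packet(tag):
    """Step S102: obtain the corresponding sound effect data packet."""
    return SOUND_EFFECT_PARAMETER_SET[tag]

def synthesize(audio_data, packet):
    """Step S103 (placeholder): apply the packet's parameters to the samples."""
    gain = 1.0 + 0.01 * sum(packet["eq_params"])  # stand-in for real DSP
    return [s * gain for s in audio_data]

def play(target_audio):
    audio_data, tag = get_audio_data(target_audio)
    packet = get_target_packet(tag)
    return synthesize(audio_data, packet)

processed = play({"audio_data": [0.1, -0.2, 0.3], "tag": "metal"})
```

In a real device the `synthesize` step would run the equalization and reverberation DSP described later in the specification; the pipeline shape, however, follows the claim directly.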
Optionally, before the play instruction for the target audio is received, the method further includes:
collecting a sample audio, obtaining a characteristic frequency response curve of the sample audio, and obtaining sample frequency information and sample timbre information of the sample audio;
obtaining a digital equalization processing curve of the sample audio and reverberation parameters of the sample audio based on the characteristic frequency response curve, the sample frequency information, and the sample timbre information;
obtaining a sample audio label of the sample audio, and adjusting octave feature points within a selected frequency range of the digital equalization processing curve based on equal-loudness contours and the sample audio label, to obtain a digital equalization parameter corresponding to each octave feature point;
saving the digital equalization parameter corresponding to each octave feature point, the reverberation parameters of the sample audio, and the sample audio label into the sound effect parameter set.
Optionally, saving the digital equalization parameter corresponding to each octave feature point, the reverberation parameters of the sample audio, and the sample audio label into the sound effect parameter set includes:
compressing the digital equalization parameter corresponding to each octave feature point and the reverberation parameters of the sample audio, and storing the result as a sound effect data packet corresponding to the sample audio label;
saving the sample audio label and the sound effect data packet corresponding to the sample audio label into the sound effect parameter set.
Optionally, obtaining the audio data of the target audio includes:
obtaining the audio data and an audio tag of the target audio;
and obtaining, from the sound effect parameter set, the target sound effect data packet corresponding to the target audio includes:
obtaining, from the sound effect parameter set, the target sound effect data packet corresponding to the audio tag, and reading the digital equalization parameters of the target audio and the reverberation parameters of the target audio from the target sound effect data packet.
Optionally, obtaining, from the sound effect parameter set, the target sound effect data packet corresponding to the audio tag includes:
searching a sample audio tag set for the target sample audio label to which the audio tag belongs, and obtaining, from the sound effect parameter set, the target sound effect data packet corresponding to the target sample audio label.
Optionally, saving the digital equalization parameter corresponding to each octave feature point, the reverberation parameters, and the sample audio label into the sound effect parameter set includes:
saving the digital equalization parameter corresponding to each octave feature point, the reverberation parameters, the sample frequency information, the sample timbre information, and the sample audio label into the sound effect parameter set.
Optionally, obtaining the audio data of the target audio includes:
obtaining the audio data, target frequency information, and target timbre information of the target audio;
and obtaining, from the sound effect parameter set, the target sound effect data packet corresponding to the target audio includes:
matching the target frequency information and the target timbre information against the sample frequency information and sample timbre information of each sound effect data packet in the sound effect parameter set, to obtain a matching similarity between the target frequency information and target timbre information and the sample frequency information and sample timbre information of each sound effect data packet;
obtaining the sample sound effect data packet corresponding to the sample frequency information and sample timbre information with the highest matching similarity, and using that sample sound effect data packet as the target sound effect data packet.
Optionally, when the sound effect processing method runs on the server side, outputting the synthesized audio data includes:
sending the synthesized audio data to an audio terminal, so that the audio terminal outputs the synthesized audio data.
Optionally, when the sound effect processing method runs on the audio terminal side:
obtaining the audio data of the target audio when a play instruction for the target audio is received includes:
when the play instruction for the target audio is received, receiving the audio data of the target audio sent by a server;
and obtaining, from the sound effect parameter set, the target sound effect data packet corresponding to the target audio, the target sound effect data packet including digital equalization parameters of the target audio and reverberation parameters of the target audio, includes:
receiving the sound effect parameter set sent by the server;
obtaining, from the sound effect parameter set, the target sound effect data packet corresponding to the target audio, the target sound effect data packet including the digital equalization parameters of the target audio and the reverberation parameters of the target audio.
In another aspect, an embodiment of the present invention provides a sound effect processing device, which may include:
an information obtaining unit, configured to obtain audio data of a target audio when a play instruction for the target audio is received;
a parameter obtaining unit, configured to obtain, from a sound effect parameter set, a target sound effect data packet corresponding to the target audio, the target sound effect data packet including digital equalization parameters of the target audio and reverberation parameters of the target audio;
a data output unit, configured to perform synthesis processing on the audio data using the target sound effect data packet, and send the synthesized audio data to an audio terminal so that the audio terminal outputs the synthesized audio data.
Optionally, the device further includes:
a sample information obtaining unit, configured to collect a sample audio, obtain a characteristic frequency response curve of the sample audio, and obtain sample frequency information and sample timbre information of the sample audio;
a sample parameter obtaining unit, configured to obtain a digital equalization processing curve of the sample audio and reverberation parameters of the sample audio based on the characteristic frequency response curve, the sample frequency information, and the sample timbre information;
a sample parameter adjustment unit, configured to obtain a sample audio label of the sample audio, and adjust octave feature points within a selected frequency range of the digital equalization processing curve based on equal-loudness contours and the sample audio label, to obtain a digital equalization parameter corresponding to each octave feature point;
a sample information saving unit, configured to save the digital equalization parameter corresponding to each octave feature point, the reverberation parameters of the sample audio, and the sample audio label into the sound effect parameter set.
Optionally, the sample information saving unit includes:
a data packet obtaining subunit, configured to compress the digital equalization parameter corresponding to each octave feature point and the reverberation parameters of the sample audio, and store the result as a sound effect data packet corresponding to the sample audio label;
an information saving subunit, configured to save the sample audio label and the sound effect data packet corresponding to the sample audio label into the sound effect parameter set.
Optionally, the information obtaining unit is specifically configured to:
obtain the audio data and an audio tag of the target audio;
and the parameter obtaining unit is specifically configured to:
obtain, from the sound effect parameter set, the target sound effect data packet corresponding to the audio tag, and read the digital equalization parameters of the target audio and the reverberation parameters of the target audio from the target sound effect data packet.
Optionally, the parameter obtaining unit is specifically configured to:
search a sample audio tag set for the target sample audio label to which the audio tag belongs, and obtain, from the sound effect parameter set, the target sound effect data packet corresponding to the target sample audio label.
Optionally, the data packet obtaining subunit is specifically configured to compress the digital equalization parameter corresponding to each octave feature point, the reverberation parameters of the sample audio, the sample frequency information, and the sample timbre information, and store the result as the sound effect data packet corresponding to the sample audio label.
Optionally, the information obtaining unit is specifically configured to:
obtain the audio data, target frequency information, and target timbre information of the target audio;
and the parameter obtaining unit includes:
a similarity obtaining subunit, configured to match the target frequency information and the target timbre information against the sample frequency information and sample timbre information of each sound effect data packet in the sound effect parameter set, to obtain a matching similarity between the target frequency information and target timbre information and the sample frequency information and sample timbre information of each sound effect data packet;
a data packet obtaining subunit, configured to obtain the sample sound effect data packet corresponding to the sample frequency information and sample timbre information with the highest matching similarity, and use that sample sound effect data packet as the target sound effect data packet.
In another aspect, an embodiment of the present invention provides a computer storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the following steps:
when a play instruction for a target audio is received, obtaining audio data of the target audio;
obtaining, from a sound effect parameter set, a target sound effect data packet corresponding to the target audio, the target sound effect data packet including digital equalization parameters of the target audio and reverberation parameters of the target audio;
performing synthesis processing on the audio data using the target sound effect data packet, and sending the synthesized audio data to an audio terminal so that the audio terminal outputs the synthesized audio data.
An embodiment of the present invention further provides a server, which may include a processor and a memory, the memory storing a computer program adapted to be loaded by the processor to execute the following steps:
when a play instruction for a target audio is received, obtaining audio data of the target audio;
obtaining, from a sound effect parameter set, a target sound effect data packet corresponding to the target audio, the target sound effect data packet including digital equalization parameters of the target audio and reverberation parameters of the target audio;
performing synthesis processing on the audio data using the target sound effect data packet, and sending the synthesized audio data to an audio terminal so that the audio terminal outputs the synthesized audio data.
An embodiment of the present invention further provides a sound effect processing device, which may include:
an information receiving unit, configured to receive, when a play instruction for a target audio is received, audio data of the target audio sent by a server;
a parameter obtaining unit, configured to receive a sound effect parameter set sent by the server, and obtain, from the sound effect parameter set, a target sound effect data packet corresponding to the target audio, the target sound effect data packet including digital equalization parameters of the target audio and reverberation parameters of the target audio;
a data output unit, configured to perform synthesis processing on the audio data using the target sound effect data packet, and output the synthesized audio data.
Optionally, the information receiving unit is specifically configured to:
receive the audio data and an audio tag of the target audio sent by the server;
and the parameter obtaining unit is specifically configured to:
obtain, from the sound effect parameter set, the target sound effect data packet corresponding to the audio tag, and read the digital equalization parameters of the target audio and the reverberation parameters of the target audio from the target sound effect data packet.
Optionally, the parameter obtaining unit is specifically configured to:
search a sample audio tag set for the target sample audio label to which the audio tag belongs, and obtain, from the sound effect parameter set, the target sound effect data packet corresponding to the target sample audio label.
Optionally, the information receiving unit is specifically configured to:
receive the audio data, target frequency information, and target timbre information of the target audio sent by the server;
and the parameter obtaining unit includes:
a similarity obtaining subunit, configured to match the target frequency information and the target timbre information against the sample frequency information and sample timbre information of each sound effect data packet in the sound effect parameter set, to obtain a matching similarity between the target frequency information and target timbre information and the sample frequency information and sample timbre information of each sound effect data packet;
a data packet obtaining subunit, configured to obtain the sample sound effect data packet corresponding to the sample frequency information and sample timbre information with the highest matching similarity, and use that sample sound effect data packet as the target sound effect data packet.
An embodiment of the present invention further provides a computer storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the following steps:
when a play instruction for a target audio is received, receiving audio data of the target audio sent by a server;
receiving a sound effect parameter set sent by the server, and obtaining, from the sound effect parameter set, a target sound effect data packet corresponding to the target audio, the target sound effect data packet including digital equalization parameters of the target audio and reverberation parameters of the target audio;
performing synthesis processing on the audio data using the target sound effect data packet, and outputting the synthesized audio data.
An embodiment of the present invention further provides an audio terminal, which may include a processor and a memory, the memory storing a computer program adapted to be loaded by the processor to execute the following steps:
when a play instruction for a target audio is received, receiving audio data of the target audio sent by a server;
receiving a sound effect parameter set sent by the server, and obtaining, from the sound effect parameter set, a target sound effect data packet corresponding to the target audio, the target sound effect data packet including digital equalization parameters of the target audio and reverberation parameters of the target audio;
performing synthesis processing on the audio data using the target sound effect data packet, and outputting the synthesized audio data.
In the embodiments of the present invention, when a play instruction for a target audio is received, the audio data of the target audio is obtained; after the target sound effect data packet corresponding to the target audio is obtained from the sound effect parameter set, synthesis processing is performed on the audio data using the target sound effect data packet, and the synthesized audio data is finally output. By providing an adaptive sound effect based on information about the audio, the sound effect best suited to the audio content can be constructed, the sound effect processing modes are enriched, and the intelligence of sound effect processing is improved.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Evidently, the drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a sound effect processing method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another sound effect processing method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a characteristic frequency response curve according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a digital equalization processing curve according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of equal-loudness contours according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of discretized octave feature points according to an embodiment of the present invention;
Fig. 7 is a schematic flowchart of another sound effect processing method according to an embodiment of the present invention;
Fig. 8 is a schematic flowchart of another sound effect processing method according to an embodiment of the present invention;
Fig. 9 is a schematic flowchart of another sound effect processing method according to an embodiment of the present invention;
Fig. 10 is a schematic flowchart of another sound effect processing method according to an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of a sound effect processing device according to an embodiment of the present invention;
Fig. 12 is a schematic structural diagram of another sound effect processing device according to an embodiment of the present invention;
Fig. 13 is a schematic structural diagram of a sample information saving unit according to an embodiment of the present invention;
Fig. 14 is a schematic structural diagram of a parameter obtaining unit according to an embodiment of the present invention;
Fig. 15 is a schematic structural diagram of another sound effect processing device according to an embodiment of the present invention;
Fig. 16 is a schematic structural diagram of another parameter obtaining unit according to an embodiment of the present invention;
Fig. 17 is a schematic structural diagram of a server according to an embodiment of the present invention;
Fig. 18 is a schematic structural diagram of an audio terminal according to an embodiment of the present invention.
Detailed description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments. Evidently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The sound effect processing method provided by the embodiments of the present invention can be applied to audio enhancement scenarios: when a play instruction for a target audio is received, the audio data of the target audio is obtained; after the target sound effect data packet corresponding to the target audio is obtained from the sound effect parameter set, synthesis processing is performed on the audio data using the target sound effect data packet, and the synthesized audio data is finally output. By providing an adaptive sound effect based on information about the audio, the sound effect best suited to the audio content can be constructed, the sound effect processing modes are enriched, and the intelligence of sound effect processing is improved.
The sound effect processing method in the embodiments of the present invention is executed by a computer program and can run on a sound effect processing device based on the von Neumann architecture. The sound effect processing device may be a server with storage, computation, audio synthesis, and similar functions, or a sound effect processing terminal device such as a speaker system, tablet computer, personal computer (PC), smartphone, handheld computer, or mobile Internet device (MID).
The sound effect processing method provided by the embodiments of the present invention is described in detail below with reference to Figs. 1 to 10.
Referring to Fig. 1, which is a schematic flowchart of a sound effect processing method according to an embodiment of the present invention, the method may include the following steps S101 to S103.
S101: when a play instruction for a target audio is received, obtaining the audio data of the target audio.
It can be understood that audio is an important medium in multimedia and takes the form of a sound signal. As a carrier of information, audio can be divided into three types: speech, music, and other sounds. In the embodiments of the present invention, the audio is music, which may be an individual piece of music in a music player or the accompaniment in multimedia such as videos, games, and e-books. The target audio is the piece of music selected by the user from multiple songs for output. An audio item may carry much information, such as the song title, singer name, audio data, album, release time, total duration, and audio tag. The audio data is a sequence of opaque, non-semantic binary symbols, that is, the content of the target audio. The audio tag may be a genre such as art rock, punk, metal, or folk. Optionally, the audio may also include frequency information and timbre information, which describe the spectral characteristics of the audio, that is, the frequency-domain characteristics of the audio signal.
In a specific implementation, when the sound effect processing device receives a play instruction for a target audio, it obtains the audio information of the target audio and extracts the audio data, audio tag, frequency information, timbre information, and so on from the audio information. For example, when the sound effect processing device receives a play instruction for the target audio "performer", it obtains the audio data of "performer" and its audio tag "pop".
S102: obtaining, from a sound effect parameter set, the target sound effect data packet corresponding to the target audio, the target sound effect data packet including digital equalization parameters of the target audio and reverberation parameters of the target audio.
It can be understood that the sound effect parameter set may include, for multiple sample audios, the sample audio label of each sample audio, the sound effect data packet corresponding to each sample audio label, and the sample frequency information and sample timbre information of each sample audio, where a sound effect data packet may include digital equalization parameters and reverberation parameters. Optionally, the sound effect parameter set may also include a sample audio tag set, which records the correspondence between sample audio labels and audio tags: the sample audio labels are genre categories such as rock, metal, folk, and disco, and the audio tags corresponding to each sample audio label are the different sub-styles under that label. For example, Table 1 shows one form of sound effect parameter set and Table 2 shows one form of sample audio tag set; if a sample audio label is "rock", the corresponding audio tags may include art rock, punk, post-rock, grindcore, and so on.
Specifically, a digital equalization parameter is the signal gain value used to adjust a frequency band of the digital equalization processing curve, and the digital equalization parameters correspond one-to-one with the octave feature points. To obtain the octave feature points, the discrete spectrum is divided into successive frequency bands such that the upper limit frequency of each band is twice its lower limit frequency (that is, bands with a 2:1 frequency ratio), yielding multiple octaves; feature points are then taken in each octave. For example, if the signal frequency range is 63 Hz to 16 kHz, it can be divided into 8 octaves: 63 Hz to 126 Hz, 126 Hz to 252 Hz, 252 Hz to 504 Hz, 504 Hz to 1.008 kHz, 1.008 kHz to 2.016 kHz, 2.016 kHz to 4.032 kHz, 4.032 kHz to 8.064 kHz, and 8.064 kHz to 16 kHz. At least one feature point is taken in each octave, and each feature point corresponds to one digital equalization parameter. Likewise, if two frequencies are inserted between the upper and lower limit frequencies of an octave so that the ratios between the four frequencies are identical (each adjacent pair differing by a factor of 2^(1/3), about 1.26), the octave is divided into 3 intervals; such an interval is called a third octave. A 1/n octave can be obtained by the same calculation, that is, with the factor oc = 2^(1/(2n)), taking L = f0 * oc above and below the reference frequency f0 as the band edges, then computing the power spectrum after segmenting and merging, where f0 is the reference (center) frequency. Octaves are applied in fields such as 31-band equalizers, sound pressure analysis, and vibration and noise reduction.
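The octave division described above can be reproduced numerically: band edges double at each step, and splitting one octave into n equal-ratio intervals gives a common frequency ratio of 2^(1/n). A minimal sketch (function names are ours, not the patent's):

```python
def octave_bands(f_low, n_bands):
    """Octave bands starting at f_low: each band's upper edge is twice its lower edge."""
    return [(f_low * 2 ** k, f_low * 2 ** (k + 1)) for k in range(n_bands)]

def fractional_octave_ratio(n):
    """Common ratio between adjacent frequencies when an octave is split into n intervals."""
    return 2 ** (1.0 / n)

bands = octave_bands(63, 8)                 # 63 Hz up to 16128 Hz (~16 kHz in the text)
third_octave = fractional_octave_ratio(3)   # the third-octave ratio, about 1.26
```

Note that 63 Hz doubled eight times gives 16128 Hz, which the description rounds to 16 kHz for the last band.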
Reverberation arises because, after a sounding body emits sound waves, the waves travel through the air and are reflected whenever they strike the surface of an obstacle. Owing to the complexity of real environments, the sound emitted by a single source produces echoes from many directions, and the mixture of these sounds forms what is called reverberation. The reverberation parameters may include reverberation intensity, reverberation time, diffusion, and reverberation density. A reverberation time that is too short makes the sound dry and dull, while one that is too long makes it muddy and indistinct, losing a great deal of detail; a suitable reverberation time not only beautifies the sound and masks instrument noise, but also blends the musical tones and increases loudness and the continuity of syllables. It should be noted that each target audio corresponds to one set of reverberation parameters.
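To illustrate how a reverberation-time parameter shapes the effect, a single feedback comb filter can realize a given RT60 (the time for reverberation to decay by 60 dB) using the standard relation g = 10^(-3 * delay / RT60). This is an assumed toy reverberator for illustration, not the patent's synthesis step:

```python
def comb_reverb(signal, sample_rate, rt60_s, delay_s=0.05, mix=0.3):
    """Toy reverberator: one feedback comb filter whose gain realizes RT60."""
    delay = max(1, int(delay_s * sample_rate))
    g = 10 ** (-3.0 * delay_s / rt60_s)  # per-pass gain so energy drops 60 dB in rt60_s
    buf = [0.0] * delay                  # circular delay line
    out = []
    for i, x in enumerate(signal):
        y = buf[i % delay]               # echo from delay_s seconds ago
        buf[i % delay] = x + g * y       # feed input plus decayed echo back in
        out.append((1 - mix) * x + mix * y)
    return out

# Impulse response at 1 kHz: echoes every 50 samples, each quieter than the last.
wet = comb_reverb([1.0] + [0.0] * 999, sample_rate=1000, rt60_s=0.5)
```

A longer `rt60_s` raises `g` toward 1, making successive echoes decay more slowly, which matches the description of long reverberation times blurring detail.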
In the specific implementation, the sound effect processing device obtains the audio tag of the target audio, searches the sample audio tag set for the target sample audio label to which the audio tag belongs, and obtains, in the sound effect parameter set, the audio data packet corresponding to that target sample audio label. Optionally, the device searches the sample audio tag set for the target sample audio label to which the audio tag belongs, obtains in the sound effect parameter set the target audio data packet corresponding to the target sample audio label, and reads from the target audio data packet the digital equalising parameters of the target audio and the reverberation parameters of the target audio. Optionally, the target frequency information and target timbre information of the target audio are matched against the sample frequency information and sample timbre information of each audio data packet in the sound effect parameter set; the matching similarity between the target frequency and timbre information and the sample frequency and timbre information of each audio data packet is obtained after matching, the sample audio data packet whose sample frequency information and sample timbre information have the highest matching similarity is obtained, and that sample audio data packet is taken as the target audio data packet.
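The label lookup just described can be sketched with a hypothetical mapping mirroring Tables 1 and 2 (all names and packet values here are illustrative, not the embodiment's actual data structures):

```python
# Hypothetical tables: audio tags map to a sample audio label (Table 2),
# and each sample audio label maps to a sound-effect packet (Table 1).
SAMPLE_LABELS = {"heavy metal": "metal music", "pop metal": "metal music",
                 "art rock": "rock and roll", "punk": "rock and roll"}
EFFECT_PACKETS = {"rock and roll": "A1", "metal music": "B1",
                  "folk song": "C1", "disco": "D1"}

def target_packet(audio_tag):
    """Find the label the tag belongs to, then return that label's packet."""
    sample_label = SAMPLE_LABELS.get(audio_tag)
    if sample_label is None:
        return None          # caller falls back to frequency/timbre matching
    return EFFECT_PACKETS[sample_label]

assert target_packet("heavy metal") == "B1"
```

When the tag is absent or ambiguous, the frequency/timbre similarity matching of the optional branch would be used instead.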
For example, if the audio tag of the target audio is "heavy metal", the target sample audio label to which "heavy metal" belongs is determined in the sample audio tag set shown in Table 2 to be "metal music"; the sound effect parameters of "metal music" can then be determined in the sound effect parameter set shown in Table 1 to be "B1".
Table 1
Sample audio label | Sound effect parameters | Frequency information | Timbre information |
Rock and roll | A1 | A2 | A3 |
Metal music | B1 | B2 | B3 |
Folk song | C1 | C2 | C3 |
Disco | D1 | D2 | D3 |
Table 2
S103: synthesis processing is carried out on the audio data using the target audio data packet, and the synthesized audio data is output.
In the specific implementation, the sound effect processing device performs synthesis processing on the sound effect parameters in the determined target audio data packet and the audio data, such as acquisition, transformation, filtering, estimation, enhancement, compression, and recognition, so as to obtain the transformed audio, and then outputs that audio.
When the sound effect processing device is a server, the transformed audio is sent to a sound terminal so that the sound terminal outputs the transformed audio; when the sound effect processing device is itself a sound terminal, the transformed audio can be output and played directly.
In one feasible implementation, the sound effect processing device is provided with a DSP (Digital Signal Processing) based audio system whose main components include a digital signal processor (DSP), audio A/D (analog-to-digital) and D/A (digital-to-analog) converters, RAM, ROM, and peripheral processors. Each time the codec transmits one 16-bit sample to the DSP, a receive interrupt is raised; the DSP stores the received data in the system's input buffer while separately processing the audio data already buffered (for example transforming, filtering, and estimating), and stores the transformed results in the system's output buffer. A timed output interrupt routine periodically fetches data from the output buffer; the codec outputs the data in analog form, after which it is sent to the sound terminal or played directly after passing through a power amplifier.
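The interrupt-driven buffering flow described above can be modeled with a toy sketch (a pure-Python simulation; the halving "filter" and all names are placeholders, not the actual DSP processing):

```python
from collections import deque

# Toy model of the DSP flow: samples arrive one at a time (receive
# interrupt), buffered data is transformed, and a timed output interrupt
# drains the output cache toward the codec/DAC.
input_buf, output_buf = deque(), deque()

def on_receive(sample_16bit):
    """Codec receive interrupt: store the incoming sample."""
    input_buf.append(sample_16bit)

def process():
    """DSP step: transform buffered input into the output cache."""
    while input_buf:
        x = input_buf.popleft()
        output_buf.append(x // 2)   # placeholder "filter": halve the gain

def on_output():
    """Timed output interrupt: hand buffered samples to the codec."""
    return list(output_buf)

for s in (1000, -2000, 3000):
    on_receive(s)
process()
assert on_output() == [500, -1000, 1500]
```

A real implementation would run the receive and output paths as hardware interrupts with fixed-size ring buffers rather than Python deques.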
Optionally, dynamic gain processing and noise suppression are applied during the synthesis process to ensure that no power overload or clipping distortion is produced.
Optionally, an ultra-high sampling rate of 96 kHz is used during the synthesis process, which ensures high-quality digital-to-analog conversion and a signal-to-noise ratio above 90 dB.
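Clipping avoidance of the kind mentioned above can be sketched as a hard limit on 16-bit sample values (an illustrative stand-in for dynamic gain processing; the function name is an assumption):

```python
def apply_gain_no_clip(samples, gain, limit=32767):
    """Scale 16-bit samples, then clamp to the representable int16 range
    so no overload or wrap-around distortion is produced."""
    out = []
    for s in samples:
        y = int(s * gain)
        y = max(-limit - 1, min(limit, y))   # clamp to [-32768, 32767]
        out.append(y)
    return out

assert apply_gain_no_clip([10000, 30000], 2.0) == [20000, 32767]
```

A production system would instead reduce the gain smoothly (a limiter) before clamping, since hard clipping itself distorts.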
In the embodiments of the present invention, when a play instruction for a target audio is received, the audio data of the target audio is obtained; after the target audio data packet corresponding to the target audio is obtained from the sound effect parameter set, synthesis processing is carried out on the audio data using that packet, and finally the synthesized audio data is output. By providing an adaptive sound effect based on the information of the audio, a sound effect best suited to the audio content can be built, enriching the sound effect processing modes and improving the intelligence of sound effect processing.
Refer to Fig. 2, which is a flow diagram of another sound effect processing method provided by an embodiment of the present invention. As shown in Fig. 2, the method of the embodiment of the present invention may include the following steps S201 to S207.
S201: a sample audio is collected, the feature frequency response curve of the sample audio is obtained, and the sample frequency information and sample timbre information of the sample audio are obtained.
It can be understood that audio is an important medium in multimedia and takes the form of a sound signal. As a carrier of information, audio can be divided into three types: speech, music, and other sounds. In the embodiments of the present invention, the sample audio is music, namely at least one stored piece of music selected by the user from a number of songs. A sample audio may include a variety of information, such as the sample audio data, frequency information, timbre information, singer name, album, release time, total duration, and sample audio tag. The sample audio tag may be any of various styles such as art rock, punk, heavy metal, or folk; the sample audio data is an opaque binary stream of non-semantic symbols, that is, the content of the sample audio; and the sample frequency information and sample timbre information are the spectral characteristics of the sample audio (the frequency-domain characteristics of the sample audio signal).
The feature frequency response refers to the phenomenon whereby, when a sample audio signal output at constant voltage is fed into a system, the sound pressure produced rises or decays as the frequency changes; the curve relating this sound pressure variation to frequency is called the feature frequency response curve. As shown in Fig. 3, the abscissa is frequency and the ordinate is sound pressure level.
In the specific implementation, since the audible frequency range of the human ear is 20 Hz to 20 kHz, a sample audio covering 20 Hz to 20 kHz can be collected, the spectral characteristics within a suitable frequency range of the sample audio (for example 50 Hz to 20 kHz) are extracted, and the feature frequency response curve of the sample audio is drawn from the extracted spectral characteristics.
S202: based on the feature frequency response curve, the sample frequency information, and the sample timbre information, the digital equalising processing curve of the sample audio and the reverberation parameters of the sample audio are obtained.

It can be understood that the principle of digital equalising processing is to map an input signal "X" to a corresponding output signal "Y", Y = f(X), where the operation f(·) also contains a function of the frequency "k" corresponding to "X". Expanding the expression with respect to "X" gives Y = g(k) * X, where g(·) varies with the adjustment of the digital equalising parameters. In the embodiments of the present invention, the input signal "X" is the sample frequency information, the sample timbre information, and the feature frequency response curve, and the frequency "k" is the frequency value corresponding to the feature frequency response curve. Based on the above processing principle, a digital equalising processing curve can be generated. Fig. 4 shows one such digital equalising processing curve, with frequency on the abscissa and the digital equalising parameter on the ordinate; the curve describes how the digital equalising parameter varies with frequency.
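The per-frequency mapping Y = g(k) * X can be sketched directly (the gain curve below is illustrative, not the curve of Fig. 4):

```python
def equalize(spectrum, g):
    """Apply the frequency-dependent gain Y = g(k) * X, bin by bin."""
    return [g(k) * x for k, x in enumerate(spectrum)]

# Illustrative gain function: boost the two lowest bins, cut the rest.
g = lambda k: 2.0 if k < 2 else 0.5
assert equalize([1.0, 1.0, 1.0, 1.0], g) == [2.0, 2.0, 0.5, 0.5]
```

In the embodiment, g(·) would be determined by the digital equalising parameters at the octave characteristic points rather than by a fixed lambda.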
Meanwhile, based on the feature frequency response curve, the sample frequency information, and the sample timbre information, the reverberation parameters of the sample audio can be obtained. Reverberation arises because, after a sounding body emits a sound wave, the wave travels through the air and is reflected whenever it strikes the surface of an obstacle; owing to the complexity of real environments, the sound emitted by one source produces a variety of echoes from all directions, and when these sounds mix together, so-called reverberation is formed. The first sound of reverberation is the "direct sound", that is, the source sound, called the dry signal in effect devices; it is the sound that travels from the source directly to the listener's ear and is the chief component of the sound pressure level. Its propagation attenuation is inversely proportional to the square of distance, i.e., each doubling of distance reduces the sound pressure level by 6 dB. The next several distinguishable sounds are called "early reflections", also known as near reflections: sounds emitted by the source that reach the listener's ear after one or two reflections from the surrounding interfaces (walls, ceiling, floor). Reflections arriving within 50 ms after the direct sound belong to this range; the louder they are, the more pronounced they become, and they convey the relations among the source sound, the space, the ear, and the walls. A characteristic of these reflections is that the ear cannot distinguish them from the direct sound and can only perceive their superposition; reflection is therefore beneficial to raising the sound pressure level and the clarity of the sound, and its propagation attenuation is related to the absorption characteristics of the reflecting interfaces. The last component is the "reverberant sound", the multiple reflections arriving 50 ms or more after the direct sound. For music, although reverberant sound increases richness, it reduces clarity while increasing fullness; when it is too small the sound becomes "dry", so it must be neither too small nor too large. The magnitude of the reverberant sound is directly related to the absorption characteristics of the surrounding interfaces.
The reverberation parameters may include reverberation intensity, reverberation time, diffusion, reverberation density, and the like. A reverberation time that is too short makes the sound dry and dull, while one that is too long makes it muddy and indistinct, losing a great deal of detail; a suitable reverberation time not only beautifies the sound and masks instrument noise, but also blends the musical tones and improves the continuity of loudness and syllables. It should be noted that one sample audio corresponds to one set of reverberation parameters.
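The 6 dB-per-doubling attenuation of the direct sound noted above follows from the inverse-square law and can be checked numerically (an illustrative sketch):

```python
import math

def spl_drop(distance_ratio):
    """Free-field level change in dB when moving from distance r1 to r2:
    20 * log10(r2 / r1), i.e. about 6 dB per doubling of distance."""
    return 20.0 * math.log10(distance_ratio)

assert abs(spl_drop(2.0) - 6.0206) < 1e-3   # doubling: ~6 dB drop
```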
S203: the sample audio label of the sample audio is obtained; based on the equal loudness contours and the sample audio label, the octave characteristic points within a selected frequency range of the digital equalising processing curve are adjusted to obtain the digital equalising parameter corresponding to each octave characteristic point.

Specifically, the sample audio labels may include different styles such as rock and roll, metal music, folk, and disco, and each class of sample audio label may include at least one sample audio tag; for example, the sample audio label "rock and roll" may include sample audio tags such as "art rock, punk, post-rock, grindcore", and the like. Obtaining the sample audio label of the sample audio can be understood as obtaining the sample audio tag of the sample audio and then, based on the correspondence between sample audio tags and sample audio labels shown in Table 2, obtaining the sample audio label of the sample audio. For example, if the sample audio is "Cinderella" and the sample audio tag of "Cinderella" is "pop metal", then looking up Table 2 shows that "pop metal" belongs to "metal music"; therefore the sample audio label of "Cinderella" is "metal music".
The equal loudness contour is a curve describing the relation between sound pressure level and sound wave frequency under conditions of equal loudness, and is one of the characteristics of hearing: it shows what sound pressure level a pure tone at each frequency must reach for the listener to perceive the same loudness. Fig. 5 shows a family of pure-tone equal loudness contours, with frequency on the abscissa and sound pressure level on the ordinate. Sound pressure is the variation produced after the atmospheric pressure is disturbed, that is, the residual pressure over the atmospheric pressure, equivalent to the pressure change caused by superimposing a disturbance on the atmospheric pressure. The sound pressure level, denoted SPL and expressed in dB, is defined as twenty times the common logarithm of the ratio of the effective sound pressure p(e) to be measured to the reference sound pressure p(ref). The sound pressure levels along each curve in the figure differ with frequency, yet the loudness perceived by the human ear is the same; each curve represents the relation between frequency and sound pressure level at a given loudness. From the family of equal loudness contours it can be seen that when the loudness is small, the human ear is insensitive to high and low frequencies, and as the loudness increases, the perception of high and low frequencies gradually becomes more sensitive.
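The sound pressure level definition above can be written out directly (the 20 µPa reference is the standard value in air, an assumption not stated in the text):

```python
import math

P_REF = 20e-6  # reference sound pressure in air, 20 micropascals (assumed)

def sound_pressure_level(p_rms):
    """SPL in dB: 20 * log10(p / p_ref), per the definition above."""
    return 20.0 * math.log10(p_rms / P_REF)

assert abs(sound_pressure_level(20e-6) - 0.0) < 1e-9   # threshold: 0 dB
assert abs(sound_pressure_level(0.2) - 80.0) < 1e-9    # 0.2 Pa -> 80 dB
```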
The digital equalising parameter is the gain value applied to the signal of each frequency band in the digital equalising processing curve, and the digital equalising parameters correspond one-to-one with the octave characteristic points. The octave characteristic points are obtained by dividing the discrete spectrum into successive frequency bands such that the upper limit frequency of each band is twice its lower limit frequency (that is, frequency bands at 2:1 frequency intervals), yielding a number of octaves, and then taking characteristic points within each octave. For example, if the signal frequency range is 63 Hz to 16 kHz, it can be divided into 8 octaves: 63 Hz~126 Hz, 126 Hz~252 Hz, 252 Hz~504 Hz, 504 Hz~1.008 kHz, 1.008 kHz~2.016 kHz, 2.016 kHz~4.032 kHz, 4.032 kHz~8.064 kHz, and 8.064 kHz~16 kHz, and characteristic points are then taken within each octave. If 1/24-octave characteristic points are used, each octave is further divided into 24 bands, each band providing one 1/24-octave characteristic point of that octave; each 1/24-octave characteristic point corresponds to one digital equalising parameter, and the digital equalising parameter of each 1/24-octave characteristic point is adjusted so as to determine a suitable parameter range for each 1/24-octave characteristic point.
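The 1/24-octave characteristic points described above can be enumerated as follows (illustrative; the 63 Hz starting frequency follows the earlier example, and the function name is an assumption):

```python
def fractional_octave_points(f_lo, octaves, per_octave=24):
    """Characteristic-point frequencies when each octave is split into
    `per_octave` bands (here 1/24-octave points, as described above)."""
    step = 2.0 ** (1.0 / per_octave)   # ratio between adjacent points
    points = []
    f = f_lo
    for _ in range(octaves * per_octave):
        points.append(f)
        f *= step
    return points

pts = fractional_octave_points(63.0, octaves=8)
assert len(pts) == 8 * 24
assert abs(pts[24] - 126.0) < 1e-9   # 24 steps up = one octave = doubling
```

Each of these points would then carry one adjustable digital equalising parameter.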
In the specific implementation, a portion of the digital equalising processing curve covering part of the frequency range is selected and divided into successive octaves; the characteristic points within each octave are then adjusted based on the parameter information of the equal loudness contours, while ensuring that during adjustment the digital equalising parameter of each octave characteristic point varies within the parameter range indicated by the sample audio label, thereby determining the digital equalising parameter range of each octave characteristic point. For example, the portion of the digital equalising processing curve of Fig. 4 within 63 Hz to 16 kHz is divided into 8 octaves, and characteristic points are taken in each octave to obtain the discrete octave characteristic points shown in Fig. 6; the digital equalising parameter of each discrete octave characteristic point is then adjusted based on the commonly used digital equalising parameters corresponding to the sample audio label, so as to determine the digital equalising parameter range of each octave characteristic point under that sample audio label.
S204: the digital equalising parameter corresponding to each octave characteristic point, the reverberation parameters of the sample audio, and the sample audio label are saved into the sound effect parameter set.

It can be understood that the digital equalising parameters, the reverberation parameters, and the sample audio label may be stored in the sound effect parameter set in the form of a data packet, or in the form of a mapping table.

In one specific implementation of the embodiment of the present invention, saving the digital equalising parameter corresponding to each octave characteristic point, the reverberation parameters of the sample audio, and the sample audio label into the sound effect parameter set may include the following steps, as shown in Fig. 7:
S301: the digital equalising parameter corresponding to each octave characteristic point and the reverberation parameters of the sample audio are compressed and stored as the audio data packet corresponding to the sample audio label.

Specifically, the data packet is a data storage format that may include multiple fields, each field identifying different information. The digital equalising parameters and the reverberation parameters are compressed into the data fields of an audio data packet to form the audio data packet. Each audio data packet contains multiple groups of digital equalising parameters and one group of reverberation parameters. The multiple groups of digital equalising parameters comprise the digital equalising parameter corresponding to each characteristic point of each octave, and each octave may include multiple characteristic points. It should be noted that the multiple groups of digital equalising parameters and the reverberation parameters of the sample audio are the parameters indicated by the sample audio label; that is to say, under different sample labels, the digital equalising parameters and the reverberation parameters of the sample audio differ.
Table 3
For example, if the signal frequency range is 63 Hz to 16 kHz, it is divided into 8 octaves and 9 characteristic points are taken in each octave. The digital equalising parameters corresponding to the 9 characteristic points of the 1st octave are A11 to A19, those of the 2nd octave are A21 to A29, ..., and those of the 8th octave are A81 to A89; the reverberation parameter of the signal is B. Then the multiple groups of digital equalising parameters of the sample audio stored in one audio data packet are A11~A19, A21~A29, ..., A81~A89, and the reverberation parameter of the sample audio is B, as shown in Table 3.
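The Table 3 example can be sketched as a minimal, hypothetical packet layout (the field names are assumptions; the embodiment's actual binary packet format is not specified):

```python
def build_packet(label, eq_rows, reverb):
    """Assemble one sound-effect data packet: 8 octaves x 9 characteristic
    points of EQ parameters plus one group of reverb parameters."""
    assert len(eq_rows) == 8 and all(len(r) == 9 for r in eq_rows)
    return {"label": label, "eq": eq_rows, "reverb": reverb}

# Placeholder values mirroring the A11..A89 / B naming of the example.
eq = [[f"A{i}{j}" for j in range(1, 10)] for i in range(1, 9)]
packet = build_packet("metal music", eq, "B")
assert packet["eq"][0][0] == "A11" and packet["eq"][7][8] == "A89"
```

In the embodiment these fields would be compressed into the packet's data fields rather than kept as a plain dictionary.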
S302: the sample audio label and the audio data packet are saved into the sound effect parameter set.

In the specific implementation, the sample audio label is added to a selected field of the audio data packet, such as the header, and the audio data packet carrying the sample audio label is saved into the sound effect parameter set; alternatively, the sample audio label and the audio data packet are saved into the sound effect parameter set in correspondence, in the form of a mapping table.
S205: when a play instruction for a target audio is received, the audio data of the target audio is obtained.

It can be understood that in the embodiments of the present invention the target audio is music, namely the piece of music selected for output by the user from a number of songs.

In one specific implementation of the embodiment of the present invention, when the sound effect processing device receives the play instruction for the target audio, it obtains the audio information of the target audio and extracts the audio data and the audio tag from the audio information. For example, when the sound effect processing device receives a play instruction for "performer", it obtains the audio data of "performer" and its audio tag "pop".
In another specific implementation of the embodiment of the present invention, when the sound effect processing device receives the play instruction for the target audio, it obtains the audio data, target frequency information, and target timbre information of the target audio. The target frequency information and target timbre information are the spectral characteristics of the target audio, i.e., the frequency-domain characteristics of the target audio signal.
S206: the target audio data packet corresponding to the target audio is obtained from the sound effect parameter set, the target audio data packet including the digital equalising parameters of the target audio and the reverberation parameters of the target audio.

In one specific implementation of the embodiment of the present invention, the sound effect processing device obtains from the sound effect parameter set the target audio data packet corresponding to the audio tag, and reads from the target audio data packet the digital equalising parameters and the reverberation parameters of the target audio. Obtaining the target audio data packet corresponding to the audio tag from the sound effect parameter set can be understood as searching the sample audio tag set for the target sample audio label to which the audio tag belongs, and obtaining from the sound effect parameter set the target audio data packet corresponding to that target sample audio label. That is, the sound effect processing device looks up the sample audio label corresponding to the audio tag, and then, based on the correspondence between sample audio labels and audio data packets in the sound effect parameter set, obtains the digital equalising parameters and the reverberation parameters of the target audio. Looking up the sample audio label corresponding to the audio tag can be understood as finding, based on the keyword of the audio tag, the sample audio tag matching that keyword in the configured sample audio tag set, and then obtaining the sample audio label corresponding to that sample audio tag. The sample audio tag set may be stored in the sound effect parameter set as a subset, or stored as a separate set.
In another specific implementation of the embodiment of the present invention, saving the digital equalising parameter corresponding to each octave characteristic point, the reverberation parameters of the sample audio, and the sample audio label into the sound effect parameter set includes:

saving the digital equalising parameter corresponding to each octave characteristic point, the reverberation parameters of the sample audio, the sample frequency information, the sample timbre information, and the sample audio label into the sound effect parameter set. In this case, obtaining the target audio data packet corresponding to the target audio from the sound effect parameter set may include the following steps, as shown in Fig. 8:
S401: the target frequency information and target timbre information are matched against the sample frequency information and sample timbre information of each audio data packet in the sound effect parameter set, and the matching similarity between the target frequency and timbre information and the sample frequency and timbre information of each audio data packet is obtained after matching.

In the specific implementation, when the keyword of the audio tag is unclear or multiple audio tags are present, the audio data packets can be decompressed and the sample frequency information and sample timbre information extracted after decompression; each group of sample frequency information and sample timbre information in the sound effect parameter set is traversed in turn, the target frequency information and target timbre information are matched against the traversed sample frequency information and sample timbre information respectively, and the matching similarity of each group is obtained. Optionally, the matching similarities obtained may be cached in correspondence with their audio data packets; the cached form may be the matching similarity added into a designated field of the audio data packet, or the matching similarities and audio data packets cached into the sound effect parameter set in the form of a mapping table.
S402: the sample audio data packet corresponding to the sample frequency information and sample timbre information with the highest matching similarity is obtained, and that sample audio data packet is taken as the target audio data packet.
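The similarity matching of steps S401 and S402 can be sketched with cosine similarity over illustrative frequency/timbre vectors (the embodiment does not specify the actual similarity measure; all names and vectors here are assumptions):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def best_packet(target_vec, packets):
    """Return the sample packet whose frequency/timbre vector is most
    similar to the target's (the S402 selection step)."""
    return max(packets, key=lambda p: cosine(target_vec, p["vec"]))

packets = [{"label": "rock and roll", "vec": [1.0, 0.0, 0.0]},
           {"label": "metal music",  "vec": [0.9, 0.4, 0.1]}]
target = [0.8, 0.5, 0.2]
assert best_packet(target, packets)["label"] == "metal music"
```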
S207: synthesis processing is carried out on the audio data using the target audio data packet, and the synthesized audio data is sent to the sound terminal so that the sound terminal outputs the synthesized audio data.

In the specific implementation, the sound effect processing device performs synthesis processing on the sound effect parameters in the determined target audio data packet and the audio data, such as acquisition, transformation, filtering, estimation, enhancement, compression, and recognition, so as to obtain the transformed audio, which is then sent to the sound terminal for output. In one feasible implementation, the sound effect processing device is provided with a DSP-based audio system whose main components include a digital signal processor (DSP), audio A/D (analog-to-digital) and D/A (digital-to-analog) converters, RAM, ROM, and peripheral processors. Each time the codec transmits one 16-bit sample to the DSP, a receive interrupt is raised; the DSP stores the received data in the system's input buffer while separately processing the audio data already buffered (for example transforming, filtering, and estimating), and stores the transformed results in the system's output buffer. A timed output interrupt routine periodically fetches data from the output buffer, which the codec outputs in analog form; the data is then sent to the sound terminal and played after passing through a power amplifier.
In the embodiments of the present invention, the sound effect processing device is first provided with the audio label set carrying the sound effect parameters; when a play instruction for a target audio is received, the audio data of the target audio is obtained, and after the target audio data packet corresponding to the target audio is obtained from the sound effect parameter set, synthesis processing is carried out on the audio data using that packet; finally the synthesized audio data is sent to the sound terminal for output. By providing an adaptive sound effect based on the information of the audio, a sound effect best suited to the audio content can be built, enriching the sound effect processing modes, improving the intelligence of sound effect processing, and at the same time offering a completely new personalized sound effect experience.
Refer to Fig. 9, which is a flow diagram of another sound effect processing method provided by an embodiment of the present invention. As shown in Fig. 9, the method of the embodiment of the present invention may include the following steps S501 to S503.
S501: when a play instruction for a target audio is received, the audio data of the target audio sent by the server is received.

It can be understood that audio is an important medium in multimedia and takes the form of a sound signal. As a carrier of information, audio can be divided into three types: speech, music, and other sounds. In the embodiments of the present invention the audio is music, which may be an individual piece of music in a music player, or accompanying music in multimedia such as video, games, or e-books. The target audio is the piece of music selected for output by the user from a number of songs. An audio may include a variety of information, such as the music title, singer name, audio data, album, release time, total duration, and audio tag. The audio data is an opaque binary stream of non-semantic symbols, that is, the content of the target audio. The audio tag may be any of various styles such as art rock, punk, metal music, or folk. Optionally, the audio may further include the frequency information and timbre information of the audio, which are the spectral characteristics of the audio, i.e., the frequency-domain characteristics of the audio signal.
In the specific implementation, when the server receives the play instruction for the target audio, it collects the audio information of the target audio, extracts information such as the audio data, audio tag, frequency information, and timbre information from the audio information, and sends them to the sound effect processing device (the sound terminal). Optionally, when the sound effect processing device receives the play instruction for the target audio, it sends an audio information acquisition request for the target audio to the server, so that the server collects the audio information of the target audio, and the device receives the audio information collected and fed back by the server.
S502: the sound effect parameter set sent by the server is received, and the target audio data packet corresponding to the target audio is obtained from the sound effect parameter set, the target audio data packet including the digital equalising parameters of the target audio and the reverberation parameters of the target audio.

It can be understood that the sound effect parameter set includes the sample audio label of each sample audio, the audio data packet corresponding to each sample audio label, and the sample frequency information and sample timbre information of each sample audio, where the audio data packet includes the digital equalising parameters of the sample audio and the reverberation parameters of the sample audio. Optionally, the sound effect parameter set may further include a sample audio tag set, which records the correspondence between sample audio tags and sample audio labels: the sample audio labels include different styles such as rock and roll, metal music, folk, and disco, and the sample audio tags corresponding to each class of sample audio label are the styles of different flavors under that class. For example, Table 1 is one form of sound effect parameter set and Table 2 is one form of sample audio tag set; if the sample audio label is "rock and roll", the corresponding sample audio tags may include "art rock, punk, post-rock, grindcore", and the like.
In the specific implementation, the sound effect processing device receives the sound effect parameter set sent by the server after the server has established it, and stores the sound effect parameter set; the digital equalising parameters and the reverberation parameters of the target audio are then obtained from the stored sound effect parameter set.
In one feasible implementation, the sound effect processing device receives the audio data and audio tag of the target audio sent by the server, then obtains from the sound effect parameter set the target audio data packet corresponding to the audio tag, and reads from the target audio data packet the digital equalising parameters and the reverberation parameters of the target audio. Further, obtaining the target audio data packet corresponding to the target audio from the sound effect parameter set may include searching the sample audio tag set for the target sample audio label to which the audio tag belongs, and obtaining from the sound effect parameter set the target audio data packet corresponding to that target sample audio label.

In the specific implementation, the server sends the audio data, audio tag, target frequency information, target timbre information, and so on to the sound effect processing device; the device searches the sample audio tag set for the target sample audio label to which the audio tag belongs, and obtains from the sound effect parameter set the target audio data packet corresponding to that target sample audio label. That is, the sound effect processing device looks up the sample audio label corresponding to the audio tag, and then, based on the correspondence between sample audio labels and audio data packets in the sound effect parameter set, obtains the digital equalising parameters and the reverberation parameters of the target audio. Looking up the sample audio label corresponding to the audio tag can be understood as finding, based on the keyword of the audio tag, the sample audio tag matching that keyword in the configured sample audio tag set, and then obtaining the sample audio label corresponding to that sample audio tag. The sample audio tag set may be stored in the sound effect parameter set as a subset, or stored as a separate set.
In another feasible implementation, the sound effect processing device receives the audio data, the target frequency information and the target timbre information of the target audio from the server. As shown in Fig. 10, obtaining the target audio data packet corresponding to the target audio from the sound effect parameter set includes the following steps:
S601: match the target frequency information and target timbre information against the sample frequency information and sample timbre information of each audio data packet in the sound effect parameter set, and obtain the matching similarity between the target information and each packet's sample information;
S602: obtain the sample audio data packet whose sample frequency information and sample timbre information have the highest matching similarity, and take that sample audio data packet as the target audio data packet.
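The patent does not fix a particular similarity measure for S601. As one plausible sketch, the target and sample frequency/timbre features can be treated as vectors and compared by cosine similarity, taking the best-scoring packet (S602); the feature values and packet names below are illustrative only:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def best_packet(target_freq, target_timbre, packets):
    # S601: score every packet's sample features against the target features;
    # S602: keep the packet with the highest matching similarity.
    best = max(
        packets,
        key=lambda p: cosine(target_freq + target_timbre,
                             p["freq"] + p["timbre"]),
    )
    return best["packet"]

packets = [
    {"freq": [0.9, 0.1], "timbre": [0.2, 0.8], "packet": "A1"},
    {"freq": [0.1, 0.9], "timbre": [0.7, 0.3], "packet": "B1"},
]
print(best_packet([0.85, 0.15], [0.25, 0.75], packets))  # -> A1
```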
In a specific implementation, when the keyword of the audio tag is ambiguous, or multiple audio tags are present, the audio data packets can be decompressed and their sample frequency information and sample timbre information extracted. The device then traverses each group of sample frequency and timbre information in the sound effect parameter set in turn, matches the target frequency information and target timbre information against the traversed sample information, obtains the matching similarity for each audio data packet, selects the packet with the highest matching similarity as the target audio data packet, and reads from it the digital equalization parameters and the reverberation parameters of the target audio.
S503: perform synthesis processing on the audio data using the target audio data packet, and output the synthesized audio data.
In a specific implementation, the sound effect processing device applies the sound effect parameters in the determined target audio data packet to the audio data through acquisition, transformation, filtering, estimation, enhancement, compression, recognition and other synthesis operations to obtain the transformed audio, which is then output directly for playback.
In one feasible implementation, the sound effect processing device is provided with a DSP-based audio codec system whose main components include a digital signal processor (DSP), audio A/D (analog-to-digital) and D/A (digital-to-analog) converters, RAM, ROM and a peripheral processor. Each time the codec transfers a 16-bit sample to the DSP, a receive interrupt is raised; the DSP stores the received data in the system's input buffer and applies the corresponding processing (e.g., transformation, filtering, estimation) to the buffered audio data. After these transformations the data are placed in the system's output buffer, from which an output interrupt routine periodically fetches them; the codec converts them to analog form and plays them back through a power amplifier.
Optionally, dynamic gain processing and noise suppression are applied during synthesis to ensure that no power overload or clipping distortion is produced.
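A minimal stand-in for the dynamic-gain stage described above, assuming floating-point samples normalized to ±1; the actual gain law used by the patent is not specified:

```python
def limit_block(samples, ceiling=0.99):
    """Scale a whole block down if its peak would clip (toy dynamic gain)."""
    peak = max(abs(s) for s in samples)
    if peak <= ceiling:
        return list(samples)        # no overload: pass through unchanged
    g = ceiling / peak              # single gain factor for the block
    return [s * g for s in samples]

out = limit_block([0.5, -1.2, 0.8])
print(round(max(abs(s) for s in out), 2))  # -> 0.99
```

A real limiter would smooth the gain over time (attack/release) rather than scale per block.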
Optionally, an ultra-high sampling rate of 96 kHz is used during synthesis to guarantee high-quality digital-to-analog conversion and a signal-to-noise ratio above 90 dB.
In the embodiment of the present invention, when the server receives a playback instruction for the target audio, it obtains the audio data of the target audio and sends it to the sound effect processing device, together with the tag set carrying the sound effect parameters. After obtaining the target audio data packet corresponding to the target audio from the sound effect parameter set, the sound effect processing device performs synthesis processing on the audio data using that packet, and finally outputs the synthesized audio data. By providing adaptive sound effects based on the information of the audio itself, the sound effect best suited to the audio content can be constructed, which enriches the sound effect processing modes and improves the intelligence of sound effect processing.
Sound effect processing devices provided by embodiments of the present invention are described in detail below with reference to Figs. 11 to 18. It should be noted that the sound effect processing device shown in Fig. 11 is used to execute the methods of the embodiments shown in Figs. 1 to 10 of the present invention. For convenience of description, only the parts relevant to the embodiments of the present invention are shown; for undisclosed technical details, please refer to the embodiments shown in Figs. 1 to 10 of the present invention.
Referring to Fig. 11, a schematic structural diagram of a sound effect processing device is provided for an embodiment of the present invention. As shown in Fig. 11, the sound effect processing device 1 of this embodiment may include: an information acquisition unit 11, a parameter acquisition unit 12 and a data output unit 13.
The information acquisition unit 11 is configured to obtain the audio data of the target audio when a playback instruction for the target audio is received.
It can be understood that audio is an important medium in multimedia and is a form of sound signal. As a carrier of information, audio can be divided into three types: speech, music and other sounds. In the embodiments of the present invention, the audio is music; it may be an individual piece of music in a music player, or the accompaniment in multimedia such as video, games and e-books. The target audio is the music selected by the user from multiple songs for output. A piece of audio may carry a variety of information, such as the track title, singer name, voice data stream, album, release time, total duration and audio tag. The audio data is an opaque binary stream of non-semantic symbols, i.e., the content of the target audio. The audio tag may be a style such as art rock, punk, metal or folk. Optionally, the audio may further include frequency information and timbre information, which represent the spectral characteristics of the audio, i.e., the frequency-domain characteristics of the audio signal.
In a specific implementation, when the information acquisition unit 11 receives a playback instruction for the target audio, it obtains the audio information of the target audio and extracts from it the audio data, the audio tag and, where present, the frequency information and timbre information. For example, when the sound effect processing device receives an instruction to play the target audio "performer", it obtains the audio data of "performer" and its audio tag "pop".
The parameter acquisition unit 12 is configured to obtain, from the sound effect parameter set, the target audio data packet corresponding to the target audio, the packet including the digital equalization parameters and the reverberation parameters of the target audio.
It can be understood that the sound effect parameter set may include the sample audio label of each sample audio, the audio data packet corresponding to each sample audio label, and the sample frequency information and sample timbre information of each sample audio, where the audio data packet may include digital equalization parameters and reverberation parameters; this information may be stored in the form of data packets or in the form of a mapping table. Optionally, the sound effect parameter set may further include a sample audio tag set recording the correspondence between audio tags and sample audio labels; the sample audio labels comprise different genres such as rock and roll, metal music, folk and disco, and the audio tags corresponding to each sample audio label are the sub-styles under that label. For example, Table 1 shows one form of the sound effect parameter set and Table 2 shows one form of the sample audio tag set; if the sample audio label is "rock and roll", the corresponding audio tags may include "art rock, punk, post rock, grindcore" and the like.
Specifically, the digital equalization parameters are the gain values applied to the signal of each frequency band in the digital equalization processing curve, and they correspond one-to-one with the octave characteristic points. An octave characteristic point is obtained by dividing the discrete spectrum into frequency bands such that the upper limit frequency of each band is twice its lower limit frequency (i.e., bands with a 2:1 frequency ratio), thereby obtaining multiple octaves, and then taking a characteristic point within each band. For example, if the signal frequency range is 63 Hz to 16 kHz, it can be divided into 8 octaves: 63–126 Hz, 126–252 Hz, 252–504 Hz, 504 Hz–1.008 kHz, 1.008–2.016 kHz, 2.016–4.032 kHz, 4.032–8.064 kHz and 8.064–16.128 kHz; a characteristic point is then taken in each octave, each characteristic point corresponding to one digital equalization parameter. Likewise, if two frequencies are inserted between the upper and lower limits of an octave so that the ratios between the four frequencies are identical (each adjacent pair differing by a factor of 2^(1/3) ≈ 1.26), the octave is divided into three intervals, each called a one-third octave. Fractional octaves can be obtained by the same construction: for a 1/n octave, using the coefficient oc = 2^(1/(2n)), the band edges f0/oc and f0·oc are taken on either side of a reference frequency f0, and the power spectrum is computed after segment merging. Such fractional-octave processing is applied in fields such as 31-band equalizers, sound pressure analysis, and vibration and noise reduction.
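The 2:1 band construction described above can be sketched directly; starting from 63 Hz and doubling eight times reproduces the band edges listed in the example:

```python
def octave_bands(f_low=63.0, n_bands=8):
    """Split a spectrum into octave bands: each upper edge is 2x the lower edge."""
    bands = []
    lo = f_low
    for _ in range(n_bands):
        bands.append((lo, lo * 2))
        lo *= 2
    return bands

for lo, hi in octave_bands():
    print(f"{lo:g} Hz - {hi:g} Hz")
# prints 63 Hz - 126 Hz, 126 Hz - 252 Hz, ..., 8064 Hz - 16128 Hz
```

Note that exact doubling from 63 Hz ends at 16.128 kHz, i.e., approximately the 16 kHz upper limit quoted in the example.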
Reverberation arises because, after a sounding body emits sound waves, the waves travel through the air and are reflected by the surfaces of obstacles they encounter; owing to the complexity of real environments, the sound emitted by a source produces a variety of echoes from all directions, and the mixture of these sounds forms what is called reverberation. The reverberation parameters may include reverberation intensity, reverberation time, diffusion and reverberation density. A short reverberation time makes the sound dry and dull, while an overly long one makes it muddy and indistinct, losing a great deal of detail; a suitable reverberation time not only beautifies the sound and masks instrument noise, but also blends the musical tones and increases the continuity of loudness and syllables. It should be noted that one target audio corresponds to one set of reverberation parameters.
In a specific implementation, the parameter acquisition unit 12 obtains the audio tag of the target audio, looks up in the sample audio tag set the target sample audio label to which the audio tag belongs, and obtains from the sound effect parameter set the audio data packet corresponding to that target sample audio label. Optionally, it looks up the target sample audio label to which the audio tag belongs in the sample audio tag set, obtains the corresponding target audio data packet from the sound effect parameter set, and reads from it the digital equalization parameters and the reverberation parameters of the target audio. Optionally, it matches the target frequency information and target timbre information of the target audio against the sample frequency information and sample timbre information of each audio data packet in the sound effect parameter set, obtains the matching similarity for each packet, and takes the sample audio data packet whose sample frequency information and sample timbre information have the highest matching similarity as the target audio data packet.
For example, if the audio tag of the target audio is "heavy metal", the target sample audio label "metal music" to which "heavy metal" belongs is determined from the sample audio tag set shown in Table 2, and the sound effect parameters of "metal music" can then be determined to be "B1" in the sound effect parameter set shown in Table 1.
The data output unit 13 is configured to perform synthesis processing on the audio data using the target audio data packet, and to output the synthesized audio data.
In a specific implementation, the data output unit 13 applies the sound effect parameters in the determined target audio data packet to the audio data through acquisition, transformation, filtering, estimation, enhancement, compression, recognition and other synthesis operations to obtain the transformed audio, and then outputs it. When the sound effect processing device is a server, the data output unit 13 sends the transformed audio to a sound terminal so that the sound terminal outputs it; when the sound effect processing device is itself a sound terminal, the data output unit 13 can output and play the transformed audio directly.
In one feasible implementation, the data output unit 13 is provided with a DSP-based audio codec system whose main components include a digital signal processor (DSP), audio A/D (analog-to-digital) and D/A (digital-to-analog) converters, RAM, ROM and a peripheral processor. Each time the codec transfers a 16-bit sample to the DSP, a receive interrupt is raised; the DSP stores the received data in the system's input buffer and applies the corresponding processing (e.g., transformation, filtering, estimation) to the buffered audio data. After these transformations the data are placed in the system's output buffer, from which an output interrupt routine periodically fetches them; the codec converts them to analog form and either sends them to a sound terminal or plays them back directly through a power amplifier.
Optionally, the data output unit 13 applies dynamic gain processing and noise suppression during synthesis to ensure that no power overload or clipping distortion is produced.
Optionally, the data output unit 13 uses an ultra-high sampling rate of 96 kHz during synthesis to guarantee high-quality digital-to-analog conversion and a signal-to-noise ratio above 90 dB.
In the embodiment of the present invention, when a playback instruction for the target audio is received, the audio data of the target audio is obtained; after the target audio data packet corresponding to the audio tag of the target audio is obtained from the sound effect parameter set, synthesis processing is performed on the audio data using that packet, and the synthesized audio data is finally output. By providing adaptive sound effects based on the information of the audio itself, the sound effect best suited to the audio content can be constructed, which enriches the sound effect processing modes and improves the intelligence of sound effect processing.
Referring to Fig. 12, a schematic structural diagram of another sound effect processing device is provided for an embodiment of the present invention. As shown in Fig. 12, the sound effect processing device 1 of this embodiment may include: an information acquisition unit 11, a parameter acquisition unit 12, a data output unit 13, a sample information acquisition unit 14, a sample parameter acquisition unit 15, a sample parameter adjustment unit 16 and a sample information storage unit 17.
The sample information acquisition unit 14 is configured to collect sample audio, obtain the feature frequency response curve of the sample audio, and obtain the sample frequency information and sample timbre information of the sample audio.
It can be understood that audio is an important medium in multimedia and is a form of sound signal. As a carrier of information, audio can be divided into three types: speech, music and other sounds. In the embodiments of the present invention, the sample audio is music, namely at least one piece of music selected by the user from multiple songs for storage. A sample audio may carry a variety of information, such as the sample audio data, frequency information, timbre information, singer name, album, release time, total duration and sample audio tag. The sample audio tag may be a style such as art rock, punk, heavy metal, Summoning or folk; the sample audio data is an opaque binary stream of non-semantic symbols, i.e., the content of the sample audio; and the sample frequency information and sample timbre information are the spectral characteristics of the sample audio (the frequency-domain characteristics of the sample audio signal).
The feature frequency response refers to the phenomenon whereby the sound produced, when a sample audio signal output at constant voltage is connected to the system, increases or decays as frequency varies; the curve relating this sound pressure variation to frequency is called the feature frequency response curve, as shown in Fig. 3, in which the abscissa is frequency and the ordinate is sound pressure level. In a specific implementation, since the frequency range of human hearing is 20 Hz to 20 kHz, the sample information acquisition unit 14 may collect sample audio in the 20 Hz to 20 kHz range, extract the spectral characteristics of the sample audio within a suitable frequency range (e.g., 50 Hz to 20 kHz), and plot the feature frequency response curve of the sample audio from the extracted spectral characteristics.
The sample parameter acquisition unit 15 is configured to obtain the digital equalization processing curve of the sample audio and the reverberation parameters of the sample audio based on the feature frequency response curve, the sample frequency information and the sample timbre information.
It can be understood that the principle of digital equalization processing is to establish, for an input signal "X", a corresponding output signal "Y": Y = f(X), where the operation f(·) in turn contains a function of the frequency "k" associated with "X". Expanding the expression as a function of "X" gives Y = g(k)·X, where g(·) varies with the digital equalization parameter settings. In the embodiments of the present invention, the input signal "X" is the sample frequency information, the sample timbre information and the feature frequency response curve, and the frequency "k" is the frequency value corresponding to the feature frequency response curve. Based on this processing principle, the digital equalization processing curve is generated. Fig. 4 shows one digital equalization processing curve, in which the abscissa is frequency and the ordinate is the digital equalization parameter; the curve describes how the digital equalization parameter varies with frequency.
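The relation Y = g(k)·X can be illustrated on a toy spectrum. The per-band gains in dB and the nearest-lower-band lookup are assumptions for illustration; the patent does not specify how g(k) is parameterized between characteristic points:

```python
def apply_eq(spectrum, gain_db):
    """Y = g(k) * X: scale each spectral component by its band's gain.

    spectrum: {frequency_hz: amplitude X}
    gain_db:  {band_lower_edge_hz: gain in dB} (hypothetical values)
    """
    bands = sorted(gain_db)
    out = {}
    for k, x in spectrum.items():
        # pick the band whose lower edge is the largest one not above k
        band = max((b for b in bands if b <= k), default=bands[0])
        out[k] = x * 10 ** (gain_db[band] / 20.0)   # dB -> linear gain g(k)
    return out

spec = {100: 1.0, 1000: 1.0, 8000: 1.0}
eq = {63: +3.0, 504: 0.0, 4032: -6.0}
out = apply_eq(spec, eq)
print(round(out[8000], 3))  # -> 0.501 (a -6 dB cut roughly halves the amplitude)
```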
Meanwhile it being based on the feature frequency response curve, the sample frequency information and the sample timbre information, it can be obtained
The reverberation parameters of sample audio.Wherein, the generation of reverberation is since a sound producing body is after sending out sound wave, and sound wave is via sky
The surface that gas touches barrier will be reflected, and due to the complexity of actual environment, result in the sound that a source of sound is sent out
Sound will produce various echo from all directions, after these sound mix, be formed so-called reverberation.Reverberation
First sound is " direct sound wave ", that is, source sound, and dry voice output is called in effect device, is to send out direct arrival from sound
The sound of audience's ear and the chief component of sound pressure level.The propagation attenuation of sound pressure level and distance square are inversely proportional, i.e.,
Distance doubles, and sound pressure level reduces 6dB.Subsequent several apparent sound being separated by out are called " early reflected sound ", again
Claim nearly secondary reflection sound, is that the sound that sends out of sound source is reached after ambient interfaces (wall and ceiling, ground) reflect 1~2 time and listened
The sound of many ears, the reflected sound reached within 50ms more late than direct sound wave belong to this range, and sound is bigger, brighter
It is aobvious, it can reflect source sound in space, ear and distance relation between the walls.The characteristics of reflection is that ear can not
It and direct sound wave are distinguished, can only be superimposed them impression.Therefore reflection is to improving sound pressure level harmony
The clarity of sound is beneficial.Its propagation attenuation is related with the sound absorption characteristics of reflecting interface.The last one sound is " reverberation sound ", is
The multiple reflections sound to 50ms or more more late than direct sound wave.For music, though reverberation sound can increase the richness of music, it
But the clarity that sound can be reduced while increasing fullness can make sound send out " dry " when too small, can not be excessive.Reverberation
The size of sound and the sound absorption characteristics of ambient interfaces are directly related.
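As a rough illustration of the direct-sound-plus-reflections structure described above, a toy echo sum is sketched below; real reverberators use dense recursive filter networks, and the delay and decay values here are arbitrary:

```python
def simple_reverb(dry, delay, decay, repeats=3):
    """Direct sound plus a few decaying, delayed copies (toy reflections).

    dry:    list of input samples (the direct sound / dry signal)
    delay:  reflection spacing in samples
    decay:  gain multiplier applied per reflection
    """
    wet = list(dry) + [0.0] * (delay * repeats)
    for r in range(1, repeats + 1):
        g = decay ** r                      # each later reflection is quieter
        for i, s in enumerate(dry):
            wet[i + r * delay] += s * g
    return wet

out = simple_reverb([1.0, 0.0, 0.0], delay=2, decay=0.5)
print(out)  # -> [1.0, 0.0, 0.5, 0.0, 0.25, 0.0, 0.125, 0.0, 0.0]
```

The impulse response makes the structure visible: the direct sound at t=0, then successively weaker "reflections" every `delay` samples.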
The reverberation parameters may include reverberation intensity, reverberation time, diffusion and reverberation density. A short reverberation time makes the sound dry and dull, while an overly long one makes it muddy and indistinct, losing a great deal of detail; a suitable reverberation time not only beautifies the sound and masks instrument noise, but also blends the musical tones and increases the continuity of loudness and syllables. It should be noted that one sample audio corresponds to one set of reverberation parameters.
The sample parameter adjustment unit 16 is configured to obtain the sample audio label of the sample audio, and to adjust the octave characteristic points within a selected frequency range of the digital equalization processing curve based on the equal-loudness contours and the sample audio label, so as to obtain the digital equalization parameter corresponding to each octave characteristic point.
Specifically, the sample audio labels may include different genres such as rock and roll, metal music, folk and disco, and each class of sample audio label may include at least one audio tag; for example, the sample audio label "rock and roll" may include the audio tags "art rock, punk, post rock, grindcore" and the like. Obtaining the sample audio label of the sample audio can be understood as obtaining the audio tag of the sample audio and then retrieving its sample audio label based on the correspondence between audio tags and sample audio labels shown in Table 2. For example, if the sample audio is "Cinderella" and its audio tag is "pop metal", looking up Table 2 shows that "pop metal" belongs to "metal music"; the sample audio label of "Cinderella" is therefore "metal music".
The equal-loudness contours describe the relationship between sound pressure level and sound frequency under conditions of equal loudness, and are one of the characteristics of hearing: they indicate what sound pressure level a pure tone at each frequency must reach for the listener to perceive the same loudness. Fig. 5 shows a set of pure-tone equal-loudness contours, in which the abscissa is frequency and the ordinate is sound pressure level. Sound pressure is the variation produced when atmospheric pressure is disturbed, i.e., the residual pressure over atmospheric pressure, equivalent to the pressure change caused by superimposing a disturbance on the atmosphere. The sound pressure level, denoted SPL and expressed in dB, is defined as twenty times the common logarithm of the ratio of the measured effective sound pressure p(e) to the reference sound pressure p(ref). Along any one contour in the figure the sound pressure level differs across frequencies, yet the loudness perceived by the human ear is the same; each curve thus represents how frequency and sound pressure level are related at a given loudness. From the equal-loudness contours it can be seen that at low loudness the human ear is insensitive to high and low frequencies, while at higher loudness it gradually becomes sensitive to them.
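The SPL definition above (twenty times the common logarithm of p(e)/p(ref)) can be written out directly, assuming the standard airborne reference pressure of 20 µPa:

```python
import math

P_REF = 20e-6  # reference sound pressure in air: 20 micropascals

def spl_db(p_rms):
    """SPL = 20 * log10(p / p_ref), in dB."""
    return 20.0 * math.log10(p_rms / P_REF)

print(round(spl_db(20e-6)))  # -> 0   (the reference pressure itself)
print(round(spl_db(2.0)))    # -> 100 (a pressure 1e5 times the reference)
```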
The digital equalization parameters are the gain values applied to the signal of each frequency band in the digital equalization processing curve, and they correspond one-to-one with the octave characteristic points. An octave characteristic point is obtained by dividing the discrete spectrum into frequency bands such that the upper limit frequency of each band is twice its lower limit frequency (i.e., bands with a 2:1 frequency ratio), thereby obtaining multiple octaves, and then taking a characteristic point within each octave. For example, if the signal frequency range is 63 Hz to 16 kHz, it can be divided into 8 octaves: 63–126 Hz, 126–252 Hz, 252–504 Hz, 504 Hz–1.008 kHz, 1.008–2.016 kHz, 2.016–4.032 kHz, 4.032–8.064 kHz and 8.064–16.128 kHz; characteristic points are then taken within each octave. If 1/24-octave characteristic points are used, each octave is further divided into 24 sub-bands, each sub-band providing one 1/24-octave characteristic point of that octave; each 1/24-octave characteristic point corresponds to one digital equalization parameter, and these parameters are adjusted so as to determine a suitable parameter range.
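The 1/24-octave characteristic points described above can be generated by repeated multiplication with the factor 2^(1/24); a sketch under the band limits stated in the example (the exact placement of points within each sub-band is not specified in the text, so points are taken at the sub-band edges here):

```python
def fractional_octave_points(f_start, f_stop, n=24):
    """Successive 1/n-octave points: each is 2**(1/n) times the previous,
    so one octave (a 2:1 range) contains n intervals."""
    step = 2 ** (1.0 / n)
    points = []
    f = f_start
    while f <= f_stop * 1.0000001:   # small tolerance for float round-off
        points.append(round(f, 1))
        f *= step
    return points

pts = fractional_octave_points(63.0, 126.0, n=24)
print(len(pts))  # -> 25 (24 sub-band intervals across one octave, plus both edges)
```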
In a specific implementation, the sample parameter adjustment unit 16 selects the portion of the digital equalization processing curve within a partial frequency range, divides the selected curve into successive octaves, and then adjusts the characteristic point within each octave based on the parameter information of the equal-loudness contours, ensuring during adjustment that the digital equalization parameter of each octave characteristic point varies within the parameter range indicated by the sample audio label, so as to determine the digital equalization parameter range of each octave characteristic point. For example, the portion of the digital equalization processing curve shown in Fig. 4 within 63 Hz to 16 kHz is divided into 8 octaves; a characteristic point is taken in each octave to obtain the discrete octave characteristic points shown in Fig. 6, and the digital equalization parameter of each discrete octave characteristic point is then adjusted based on the commonly used digital equalization parameters corresponding to the sample audio label, thereby determining the digital equalization parameter range of each octave characteristic point under that sample audio label.
The sample information storage unit 17 is configured to save the digital equalization parameter corresponding to each octave characteristic point, the reverberation parameters of the sample audio and the sample audio label into the sound effect parameter set.
It can be understood that the digital equalization parameters, the reverberation parameters and the sample audio label can be stored in the sound effect parameter set in the form of data packets, or in the form of a mapping table.
Optionally, as shown in Fig. 13, the sample information storage unit 17 includes:
a data packet obtaining subunit 171, configured to compress the digital equalization parameter corresponding to each octave characteristic point and the reverberation parameters of the sample audio, and store them as the audio data packet corresponding to the sample audio label.
Specifically, the data packet is a data storage format that may include multiple fields, each field identifying different information. The digital equalization parameters and the reverberation parameters are compressed into the data field of an audio data packet to form the packet. Each audio data packet contains multiple groups of digital equalization parameters and one group of reverberation parameters: the multiple groups comprise the digital equalization parameters corresponding to the characteristic points of each octave, and each octave may include multiple characteristic points. It should be noted that the groups of digital equalization parameters and the reverberation parameters of the sample audio are the parameters indicated by the sample audio label; that is, under different sample labels, the digital equalization parameters and the reverberation parameters of the sample audio differ.
For example, if the signal frequency range is 63 Hz to 16 kHz and is divided into 8 octaves with 9 characteristic points taken in each octave, then the digital equalization parameters corresponding to the 9 characteristic points of the 1st octave are A11–A19, those of the 2nd octave are A21–A29, ..., and those of the 8th octave are A81–A89, while the reverberation parameter of the signal is B. The groups of digital equalization parameters stored in one audio data packet are then A11–A19, A21–A29, ..., A81–A89, and the reverberation parameter of the sample audio is B, as shown in Table 3.
An information saving subunit 172, configured to save the sample audio label and the audio data packet into the sound effect parameters set.
In a specific implementation, the sample audio label is added to a selected field of the audio data packet, such as the header, and the audio data packet carrying the sample audio label is saved into the sound effect parameters set; alternatively, the sample audio label and the audio data packet are saved into the sound effect parameters set as corresponding entries in a mapping table.
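As a concrete illustration, the packing described above, in which the per-octave digital equalising parameters and the reverberation parameters are compressed into a packet whose header field carries the sample audio label, could be sketched as follows. This is a minimal Python sketch under assumed formats (a JSON body, zlib compression, and a 2-byte label-length header); the patent does not prescribe a specific encoding.

```python
import json
import zlib

def pack_audio_packet(label, eq_params, reverb_params):
    """Compress the per-octave EQ parameters and the reverberation
    parameters into one binary audio data packet, with the sample
    audio label stored in a header field."""
    body = json.dumps({"eq": eq_params, "reverb": reverb_params}).encode("utf-8")
    header = label.encode("utf-8")
    # assumed layout: label length (2 bytes) + label + compressed body
    return len(header).to_bytes(2, "big") + header + zlib.compress(body)

def unpack_audio_packet(packet):
    """Recover the sample audio label, EQ parameters and reverb parameters."""
    n = int.from_bytes(packet[:2], "big")
    label = packet[2:2 + n].decode("utf-8")
    body = json.loads(zlib.decompress(packet[2 + n:]).decode("utf-8"))
    return label, body["eq"], body["reverb"]

# 8 octaves x 9 characteristic points, mirroring the A11~A89 example above
eq = [[f"A{o}{p}" for p in range(1, 10)] for o in range(1, 9)]
packet = pack_audio_packet("rock", eq, "B")
label, eq_out, reverb_out = unpack_audio_packet(packet)
```

Storing `label -> packet` entries in a dictionary would then give the mapping-table form of the sound effect parameters set mentioned above.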
An information acquisition unit 11, configured to obtain the audio data of the target audio when a play instruction for the target audio is received.
It can be understood that, in the embodiments of the present invention, the target audio is music, namely the piece of music selected by the user for output from a plurality of songs.
In a specific implementation of the embodiment of the present invention, when the information acquisition unit 11 receives a play instruction for the target audio, it obtains the audio information of the target audio and extracts the audio data and the audio tag from the audio information. For example, when the audio effect processing device receives a play instruction for the target audio "performer", it obtains the audio data of "performer" and its audio tag "pop".
In another specific implementation of the embodiment of the present invention, when the information acquisition unit 11 receives a play instruction for the target audio, it obtains the audio data, the target frequency information and the target timbre information of the target audio. The target frequency information and the target timbre information are the spectral characteristics of the target audio, i.e. the frequency-domain characteristics of the target audio signal.
A parameter acquiring unit 12, configured to obtain the target audio data packet corresponding to the target audio from the sound effect parameters set, the target audio data packet including the digital equalising parameters of the target audio and the reverberation parameters of the target audio.
Optionally, the information acquisition unit 11 is specifically configured to:
obtain the audio data and the audio tag of the target audio;
and the parameter acquiring unit 12 is specifically configured to:
obtain the target audio data packet corresponding to the audio tag from the sound effect parameters set, and read the digital equalising parameters of the target audio and the reverberation parameters of the target audio from the target audio data packet.
Further, the parameter acquiring unit 12 is specifically configured to:
search the sample audio tag set for the target sample audio label to which the audio tag belongs, and obtain the target audio data packet corresponding to the target sample audio label from the sound effect parameters set.
In a specific implementation, the parameter acquiring unit 12 searches for the sample audio label corresponding to the audio tag, and then obtains the digital equalising parameters and the reverberation parameters of the target audio from the sound effect parameters set based on the correspondence between sample audio labels and audio data packets. Searching for the sample audio label corresponding to the audio tag can be understood as finding, based on the keyword of the audio tag, the matching sample audio label in the configured sample audio tag set. The sample audio tag set may be stored as a subset of the sound effect parameters set, or stored as a separate set.
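The tag lookup described above, matching the audio tag's keyword against the sample audio tag set and then using the matched sample audio label to index into the sound effect parameters set, might be sketched as follows. The tag-set contents, function names and matching rule are illustrative assumptions; the patent specifies only that a keyword match is performed.

```python
# Hypothetical sample audio tag set: each sample audio label (genre)
# maps to the sub-style labels grouped under it (cf. Tables 1 and 2).
SAMPLE_TAG_SET = {
    "rock": ["art rock", "punk", "post rock", "grindcore"],
    "folk": ["campus folk", "city folk"],
}

def find_sample_label(audio_tag, tag_set=SAMPLE_TAG_SET):
    """Return the sample audio label whose keyword matches the audio tag,
    checking both the genre keyword itself and its sub-style labels."""
    tag = audio_tag.lower()
    for sample_label, sub_labels in tag_set.items():
        if sample_label in tag or any(s in tag for s in sub_labels):
            return sample_label
    return None

def get_target_packet(audio_tag, parameter_set):
    """Look up the audio data packet keyed by the matched sample label."""
    label = find_sample_label(audio_tag)
    return parameter_set.get(label) if label else None
```

With this shape, a tag such as "punk" resolves to the sample audio label "rock", whose audio data packet is then read from the parameter set.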
Optionally, the sample information storage unit 17 is specifically configured to save the digital equalising parameter corresponding to each octave characteristic point, the reverberation parameters of the sample audio, the sample frequency information, the sample timbre information and the sample audio label into the sound effect parameters set.
Further, the information acquisition unit 11 is specifically configured to:
obtain the audio data, the target frequency information and the target timbre information of the target audio.
Further, as shown in Figure 14, the parameter acquiring unit 12 includes:
A similarity obtaining subunit 121, configured to match the target frequency information and the target timbre information against the sample frequency information and the sample timbre information of each audio data packet in the sound effect parameters set, and to obtain, after matching, the matching similarity between the target frequency information and target timbre information and the sample frequency information and sample timbre information of each audio data packet.
In a specific implementation, when the keyword of the audio tag is unclear, or multiple audio tags are included, the audio data packets may be decompressed and the sample frequency information and sample timbre information extracted after decompression. Each group of sample frequency information and sample timbre information in the sound effect parameters set is traversed in turn, the target frequency information and target timbre information are matched against the traversed sample frequency information and sample timbre information respectively, and the matching similarity of each group is obtained. Optionally, the matching similarity obtained after matching may be cached in correspondence with the audio data packet; the caching form may be to add the matching similarity to a configured field of the audio data packet, or to cache the matching similarity and the audio data packet into the sound effect parameters set in the form of a mapping table.
A data packet obtaining subunit 122, configured to obtain the sample audio data packet corresponding to the sample frequency information and sample timbre information with the highest matching similarity, and to use that sample audio data packet as the target audio data packet.
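The traversal performed by subunits 121 and 122 could be sketched like this. Cosine similarity over feature vectors and the equal weighting of frequency and timbre features are assumptions for illustration; the patent does not fix a particular similarity measure.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def select_target_packet(target_freq, target_timbre, packets):
    """Traverse every audio data packet, match the target frequency and
    timbre features against each packet's sample features, and return the
    packet with the highest combined matching similarity."""
    best_packet, best_score = None, -1.0
    for packet in packets:
        score = (cosine_similarity(target_freq, packet["sample_freq"]) +
                 cosine_similarity(target_timbre, packet["sample_timbre"])) / 2
        if score > best_score:
            best_packet, best_score = packet, score
    return best_packet, best_score

# illustrative two-dimensional features; real features would be spectral vectors
packets = [
    {"label": "rock", "sample_freq": [0.9, 0.1], "sample_timbre": [0.8, 0.2]},
    {"label": "folk", "sample_freq": [0.2, 0.8], "sample_timbre": [0.1, 0.9]},
]
best, score = select_target_packet([0.85, 0.15], [0.75, 0.25], packets)
```

Caching `score` alongside each packet, as the paragraph above suggests, would avoid recomputing the match for repeated plays of the same audio.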
A data outputting unit 13, configured to carry out synthesis processing on the audio data using the target audio data packet, and to send the synthesised audio data to the sound terminal, so that the sound terminal outputs the synthesised audio data.
In a specific implementation, the data outputting unit 13 performs synthesis processing such as acquisition, transformation, filtering, estimation, enhancement, compression and recognition on the sound effect parameters in the determined target audio data packet and the audio data, so as to obtain the transformed audio, which is sent to the sound terminal for output. In one feasible implementation, a DSP audio codec system is provided in the data outputting unit 13; its main devices include a digital signal processor (DSP), audio A/D (analog/digital) and D/A (digital/analog) converters, RAM, ROM and a peripheral processor. Each time the codec transmits a 16-bit sample to the DSP, an interrupt is raised; the receive interrupt routine stores the received data in the system input buffer, while the audio data stored in the buffer undergoes its respective processing (e.g. transformation, filtering, estimation) and, after transformation, is stored in the system output buffer. The output interrupt routine periodically fetches data from the output buffer during execution; the data is output in analog form by the codec, sent to the sound terminal, and played back after power amplification.
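The synthesis step itself, applying the packet's digital equalising parameters and reverberation parameters to the audio data before output, might look roughly like the following sketch. The block-wise band handling, single-tap feedback reverb and parameter shapes are simplifying assumptions, not the patent's DSP implementation; a real system would filter each octave band (63Hz~16kHz) separately.

```python
def apply_effects(samples, band_gains, reverb_delay, reverb_gain):
    """Apply per-band equalising gains and a simple feedback reverb,
    clamping the result to the 16-bit range used by the codec."""
    # equalisation sketch: one gain applied per equal-length block,
    # standing in for one gain per octave band
    n_bands = len(band_gains)
    block = max(1, len(samples) // n_bands)
    equalised = [s * band_gains[min(i // block, n_bands - 1)]
                 for i, s in enumerate(samples)]
    # reverberation: feed back a delayed, attenuated copy of the output
    out = []
    for i, s in enumerate(equalised):
        wet = s + (reverb_gain * out[i - reverb_delay] if i >= reverb_delay else 0.0)
        out.append(max(-32768.0, min(32767.0, wet)))  # 16-bit clamp
    return out

processed = apply_effects([1000.0] * 8, [1.0, 0.5], reverb_delay=2, reverb_gain=0.3)
```

In the interrupt-driven pipeline above, such a routine would run between the input buffer and the output buffer, one block at a time.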
In the embodiments of the present invention, an audio tag set carrying sound effect parameters is first provided in the audio effect processing device. When a play instruction for the target audio is received, the audio data of the target audio is obtained; after the target audio data packet corresponding to the target audio is obtained from the sound effect parameters set, synthesis processing is carried out on the audio data using the target audio data packet, and finally the synthesised audio data is sent to the sound terminal for output. By providing an adaptive sound effect based on the information of the audio, the sound effect best suited to the audio content can be built, which enriches the sound effect processing modes, improves the intelligence of sound effect processing, and at the same time provides a completely new personalised audio experience.
Referring to Figure 15, an embodiment of the present invention provides a structural schematic diagram of another audio effect processing device. As shown in Figure 15, the audio effect processing device 2 of the embodiment of the present invention may include: an information receiving unit 21, a parameter acquiring unit 22 and a data outputting unit 23.
An information receiving unit 21, configured to receive, when a play instruction for the target audio is received, the audio data of the target audio sent by the server.
It can be understood that audio is an important medium in multimedia and is a form of sound signal. As a carrier of information, audio can be divided into three types: speech, music and other sounds. In the embodiments of the present invention, the audio is music, which may be an individual piece of music in a music player, or the accompaniment in multimedia such as video, games and e-books. The target audio is the piece of music selected by the user for output from a plurality of songs. An audio item may include a variety of information, such as the song title, singer name, audio data, album, release time, total duration and audio tag. The audio data is an opaque binary stream of non-semantic symbols, i.e. the content of the target audio. The audio tag may be a style of song such as art rock, punk, metal or folk. Optionally, the audio may also include frequency information and timbre information, which are the spectral characteristics of the audio, i.e. the frequency-domain characteristics of the audio signal.
In a specific implementation, when the server receives a play instruction for the target audio, it collects the audio information of the target audio, extracts the audio data from the audio information, and sends it to the information receiving unit 21. Optionally, when the information receiving unit 21 receives a play instruction for the target audio, it sends an audio information acquisition request for the target audio to the server, so that the server collects the audio information of the target audio, and then receives the collected audio information fed back by the server.
Optionally, the information receiving unit 21 is specifically configured to receive the audio data and the audio tag of the target audio sent by the server.
Optionally, the information receiving unit 21 is specifically configured to receive the audio data, the target frequency information and the target timbre information of the target audio sent by the server.
A parameter acquiring unit 22, configured to receive the sound effect parameters set sent by the server and to obtain, from the sound effect parameters set, the target audio data packet corresponding to the target audio, the target audio data packet including the digital equalising parameters of the target audio and the reverberation parameters of the target audio.
It can be understood that the sound effect parameters set includes the sample audio label of each sample audio, the audio data packet corresponding to each sample audio label, and the sample frequency information and sample timbre information of each sample audio, where the audio data packet includes the digital equalising parameters of the sample audio and the reverberation parameters of the sample audio. Optionally, the sound effect parameters set may also include a sample audio tag set, which records the correspondence between sample audio labels and their sub-labels: the sample audio labels include different styles of song such as rock, metal, folk and disco, and the sub-labels corresponding to each class of sample audio label are the different sub-styles under that class. For example, Table 1 shows one form of the sound effect parameters set and Table 2 shows one form of the sample audio tag set; if a sample audio label is "rock", its corresponding sub-labels may include "art rock, punk, post rock, grindcore", etc.
In a specific implementation, the parameter acquiring unit 22 receives the sound effect parameters set transmitted by the server after the server has established it, stores the sound effect parameters set, and then obtains the digital equalising parameters and the reverberation parameters of the target audio from the stored sound effect parameters set.
Optionally, the parameter acquiring unit 22 is specifically configured to:
obtain the target audio data packet corresponding to the audio tag from the sound effect parameters set, and read the digital equalising parameters of the target audio and the reverberation parameters of the target audio from the target audio data packet.
Further, the parameter acquiring unit 22 is specifically configured to:
search the sample audio tag set for the target sample audio label to which the audio tag belongs, and obtain the target audio data packet corresponding to the target sample audio label from the sound effect parameters set.
In a specific implementation, the server sends the audio data, the audio tag, the target frequency information and the target timbre information to the parameter acquiring unit 22; the parameter acquiring unit 22 searches the sample audio tag set for the target sample audio label to which the audio tag belongs, and obtains the target audio data packet corresponding to the target sample audio label from the sound effect parameters set. That is, the parameter acquiring unit 22 looks up the sample audio label corresponding to the audio tag, and then obtains the digital equalising parameters and the reverberation parameters of the target audio from the sound effect parameters set based on the correspondence between sample audio labels and audio data packets. Looking up the sample audio label corresponding to the audio tag can be understood as finding, based on the keyword of the audio tag, the matching sample audio label in the configured sample audio tag set. The sample audio tag set may be stored as a subset of the sound effect parameters set, or stored as a separate set.
Optionally, as shown in Figure 16, the parameter acquiring unit 22 includes:
A similarity obtaining subunit 221, configured to match the target frequency information and the target timbre information against the sample frequency information and the sample timbre information of each audio data packet in the sound effect parameters set, and to obtain, after matching, the matching similarity between the target frequency information and target timbre information and the sample frequency information and sample timbre information of each audio data packet.
A data packet obtaining subunit 222, configured to obtain the sample audio data packet corresponding to the sample frequency information and sample timbre information with the highest matching similarity, and to use that sample audio data packet as the target audio data packet.
In a specific implementation, when the keyword of the audio tag is unclear, or multiple audio tags are included, the audio data packets may be decompressed and the sample frequency information and sample timbre information extracted after decompression. Each group of sample frequency information and sample timbre information in the sound effect parameters set is traversed in turn, the target frequency information and target timbre information are matched against the traversed sample frequency information and sample timbre information respectively, and the matching similarity between the target frequency information and target timbre information and the sample frequency information and sample timbre information of each audio data packet is obtained after matching. The sample audio data packet corresponding to the sample frequency information and sample timbre information with the highest matching similarity is obtained and used as the target audio data packet, and the digital equalising parameters and the reverberation parameters of the target audio are read from the target audio data packet.
A data outputting unit 23, configured to carry out synthesis processing on the audio data using the target audio data packet, and to output the synthesised audio data.
In a specific implementation, the audio effect processing device performs synthesis processing such as acquisition, transformation, filtering, estimation, enhancement, compression and recognition on the sound effect parameters in the determined target audio data packet and the audio data, so as to obtain the transformed audio, which is then directly output and played.
In one feasible implementation, a DSP audio codec system is provided in the data outputting unit 23; its main devices include a digital signal processor (DSP), audio A/D (analog/digital) and D/A (digital/analog) converters, RAM, ROM and a peripheral processor. Each time the codec transmits a 16-bit sample to the DSP, an interrupt is raised; the receive interrupt routine stores the received data in the system input buffer, while the audio data stored in the buffer undergoes its respective processing (e.g. transformation, filtering, estimation) and, after transformation, is stored in the system output buffer. The output interrupt routine periodically fetches data from the output buffer during execution; the data is output in analog form by the codec and played back after power amplification.
Optionally, the data outputting unit 23 applies dynamic gain processing and noise suppression during the synthesis process, to ensure that no power overload or clipping distortion is produced.
Optionally, the data outputting unit 23 uses an ultra-high sample rate of 96kHz during the synthesis process, which can ensure high-quality digital-to-analog conversion and a signal-to-noise ratio of 90dB or more.
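A minimal sketch of the dynamic gain processing mentioned above: scale a block down uniformly when its peak would exceed the 16-bit ceiling, so that no power overload or clipping distortion is produced. The per-block uniform gain is an assumed simplification; a real limiter would smooth the gain over time to avoid audible pumping.

```python
def dynamic_gain_limit(samples, ceiling=32767.0):
    """Scale a block down uniformly when its peak would exceed the
    16-bit ceiling, so the output never clips."""
    peak = max((abs(s) for s in samples), default=0.0)
    if peak <= ceiling:
        return list(samples)  # already within range; pass through
    gain = ceiling / peak
    return [s * gain for s in samples]

safe = dynamic_gain_limit([40000.0, -20000.0, 10000.0])
```

Because every sample in the block is scaled by the same factor, the waveform shape (and hence the relative balance between samples) is preserved while the peak lands exactly at the ceiling.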
In the embodiments of the present invention, when the server receives a play instruction for the target audio, it obtains the audio data of the target audio and sends it to the audio effect processing device; at the same time, the sound effect parameters set carrying the sound effect parameters is sent to the audio effect processing device. After the audio effect processing device obtains the target audio data packet corresponding to the target audio from the sound effect parameters set, it carries out synthesis processing on the audio data using the target audio data packet, and finally outputs the synthesised audio data. By providing an adaptive sound effect based on the information of the audio, the sound effect best suited to the audio content can be built, which enriches the sound effect processing modes and improves the intelligence of sound effect processing.
The embodiment of the present invention further provides a computer storage medium. The computer storage medium may store a plurality of instructions, and the instructions are suitable for being loaded by a processor to execute the method steps of the embodiments shown in Figures 1 to 10; for the specific execution process, reference may be made to the description of the embodiments shown in Figures 1 to 10, which is not repeated here.
Referring to Figure 17, an embodiment of the present invention provides a structural schematic diagram of a server. As shown in Figure 17, the server 1000 may include: at least one processor 1001, such as a CPU, at least one network interface 1004, a user interface 1003, a memory 1005 and at least one communication bus 1002. The communication bus 1002 is used to realise connection and communication between these components. The user interface 1003 may include a display screen (Display) and a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory, or a non-volatile memory, for example at least one magnetic disk memory. Optionally, the memory 1005 may also be at least one storage device located remotely from the aforementioned processor 1001. As shown in Figure 17, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module and an audio effect processing application program.
In the server 1000 shown in Figure 17, the user interface 1003 is mainly used to provide an input interface for the user and to obtain the data input by the user; the network interface 1004 is mainly used for data communication with a user terminal; and the processor 1001 may be used to call the audio effect processing application program stored in the memory 1005, and specifically to execute the following operations:
when a play instruction for the target audio is received, obtaining the audio data of the target audio;
obtaining the target audio data packet corresponding to the target audio from a sound effect parameters set, the target audio data packet including the digital equalising parameters of the target audio and the reverberation parameters of the target audio;
carrying out synthesis processing on the audio data using the target audio data packet, and sending the synthesised audio data to a sound terminal, so that the sound terminal outputs the synthesised audio data.
In one embodiment, before executing the receiving of the play instruction for the target audio, the processor 1001 further executes the following operations:
collecting sample audio, obtaining the feature frequency response curve of the sample audio, and obtaining the sample frequency information and sample timbre information of the sample audio;
obtaining, based on the feature frequency response curve, the sample frequency information and the sample timbre information, the digital equalising processing curve of the sample audio and the reverberation parameters of the sample audio;
obtaining the sample audio label of the sample audio, and adjusting, based on an equal loudness contour and the sample audio label, the octave characteristic points selected within the frequency range of the digital equalising processing curve, so as to obtain the digital equalising parameter corresponding to each octave characteristic point;
saving the digital equalising parameter corresponding to each octave characteristic point, the reverberation parameters of the sample audio and the sample audio label into the sound effect parameters set.
In one embodiment, when saving the digital equalising parameter corresponding to each octave characteristic point, the reverberation parameters of the sample audio and the sample audio label into the sound effect parameters set, the processor 1001 specifically executes the following operations:
compressing the digital equalising parameter corresponding to each octave characteristic point and the reverberation parameters of the sample audio, and storing them as the audio data packet corresponding to the sample audio label;
saving the sample audio label and the audio data packet corresponding to the sample audio label into the sound effect parameters set.
In one embodiment, when obtaining the audio data of the target audio, the processor 1001 specifically executes the following steps:
obtaining the audio data and the audio tag of the target audio;
further, when obtaining the target audio data packet corresponding to the target audio from the sound effect parameters set, the processor 1001 specifically executes the following operation:
obtaining the target audio data packet corresponding to the audio tag from the sound effect parameters set, and reading the digital equalising parameters of the target audio and the reverberation parameters of the target audio from the target audio data packet.
In one embodiment, when obtaining the target audio data packet corresponding to the audio tag from the sound effect parameters set, the processor 1001 specifically executes the following operation:
searching the sample audio tag set for the target sample audio label to which the audio tag belongs, and obtaining the target audio data packet corresponding to the target sample audio label from the sound effect parameters set.
In one embodiment, when saving the digital equalising parameter corresponding to each octave characteristic point, the reverberation parameters of the sample audio and the sample audio label into the sound effect parameters set, the processor 1001 specifically executes the following operation:
saving the digital equalising parameter corresponding to each octave characteristic point, the reverberation parameters of the sample audio, the sample frequency information, the sample timbre information and the sample audio label into the sound effect parameters set.
In one embodiment, when obtaining the audio data of the target audio, the processor 1001 specifically executes the following operation:
obtaining the audio data, the target frequency information and the target timbre information of the target audio;
further, when obtaining the target audio data packet corresponding to the target audio from the sound effect parameters set, the processor 1001 specifically executes the following operations:
matching the target frequency information and the target timbre information against the sample frequency information and the sample timbre information of each audio data packet in the sound effect parameters set, and obtaining, after matching, the matching similarity between the target frequency information and target timbre information and the sample frequency information and sample timbre information of each audio data packet;
obtaining the sample audio data packet corresponding to the sample frequency information and sample timbre information with the highest matching similarity, and using that sample audio data packet as the target audio data packet.
In the embodiments of the present invention, when a play instruction for the target audio is received, the audio data of the target audio is obtained; after the target audio data packet corresponding to the target audio is obtained from the sound effect parameters set, synthesis processing is carried out on the audio data using the target audio data packet, and finally the synthesised audio data is sent to the sound terminal for output. By providing an adaptive sound effect based on the information of the audio, the sound effect best suited to the audio content can be built, which enriches the sound effect processing modes and improves the intelligence of sound effect processing.
Referring to Figure 18, an embodiment of the present invention provides a structural schematic diagram of a sound terminal. As shown in Figure 18, the sound terminal 2000 may include: at least one processor 2001, such as a CPU, at least one network interface 2004, a user interface 2003, a memory 2005 and at least one communication bus 2002. The communication bus 2002 is used to realise connection and communication between these components. The user interface 2003 may include a display screen (Display) and a keyboard (Keyboard); optionally, the user interface 2003 may also include a standard wired interface and a wireless interface. The network interface 2004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 2005 may be a high-speed RAM memory, or a non-volatile memory, for example at least one magnetic disk memory. Optionally, the memory 2005 may also be at least one storage device located remotely from the aforementioned processor 2001. As shown in Figure 18, the memory 2005, as a computer storage medium, may include an operating system, a network communication module, a user interface module and an audio effect processing application program.
In the sound terminal 2000 shown in Figure 18, the user interface 2003 is mainly used to provide an input interface for the user and to obtain the data input by the user; the network interface 2004 is mainly used for data communication with a user terminal; and the processor 2001 may be used to call the audio effect processing application program stored in the memory 2005, and specifically to execute the following operations:
when a play instruction for the target audio is received, receiving the audio data of the target audio sent by the server;
receiving the sound effect parameters set sent by the server, and obtaining the target audio data packet corresponding to the target audio from the sound effect parameters set, the target audio data packet including the digital equalising parameters of the target audio and the reverberation parameters of the target audio;
carrying out synthesis processing on the audio data using the target audio data packet, and outputting the synthesised audio data.
In one embodiment, when receiving the audio data of the target audio sent by the server, the processor 2001 specifically executes the following steps:
receiving the audio data and the audio tag of the target audio sent by the server;
further, when obtaining the target audio data packet corresponding to the target audio from the sound effect parameters set, the processor 2001 specifically executes the following operation:
obtaining the target audio data packet corresponding to the audio tag from the sound effect parameters set, and reading the digital equalising parameters of the target audio and the reverberation parameters of the target audio from the target audio data packet.
In one embodiment, when obtaining the target audio data packet corresponding to the audio tag from the sound effect parameters set, the processor 2001 specifically executes the following operation:
searching the sample audio tag set for the target sample audio label to which the audio tag belongs, and obtaining the target audio data packet corresponding to the target sample audio label from the sound effect parameters set.
In one embodiment, when executing the step of receiving the audio data of the target audio sent by the server, the processor 2001 specifically performs the following step:
receiving the audio data, the target frequency information, and the target timbre information of the target audio sent by the server.
Further, when executing the step of obtaining the target sound effect data packet corresponding to the target audio from the sound effect parameter set, the processor 2001 specifically performs the following operations:
matching the target frequency information and the target timbre information against the sample frequency information and the sample timbre information of each sound effect data packet in the sound effect parameter set, to obtain the matching similarity between the target frequency information and target timbre information and the sample frequency information and sample timbre information of each sound effect data packet; and
obtaining the sample sound effect data packet corresponding to the sample frequency information and sample timbre information with the highest matching similarity, and using that sample sound effect data packet as the target sound effect data packet.
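The similarity-matching step can be sketched as below. The patent does not specify how frequency and timbre information are represented or how similarity is computed, so this sketch assumes numeric feature vectors and cosine similarity; all names and values are illustrative.

```python
# Illustrative sketch: pick the packet whose sample frequency/timbre
# features are most similar to the target's. Cosine similarity over
# concatenated features is an assumption, not the patent's metric.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def best_packet(target_freq, target_timbre, packets):
    """Return the packet with the highest matching similarity."""
    target = target_freq + target_timbre
    def score(p):
        return cosine(target, p["sample_freq"] + p["sample_timbre"])
    return max(packets, key=score)

packets = [
    {"name": "bright", "sample_freq": [0.9, 0.1], "sample_timbre": [0.8, 0.2],
     "eq": [2.0, -1.0], "reverb": 0.1},
    {"name": "warm", "sample_freq": [0.2, 0.8], "sample_timbre": [0.3, 0.7],
     "eq": [-1.0, 2.0], "reverb": 0.5},
]
chosen = best_packet([0.85, 0.15], [0.75, 0.25], packets)
```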
In the embodiments of the present invention, when the server receives a play instruction for a target audio, it obtains the audio data of the target audio and sends it to the sound effect processing device, and at the same time sends the sound effect parameter set carrying the sound effect parameters to the sound effect processing device. After the sound effect processing device obtains the target sound effect data packet corresponding to the target audio from the sound effect parameter set, it performs synthesis processing on the audio data using the target sound effect data packet, and finally outputs the synthesized audio data. By providing an adaptive sound effect based on the information of the audio, the sound effect best suited to the audio content can be constructed, which enriches the sound effect processing modes and improves the intelligence of sound effect processing.
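The end-to-end flow (look up a packet, apply its equalization and reverberation parameters to the audio data, then output) can be sketched as follows. The average-gain "EQ" and single-tap feedback reverb are deliberately simplified stand-ins, not the DSP specified by the patent.

```python
# Minimal sketch of synthesis processing: apply the packet's digital EQ
# parameters, then its reverberation parameter, to a list of samples.

def apply_eq(samples, gains):
    # Stand-in "EQ": scale every sample by the average band gain.
    g = sum(gains) / len(gains)
    return [s * g for s in samples]

def apply_reverb(samples, wet, delay=3):
    # Stand-in reverb: add one delayed feedback tap scaled by `wet`.
    out = list(samples)
    for i in range(delay, len(out)):
        out[i] += wet * out[i - delay]
    return out

def synthesize(samples, packet):
    """Synthesis processing: EQ parameters first, then reverberation."""
    eq_out = apply_eq(samples, packet["eq"])
    return apply_reverb(eq_out, packet["reverb"]["wet"])

packet = {"eq": [1.0, 1.2, 0.8], "reverb": {"wet": 0.3}}
output = synthesize([0.1, 0.2, 0.3, 0.4], packet)
```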
A person of ordinary skill in the art will appreciate that all or part of the flows in the above method embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and when executed, may include the flows of the embodiments of each of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
The above disclosure describes merely preferred embodiments of the present invention, which certainly cannot be used to limit the scope of the claims of the present invention. Therefore, equivalent changes made according to the claims of the present invention still fall within the scope of the present invention.
Claims (24)
1. A sound effect processing method, characterized by comprising:
when a play instruction for a target audio is received, obtaining audio data of the target audio;
obtaining a target sound effect data packet corresponding to the target audio from a sound effect parameter set, the target sound effect data packet comprising digital equalization parameters of a target sound effect and reverberation parameters of the target sound effect;
performing synthesis processing on the audio data using the target sound effect data packet; and
outputting the synthesized audio data.
2. The method according to claim 1, characterized in that, before the play instruction for the target audio is received, the method further comprises:
collecting a sample audio, obtaining a characteristic frequency response curve of the sample audio, and obtaining sample frequency information and sample timbre information of the sample audio;
obtaining a digital equalization processing curve of the sample audio and reverberation parameters of the sample audio based on the characteristic frequency response curve, the sample frequency information, and the sample timbre information;
obtaining a sample audio tag of the sample audio, and adjusting octave feature points within a selected frequency range of the digital equalization processing curve based on equal-loudness contours and the sample audio tag, to obtain a digital equalization parameter corresponding to each octave feature point; and
saving the digital equalization parameter corresponding to each octave feature point, the reverberation parameters of the sample audio, and the sample audio tag into the sound effect parameter set.
3. The method according to claim 2, characterized in that the saving the digital equalization parameter corresponding to each octave feature point, the reverberation parameters of the sample audio, and the sample audio tag into the sound effect parameter set comprises:
compressing the digital equalization parameter corresponding to each octave feature point and the reverberation parameters of the sample audio, and storing the result as a sound effect data packet corresponding to the sample audio tag; and
saving the sample audio tag and the sound effect data packet corresponding to the sample audio tag into the sound effect parameter set.
4. The method according to claim 2, characterized in that the obtaining the audio data of the target audio comprises:
obtaining the audio data and an audio tag of the target audio;
and the obtaining the target sound effect data packet corresponding to the target audio from the sound effect parameter set comprises:
obtaining the target sound effect data packet corresponding to the audio tag from the sound effect parameter set, and reading, from the target sound effect data packet, the digital equalization parameters of the target sound effect and the reverberation parameters of the target sound effect.
5. The method according to claim 4, characterized in that the obtaining the target sound effect data packet corresponding to the audio tag from the sound effect parameter set comprises:
searching a sample audio tag set for a target sample audio tag to which the audio tag belongs, and obtaining the target sound effect data packet corresponding to the target sample audio tag from the sound effect parameter set.
6. The method according to claim 2, characterized in that the saving the digital equalization parameter corresponding to each octave feature point, the reverberation parameters of the sample audio, and the sample audio tag into the sound effect parameter set comprises:
saving the digital equalization parameter corresponding to each octave feature point, the reverberation parameters of the sample audio, the sample frequency information, the sample timbre information, and the sample audio tag into the sound effect parameter set.
7. The method according to claim 6, characterized in that the obtaining the audio data of the target audio comprises:
obtaining the audio data, target frequency information, and target timbre information of the target audio;
and the obtaining the target sound effect data packet corresponding to the target audio from the sound effect parameter set comprises:
matching the target frequency information and the target timbre information against the sample frequency information and the sample timbre information of each sound effect data packet in the sound effect parameter set, to obtain a matching similarity between the target frequency information and target timbre information and the sample frequency information and sample timbre information of each sound effect data packet; and
obtaining a sample sound effect data packet corresponding to the sample frequency information and sample timbre information with the highest matching similarity, and using the sample sound effect data packet as the target sound effect data packet.
8. The method according to claim 1, characterized in that, when the sound effect processing method runs on a server side, the outputting the synthesized audio data comprises:
sending the synthesized audio data to a sound terminal, so that the sound terminal outputs the synthesized audio data.
9. The method according to claim 1, characterized in that, when the sound effect processing method runs on a sound terminal side:
the obtaining the audio data of the target audio when the play instruction for the target audio is received comprises:
when the play instruction for the target audio is received, receiving the audio data of the target audio sent by a server;
and the obtaining the target sound effect data packet corresponding to the target audio from the sound effect parameter set, the target sound effect data packet comprising the digital equalization parameters of the target sound effect and the reverberation parameters of the target sound effect, comprises:
receiving the sound effect parameter set sent by the server; and
obtaining the target sound effect data packet corresponding to the target audio from the sound effect parameter set, the target sound effect data packet comprising the digital equalization parameters of the target sound effect and the reverberation parameters of the target sound effect.
10. A sound effect processing device, characterized by comprising:
an information obtaining unit, configured to obtain audio data of a target audio when a play instruction for the target audio is received;
a parameter obtaining unit, configured to obtain a target sound effect data packet corresponding to the target audio from a sound effect parameter set, the target sound effect data packet comprising digital equalization parameters of a target sound effect and reverberation parameters of the target sound effect; and
a data output unit, configured to perform synthesis processing on the audio data using the target sound effect data packet, and to send the synthesized audio data to a sound terminal, so that the sound terminal outputs the synthesized audio data.
11. The device according to claim 10, characterized in that the device further comprises:
a sample information obtaining unit, configured to collect a sample audio, obtain a characteristic frequency response curve of the sample audio, and obtain sample frequency information and sample timbre information of the sample audio;
a sample parameter obtaining unit, configured to obtain a digital equalization processing curve of the sample audio and reverberation parameters of the sample audio based on the characteristic frequency response curve, the sample frequency information, and the sample timbre information;
a sample parameter adjustment unit, configured to obtain a sample audio tag of the sample audio, and adjust octave feature points within a selected frequency range of the digital equalization processing curve based on equal-loudness contours and the sample audio tag, to obtain a digital equalization parameter corresponding to each octave feature point; and
a sample information saving unit, configured to save the digital equalization parameter corresponding to each octave feature point, the reverberation parameters of the sample audio, and the sample audio tag into the sound effect parameter set.
12. The device according to claim 11, characterized in that the sample information saving unit comprises:
a data packet obtaining subunit, configured to compress the digital equalization parameter corresponding to each octave feature point and the reverberation parameters of the sample audio, and store the result as a sound effect data packet corresponding to the sample audio tag; and
an information saving subunit, configured to save the sample audio tag and the sound effect data packet corresponding to the sample audio tag into the sound effect parameter set.
13. The device according to claim 11, characterized in that the information obtaining unit is specifically configured to:
obtain the audio data and an audio tag of the target audio;
and the parameter obtaining unit is specifically configured to:
obtain the target sound effect data packet corresponding to the audio tag from the sound effect parameter set, and read, from the target sound effect data packet, the digital equalization parameters of the target sound effect and the reverberation parameters of the target sound effect.
14. The device according to claim 13, characterized in that the parameter obtaining unit is specifically configured to:
search a sample audio tag set for a target sample audio tag to which the audio tag belongs, and obtain the target sound effect data packet corresponding to the target sample audio tag from the sound effect parameter set.
15. The device according to claim 11, characterized in that the sample information saving unit is specifically configured to save the digital equalization parameter corresponding to each octave feature point, the reverberation parameters of the sample audio, the sample frequency information, the sample timbre information, and the sample audio tag into the sound effect parameter set.
16. The device according to claim 15, characterized in that the information obtaining unit is specifically configured to:
obtain the audio data, target frequency information, and target timbre information of the target audio;
and the parameter obtaining unit comprises:
a similarity obtaining subunit, configured to match the target frequency information and the target timbre information against the sample frequency information and the sample timbre information of each sound effect data packet in the sound effect parameter set, to obtain a matching similarity between the target frequency information and target timbre information and the sample frequency information and sample timbre information of each sound effect data packet; and
a data packet obtaining subunit, configured to obtain a sample sound effect data packet corresponding to the sample frequency information and sample timbre information with the highest matching similarity, and use the sample sound effect data packet as the target sound effect data packet.
17. A computer storage medium, characterized in that the computer storage medium stores a plurality of instructions, and the instructions are adapted to be loaded by a processor to execute the following steps:
when a play instruction for a target audio is received, obtaining audio data of the target audio;
obtaining a target sound effect data packet corresponding to the target audio from a sound effect parameter set, the target sound effect data packet comprising digital equalization parameters of a target sound effect and reverberation parameters of the target sound effect; and
performing synthesis processing on the audio data using the target sound effect data packet, and sending the synthesized audio data to a sound terminal, so that the sound terminal outputs the synthesized audio data.
18. A server, characterized by comprising a processor and a memory, wherein the memory stores a computer program, and the computer program is adapted to be loaded by the processor to execute the following steps:
when a play instruction for a target audio is received, obtaining audio data of the target audio;
obtaining a target sound effect data packet corresponding to the target audio from a sound effect parameter set, the target sound effect data packet comprising digital equalization parameters of a target sound effect and reverberation parameters of the target sound effect; and
performing synthesis processing on the audio data using the target sound effect data packet, and sending the synthesized audio data to a sound terminal, so that the sound terminal outputs the synthesized audio data.
19. A sound effect processing device, characterized by comprising:
an information receiving unit, configured to receive, when a play instruction for a target audio is received, audio data of the target audio sent by a server;
a parameter obtaining unit, configured to receive a sound effect parameter set sent by the server, and obtain a target sound effect data packet corresponding to the target audio from the sound effect parameter set, the target sound effect data packet comprising digital equalization parameters of a target sound effect and reverberation parameters of the target sound effect; and
a data output unit, configured to perform synthesis processing on the audio data using the target sound effect data packet, and output the synthesized audio data.
20. The device according to claim 19, characterized in that the information receiving unit is specifically configured to:
receive the audio data and an audio tag of the target audio sent by the server;
and the parameter obtaining unit is specifically configured to:
obtain the target sound effect data packet corresponding to the audio tag from the sound effect parameter set, and read, from the target sound effect data packet, the digital equalization parameters of the target sound effect and the reverberation parameters of the target sound effect.
21. The device according to claim 20, characterized in that the parameter obtaining unit is specifically configured to:
search a sample audio tag set for a target sample audio tag to which the audio tag belongs, and obtain the target sound effect data packet corresponding to the target sample audio tag from the sound effect parameter set.
22. The device according to claim 19, characterized in that the information receiving unit is specifically configured to:
receive the audio data, target frequency information, and target timbre information of the target audio sent by the server;
and the parameter obtaining unit comprises:
a similarity obtaining subunit, configured to match the target frequency information and the target timbre information against the sample frequency information and the sample timbre information of each sound effect data packet in the sound effect parameter set, to obtain a matching similarity between the target frequency information and target timbre information and the sample frequency information and sample timbre information of each sound effect data packet; and
a data packet obtaining subunit, configured to obtain a sample sound effect data packet corresponding to the sample frequency information and sample timbre information with the highest matching similarity, and use the sample sound effect data packet as the target sound effect data packet.
23. A computer storage medium, characterized in that the computer storage medium stores a plurality of instructions, and the instructions are adapted to be loaded by a processor to execute the following steps:
when a play instruction for a target audio is received, receiving audio data of the target audio sent by a server;
receiving a sound effect parameter set sent by the server, and obtaining a target sound effect data packet corresponding to the target audio from the sound effect parameter set, the target sound effect data packet comprising digital equalization parameters of a target sound effect and reverberation parameters of the target sound effect; and
performing synthesis processing on the audio data using the target sound effect data packet, and outputting the synthesized audio data.
24. A sound terminal, characterized by comprising a processor and a memory, wherein the memory stores a computer program, and the computer program is adapted to be loaded by the processor to execute the following steps:
when a play instruction for a target audio is received, receiving audio data of the target audio sent by a server;
receiving a sound effect parameter set sent by the server, and obtaining a target sound effect data packet corresponding to the target audio from the sound effect parameter set, the target sound effect data packet comprising digital equalization parameters of a target sound effect and reverberation parameters of the target sound effect; and
performing synthesis processing on the audio data using the target sound effect data packet, and outputting the synthesized audio data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710999163.9A CN108305603B (en) | 2017-10-20 | 2017-10-20 | Sound effect processing method and equipment, storage medium, server and sound terminal thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108305603A (en) | 2018-07-20 |
CN108305603B CN108305603B (en) | 2021-07-27 |
Family
ID=62870103
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710999163.9A Active CN108305603B (en) | 2017-10-20 | 2017-10-20 | Sound effect processing method and equipment, storage medium, server and sound terminal thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108305603B (en) |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5652797A (en) * | 1992-10-30 | 1997-07-29 | Yamaha Corporation | Sound effect imparting apparatus |
US20060000347A1 (en) * | 2004-06-17 | 2006-01-05 | Preece Kenneth A | Acoustical device and method |
CN101060316A (en) * | 2006-03-31 | 2007-10-24 | 索尼株式会社 | Signal processing apparatus, signal processing method, and sound field correction system |
CN101155438A (en) * | 2006-09-26 | 2008-04-02 | 张秀丽 | Frequency response adaptive equalization method for audio device |
CN102622999A (en) * | 2011-01-26 | 2012-08-01 | 英华达(南京)科技有限公司 | System and method for automatically adjusting sound effects |
CN103137136A (en) * | 2011-11-22 | 2013-06-05 | 雅马哈株式会社 | Sound processing device |
CN103151055A (en) * | 2013-03-05 | 2013-06-12 | 广东欧珀移动通信有限公司 | Method and system for automatically switching sound effects |
US20140334630A1 (en) * | 2013-05-13 | 2014-11-13 | Sound In Motion Ltd. | Adding audio sound effects to movies |
CN105808204A (en) * | 2016-03-31 | 2016-07-27 | 联想(北京)有限公司 | Sound effect adjusting method and electronic device |
CN106126176A (en) * | 2016-06-16 | 2016-11-16 | 广东欧珀移动通信有限公司 | Sound effect configuration method and mobile terminal |
CN106155623A (en) * | 2016-06-16 | 2016-11-23 | 广东欧珀移动通信有限公司 | Sound effect configuration method and system, and related device |
CN106658304A (en) * | 2017-01-11 | 2017-05-10 | 广东小天才科技有限公司 | Wearable device and audio output control method therefor |
CN106878642A (en) * | 2017-02-13 | 2017-06-20 | 微鲸科技有限公司 | Automatic audio equalization system and method |
CN107071680A (en) * | 2017-04-19 | 2017-08-18 | 歌尔科技有限公司 | Tuning method and apparatus for an acoustic product |
CN107249080A (en) * | 2017-06-26 | 2017-10-13 | 维沃移动通信有限公司 | Method, device and mobile terminal for adjusting sound effects |
CN107659637A (en) * | 2017-09-21 | 2018-02-02 | 广州酷狗计算机科技有限公司 | Sound effect setting method, device, storage medium and terminal |
Non-Patent Citations (2)
Title |
---|
S. Cecchi, "Automotive Audio Equalization", AES 36th International Conference * |
Ye Wenjie, "Design and Implementation of an SOC-based Karaoke Sound Effect Processor", China Master's Theses Full-text Database, Information Science and Technology * |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109151702A (en) * | 2018-09-21 | 2019-01-04 | 歌尔科技有限公司 | Sound effect adjusting method for an audio device, audio device, and readable storage medium |
CN109254752A (en) * | 2018-09-25 | 2019-01-22 | Oppo广东移动通信有限公司 | 3D sound effect processing method and related product |
CN109254752B (en) * | 2018-09-25 | 2022-03-15 | Oppo广东移动通信有限公司 | 3D sound effect processing method and related product |
WO2020073565A1 (en) * | 2018-10-12 | 2020-04-16 | 北京字节跳动网络技术有限公司 | Audio processing method and apparatus |
CN111045635B (en) * | 2018-10-12 | 2021-05-07 | 北京微播视界科技有限公司 | Audio processing method and device |
CN111045635A (en) * | 2018-10-12 | 2020-04-21 | 北京微播视界科技有限公司 | Audio processing method and device |
WO2020073562A1 (en) * | 2018-10-12 | 2020-04-16 | 北京字节跳动网络技术有限公司 | Audio processing method and device |
CN109410972A (en) * | 2018-11-02 | 2019-03-01 | 广州酷狗计算机科技有限公司 | Method, apparatus and storage medium for generating sound effect parameters |
CN109686347A (en) * | 2018-11-30 | 2019-04-26 | 北京达佳互联信息技术有限公司 | Sound effect processing method, sound effect processing device, electronic device and readable medium |
CN109686348A (en) * | 2018-12-13 | 2019-04-26 | 广州艾美网络科技有限公司 | Audio processing system for restoring professional sound effects |
CN109448740A (en) * | 2018-12-18 | 2019-03-08 | 网易(杭州)网络有限公司 | Voice sound effect processing method and device, and voice system |
CN109448740B (en) * | 2018-12-18 | 2022-05-27 | 网易(杭州)网络有限公司 | Voice sound effect processing method and device and voice system |
CN109920397A (en) * | 2019-01-31 | 2019-06-21 | 李奕君 | System and method for making audio functions in physics |
CN109920397B (en) * | 2019-01-31 | 2021-06-01 | 李奕君 | System and method for making audio function in physics |
WO2020177190A1 (en) * | 2019-03-01 | 2020-09-10 | 腾讯音乐娱乐科技(深圳)有限公司 | Processing method, apparatus and device |
CN110297543A (en) * | 2019-06-28 | 2019-10-01 | 维沃移动通信有限公司 | Audio playing method and terminal device |
CN112309352A (en) * | 2020-01-15 | 2021-02-02 | 北京字节跳动网络技术有限公司 | Audio information processing method, apparatus, device and medium |
CN111326132B (en) * | 2020-01-22 | 2021-10-22 | 北京达佳互联信息技术有限公司 | Audio processing method and device, storage medium and electronic equipment |
CN111326132A (en) * | 2020-01-22 | 2020-06-23 | 北京达佳互联信息技术有限公司 | Audio processing method and device, storage medium and electronic equipment |
US11636836B2 (en) | 2020-01-22 | 2023-04-25 | Beijing Dajia Internet Information Technology Co., Ltd. | Method for processing audio and electronic device |
CN114697804A (en) * | 2020-12-28 | 2022-07-01 | 深圳Tcl数字技术有限公司 | Audio equalization method and device, intelligent terminal and computer-readable storage medium |
CN114449339A (en) * | 2022-02-16 | 2022-05-06 | 深圳万兴软件有限公司 | Background sound effect conversion method and device, computer equipment and storage medium |
CN114449339B (en) * | 2022-02-16 | 2024-04-12 | 深圳万兴软件有限公司 | Background sound effect conversion method and device, computer equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108305603A (en) | Sound effect processing method and device, storage medium, server, and sound terminal | |
CN109087669B (en) | Audio similarity detection method and device, storage medium and computer equipment | |
CN105159639B (en) | Audio cover display method and device | |
CN105120421B (en) | Method and apparatus for generating virtual surround sound | |
CA2650612C (en) | An adaptive user interface | |
WO2021103314A1 (en) | Listening scene construction method and related device | |
CN111128214B (en) | Audio noise reduction method and device, electronic equipment and medium | |
CN103886857B (en) | Noise control method and device | |
CN106898340A (en) | Song synthesis method and terminal | |
CN103559876A (en) | Sound effect processing method and sound effect processing system | |
CN108847215A (en) | Method and device for speech synthesis based on a user's timbre | |
DE102012103553A1 (en) | Audio system and method of using adaptive intelligence to distinguish the information content of audio signals in consumer audio and to control a signal processing function | |
US20080047415A1 (en) | Wind instrument phone | |
CN101295504A (en) | Entertainment audio only for text application | |
CN113823250B (en) | Audio playing method, device, terminal and storage medium | |
JP2000194384A (en) | System and method for recording and synthesizing sound, and infrastructure for distributing recorded sound to be reproduced at a remote place | |
KR20190005103A (en) | Electronic device wake-up method and apparatus, device and computer-readable storage medium | |
WO2022089097A1 (en) | Audio processing method and apparatus, electronic device, and computer-readable storage medium | |
CN110349582A (en) | Display device and far-field speech processing circuit | |
d'Escrivan | Music technology | |
Hove et al. | Increased levels of bass in popular music recordings 1955–2016 and their relation to loudness | |
WO2022111381A1 (en) | Audio processing method, electronic device and readable storage medium | |
CN110853606A (en) | Sound effect configuration method and device and computer-readable storage medium | |
CN103167161A (en) | System and method for mobile phone instrument playing based on microphone input | |
CN106847249B (en) | Pronunciation processing method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |