CN106060707B - Reverberation processing method and device - Google Patents
- Publication number: CN106060707B (application CN201610365847.9A)
- Authority: CN (China)
- Prior art keywords: audio, clip, segment, value, segments
- Legal status: Active (the legal status is an assumption, not a legal conclusion)
- Classification: H04R3/00 — Circuits for transducers, loudspeakers or microphones
Abstract
The disclosure relates to a reverberation processing method and device, belonging to the technical field of audio and video processing. The method comprises the following steps: acquiring at least two audio clips, where each audio clip is captured by one sound channel and the clips have the same audio content and duration; acquiring a volume value of each audio clip; determining an audio clip to be processed from the at least two audio clips according to their volume values; and performing reverberation processing on the audio clip to be processed. With the method and device, the acquired audio clips are not reverberation-processed directly; instead, the clip to be processed is determined from the multiple clips on the basis of the volume value of the clip captured by each sound channel. Because that clip better reflects the scene in which the clips were recorded, distortion of the reverberation-processed audio is avoided and the resulting reverberant audio clip has a better sound effect.
Description
Technical Field
The present disclosure relates to the field of audio and video processing technologies, and in particular, to a reverberation processing method and apparatus.
Background
In the field of audio and video processing, an original audio clip is one that has undergone no post-processing after capture; audio clips captured directly by a microphone are all original audio clips. A reverberant audio clip is one obtained by applying reverberation processing to an original audio clip. In general, the sound in an original audio clip is thin and dry, while the sound in a reverberant audio clip is mellow and full, so users often apply reverberation processing to captured original audio clips to obtain a better listening experience.
At present, during audio and video recording, after a terminal captures original audio clips through two or more channels, it applies reverberation processing directly to the clips captured by those channels.
Disclosure of Invention
The present disclosure provides a reverberation processing method and apparatus.
According to a first aspect of embodiments of the present disclosure, there is provided a reverberation processing method including:
acquiring at least two audio clips, wherein each audio clip is acquired by one sound channel and has the same audio content and duration;
acquiring a volume value of each audio clip;
determining an audio clip to be processed from the at least two audio clips according to the volume values of the at least two audio clips;
and performing reverberation processing on the audio segment to be processed.
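The four steps above can be sketched end-to-end as follows. This is only an illustrative outline, not the patent's implementation: volume is approximated here by mean absolute amplitude, reverberation by a single delayed, attenuated echo, and the clip to be processed is chosen by the max-volume variant described later.

```python
def volume_value(clip):
    # Volume proxy: mean absolute amplitude of the clip's samples (assumption).
    return sum(abs(s) for s in clip) / len(clip)

def reverberate(clip, delay, gain):
    # Toy reverberation: superimpose one delayed, attenuated echo (assumption).
    out = list(clip) + [0.0] * delay
    for i, s in enumerate(clip):
        out[i + delay] += s * gain
    return out

def process(clips, delay=2, gain=0.5):
    # Steps 1-3: obtain a volume value per clip and pick the loudest clip;
    # step 4: apply reverberation only to that clip.
    volumes = [volume_value(c) for c in clips]
    to_process = clips[volumes.index(max(volumes))]
    return reverberate(to_process, delay, gain)
```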
In another embodiment of the present disclosure, the obtaining the volume value of each audio clip includes:
acquiring the energy value of each audio clip in unit time length;
and taking the energy value of each audio clip in unit time length as the volume value of each audio clip.
In another embodiment of the present disclosure, the obtaining the energy value of each audio segment in a unit time length includes:
for any audio segment, applying the following formula to calculate an energy value of the audio segment:

y = Σ_{t ∈ T} |S_t|

where y represents the energy value of the audio clip in the unit time length, T represents the unit time length, t represents any moment in the unit time length, and |S_t| represents the amplitude of the audio clip at time t.
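Interpreted for discrete samples, the scalar sum of amplitudes over one unit duration can be computed as follows (a sketch; the formula is assumed to reduce to a plain sum of |S_t| over the samples in the unit duration):

```python
def energy_per_unit(samples):
    # y = sum of |S_t| over all moments t in the unit duration:
    # a scalar superposition of amplitudes, not a vector superposition.
    return sum(abs(s) for s in samples)
```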
In another embodiment of the present disclosure, the determining an audio segment to be processed from the at least two audio segments according to the volume values of the at least two audio segments includes:
obtaining a maximum volume value from the volume values of the at least two audio segments;
and taking the audio segment corresponding to the maximum volume value as the audio segment to be processed.
In another embodiment of the present disclosure, the determining an audio segment to be processed from the at least two audio segments according to the volume values of the at least two audio segments includes:
determining a weight value of the volume value of each audio clip in the sum of the volume values of all the audio clips according to the volume values of the at least two audio clips;
determining a target volume value according to the volume value and the weight value of each audio clip;
and adjusting the volume value of one of the at least two audio segments according to the target volume value to obtain the audio segment to be processed.
In another embodiment of the present disclosure, after performing reverberation processing on the audio segment to be processed, the method further includes:
and copying the reverberation audio segment obtained by the reverberation processing to a storage unit corresponding to each channel.
According to a second aspect of embodiments of the present disclosure, there is provided a reverberation processing device including:
the device comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring at least two audio clips, each audio clip is acquired by one sound channel, and the at least two audio clips have the same audio content and duration;
the second acquisition module is used for acquiring the volume value of each audio clip;
the determining module is used for determining an audio clip to be processed from the at least two audio clips according to the volume values of the at least two audio clips;
and the processing module is used for carrying out reverberation processing on the audio frequency segment to be processed.
In another embodiment of the present disclosure, the second obtaining module is configured to obtain an energy value of each audio segment in a unit time length; and taking the energy value of each audio clip in unit time length as the volume value of each audio clip.
In another embodiment of the present disclosure, the second obtaining module is further configured to calculate, for any audio segment, the energy value of the audio segment by applying the following formula:

y = Σ_{t ∈ T} |S_t|

where y represents the energy value of the audio clip in the unit time length, T represents the unit time length, t represents any moment in the unit time length, and |S_t| represents the amplitude of the audio clip at time t.
In another embodiment of the present disclosure, the determining module is configured to obtain a maximum volume value from the volume values of the at least two audio segments; and taking the audio segment corresponding to the maximum volume value as the audio segment to be processed.
In another embodiment of the disclosure, the determining module is configured to determine, according to the volume values of the at least two audio segments, the weight value of each segment's volume value in the sum of the volume values of all the segments; determine a target volume value according to the volume value and weight value of each audio segment; and adjust the volume value of one of the at least two audio segments according to the target volume value to obtain the audio segment to be processed.
In another embodiment of the present disclosure, the apparatus further comprises:
and the copying module is used for copying the reverberation audio segment obtained by the reverberation processing into the storage unit corresponding to each sound channel.
According to a third aspect of embodiments of the present disclosure, there is provided a reverberation processing device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring at least two audio clips, wherein each audio clip is acquired by one sound channel and has the same audio content and duration;
acquiring a volume value of each audio clip;
determining an audio clip to be processed from the at least two audio clips according to the volume values of the at least two audio clips;
and performing reverberation processing on the audio segment to be processed.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
according to the method and the device, the obtained multiple audio segments are not directly subjected to reverberation processing, the audio segments to be processed are determined from the multiple audio segments on the basis of the volume value of the audio segment collected by each sound channel, and the audio segments to be processed can reflect the recording scene of the collected multiple audio segments better, so that the audio segments after reverberation processing are prevented from being distorted, and the sound effect of the obtained reverberation audio segments is better.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a schematic diagram illustrating an audio capture scenario in accordance with an exemplary embodiment.
Fig. 2 is a flow diagram illustrating a reverberation processing method according to an exemplary embodiment.
Fig. 3 is a flow diagram illustrating a reverberation processing method according to an exemplary embodiment.
FIG. 4 is a waveform diagram illustrating an audio clip according to an exemplary embodiment.
Fig. 5 is a schematic diagram illustrating a reverberation processing procedure according to an exemplary embodiment.
Fig. 6 is a schematic diagram illustrating a structure of a reverberation processing apparatus according to an exemplary embodiment.
Fig. 7 is a block diagram illustrating a reverberation processing device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
With the development of science and technology, terminals with audio and video recording functions — such as smartphones, tablet computers, voice recorders and video recorders — are widely used in daily life. Since these terminals generally have at least two microphones, each of which can pick up the sound emitted by a sound source, a terminal can capture at least two audio clips when recording audio or video. In practice, because each microphone is at a different distance from the sound source, the volume of the audio clip captured by each microphone differs. See fig. 1, which shows an audio capture scenario: when a user records an audio clip with a smartphone, the distance between microphone A, located at the top of the phone, and the sound source is greater than the distance between microphone B, located at the bottom, and the sound source. When the sound source emits a sound, microphone B therefore picks it up before microphone A, and the volume value of the clip captured by microphone B is greater than that of the clip captured by microphone A. In the field of audio and video processing, the different microphones in a terminal are assigned to different sound channels based on when they pick up the sound emitted by the sound source, each microphone corresponding to one channel; in this embodiment, the audio clip captured by each microphone can be regarded as captured by the corresponding channel.
Given the audio clips captured by different sound channels, the current reverberation processing method calculates the average volume value of at least two audio clips from the volume values of the clips captured by at least two channels, obtains a clip to be processed by adjusting the volume value of one of the clips to that average, and then applies reverberation processing to it. In practice, however, the volume values of the clips captured by different channels differ, and averaging them distorts the reverberant clip obtained after reverberation processing: the original recording scene cannot be faithfully restored, so the sound effect during playback is poor.
Fig. 2 is a flowchart illustrating a reverberation processing method according to an exemplary embodiment, as shown in fig. 2, the reverberation processing method is used in a terminal, including the following steps.
In step 201, at least two audio segments are obtained, each audio segment is captured by one channel, and the at least two audio segments have the same audio content and duration.
In step 202, a volume value for each audio piece is obtained.
In step 203, an audio segment to be processed is determined from the at least two audio segments according to the volume values of the at least two audio segments.
In step 204, the audio segment to be processed is reverberated.
According to the method provided by this embodiment of the disclosure, the acquired audio clips are not reverberation-processed directly; instead, the clip to be processed is determined from the multiple clips on the basis of the volume value of the clip captured by each sound channel. Because that clip better reflects the scene in which the clips were recorded, distortion of the reverberation-processed audio is avoided and the resulting reverberant audio clip has a better sound effect.
In another embodiment of the present disclosure, obtaining a volume value for each audio clip comprises:
acquiring the energy value of each audio clip in unit time length;
and taking the energy value of each audio clip in the unit time length as the volume value of each audio clip.
In another embodiment of the present disclosure, obtaining an energy value of each audio piece in a unit time length includes:
for any audio segment, the following formula is applied to calculate the energy value of the audio segment:

y = Σ_{t ∈ T} |S_t|

where y represents the energy value of the audio segment in the unit time length, T represents the unit time length, t represents any moment in the unit time length, and |S_t| represents the amplitude of the audio segment at time t.
In another embodiment of the present disclosure, determining an audio segment to be processed from among at least two audio segments according to volume values of the at least two audio segments comprises:
obtaining the maximum volume value from the volume values of at least two audio segments;
and taking the audio segment corresponding to the maximum volume value as the audio segment to be processed.
In another embodiment of the present disclosure, determining an audio segment to be processed from among at least two audio segments according to volume values of the at least two audio segments comprises:
determining a weight value of the volume value of each audio clip in the sum of the volume values of all the audio clips according to the volume values of at least two audio clips;
determining a target volume value according to the volume value and the weight value of each audio clip;
and adjusting the volume value of one of the at least two audio segments according to the target volume value to obtain the audio segment to be processed.
In another embodiment of the present disclosure, after performing reverberation processing on the audio segment to be processed, the method further includes:
and copying the reverberation audio segment obtained by the reverberation processing to a storage unit corresponding to each channel.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
Fig. 3 is a flowchart illustrating a reverberation processing method according to an exemplary embodiment, as shown in fig. 3, the reverberation processing method is used in a terminal, including the following steps.
In step 301, the terminal obtains at least two audio clips, each of which is captured by one channel and has the same audio content and duration.
In an embodiment of the disclosure, at least two microphones are installed in a terminal, each microphone corresponds to one sound channel, in an audio and video recording process, each microphone can collect sounds in a surrounding environment in real time to obtain one audio clip, and the terminal acquires the audio clips collected by the at least two microphones as the acquired at least two audio clips. In this embodiment, each microphone corresponds to one channel, and the audio segment collected by each microphone can be regarded as collected by the corresponding channel. Since the sounds collected by the at least two microphones are emitted by the same sound source, the at least two audio segments each have the same audio content and duration.
In another embodiment of the present disclosure, the local storage of the terminal may store the audio segments collected by the microphones, and in order to relieve the processing pressure, the terminal may obtain the at least two audio segments collected by the at least two microphones from the local storage after the audio and video recording is completed. The local memory may be at least one of a volatile memory (e.g., a memory) and a nonvolatile memory (e.g., a hard disk), and the present embodiment does not specifically limit the local memory.
In step 302, the terminal obtains a volume value for each audio clip.
Since sound is produced by the vibration of objects, each audio segment corresponds to a waveform, and the amplitude at any moment on the waveform depends mainly on the volume value of the captured audio segment at that moment: in general, the larger the volume value, the larger the amplitude. The amplitude at any moment also reflects the energy value of the audio segment at that moment, and the larger the amplitude, the larger the energy value. Volume characterizes the strength of a sound and is measured in decibels.
It follows that the energy value of an audio segment also reflects its volume value, the two being directly proportional. If the energy value of one audio segment at a certain moment is larger than the energy values of the other segments, its volume value at that moment is larger, and accordingly the amplitude of its waveform at that moment is larger; conversely, if its energy value at a certain moment is smaller than the others', its volume value at that moment is smaller, and so is the amplitude of its waveform. Based on this relationship between energy value and volume value, this embodiment obtains the volume value of each audio segment by obtaining its energy value.
Since the user's position relative to the sound source may change while recording audio or video with the terminal, the volume values of the clips captured by different microphones may differ across time periods. To adapt to such position changes and obtain a more accurate volume value, in this embodiment the terminal may obtain the energy value of each audio clip per unit time length and take that per-unit-time energy value as the clip's volume value. The unit duration may be set by the terminal according to its own computing capability or the minimum duration over which the user's position changes; it may be, for example, 1 second or 2 seconds.
For any audio clip, when the terminal acquires the energy value of the audio clip within a unit time length, the following two cases are divided.
In one embodiment of the disclosure, if the duration of the audio segment is less than or equal to the unit duration, directly calculating the energy value of the audio segment in the duration, and taking the energy value of the audio segment in the duration as the energy value of the audio segment.
In another embodiment of the present disclosure, if the duration of an audio clip is greater than the unit duration, the clip is divided into a plurality of audio sub-clips according to the unit duration, the energy value of each audio sub-clip within its unit duration is obtained, and that value is taken as the energy value of the corresponding sub-clip.
In this embodiment, for any audio segment, the following formula can be applied to calculate the energy value of the audio segment:

y = Σ_{t ∈ T} |S_t|

where y represents the energy value of the audio segment in the unit time length, T represents the unit time length, t represents any moment in the unit time length, and |S_t| represents the amplitude of the audio segment at time t. It should be noted that the superposition of amplitudes in the above formula is a scalar superposition, not a vector superposition.
It should be noted that the description above takes the case where the duration of the audio segment is less than or equal to the unit duration. When the duration of the audio segment is greater than the unit duration, y represents the energy value of one audio sub-segment of the audio segment within the unit duration, t represents any moment in the unit duration, and |S_t| represents the amplitude of the audio sub-segment at time t. Referring to fig. 4, which shows the waveform of an audio segment: when the duration of the audio segment is greater than the unit duration, the segment is divided into a plurality of audio sub-segments and the energy value of each sub-segment within its unit duration is calculated separately.
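Splitting a long clip into unit-duration sub-segments and computing each sub-segment's energy separately can be sketched as follows (assuming a fixed sample count per unit duration, and keeping any shorter remainder as its own sub-segment — the patent does not specify how a remainder is handled):

```python
def sub_segment_energies(samples, unit_len):
    # Divide the clip into sub-segments of unit_len samples and
    # sum |S_t| within each sub-segment.
    return [sum(abs(s) for s in samples[i:i + unit_len])
            for i in range(0, len(samples), unit_len)]
```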
In step 303, the terminal determines an audio segment to be processed from the at least two audio segments according to the volume values of the at least two audio segments.
In this embodiment, the terminal determines the audio segment to be processed from the at least two audio segments according to the volume values of the at least two audio segments, and may adopt the following two ways:
the first mode is as follows: and the terminal acquires the maximum volume value from the volume values of the at least two audio segments, and takes the audio segment corresponding to the maximum volume value as the audio segment to be processed.
For example, the terminal acquires 4 audio segments — audio segment a, audio segment b, audio segment c and audio segment d — whose volume values are 10 dB, 12 dB, 15 dB and 13 dB respectively. The volume value of audio segment c is the largest, so audio segment c is taken as the audio segment to be processed.
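The first way reduces to an argmax over the volume values; with the example's figures:

```python
# Volume values (in dB) from the example above.
volumes = {"a": 10, "b": 12, "c": 15, "d": 13}

# The clip with the maximum volume value becomes the clip to be processed.
to_process = max(volumes, key=volumes.get)
```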
The second mode is as follows: the terminal determines, according to the volume values of the at least two audio clips, the weight value of each clip's volume value in the sum of the volume values of all the clips, determines a target volume value according to the volume value and weight value of each audio clip, and then adjusts the volume value of one of the at least two clips according to the target volume value to obtain the clip to be processed.
For example, the terminal acquires 3 audio segments — audio segment a, audio segment b and audio segment c — whose volume values are 5 dB, 8 dB and 12 dB respectively. The sum of the volume values of all the acquired segments is 25 dB, so the weight values of the volume values of segments a, b and c in that sum are 0.2, 0.32 and 0.48 respectively. According to the volume value and weight value of each audio segment, a target volume value of 5 × 0.2 + 8 × 0.32 + 12 × 0.48 = 9.32 dB can be determined, and the volume value of any one of segments a, b and c can be adjusted to the target volume value to obtain the audio segment to be processed.
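The second way computes each clip's weight as its share of the total volume and then takes the weighted sum; reproducing the example's arithmetic:

```python
def target_volume(volumes):
    # Weight of each volume value in the total, then the weighted sum:
    # target = sum of v_i * (v_i / sum(volumes)) over all clips.
    total = sum(volumes)
    return sum(v * (v / total) for v in volumes)
```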
It should be noted that, if the duration of each audio segment is greater than the unit duration, when determining the audio segments to be processed, it is necessary to determine each audio sub-segment to be processed respectively, and combine the determined multiple audio sub-segments to be processed into the audio segments to be processed according to the time sequence.
In step 304, the terminal performs reverberation processing on the audio segment to be processed.
Based on the determined audio segment to be processed, the terminal superimposes the audio segment to be processed and the reverberation effect file to be added to obtain the reverberant audio file. The superposition includes, but is not limited to, convolving the waveform of the audio segment to be processed with the waveform corresponding to the reverberation effect file. Because the waveform of the audio segment to be processed can be represented by one wave function and the waveform corresponding to the reverberation effect file by another, convolution combines the two wave functions into a single wave function, thereby superimposing the two waveforms. Convolution is an important operation in analytical mathematics that combines two functions into a third; for example, if h(x) = (f * g)(x), then h is the convolution of f and g.
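The superposition step can be sketched as a direct discrete convolution. The impulse-response values below are purely illustrative; in the patent's scheme the reverberation effect file would supply the actual waveform to convolve with.

```python
def convolve(x, h):
    # Direct discrete convolution: y[n] = sum over k of x[k] * h[n - k].
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

# Waveform of the clip to be processed and a toy reverberation impulse
# response: direct sound plus one attenuated echo (assumed values).
dry = [1.0, 0.0, 0.0]
ir = [1.0, 0.0, 0.5]
wet = convolve(dry, ir)
```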
In step 305, the terminal copies the reverberant audio segment obtained through the reverberation processing to a storage unit corresponding to each channel.
After the terminal obtains the reverberation audio segment through reverberation processing, the obtained reverberation audio segment is copied to the storage unit corresponding to each sound channel, so that during subsequent playing, the terminal can obtain the reverberation audio segment from the storage unit corresponding to each sound channel and play the reverberation audio segment according to the playing mode corresponding to the sound channel.
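Copying the single reverberant clip into the storage unit of every channel can be sketched as follows (the channel names and in-memory dict standing in for per-channel storage units are illustrative assumptions):

```python
wet = [0.1, 0.2, 0.1]          # reverberant audio segment from step 304
channels = ["left", "right"]    # one storage unit per sound channel (assumed)

# Each channel gets its own independent copy of the reverberant segment,
# so later playback per channel reads from its own storage unit.
storage = {ch: list(wet) for ch in channels}
```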
For the above reverberation processing, fig. 5 will be used as an example for the following explanation for easy understanding.
Referring to fig. 5, during the recording process of an audio or video file, at least two microphones in a terminal collect sounds in the environment to obtain at least two audio segments, and each microphone corresponds to one channel, so that the collected at least two audio segments can be regarded as collected by at least two channels. In the process of recording an audio or video file, or after the recording of the audio or video file is completed, the terminal acquires audio segments acquired by at least two sound channels, acquires the volume value of each audio segment, and takes the audio segment with the maximum volume value as the audio segment to be processed based on the volume values of the at least two audio segments, or determines a target volume value according to the volume value of each audio segment, and obtains the audio segment to be processed by adjusting the volume value of one audio segment according to the target volume value. And then, the terminal performs reverberation processing on the audio frequency segment to be processed to obtain a reverberation audio frequency segment, and copies the obtained reverberation audio frequency segment to a storage unit corresponding to each sound channel.
According to the method provided by this embodiment of the disclosure, the acquired audio clips are not reverberation-processed directly; instead, the clip to be processed is determined from the multiple clips on the basis of the volume value of the clip captured by each sound channel. Because that clip better reflects the scene in which the clips were recorded, distortion of the reverberation-processed audio is avoided and the resulting reverberant audio clip has a better sound effect.
Fig. 6 is a schematic diagram illustrating a reverberation processing device according to an exemplary embodiment. Referring to fig. 6, the apparatus includes: a first obtaining module 601, a second obtaining module 602, a determining module 603 and a processing module 604.
The first obtaining module 601 is configured to obtain at least two audio segments, each of the audio segments is collected by one channel, and the at least two audio segments have the same audio content and duration;
the second obtaining module 602 is configured to obtain a volume value of each audio piece;
the determining module 603 is configured to determine an audio segment to be processed from the at least two audio segments according to the volume values of the at least two audio segments;
the processing module 604 is configured to perform reverberation processing on the audio segment to be processed.
In another embodiment of the present disclosure, the second obtaining module 602 is configured to obtain an energy value of each audio segment in a unit time length, and to take the energy value of each audio segment in the unit time length as the volume value of that segment.
In another embodiment of the present disclosure, the second obtaining module 602 is configured to apply the following formula to calculate the energy value of the audio segment for any audio segment:
wherein y represents the energy value of the audio segment in the unit time length, T represents the unit time length, t represents any moment within the unit time length, and |S_t| represents the amplitude of the audio segment at time t.
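Assuming the formula (whose image is not reproduced in this text) sums the squared amplitudes |S_t|^2 over the unit time length, which is the conventional definition of signal energy, the computation can be sketched as:

```python
def energy_value(samples):
    """Energy of a clip over one unit duration: the sum of squared
    sample amplitudes |S_t|^2 for every moment t in the clip."""
    return sum(abs(s) ** 2 for s in samples)

# Same content captured near and far from the source: the nearer
# channel has the larger energy, hence the larger volume value.
near = [0.8, -0.6, 0.4]
far = [0.4, -0.3, 0.2]
assert energy_value(near) > energy_value(far)
```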
In another embodiment of the present disclosure, the determining module 603 is configured to obtain a maximum volume value from the volume values of the at least two audio segments; and taking the audio segment corresponding to the maximum volume value as the audio segment to be processed.
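Selecting the segment with the maximum volume value can be sketched as follows; the example volume values are illustrative:

```python
def pick_loudest(volumes):
    """Index of the segment with the maximum volume value, which is
    taken as the segment to be processed."""
    return max(range(len(volumes)), key=lambda i: volumes[i])

volumes = [0.29, 1.16, 0.73]  # one volume value per channel
assert pick_loudest(volumes) == 1
```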
In another embodiment of the present disclosure, the determining module 603 is configured to determine, according to the volume values of the at least two audio segments, a weight value of the volume value of each audio segment in the sum of the volume values of all audio segments; determine a target volume value according to the volume value and the weight value of each audio segment; and adjust the volume value of one of the at least two audio segments according to the target volume value to obtain the audio segment to be processed.
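The disclosure does not spell out how the target volume value combines the volume and weight values; one plausible reading, sketched below under that assumption, is a weighted sum of the volume values, after which one clip's gain is adjusted so its energy matches the target:

```python
def target_volume(volumes):
    """Weight each clip's volume by its share of the total volume,
    then sum the weighted volumes to form the target volume value."""
    total = sum(volumes)
    weights = [v / total for v in volumes]
    return sum(v * w for v, w in zip(volumes, weights))

def scale_to_target(samples, volume, target):
    """Adjust one clip's gain so its volume (energy, which scales
    with amplitude squared) matches the target volume value."""
    gain = (target / volume) ** 0.5
    return [s * gain for s in samples]

target = target_volume([1.0, 3.0])  # weights 0.25 and 0.75
```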
In another embodiment of the present disclosure, the apparatus further comprises a copying module.
The copying module is configured to copy the reverberation audio segment obtained by the reverberation processing to a storage unit corresponding to each channel.
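Copying the processed segment into each channel's storage can be sketched as follows, with a plain dict standing in for the per-channel storage units:

```python
def copy_to_channels(reverb_segment, channel_buffers):
    """Store an independent copy of the reverberation segment in each
    channel's storage unit so every channel plays the same audio."""
    for channel in channel_buffers:
        channel_buffers[channel] = list(reverb_segment)
    return channel_buffers

buffers = copy_to_channels([0.1, 0.2], {"left": None, "right": None})
assert buffers["left"] == buffers["right"] == [0.1, 0.2]
assert buffers["left"] is not buffers["right"]  # independent copies
```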
According to the device provided by the embodiment of the disclosure, the acquired audio segments are not subjected to reverberation processing directly; instead, the segment to be processed is determined from the multiple segments based on the volume value of the segment collected by each channel. Because the segment to be processed better reflects the recording scene of the collected segments, distortion of the reverberation-processed segment is avoided and the resulting reverberation audio segment has a better sound effect.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 7 is a block diagram illustrating an apparatus 700 for reverberation processing according to an exemplary embodiment. For example, the apparatus 700 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 7, apparatus 700 may include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.
The processing component 702 generally controls overall operation of the device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 702 may include one or more processors 720 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 702 may include one or more modules that facilitate interaction between the processing component 702 and other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
The memory 704 is configured to store various types of data to support operations at the apparatus 700. Examples of such data include instructions for any application or method operating on device 700, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 704 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 706 provides power to the various components of the device 700. The power components 706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 700.
The multimedia component 708 includes a screen that provides an output interface between the device 700 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 708 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 700 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 710 is configured to output and/or input audio signals. For example, audio component 710 includes a Microphone (MIC) configured to receive external audio signals when apparatus 700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 704 or transmitted via the communication component 716. In some embodiments, audio component 710 also includes a speaker for outputting audio signals.
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 714 includes one or more sensors for providing status assessment of various aspects of the apparatus 700. For example, the sensor assembly 714 may detect an open/closed state of the device 700 and the relative positioning of components, such as the display and keypad of the device 700. The sensor assembly 714 may also detect a change in position of the device 700 or a component of the device 700, the presence or absence of user contact with the device 700, the orientation or acceleration/deceleration of the device 700, and a change in temperature of the device 700. The sensor assembly 714 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 714 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 716 is configured to facilitate wired or wireless communication between the apparatus 700 and other devices. The apparatus 700 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 716 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 716 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 704 comprising instructions, executable by the processor 720 of the device 700 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
According to the device provided by the embodiment of the disclosure, the acquired audio segments are not subjected to reverberation processing directly; instead, the segment to be processed is determined from the multiple segments based on the volume value of the segment collected by each channel. Because the segment to be processed better reflects the recording scene of the collected segments, distortion of the reverberation-processed segment is avoided and the resulting reverberation audio segment has a better sound effect.
A non-transitory computer readable storage medium having instructions therein which, when executed by a processor of a mobile terminal, enable the mobile terminal to perform a reverberation processing method, the method comprising:
acquiring at least two audio clips, wherein each audio clip is acquired by one sound channel, and the at least two audio clips have the same audio content and duration;
acquiring a volume value of each audio clip;
determining an audio clip to be processed from the at least two audio clips according to the volume values of the at least two audio clips;
and performing reverberation processing on the audio segment to be processed.
In another embodiment of the present disclosure, obtaining a volume value for each audio clip comprises:
acquiring the energy value of each audio clip in unit time length;
and taking the energy value of each audio clip in the unit time length as the volume value of each audio clip.
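The claims further provide that a clip longer than the unit duration is divided into unit-duration sub-clips, each with its own energy value; a sketch of that division:

```python
def split_into_subclips(samples, unit_len):
    """Divide a clip into consecutive sub-clips of unit_len samples;
    a shorter final remainder is kept as its own sub-clip."""
    return [samples[i:i + unit_len]
            for i in range(0, len(samples), unit_len)]

subclips = split_into_subclips([0.1, 0.2, 0.3, 0.4, 0.5], 2)
assert subclips == [[0.1, 0.2], [0.3, 0.4], [0.5]]
```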
In another embodiment of the present disclosure, obtaining an energy value of each audio piece in a unit time length includes:
for any audio segment, the following formula is applied to calculate the energy value of the audio segment:
where y represents the energy value of the audio piece in the unit time length, T represents the unit time length, t represents any moment within the unit time length, and |S_t| represents the amplitude of the audio piece at time t.
In another embodiment of the present disclosure, determining an audio segment to be processed from among at least two audio segments according to volume values of the at least two audio segments comprises:
obtaining the maximum volume value from the volume values of at least two audio segments;
and taking the audio segment corresponding to the maximum volume value as the audio segment to be processed.
In another embodiment of the present disclosure, determining an audio segment to be processed from among at least two audio segments according to volume values of the at least two audio segments comprises:
determining a weight value of the volume value of each audio clip in the sum of the volume values of all the audio clips according to the volume values of at least two audio clips;
determining a target volume value according to the volume value and the weight value of each audio clip;
and adjusting the volume value of one of the at least two audio segments according to the target volume value to obtain the audio segment to be processed.
In another embodiment of the present disclosure, after performing reverberation processing on the audio segment to be processed, the method further includes:
and copying the reverberation audio segment obtained by the reverberation processing to a storage unit corresponding to each channel.
According to the non-transitory computer-readable storage medium provided by the embodiment of the disclosure, the acquired audio segments are not subjected to reverberation processing directly; instead, the segment to be processed is determined from the multiple segments based on the volume value of the segment collected by each channel. Because the segment to be processed better reflects the recording scene of the collected segments, distortion of the reverberation-processed segment is avoided and the resulting reverberation audio segment has a better sound effect.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (5)
1. A reverberation processing method applied to a terminal, the method comprising:
acquiring at least two audio clips, wherein each audio clip is acquired by one sound channel, the at least two audio clips have the same audio content and duration, and the volumes of the audio clips acquired at positions with different distances from a sound source are different; calculating energy values of the at least two audio segments, and acquiring the volume values of the at least two audio segments according to the energy values of the at least two audio segments on the basis of the proportional relation between the energy values and the volume values of the audio segments;
determining an audio clip to be processed from the at least two audio clips according to the volume values of the at least two audio clips; superposing the waveform of the audio segment to be processed and the waveform of the reverberation effect file to be added by adopting a convolution calculation method so as to realize reverberation processing of the audio segment to be processed; copying the obtained reverberation audio segment to a storage unit corresponding to each sound channel so that the terminal can obtain the reverberation audio segment from the storage unit corresponding to each sound channel and play the reverberation audio segment according to a playing mode corresponding to each sound channel;
for each of the at least two audio segments, the obtaining the volume values of the at least two audio segments comprises:
if the duration of each audio clip is less than or equal to the unit duration, calculating the energy value of each audio clip in the unit duration, and taking the energy value of each audio clip in the unit duration as the volume value of each audio clip, wherein the unit duration is set by the terminal according to its own computing capability or the minimum duration over which the user's position changes; if the duration of each audio clip is greater than the unit duration, dividing each audio clip into a plurality of audio sub-clips according to the unit duration, acquiring the energy value of each audio sub-clip in the unit duration, and taking the energy value of each audio sub-clip in the unit duration as the volume value of each audio sub-clip;
the determining the audio segment to be processed from the at least two audio segments according to the volume values of the at least two audio segments comprises:
obtaining a maximum volume value from the volume values of the at least two audio segments, and taking the audio segment corresponding to the maximum volume value as the audio segment to be processed; or determining the weight value of the volume value of each audio clip in the sum of the volume values of all the audio clips according to the volume values of the at least two audio clips, determining a target volume value according to the volume value and the weight value of each audio clip, and adjusting the volume value of one audio clip of the at least two audio clips according to the target volume value to obtain the audio clip to be processed.
2. The method of claim 1, wherein the obtaining the energy value of each of the at least two audio segments in a unit time duration comprises:
for any audio segment, applying the following formula to calculate an energy value of the audio segment:
wherein y represents the energy value of the audio segment in the unit time length, T represents the unit time length, t represents any moment within the unit time length, and |S_t| represents the amplitude of the audio segment at time t.
3. A reverberation processing apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring at least two audio clips, each audio clip is acquired by one sound channel, the at least two audio clips have the same audio content and duration, and the volumes of the audio clips acquired at positions with different distances from a sound source are different;
the second obtaining module is used for calculating the energy values of the at least two audio segments, and obtaining the volume values of the at least two audio segments according to the energy values of the at least two audio segments on the basis of the proportional relation between the energy values and the volume values of the audio segments;
the determining module is used for determining an audio clip to be processed from the at least two audio clips according to the volume values of the at least two audio clips;
the copying module is used for copying the reverberation audio segment obtained through reverberation processing to the storage unit corresponding to each sound channel so that the terminal can acquire the reverberation audio segment from the storage unit corresponding to each sound channel and play the reverberation audio segment according to the playing mode corresponding to each sound channel;
the processing module is used for superposing the waveform of the audio segment to be processed and the waveform of the reverberation effect file required to be added by adopting a convolution calculation method so as to realize reverberation processing of the audio segment to be processed;
the second obtaining module is configured to, for each of the at least two audio segments, calculate an energy value of each audio segment in the unit duration if the duration of each audio segment is less than or equal to the unit duration, and take the energy value of each audio segment in the unit duration as the volume value of each audio segment, wherein the unit duration is set by the terminal according to its own computing capability or the minimum duration over which the user's position changes; if the duration of each audio clip is greater than the unit duration, divide each audio clip into a plurality of audio sub-clips according to the unit duration, acquire the energy value of each audio sub-clip in the unit duration, and take the energy value of each audio sub-clip in the unit duration as the volume value of each audio sub-clip;
the determining module is configured to obtain a maximum volume value from the volume values of the at least two audio segments and take the audio segment corresponding to the maximum volume value as the audio segment to be processed; or to determine, according to the volume values of the at least two audio segments, a weight value of the volume value of each audio segment in the sum of the volume values of all the audio segments, determine a target volume value according to the volume value and the weight value of each audio segment, and adjust the volume value of one of the at least two audio segments according to the target volume value to obtain the audio segment to be processed.
4. The apparatus of claim 3, wherein the second obtaining module is further configured to apply the following formula to any audio segment to calculate an energy value of the audio segment:
wherein y represents the energy value of the audio clip in the unit time length, T represents the unit time length, t represents any moment within the unit time length, and |S_t| represents the amplitude of the audio clip at time t.
5. A reverberation processing apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring at least two audio clips, wherein each audio clip is acquired by one sound channel, the at least two audio clips have the same audio content and duration, and the volumes of the audio clips acquired at positions with different distances from a sound source are different;
calculating energy values of the at least two audio segments, and acquiring the volume values of the at least two audio segments according to the energy values of the at least two audio segments on the basis of the proportional relation between the energy values and the volume values of the audio segments;
determining an audio clip to be processed from the at least two audio clips according to the volume values of the at least two audio clips; superposing the waveform of the audio segment to be processed and the waveform of the reverberation effect file to be added by adopting a convolution calculation method so as to realize reverberation processing of the audio segment to be processed; copying the obtained reverberation audio segment to a storage unit corresponding to each sound channel so that the terminal can obtain the reverberation audio segment from the storage unit corresponding to each sound channel and play the reverberation audio segment according to a playing mode corresponding to each sound channel;
for each of the at least two audio segments, the obtaining the volume values of the at least two audio segments comprises:
if the duration of each audio clip is less than or equal to the unit duration, calculating the energy value of each audio clip in the unit duration, and taking the energy value of each audio clip in the unit duration as the volume value of each audio clip, wherein the unit duration is set by the terminal according to its own computing capability or the minimum duration over which the user's position changes; if the duration of each audio clip is greater than the unit duration, dividing each audio clip into a plurality of audio sub-clips according to the unit duration, acquiring the energy value of each audio sub-clip in the unit duration, and taking the energy value of each audio sub-clip in the unit duration as the volume value of each audio sub-clip;
the determining the audio segment to be processed from the at least two audio segments according to the volume values of the at least two audio segments comprises:
obtaining a maximum volume value from the volume values of the at least two audio segments, and taking the audio segment corresponding to the maximum volume value as the audio segment to be processed; or determining the weight value of the volume value of each audio clip in the sum of the volume values of all the audio clips according to the volume values of the at least two audio clips, determining a target volume value according to the volume value and the weight value of each audio clip, and adjusting the volume value of one audio clip of the at least two audio clips according to the target volume value to obtain the audio clip to be processed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610365847.9A CN106060707B (en) | 2016-05-27 | 2016-05-27 | Reverberation processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106060707A CN106060707A (en) | 2016-10-26 |
CN106060707B true CN106060707B (en) | 2021-05-04 |
Family
ID=57174943
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610365847.9A Active CN106060707B (en) | 2016-05-27 | 2016-05-27 | Reverberation processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106060707B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109413492B (en) * | 2017-08-18 | 2021-05-28 | 武汉斗鱼网络科技有限公司 | Audio data reverberation processing method and system in live broadcast process |
CN108198572A (en) * | 2017-12-29 | 2018-06-22 | 珠海市君天电子科技有限公司 | A kind of audio-frequency processing method and device |
CN111045633A (en) * | 2018-10-12 | 2020-04-21 | 北京微播视界科技有限公司 | Method and apparatus for detecting loudness of audio signal |
CN112863530B (en) * | 2021-01-07 | 2024-08-27 | 广州欢城文化传媒有限公司 | Sound work generation method and device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1845573A (en) * | 2006-04-30 | 2006-10-11 | 南京大学 | Simultaneous interpretation video conference system and method for supporting high capacity mixed sound |
CN1953048A (en) * | 2005-10-18 | 2007-04-25 | 腾讯科技(深圳)有限公司 | A processing method of mix sound |
CN101335867A (en) * | 2007-09-27 | 2008-12-31 | 深圳市迪威新软件技术有限公司 | Voice excited control method of meeting television system |
CN101841379A (en) * | 2009-03-13 | 2010-09-22 | 三洋电机株式会社 | Receiving system |
CN103888580A (en) * | 2014-03-31 | 2014-06-25 | 宇龙计算机通信科技(深圳)有限公司 | Noise reduction method in terminal recording process and terminal |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010112996A (en) * | 2008-11-04 | 2010-05-20 | Sony Corp | Voice processing device, voice processing method and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |