CN111739544B - Voice processing method, device, electronic equipment and storage medium


Info

Publication number: CN111739544B
Application number: CN201910227101.5A
Authority: CN (China)
Prior art keywords: voice, voice signal, signal, tone, time
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN111739544A
Inventor: 陈岩
Assignee (original and current): Guangdong Oppo Mobile Telecommunications Corp Ltd

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/003 - Changing voice quality, e.g. pitch or formants
    • G10L21/007 - Changing voice quality, e.g. pitch or formants, characterised by the process used
    • G10L21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 - Noise filtering
    • G10L21/0216 - Noise filtering characterised by the method used for estimating noise
    • G10L21/0224 - Processing in the time domain
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/45 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00, characterised by the type of analysis window


Abstract

The present disclosure provides a voice processing method, a voice processing apparatus, an electronic device, and a computer-readable storage medium, relating to the technical field of audio processing. The voice processing method includes the following steps: receiving a voice signal collected and transmitted by an audio collection device; performing pitch-shifting processing, which adjusts the sampling frequency, on the time-domain signal corresponding to the voice signal to obtain a pitch-shifted voice signal; and performing play-time preservation on the time-domain signal corresponding to the pitch-shifted voice signal to obtain a target voice signal, wherein the play time of the target voice signal is the same as the play time of the original voice signal. The method and apparatus can perform voice pitch shifting quickly and accurately.

Description

Voice processing method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of audio processing technology, and in particular to a voice processing method, a voice processing apparatus, an electronic device, and a computer-readable storage medium.
Background
In audio processing, pitch shifting is a very important function. Pitch-shifting methods in the related art mainly include the following: shifting the pitch of voice audio by changing the playback sampling rate; synthesizing pitch-shifted speech by combining linear predictive coding with a differential glottal wave model; shifting the pitch using the spectral envelope of the speech signal together with a pitch-shifting algorithm; or applying a delay factor to produce a pitch-shifting effect.
Among these approaches, shifting the pitch by changing the playback sampling rate alters the play duration of the speech, which may further degrade its sound quality; moreover, the amount of computation involved is large, so fast voice pitch shifting cannot be achieved.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure is directed to a voice processing method, a voice processing apparatus, an electronic device, and a computer-readable storage medium, which at least partially overcome the problem, caused by the limitations and disadvantages of the related art, that voice pitch shifting cannot be performed quickly and accurately.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to one aspect of the present disclosure, there is provided a voice processing method including: receiving a voice signal collected and transmitted by an audio collection device; performing pitch-shifting processing, which adjusts the sampling frequency, on the time-domain signal corresponding to the voice signal to obtain a pitch-shifted voice signal; and performing play-time preservation on the time-domain signal corresponding to the pitch-shifted voice signal to obtain a target voice signal, wherein the play time of the target voice signal is the same as the play time of the original voice signal.
In an exemplary embodiment of the present disclosure, performing pitch-shifting processing that adjusts the sampling frequency on the time-domain signal corresponding to the speech signal to obtain the pitch-shifted speech signal includes: framing the time-domain signal corresponding to the voice signal; windowing the framed time-domain signal to obtain the time-domain signal corresponding to the windowed voice signal; and processing the time-domain signal corresponding to the windowed voice signal according to an interpolation algorithm or a decimation algorithm to obtain the pitch-shifted voice signal.
In one exemplary embodiment of the present disclosure, windowing the framed time-domain signal includes: windowing the framed time-domain signal of the voice signal with a Hamming window.
In an exemplary embodiment of the present disclosure, processing the time-domain signal corresponding to the windowed speech signal according to an interpolation algorithm or a decimation algorithm to obtain the pitch-shifted speech signal includes: determining the pitch-shifted voice signal according to the sampling frequency of the original voice signal, the sampling frequency of the pitch-shifted voice signal, and the length of each frame of the voice signal.
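As an illustrative sketch that is not part of the original disclosure, the relationship between the two sampling frequencies can be reduced to an irreducible fraction M/L, from which interpolation and decimation factors follow; the helper name pitch_ratio and the sample-rate values are assumptions introduced here for illustration:

```python
from math import gcd

def pitch_ratio(f, f0):
    """Reduce f / f0 to an irreducible fraction M / L.

    f is the sampling frequency of the original voice signal, f0 the
    sampling frequency after pitch shifting.  This is a hypothetical
    helper, not code from the disclosure.
    """
    g = gcd(int(f), int(f0))
    return int(f) // g, int(f0) // g  # (M, L)

M, L = pitch_ratio(48000, 32000)  # (3, 2): interpolate by 2, keep every 3rd sample
```

The reduction to lowest terms keeps the interpolation and decimation factors as small as possible, which minimizes the intermediate signal length.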
In an exemplary embodiment of the present disclosure, raising the pitch of the voice signal corresponds to an increase in the play time of the pitch-shifted voice signal, and lowering the pitch corresponds to a decrease in the play time of the pitch-shifted voice signal.
In an exemplary embodiment of the present disclosure, performing play-time preservation on the time-domain signal corresponding to the pitch-shifted speech signal to obtain the target speech signal includes: determining a comparison result between the timing variable and the overlap length between two frames of the framed voice signal; and, in combination with the comparison result, processing the length of each frame of the pitch-shifted voice signal according to the length of each frame of the original voice signal, and determining the target voice signal when the play time of the pitch-shifted voice signal is the same as the play time of the original voice signal.
In an exemplary embodiment of the present disclosure, in combination with the comparison result, processing the length of each frame of the pitch-shifted speech signal according to the length of each frame of the original speech signal, and determining the target speech signal when the play times are the same, includes: if the timing variable is smaller than the overlap length, determining the target voice signal according to the length of each frame of the original voice signal, the length of each frame of the pitch-shifted voice signal, and the overlap length; and if the timing variable is greater than or equal to the overlap length, taking the pitch-shifted voice signal as the target voice signal.
According to one aspect of the present disclosure, there is provided a voice processing apparatus including: a voice acquisition module configured to receive the voice signal collected and transmitted by the audio collection device; a voice pitch-shifting module configured to perform pitch-shifting processing that adjusts the sampling frequency on the time-domain signal corresponding to the voice signal to obtain a pitch-shifted voice signal; and a time-preservation module configured to perform play-time preservation on the time-domain signal corresponding to the pitch-shifted voice signal to obtain a target voice signal, wherein the play time of the target voice signal is the same as the play time of the original voice signal.
According to one aspect of the present disclosure, there is provided an electronic device including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform any one of the above-described speech processing methods via execution of the executable instructions.
According to one aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the speech processing method of any one of the above.
The voice processing method, apparatus, electronic device, and computer-readable storage medium provided in this exemplary embodiment have the following advantages. First, the time-domain signal corresponding to the voice signal sent to the audio processor undergoes pitch-shifting processing that adjusts the sampling frequency; because the pitch shifting is performed on the time-domain signal, the problems of introducing harmonics and degrading voice quality during processing are avoided, improving audio quality and accuracy. Second, play-time preservation is performed on the time-domain signal of the pitch-shifted voice signal to obtain the target voice signal, so the play time is not affected and the voice can be played back normally and accurately. Third, because the pitch-shifting processing operates only on the time-domain signal corresponding to the voice signal, complex computation is avoided, the amount of calculation is reduced, computational efficiency is improved, and voice pitch shifting can be achieved quickly.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
Fig. 1 schematically shows a speech processing method in an exemplary embodiment of the present disclosure.
Fig. 2 schematically shows a specific flowchart of the tone change process in the exemplary embodiment of the present disclosure.
Fig. 3 schematically illustrates a flow chart of play time preservation in an exemplary embodiment of the present disclosure.
Fig. 4 schematically shows a block diagram of a speech processing apparatus in an exemplary embodiment of the present disclosure.
Fig. 5 schematically illustrates a block diagram of a speech processing system in an exemplary embodiment of the present disclosure.
Fig. 6 schematically shows an electronic device in an exemplary embodiment of the present disclosure.
Fig. 7 schematically shows a computer-readable storage medium in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the aspects of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The pitch-shifting methods in the related art include the following. Shifting the pitch of voice audio by changing the playback sampling rate: when the sampling rate is increased, the playback speed of the voice increases, producing a pitch-raising effect, but the play time is also shortened; when the sampling rate is decreased, the playback speed decreases, producing a pitch-lowering effect, and the play time is lengthened. Pitch can also be changed by interpolation in the frequency domain; for example, for a pitch requiring twice the frequency, some frequency components with half the energy of the original frequency points are interpolated. However, using interpolation in the frequency domain to shift the pitch introduces harmonics and affects sound quality. Another method synthesizes pitch-shifted speech by combining linear predictive coding with differential glottal waves: the residual signal obtained by passing the speech signal through the inverse filter of the linear predictive coder is modeled more finely with a differential glottal wave model to obtain a high-quality glottal excitation signal, thereby synthesizing high-quality pitch-shifted speech. Alternatively, the cepstrum sequence of the speech signal is used to derive a spectral envelope, the spectral envelope is used to separate out the excitation component of the speech signal, and the excitation component is processed with a pitch-shifting algorithm to change its pitch. Processes such as computing the spectral envelope require Fourier transforms and inverse transforms of the speech signal, involve a large amount of computation, and are not suitable for running on a DSP.
In order to solve the above problems, this exemplary embodiment first provides a voice processing method that can be applied to games or to other applications capable of using voice interaction. The voice processing method in this exemplary embodiment is described in detail below with reference to fig. 1.
In step S110, a speech signal acquired and transmitted by an audio acquisition device is received.
In this exemplary embodiment, the audio collection device may be a microphone on a terminal, and the terminal may be any terminal capable of making calls, such as a smart phone, a computer, a smart watch, or a smart speaker; a smart phone is taken as the example here. In addition, this exemplary embodiment can be applied to games or other applications in scenarios where the collected voice requires special processing, for confidentiality or for other requirements, that is, where voice interaction or voice calls carry a voice-changing effect.
In this exemplary embodiment, voice chat in a game is taken as the example. For a voice call with a voice-changing effect, whether the effect is enabled can first be determined, specifically by checking the state of a control or button representing the effect, or in other ways not detailed here. If the voice-changing effect is detected to be enabled, the audio collection device (microphone) collects the voice signal uttered by the user in the mobile game. The microphone then sends the collected voice signal to a DSP (digital signal processor) in the handset, so that the DSP processes the received voice signal.
In step S120, a tone-shifting process for adjusting the sampling frequency is performed on the time-domain signal corresponding to the speech signal, so as to obtain a tone-shifted speech signal.
In this exemplary embodiment, the voice signal may be represented as a time-domain signal or a frequency-domain signal. The time-domain signal describes a mathematical function or physical signal as a function of time; the time-domain waveform of a voice signal expresses how the signal changes over time. The frequency-domain signal represents the speech signal in coordinates on the frequency axis. Converting from the time domain to the frequency domain is accomplished through Fourier series and the Fourier transform.
The main function of the pitch-shifting processing may include, but is not limited to, shifting the pitch of the voice signal in the time domain, that is, processing the time-domain signal corresponding to the voice signal to realize the pitch shift. Pitch shifting refers to raising or lowering the pitch of the speech signal. The pitch change of the speech signal can be associated with the sampling frequency: if the sampling frequency after the shift increases, the pitch rises; if the sampling frequency after the shift decreases, the pitch falls. On this basis, the pitch-shifting processing can be regarded as adjusting the sampling frequency. The sampling frequency defines the number of samples per second extracted from the continuous speech signal to form the discrete signal. A specific implementation of step S120 is shown in fig. 2.
Fig. 2 schematically shows a flowchart of the pitch-shifting processing. Referring to fig. 2, the processing mainly includes steps S210 to S230:
In step S210, the time-domain signal corresponding to the speech signal is framed.
In this step, the voice signal may be framed in order to maintain its stationarity and meet the signal-processing requirement. Framing refers to segmenting a speech signal in order to analyze its characteristic parameters; each segment is called a frame, and the frame length is typically 20 to 50 ms. For the entire speech signal, this yields a time series of characteristic parameters composed of the characteristic parameters of each frame. In this exemplary embodiment, the voice signal collected by the microphone may be denoted x(n). After framing, the length of each frame may be N, and the overlap length (frame shift) between two consecutive frames, used to prevent discontinuity between frames, may be W. In x(n), n denotes a point in time, which may be called the timing variable; n is an integer with n = 0, 1, 2, 3, ..., N − 1. Framing the speech signal x(n) gives the framed speech signal x_m(n), where m is the frame index (the m-th frame). The length N of each frame may take the value 512, although other values are possible, and N is not specially limited here. In this step, the time-domain signal of the speech signal is framed.
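The framing described above can be sketched as follows. This is a minimal illustration assuming NumPy; the helper name frame_signal and the overlap value W = 128 are assumptions introduced here (the text only names N = 512 as one possible frame length):

```python
import numpy as np

def frame_signal(x, N=512, W=128):
    """Split x(n) into overlapping frames x_m(n).

    N is the frame length and W the overlap between consecutive
    frames, so the hop between frame starts is N - W.
    """
    hop = N - W
    n_frames = 1 + max(0, (len(x) - N) // hop)
    return np.stack([x[m * hop : m * hop + N] for m in range(n_frames)])

x = np.arange(1024, dtype=float)  # stand-in for a recorded signal
frames = frame_signal(x)          # two frames of 512 samples, overlapping by 128
```

Each frame shares its last W samples with the start of the next frame, which is what later allows the frames to be cross-faded back together without discontinuities.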
In step S220, the time domain signal corresponding to the framed speech signal is windowed, so as to obtain a time domain signal corresponding to the windowed speech signal.
In this step, the time-domain signal of the speech signal is still being processed. The purpose of windowing is to smooth the less continuous places in the speech signal (the junction of the last point of one frame and the first point of the next), avoiding obvious abrupt changes; that is, windowing smooths the edges of each frame. In windowing, the original integrand is multiplied by a specific window function inside the Fourier integral, and the result has a time-frequency localization effect. Windowing generally acts as a filter whose passband response is not necessarily constant: multiplication by the window in the time domain means the frequency-domain shape of the window function acts as a window that filters out out-of-band components, equivalent to a low-pass filter; if a rectangular window is used, it is equivalent to low-pass filtering that directly removes out-of-band high-frequency components.
In this exemplary embodiment, when windowing the framed time-domain signal of the speech signal, a Hamming window or a rectangular window may be used; the Hamming window is taken as the example here. The main lobe of the window function corresponding to the Hamming window has a shape like sin(x) on the interval 0 to π, and the rest is 0, so that only part of any function multiplied by it retains non-zero values. The Hamming window applies a certain correction to the original sequence of the voice signal, thereby yielding a better speech signal.
The Hamming window can be expressed specifically by formula (1):

w(n) = 0.54 − 0.46·cos(2πn / (N − 1)), n = 0, 1, 2, ..., N − 1 (1)

where n is an integer denoting a point in the timing (the timing variable).

Windowing the time-domain signal of the framed voice signal with the Hamming window of formula (1) yields the windowed time-domain signal shown in formula (2):

x̃_m(n) = x_m(n)·w(n), n = 0, 1, 2, ..., N − 1 (2)
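The Hamming window can be evaluated directly. The sketch below, assuming NumPy, builds the window and applies it to one frame by elementwise multiplication; the frame contents are a stand-in:

```python
import numpy as np

def hamming_window(N):
    # w(n) = 0.54 - 0.46 * cos(2*pi*n / (N - 1)), n = 0 .. N-1
    n = np.arange(N)
    return 0.54 - 0.46 * np.cos(2 * np.pi * n / (N - 1))

N = 512
w = hamming_window(N)
frame = np.ones(N)    # stand-in for one framed signal x_m(n)
windowed = frame * w  # elementwise product of the frame and the window
```

The window tapers each frame toward 0.08 at both edges, which smooths the frame boundaries as described above (a rectangular window would leave the edges at full amplitude).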
through the steps S210 and S220, the preprocessing operations such as framing, windowing and the like are carried out on the collected voice signals, so that the influence on the quality of the voice signals caused by the human sounding organ and the factors such as aliasing, higher harmonic distortion, high frequency and the like brought by equipment for collecting the voice signals can be eliminated; the method ensures that the signals obtained by the subsequent voice processing are more uniform and smoother as far as possible, provides high-quality parameters for signal parameter extraction, and improves the voice processing quality.
In step S230, the time domain signal corresponding to the windowed speech signal is processed according to an interpolation algorithm or an extraction algorithm, so as to obtain the modified speech signal.
In this step, the interpolation algorithm and the decimation (extraction) algorithm are both pitch-shifting algorithms that shift the speech signal by adjusting its number of sampling points or its sampling frequency, and both operate on the time-domain signal of the speech signal. Specifically, an interpolation algorithm inserts zero values (i.e., 0) where interpolation is required to compose a new speech-signal sequence. Interpolation algorithms may include, but are not limited to, linear interpolation and cubic interpolation, and are used here to raise the pitch. A specific procedure may include, for example, zero-padding the speech signal followed by interpolation filtering. A decimation algorithm extracts a subset of points from the speech signal, which in turn constitute a new speech signal; its purpose is to lower the pitch.
The specific process of processing the time-domain signal corresponding to the windowed voice signal according to an interpolation or decimation algorithm to obtain the pitch-shifted voice signal is as follows: the pitch-shifted voice signal is determined according to the sampling frequency of the original voice signal, the sampling frequency of the pitch-shifted voice signal, and the length of each frame of the voice signal.
For example, let the sampling frequency of the speech signal before pitch shifting be f and the sampling frequency after pitch shifting be f0, and write f / f0 = M / L, where M and L are positive integers and M / L is the irreducible fraction. The speech signal after the interpolation processing can be expressed by formula (3):

z_m(n) = x̃_m([n/L]) + ((n mod L) / L)·(x̃_m([n/L] + 1) − x̃_m([n/L])), n = 0, 1, 2, ..., (N − 1)·L (3)

where [·] denotes the rounding (floor) operation and mod denotes the remainder operation.

Further, after the interpolation and decimation processing, the pitch-shifted speech signal shown in formula (4) is obtained:

y_m(n) = z_m(M·n) (4)

where n = 0, 1, 2, ..., [(N − 1)·L / M].

It can be seen that when f > f0, M > L and the pitch of the shifted voice signal rises; when f < f0, M < L and the pitch of the shifted voice signal falls.
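A minimal sketch of the interpolation-then-decimation step, assuming NumPy and using linear interpolation (np.interp) as one possible choice of interpolation algorithm; the helper name resample_frame and the test tone are illustrative assumptions:

```python
import numpy as np

def resample_frame(xw, M, L):
    """Pitch-shift one windowed frame by the factor M / L.

    Linear interpolation by L, then decimation by M (keep every
    M-th sample of the interpolated sequence).
    """
    N = len(xw)
    # interpolated sequence of length (N - 1) * L + 1
    z = np.interp(np.arange((N - 1) * L + 1) / L, np.arange(N), xw)
    return z[::M]  # keep every M-th sample

frame = np.sin(2 * np.pi * 5 * np.arange(512) / 512)
up = resample_frame(frame, M=3, L=2)    # f > f0: frame shortens
down = resample_frame(frame, M=2, L=3)  # f < f0: frame lengthens
```

Note that the shifted frame no longer has length N, which is exactly why the play-time preservation of step S130 is needed afterwards.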
In step S130, play-time preservation is performed on the time-domain signal corresponding to the pitch-shifted speech signal to obtain a target speech signal, whose play time is the same as the play time of the original speech signal.
In this exemplary embodiment, if an interpolation algorithm is used to raise the pitch of the voice signal, the play time of the pitch-shifted voice signal increases; if a decimation algorithm is used to lower the pitch, the play time of the pitch-shifted voice signal decreases. To avoid the influence of the pitch-shifting processing on the play time, play-time preservation may be performed on the pitch-shifted speech signal. Play-time preservation means processing the time-domain signal of the pitch-shifted voice signal so that its play time is the same as that of the original voice signal, avoiding the problem in the related art where shifting the pitch through the sampling rate changes the playback speed and hence the play time of the voice.
Further, fig. 3 shows a flowchart of play-time preservation in this exemplary embodiment. Referring to fig. 3, performing play-time preservation on the time-domain signal corresponding to the pitch-shifted speech signal to obtain the target speech signal includes steps S310 and S320:
Step S310: determine the comparison result between the timing variable and the overlap length between two frames of the framed voice signal. Specifically, the size relationship between the timing variable n (a point on the time axis) and the overlap length W between two framed voice-signal frames is determined. For example, when n = 1, 2, ..., W − 1, the timing variable n is smaller than the overlap length; when n = W, W + 1, ..., N, the timing variable n is greater than or equal to the overlap length.
Step S320: in combination with the comparison result, process the length of each frame of the pitch-shifted voice signal according to the length of each frame of the original voice signal, and determine the target voice signal when the play time of the pitch-shifted voice signal is the same as the play time of the original voice signal. That is, using the size relationship between the timing variable n and the overlap length W, the play time of the pitch-shifted speech signal is restored to the play time of the speech signal before shifting. Since play time corresponds to the length of each frame, frames of equal length have equal play time. On this basis, the pitch-shifted speech signals can be spliced together so that the overall signal length remains consistent. When the length of each frame of the pitch-shifted speech signal equals the length of the original frame, that is, when the play times are the same, the resulting speech signal is determined as the target speech signal.
Specifically, in combination with the comparison result, processing the length of each frame of the modified voice signal according to the length of each frame of the voice signal, and determining the target voice signal when the playing time of the modified voice signal is the same as the playing time of the voice signal includes the following two cases: in case one, if the time sequence variable is smaller than the overlapping length, according to the voice signal of each frame The length of each frame of the modified speech signal, and the overlap length. For example, assume that the length of each frame of the speech signal of the pre-transposition speech signal is N, and the post-transposition signal y m The length of each frame of the voice signal of (N) becomes N/α, and if the playing time of the voice signal is to be kept unchanged, the length of each frame of the voice signal after tone change needs to be still N. If the timing variable is smaller than the overlap length, the target speech signal can be determined according to equation (5) based on the overlap length between two frames, the synthesized displacement (i.e., the difference between the length of the speech signal per frame and the overlap length), and the offset (the start position of the overlap of two frames).
Case two: if the time sequence variable is greater than or equal to the overlap length, the modified voice signal is taken as the target voice signal. Specifically, if the time sequence variable is greater than or equal to the overlap length and does not exceed the frame length N of the voice signal before the tone change, the modified voice signal can be used directly as the final target voice signal after length alignment; the target voice signal can likewise be determined by formula (5).
Here, W is the overlap length between two frames, s is the synthesis displacement with s = N - W, and k_m is the offset. The offset has the following meaning: when playing time restoration synthesis is performed on the modified voice signal, adjacent frames overlap each other, but they cannot simply be overlapped and summed, since doing so would introduce noise. To reduce this effect, the start position where two frames overlap is determined and taken as the offset. Since the offset changes dynamically, noise is minimized when formula (6) is satisfied; the offset is given by formula (6):
where the offset represents the distance between the optimal matching point and the m-th window.
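Purely for illustration, the offset search and overlap splicing described above can be sketched as follows. The sketch uses normalized cross-correlation to locate the optimal matching point and a linear crossfade over the overlap of length W; all names (best_offset, crossfade, prev_tail, next_frame, max_shift) are illustrative and not part of the claimed method.

```python
def best_offset(prev_tail, next_frame, max_shift):
    """Return the shift (0..max_shift) of next_frame whose overlap with
    prev_tail has the highest normalized cross-correlation, i.e. the
    'optimal matching point' that minimizes splicing noise."""
    w = len(prev_tail)                     # overlap length W
    best_k, best_score = 0, float("-inf")
    for k in range(max_shift + 1):
        seg = next_frame[k:k + w]
        if len(seg) < w:                   # not enough samples left to overlap
            break
        num = sum(a * b for a, b in zip(prev_tail, seg))
        den = (sum(a * a for a in prev_tail) * sum(b * b for b in seg)) ** 0.5
        score = num / den if den else 0.0
        if score > best_score:
            best_k, best_score = k, score
    return best_k

def crossfade(prev_tail, seg):
    """Linearly fade prev_tail out and seg in across the overlap region,
    instead of summing the frames directly (which would introduce noise)."""
    w = len(prev_tail)
    return [prev_tail[i] * (1 - i / w) + seg[i] * (i / w) for i in range(w)]
```

In this sketch, a frame shifted by the returned offset lines up with the tail of the previous frame, so the crossfaded overlap joins the two frames without an audible discontinuity.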
In the present exemplary embodiment, the time domain signal corresponding to the voice signal is interpolated or decimated through steps S110 to S130 while the playing time is kept unchanged, so that fast pitch modification of the voice signal is achieved without affecting the playing time. In addition, because the voice signal is interpolated directly in the time domain, no harmonics are introduced that would degrade the tone quality, and the quality of the voice signal is improved. Further, because interpolation and playing time restoration are carried out on the time domain signal, no complex operations such as Fourier transform and inverse transform are needed; this reduces the amount of computation, allows the whole tone-changing process to run directly on the DSP without occupying the CPU, reduces delay, and improves game performance and user experience.
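As a purely illustrative sketch of the interpolation/decimation idea: resampling a frame by a factor alpha scales its pitch by alpha and changes its length from N to roughly N/alpha. Linear interpolation here stands in for whichever interpolation scheme a DSP implementation actually uses; the function name is illustrative.

```python
def resample(frame, alpha):
    """Read the frame at step alpha. alpha > 1 raises pitch and shortens
    the frame; alpha < 1 lowers pitch and lengthens it."""
    out, pos, n = [], 0.0, len(frame)
    while pos < n - 1:
        i = int(pos)
        frac = pos - i
        # linear interpolation between the two neighboring samples
        out.append(frame[i] * (1 - frac) + frame[i + 1] * frac)
        pos += alpha
    return out
```

For example, resampling a 4-sample frame with alpha = 2 yields roughly 2 samples, which is why a subsequent playing time restoration step is needed to bring each frame back to length N.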
In the present exemplary embodiment, there is also provided a voice processing apparatus. Referring to fig. 4, the voice processing apparatus 400 mainly includes: a voice acquisition module 401, a voice tone changing module 402, and a time keeping module 403, wherein:
a voice acquisition module 401, which may be used to receive the voice signal acquired and transmitted by the audio acquisition device;
the voice tone changing module 402 may be configured to perform tone changing processing for adjusting a sampling frequency on a time domain signal corresponding to the voice signal, so as to obtain a tone-changed voice signal;
the time keeping module 403 may be configured to keep playing time of the time domain signal corresponding to the modified speech signal, so as to obtain a target speech signal; the playing time of the voice signal after the tone change is the same as the playing time of the voice signal.
In one exemplary embodiment of the present disclosure, the voice tone changing module includes: a framing module, configured to frame the time domain signal corresponding to the voice signal; a windowing module, configured to window the time domain signal corresponding to the framed voice signal to obtain the time domain signal corresponding to the windowed voice signal; and a tone changing control module, configured to process the time domain signal corresponding to the windowed voice signal according to an interpolation algorithm or a decimation algorithm to obtain the modified voice signal.
In one exemplary embodiment of the present disclosure, a windowing module includes: and the windowing control module is used for carrying out the windowing processing on the time domain signal of the voice signal after framing by adopting a Hamming window.
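The framing and Hamming windowing described above can, purely for illustration, be sketched as follows; the frame length and hop size are illustrative parameters, not values fixed by this disclosure.

```python
import math

def hamming(n):
    """Hamming window of length n: 0.54 - 0.46 * cos(2*pi*i / (n - 1))."""
    return [0.54 - 0.46 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def frame_and_window(signal, frame_len, hop):
    """Split the time domain signal into overlapping frames and multiply
    each frame sample-by-sample with the Hamming window."""
    win = hamming(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frames.append([signal[start + i] * win[i] for i in range(frame_len)])
    return frames
```

Windowing tapers the frame edges toward the window endpoints, which reduces the discontinuities that framing would otherwise introduce at frame boundaries.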
In one exemplary embodiment of the present disclosure, the tone changing control module includes: a voice determining module, configured to determine the modified voice signal according to the sampling frequency of the voice signal, the sampling frequency of the modified voice signal, and the length of each frame of the voice signal.
In an exemplary embodiment of the present disclosure, raising the tone of the voice signal corresponds to an increase in the playing time of the modified voice signal, and lowering the tone corresponds to a decrease in the playing time of the modified voice signal.
In one exemplary embodiment of the present disclosure, the time keeping module includes: the signal comparison module is used for determining a comparison result of the overlapping length between the time sequence variable and two frames of voice signals obtained by framing; and the target voice determining module is used for processing the length of each frame of voice signal after the tone change according to the length of each frame of voice signal in combination with the comparison result, and determining the target voice signal when the playing time of the voice signal after the tone change is the same as the playing time of the voice signal.
In one exemplary embodiment of the present disclosure, the target voice determination module includes: the first determining module is configured to determine the target voice signal according to the length of each frame of the voice signal, the length of each frame of the voice signal after the tone modification, and the overlapping length if the time sequence variable is smaller than the overlapping length; and the second determining module is used for taking the modified voice signal as the target voice signal if the time sequence variable is greater than or equal to the overlapping length.
It should be noted that, the specific details of each module in the above-mentioned voice processing apparatus have been described in detail in the corresponding voice processing method, so that the details are not repeated here.
In addition, a speech processing system is provided, and referring to fig. 5, the speech processing system 50 mainly includes: a digital signal processor 51 and a central processing unit 52, wherein:
The digital signal processor 51 may be configured to change the tone of the voice signal and to maintain the playing time of the modified voice signal, so as to obtain the target voice signal. Referring to fig. 5, the digital signal processor 51 mainly includes the following modules: a tone changing module 511, configured to perform tone changing processing on the time domain signal corresponding to the voice signal; and a playing time keeping module 512, configured to keep the playing time of the modified voice signal the same as the playing time of the voice signal before the tone change. Specifically, the tone changing module 511 mainly includes a framing module 5111 for framing, a windowing module 5112 for windowing, and a tone changing control module 5113 for performing the tone change.
A central processor 52 for running games or applications.
In addition, the speech processing system 50 may also include an audio acquisition device 53 for collecting speech signals and transmitting the collected speech signals to the digital signal processor 51.
As such, the overall process may be as follows: the game runs on the CPU of the mobile phone; when a user enables the tone-changing sound effect for voice chat, the microphone first collects the voice signal and sends it to the DSP; next, the tone changing module performs tone-raising or tone-lowering processing on the time domain signal corresponding to the voice signal; then, because the playing time of the processed voice signal has been lengthened or shortened, the signal is passed to the playing time keeping module so that the playing time before and after the tone change remains unchanged; finally, the voice signal output by the playing time keeping module is sent by the DSP to the game process running on the CPU. In this way, during game chat the tone of the voice signal is changed while its playing time is not, so the voice tone changing effect can be realized quickly and accurately. Because the tone changing and playing time keeping algorithms can run on the DSP, the CPU is not occupied, game performance and user experience are unaffected, and processing efficiency can be improved.
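The flow above, pitch shifting followed by playing time restoration, can be sketched end to end in a heavily simplified form: a linear resampler per frame, a fixed hop, and plain averaging overlap-add in place of the windowed, offset-searched synthesis described earlier. All function names and parameters are illustrative only.

```python
def resample(frame, alpha):
    """Linear-interpolation resampler: alpha > 1 raises pitch and shortens."""
    out, pos, n = [], 0.0, len(frame)
    while pos < n - 1:
        i = int(pos)
        frac = pos - i
        out.append(frame[i] * (1 - frac) + frame[i + 1] * frac)
        pos += alpha
    return out

def pitch_shift_keep_time(signal, alpha, frame_len, hop):
    """Pitch-shift each analysis frame by alpha, then lay the shifted
    frames back down at the original hop so the output has the same
    length (playing time) as the input."""
    out = [0.0] * len(signal)
    norm = [0.0] * len(signal)
    for start in range(0, len(signal) - frame_len + 1, hop):
        shifted = resample(signal[start:start + frame_len], alpha)
        for i, v in enumerate(shifted):
            if start + i < len(out):
                out[start + i] += v
                norm[start + i] += 1.0
    # average where frames overlap; zeros where no frame contributed
    return [o / n if n else 0.0 for o, n in zip(out, norm)]
```

The key property this sketch preserves is the one the embodiment relies on: the output buffer has exactly the input's length, so the playing time is unchanged even though each frame's content was resampled.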
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Furthermore, although the steps of the methods in the present disclosure are depicted in a particular order in the drawings, this does not require or imply that the steps must be performed in that particular order or that all illustrated steps be performed in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
Those skilled in the art will appreciate that the various aspects of the invention may be implemented as a system, method, or program product. Accordingly, aspects of the invention may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be collectively referred to herein as a "circuit," "module," or "system."
An electronic device 600 according to this embodiment of the invention is described below with reference to fig. 6. The electronic device 600 shown in fig. 6 is merely an example, and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 6, the electronic device 600 is in the form of a general purpose computing device. Components of electronic device 600 may include, but are not limited to: the at least one processing unit 610, the at least one memory unit 620, and a bus 630 that connects the various system components, including the memory unit 620 and the processing unit 610.
Wherein the storage unit stores program code that is executable by the processing unit 610 such that the processing unit 610 performs steps according to various exemplary embodiments of the present invention described in the above-described "exemplary methods" section of the present specification. For example, the processing unit 610 may perform the steps as shown in fig. 1.
The storage unit 620 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 6201 and/or cache memory unit 6202, and may further include Read Only Memory (ROM) 6203.
The storage unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 630 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
The display unit 640 may be a display, for example a liquid crystal display or another display, used to present the processing result obtained by the processing unit 610 performing the method in the present exemplary embodiment.
The electronic device 600 may also communicate with one or more external devices 800 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 600, and/or any device (e.g., router, modem, etc.) that enables the electronic device 600 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 650. Also, electronic device 600 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 660. As shown, network adapter 660 communicates with other modules of electronic device 600 over bus 630. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 600, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification is also provided. In some possible embodiments, the various aspects of the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the invention as described in the "exemplary methods" section of this specification, when said program product is run on the terminal device.
Referring to fig. 7, a program product 700 for implementing the above-described method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
Furthermore, the above-described drawings are only schematic illustrations of processes included in the method according to the exemplary embodiment of the present application, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any adaptations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (7)

1. A method of speech processing, comprising:
after the variable sound effect is started, receiving a voice signal acquired and transmitted by audio acquisition equipment;
framing the time domain signal corresponding to the voice signal, windowing the time domain signal corresponding to the framed voice signal, and performing tone changing processing for adjusting the sampling frequency on the time domain signal corresponding to the windowed voice signal according to an interpolation algorithm or a decimation algorithm to obtain a tone-changed voice signal;
splicing the modified voice signals according to the corresponding relation between the playing time and the length of each frame of voice signals so as to keep the lengths of the voice signals consistent, keeping the playing time of the time domain signals corresponding to the modified voice signals, and obtaining target voice signals when the playing time of the modified voice signals is the same as the playing time of the voice signals;
the step of maintaining the playing time of the time domain signal corresponding to the modified voice signal, and the step of obtaining the target voice signal when the playing time of the modified voice signal is the same as the playing time of the voice signal comprises the following steps:
determining a comparison result of the overlapping length between the time sequence variable and two frames of voice signals obtained by framing the time domain signals corresponding to the voice signals;
if the time sequence variable is smaller than the overlapping length, when the offset makes noise generated by playing time reduction synthesis of the modified voice signal minimum, determining the target voice signal according to the offset, the difference between the length of each frame of voice signal and the overlapping length between the two frames of voice signals after the modification; the offset represents the distance between the optimal matching point and the mth window;
and if the time sequence variable is greater than or equal to the overlapping length and does not exceed the length of each frame of voice signal before tone modification, after the length alignment is carried out, taking the voice signal after tone modification as the target voice signal.
2. The method of claim 1, wherein windowing the framed time domain signal comprises:
and carrying out windowing processing on the time domain signal of the voice signal after framing by adopting a Hamming window.
3. The method according to claim 1, wherein processing the time domain signal corresponding to the windowed speech signal according to an interpolation algorithm or an extraction algorithm to obtain the modified speech signal comprises:
determining the voice signal after the tone change according to the sampling frequency of the voice signal, the sampling frequency of the voice signal after the tone change, and the length of each frame of voice signal.
4. The method of claim 1, wherein the rising tone of the speech signal corresponds to an increase in the playing time of the modified speech signal and the falling tone of the speech signal corresponds to a decrease in the playing time of the modified speech signal.
5. A speech processing apparatus, comprising:
the voice acquisition module is used for receiving the voice signal acquired and sent by the audio acquisition equipment after the variable sound effect is started;
the voice tone changing module is used for framing the time domain signals corresponding to the voice signals, windowing the time domain signals corresponding to the framed voice signals, and carrying out tone changing processing for adjusting the sampling frequency on the time domain signals corresponding to the windowed voice signals according to an interpolation algorithm or a decimation algorithm to obtain tone-changed voice signals;
the time keeping module is used for splicing the voice signals after the tone change according to the corresponding relation between the playing time and the length of each frame of voice signals so as to keep the lengths of the voice signals consistent, and keeping the playing time of the time domain signals corresponding to the voice signals after the tone change, and obtaining a target voice signal when the playing time of the voice signals after the tone change is the same as the playing time of the voice signals; the playing time of the tone-changed voice signal is the same as that of the voice signal;
the step of maintaining the playing time of the time domain signal corresponding to the modified voice signal, and the step of obtaining the target voice signal when the playing time of the modified voice signal is the same as the playing time of the voice signal, comprise the following steps:
determining a comparison result of the overlapping length between the time sequence variable and two frames of voice signals obtained by framing the time domain signals corresponding to the voice signals;
if the time sequence variable is smaller than the overlapping length, when the offset makes noise generated by playing time reduction synthesis of the modified voice signal minimum, determining the target voice signal according to the offset, the difference between the length of each frame of voice signal and the overlapping length between the two frames of voice signals after the modification; the offset represents the distance between the optimal matching point and the mth window;
and if the time sequence variable is greater than or equal to the overlapping length and does not exceed the length of each frame of voice signal before tone modification, after the length alignment is carried out, taking the voice signal after tone modification as the target voice signal.
6. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the speech processing method of any of claims 1-4 via execution of the executable instructions.
7. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the speech processing method of any of claims 1-4.
CN201910227101.5A 2019-03-25 2019-03-25 Voice processing method, device, electronic equipment and storage medium Active CN111739544B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910227101.5A CN111739544B (en) 2019-03-25 2019-03-25 Voice processing method, device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111739544A CN111739544A (en) 2020-10-02
CN111739544B true CN111739544B (en) 2023-10-20

Family

ID=72646293


Country Status (1)

Country Link
CN (1) CN111739544B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112420062B (en) * 2020-11-18 2024-07-19 腾讯音乐娱乐科技(深圳)有限公司 Audio signal processing method and equipment
CN113593540B (en) * 2021-07-28 2023-08-11 展讯半导体(成都)有限公司 Voice processing method, device and equipment
CN114121029A (en) * 2021-12-23 2022-03-01 北京达佳互联信息技术有限公司 Training method and device of speech enhancement model and speech enhancement method and device
CN114449339B (en) * 2022-02-16 2024-04-12 深圳万兴软件有限公司 Background sound effect conversion method and device, computer equipment and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08202389A (en) * 1995-01-31 1996-08-09 Matsushita Electric Ind Co Ltd Sound quality converting method and its device
TW594672B (en) * 2003-01-14 2004-06-21 Sounding Technology Inc Method for changing voice tone
KR101333162B1 (en) * 2012-10-04 2013-11-27 부산대학교 산학협력단 Tone and speed contorol system and method of audio signal using imdct input
CN103440862A (en) * 2013-08-16 2013-12-11 北京奇艺世纪科技有限公司 Method, device and equipment for synthesizing voice and music
CN104575508A (en) * 2013-10-15 2015-04-29 京微雅格(北京)科技有限公司 Processing method and device for audio signal modulation
CN105304092A (en) * 2015-09-18 2016-02-03 深圳市海派通讯科技有限公司 Real-time voice changing method based on intelligent terminal
CN106228973A (en) * 2016-07-21 2016-12-14 福州大学 Stablize the music voice modified tone method of tone color
CN108269579A (en) * 2018-01-18 2018-07-10 厦门美图之家科技有限公司 Voice data processing method, device, electronic equipment and readable storage medium storing program for executing
CN108281150A (en) * 2018-01-29 2018-07-13 上海泰亿格康复医疗科技股份有限公司 A kind of breaking of voice change of voice method based on derivative glottal flow model
CN108492832A (en) * 2018-03-21 2018-09-04 北京理工大学 High quality sound transform method based on wavelet transformation
CN109147818A (en) * 2018-10-30 2019-01-04 Oppo广东移动通信有限公司 Acoustic feature extracting method, device, storage medium and terminal device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4470823B2 (en) * 2005-07-04 2010-06-02 ヤマハ株式会社 Pitch name detector and program
KR101334366B1 (en) * 2006-12-28 2013-11-29 삼성전자주식회사 Method and apparatus for varying audio playback speed


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"A SOLA-based digital audio pitch-shifting algorithm and its implementation on the TMS320C6713"; Cai Jie, Ye Wu, Feng Suili; Application of Electronic Technique; 2006-12-06 (No. 12); pp. 28-30, figs. 1-2 *
"An improved time-domain audio pitch-shifting method and its software implementation"; Cai Jie; Audio Engineering; 2006-09-17 (No. 09); full text *
"Research on an effective speech pitch-shifting algorithm"; Mei Tiemin et al.; Journal of Shenyang Ligong University; 2016-08-15 (No. 04); full text *
"Analysis of speech pitch-shifting methods and sound-effect evaluation"; Zhang Xiaorui et al.; Journal of Shandong University (Engineering Science); 2011-02-16 (No. 01); full text *


Similar Documents

Publication Publication Date Title
CN111739544B (en) Voice processing method, device, electronic equipment and storage medium
CN107068161B (en) Speech noise reduction method and device based on artificial intelligence and computer equipment
US8484020B2 (en) Determining an upperband signal from a narrowband signal
CN101505443B Virtual super bass enhancement method and system
EP2001013A2 (en) Audio time scale modification algorithm for dynamic playback speed control
EP3564954A1 (en) Improved subband block based harmonic transposition
JPH07248794A (en) Method for processing voice signal
CN111383646B (en) Voice signal transformation method, device, equipment and storage medium
JP6073456B2 (en) Speech enhancement device
CN110070884B (en) Audio starting point detection method and device
CN104134444B MMSE-based method and apparatus for removing accompaniment from songs
EP4189677B1 (en) Noise reduction using machine learning
CN105144290A (en) Signal processing device, signal processing method, and signal processing program
CN110070885B (en) Audio starting point detection method and device
JP2009223210A (en) Signal band spreading device and signal band spreading method
CN113035223B (en) Audio processing method, device, equipment and storage medium
CN113496706B (en) Audio processing method, device, electronic equipment and storage medium
CN110085214B (en) Audio starting point detection method and device
WO2023224550A1 (en) Method and system for real-time and low latency synthesis of audio using neural networks and differentiable digital signal processors
CN111326166B (en) Voice processing method and device, computer readable storage medium and electronic equipment
JP2002175099A (en) Method and device for noise suppression
CN109378012B (en) Noise reduction method and system for recording audio by single-channel voice equipment
CN111724808A (en) Audio signal processing method, device, terminal and storage medium
CN115206345B (en) Music and human voice separation method, device, equipment and medium based on time-frequency combination
CN114678036B (en) Speech enhancement method, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant