WO2015126228A1 - Signal classification method and apparatus, and audio encoding method and apparatus using the same - Google Patents
Signal classification method and apparatus, and audio encoding method and apparatus using the same
- Publication number
- WO2015126228A1 (PCT/KR2015/001783)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- signal
- current frame
- classification result
- music
- classification
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/78—Detection of presence or absence of voice signals
- G10L25/81—Detection of presence or absence of voice signals for discriminating voice from music
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/005—Correction of errors induced by the transmission channel, if related to the coding algorithm
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/0212—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders using orthogonal transformation
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/022—Blocking, i.e. grouping of samples in time; Choice of analysis windows; Overlap factoring
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/08—Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
- G10L19/12—Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters the excitation function being a code excitation, e.g. in code excited linear prediction [CELP] vocoders
- G10L19/125—Pitch excitation, e.g. pitch synchronous innovation CELP [PSI-CELP]
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/16—Vocoder architecture
- G10L19/18—Vocoders using multiple modes
- G10L19/20—Vocoders using multiple modes using sound class specific coding, hybrid encoders or object based coding
Definitions
- the present invention relates to audio encoding, and more particularly, to a signal classification method and apparatus for improving reconstructed sound quality and reducing delay due to switching of encoding mode, and an audio encoding method and apparatus using the same.
- An object of the present invention is to provide a signal classification method and apparatus capable of improving a reconstructed sound quality by determining an encoding mode suitable for a characteristic of an audio signal, and an audio encoding method and apparatus using the same.
- An object of the present invention is to provide a signal classification method and apparatus for reducing a delay due to coding mode switching while determining an encoding mode to suit the characteristics of an audio signal, and an audio encoding method and apparatus using the same.
- the signal classification method may include classifying a current frame into one of a voice signal and a music signal, determining whether an error exists in the classification result of the current frame based on feature parameters obtained from a plurality of frames, and, in response to the determination result, modifying the classification result of the current frame.
- the apparatus for classifying a signal may include at least one processor configured to classify the current frame into one of a voice signal and a music signal, determine whether an error exists in the classification result of the current frame based on a feature parameter obtained from a plurality of frames, and, in response to the determination result, modify the classification result of the current frame.
- an audio encoding method includes classifying a current frame into one of a voice signal and a music signal, determining whether an error exists in the classification result of the current frame based on feature parameters obtained from a plurality of frames, modifying the classification result of the current frame in response to the determination result, and encoding the current frame based on the classification result or the modified classification result of the current frame.
- the audio encoding apparatus may include at least one processor configured to classify a current frame into one of a voice signal and a music signal, determine whether an error exists in the classification result of the current frame based on a feature parameter obtained from a plurality of frames, modify the classification result of the current frame in response to the determination result, and encode the current frame based on the classification result or the modified classification result of the current frame.
- FIG. 1 is a block diagram showing the configuration of an audio signal classification apparatus according to an embodiment.
- FIG. 2 is a block diagram showing a configuration of an audio signal classification apparatus according to another embodiment.
- FIG. 3 is a block diagram illustrating a configuration of an audio encoding apparatus according to an embodiment.
- FIG. 4 is a flowchart illustrating a signal classification modification method in a CELP core according to an embodiment.
- FIG. 5 is a flowchart illustrating a signal classification modification method in an HQ core according to an embodiment.
- FIG. 6 illustrates a state machine for context-based signal classification modification in a CELP core according to an embodiment.
- FIG. 7 illustrates a state machine for context-based signal classification modification in an HQ core according to an embodiment.
- FIG. 8 is a block diagram illustrating a configuration of an encoding mode determining apparatus according to an embodiment.
- FIG. 9 is a flowchart illustrating an audio signal classification method, according to an exemplary embodiment.
- FIG. 10 is a block diagram illustrating a configuration of a multimedia apparatus according to an embodiment.
- FIG. 11 is a block diagram illustrating a configuration of a multimedia apparatus according to another embodiment.
- terms such as first and second may be used to describe various components, but the components should not be limited by these terms; the terms are used only to distinguish one component from another.
- the components shown in the embodiments are depicted independently to represent different characteristic functions, which does not mean that each component consists of separate hardware or a single software unit.
- each component is listed separately for convenience of description; at least two of the components may be combined into one component, or one component may be divided into a plurality of components, each performing part of the function.
- FIG. 1 is a block diagram showing the configuration of an audio signal classification apparatus according to an embodiment.
- the audio signal classification apparatus 100 illustrated in FIG. 1 may include a signal classification unit 110 and a correction unit 130.
- each component may be integrated into at least one module and implemented as at least one processor (not shown), except where implementation as separate hardware is necessary.
- the audio signal may mean a music signal or a voice signal, or a mixed signal of music and voice.
- the signal classifier 110 may classify whether an audio signal corresponds to a music signal or a voice signal based on various initial classification parameters.
- the audio signal classification process may include at least one step.
- the audio signal may be classified into a voice signal or a music signal based on signal characteristics of a current frame and a plurality of previous frames.
- the signal characteristic may include at least one of a short-term characteristic and a long-term characteristic.
- the signal characteristic may include at least one of a time domain characteristic and a frequency domain characteristic.
- CELP Code Excited Linear Prediction
- the music signal may be encoded using a transform coder.
- an example of a transform coder may be a Modified Discrete Cosine Transform (MDCT) coder, but is not limited thereto.
- MDCT Modified Discrete Cosine Transform
- the audio signal classification process may include a first step of classifying the audio signal into a voice signal or a general audio signal, that is, a music signal, according to whether the audio signal has a voice characteristic, and a second step of determining whether the general audio signal is suitable for a generic signal audio coder (GSC).
- GSC generic signal audio coder
- the classification result of the first step and the classification result of the second step may be combined to determine whether the audio signal can be classified as a voice signal or a music signal.
- a voice signal it may be encoded by a CELP type coder.
- a CELP type coder may include a plurality of modes, such as an Unvoiced Coding (UC) mode, a Voiced Coding (VC) mode, a Transient Coding (TC) mode, and a Generic Coding (GC) mode, depending on the bit rate or signal characteristics. Meanwhile, the GSC mode may be implemented as a separate coder or included as one mode of a CELP type coder. When a signal is classified as a music signal, it can be encoded using either a transform coder or a CELP/transform hybrid coder. In detail, the transform coder may be applied to a music signal, and the CELP/transform hybrid coder may be applied to a non-music signal that is not a voice signal, or to a mixed signal of music and voice.
- UC Unvoiced Coding
- VC Voiced Coding
- TC Transient Coding
- GC Generic Coding
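- As a minimal sketch of the two-step coder selection described above (a hypothetical helper, not the patent's implementation; the function name and mode labels are illustrative assumptions):

```python
def select_coder(has_voice_characteristic: bool, suitable_for_gsc: bool) -> str:
    """Combine the two classification steps into a coder choice.

    Illustrative only: step 1 separates voice from general audio (music),
    and step 2 decides whether a general audio signal suits the GSC
    (CELP/transform hybrid coder).
    """
    if has_voice_characteristic:
        return "CELP"                   # voice -> CELP type coder
    if suitable_for_gsc:
        return "CELP/transform hybrid"  # non-voice signal suited to GSC
    return "transform"                  # music -> transform (e.g. MDCT) coder
```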
- depending on the bandwidth, either the CELP type coder, the CELP/transform hybrid coder, and the transform coder may all be used, or only the CELP type coder and the transform coder may be used.
- in one example, CELP type coders and transform coders are used, while for wideband (WB), ultra-wideband (SWB), and full-band (FB), CELP type coders, CELP/transform hybrid coders, and transform coders can be used.
- WB wideband
- SWB ultra-wideband
- FB full-band
- the CELP/transform hybrid coder is a combination of an LP-based coder operating in the time domain and a transform domain coder, and is also known as a generic signal audio coder (GSC).
- GSC generic signal audio coder
- the signal classification of the first step may be based on a Gaussian Mixture Model (GMM).
- GMM Gaussian Mixture Model
- various signal characteristics may be used for the GMM. Examples include open-loop pitch, normalized correlation, spectral envelope, tonal stability, signal non-stationarity, LP residual error, spectral difference, and spectral stationarity, but the present invention is not limited thereto.
- examples of signal characteristics used for the second-step signal classification include spectral energy fluctuation, tilt of the LP analysis residual energy, high-band spectral peakiness, correlation, voicing, and tonality, but the present invention is not limited thereto.
- the features used in the first step determine whether the signal is suitable for coding by a CELP type coder, and the features used in the second step determine whether it is suitable for encoding by the GSC.
- a frame classified as a music signal in the first step may be converted to a voice signal in the second step and encoded in one of the CELP modes. That is, a signal having a large pitch period, high stability, and high correlation, or an attack signal, may be converted from a music signal to a voice signal in the second step.
- the encoding mode may be changed according to the signal classification result.
- the correction unit 130 may correct or maintain the classification result of the signal classification unit 110 based on at least one correction parameter.
- the correction unit 130 may modify or maintain the classification result of the signal classification unit 110 based on the context. For example, when the current frame is classified as a voice signal, it may be modified as a music signal or maintained as a voice signal. When the current frame is classified as a music signal, it may be modified as a voice signal or maintained as a music signal.
- the characteristics of a plurality of frames including the current frame may be used to determine whether an error exists in the classification result of the current frame. For example, eight frames may be used, but the present invention is not limited thereto.
- the tonality may include the tonality of the 1-2 kHz region (ton2) and the tonality of the 2-4 kHz region (ton3), each of which may be defined by Equations 1 and 2 below, respectively.
- tonality2[-1] represents the tonality in the 1-2 kHz region of the frame immediately before the current frame.
- lt_tonality may indicate long-term tonality of the entire band.
- linear prediction error LP err may be defined by Equation 3 below.
- sfa i and sfb i may vary according to the type and bandwidth of the feature parameter, and are used to approximate each feature parameter to the range [0; 1].
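- The approximation into [0, 1] described above can be sketched as follows; the affine form `sfa*x + sfb` with clipping is an assumption for illustration, and the actual factors depend on the feature type and bandwidth:

```python
def scale_feature(raw: float, sfa: float, sfb: float) -> float:
    """Approximate a raw feature parameter to the range [0, 1].

    sfa and sfb play the role of the per-feature scale factors sfa_i and
    sfb_i mentioned in the text; the affine scaling followed by clamping
    is an illustrative assumption, not the patent's exact formula.
    """
    scaled = sfa * raw + sfb           # affine scaling
    return min(max(scaled, 0.0), 1.0)  # clamp into [0, 1]
```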
- E (1) represents the energy of the first LP coefficient
- E (13) represents the energy of the 13th LP coefficient.
- M cor represents a correlation map of a frame.
- condition 1 and condition 2 may indicate conditions for changing the voice state SPEECH_STATE
- condition 3 and condition 4 may refer to conditions for changing the music state MUSIC_STATE.
- condition 1 may change the voice state SPEECH_STATE from 0 to 1
- condition 2 may change the voice state SPEECH_STATE from 1 to 0.
- condition 3 may change the music state MUSIC_STATE from 0 to 1
- condition 4 may change the music state MUSIC_STATE from 1 to 0.
- if the speech state SPEECH_STATE is 1, the probability that the signal is speech is high, that is, CELP type coding is appropriate; if it is 0, the probability that the signal is speech is not high.
- a music state (MUSIC_STATE) of 1 means that the signal is suitable for transform coding, and 0 means that it is suitable for CELP/transform hybrid coding, that is, the GSC.
- alternatively, a music state MUSIC_STATE of 1 may mean that transform coding is suitable, and 0 may mean that CELP type coding is suitable.
- Condition 1 (f_A) may be defined as follows, for example. That is, if d_vcor > 0.4 AND d_ft < 0.1 AND FV_s(1) > (2*FV_s(7) + 0.12) AND ton2 < d_vcor AND ton3 < d_vcor AND ton_LT < d_vcor AND FV_s(7) < d_vcor AND FV_s(1) > d_vcor AND FV_s(1) > 0.76, then f_A may be set to 1.
- Condition 2 (f_B) may be defined as follows, for example. That is, if d_vcor < 0.4, then f_B may be set to 1.
- Condition 3 (f_C) may be defined as follows, for example. That is, if 0.26 < ton2 < 0.54 AND ton3 > 0.22 AND 0.26 < ton_LT < 0.54 AND LP_err > 0.5, then f_C may be set to 1.
- Condition 4 (f_D) may be defined as follows, for example. That is, if ton2 < 0.34 AND ton3 < 0.26 AND 0.26 < ton_LT < 0.45, then f_D may be set to 1.
- each constant value is merely exemplary and may be set to an optimal value according to an implementation method.
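- Using the exemplary constants above, conditions 1-4 can be transcribed directly as boolean flags. This is a sketch: the argument names (e.g. `fv1` for FV_s(1), `fv7` for FV_s(7)) are illustrative, and the thresholds are the exemplary values from the text, not tuned ones.

```python
def correction_flags(d_vcor, d_ft, fv1, fv7, ton2, ton3, ton_lt, lp_err):
    """Evaluate conditions 1-4 (f_A..f_D) as booleans."""
    # Condition 1: strong voicing evidence in the scaled features
    f_a = (d_vcor > 0.4 and d_ft < 0.1
           and fv1 > 2 * fv7 + 0.12
           and max(ton2, ton3, ton_lt, fv7) < d_vcor
           and fv1 > d_vcor and fv1 > 0.76)
    # Condition 2: weak voicing difference
    f_b = d_vcor < 0.4
    # Condition 3: tonality and LP error ranges suggesting transform coding
    f_c = 0.26 < ton2 < 0.54 and ton3 > 0.22 and 0.26 < ton_lt < 0.54 and lp_err > 0.5
    # Condition 4: low tonality ranges
    f_d = ton2 < 0.34 and ton3 < 0.26 and 0.26 < ton_lt < 0.45
    return f_a, f_b, f_c, f_d
```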
- the correction unit 130 may correct an error existing in the initial classification result by using two independent state machines, for example, a voice state machine and a music state machine.
- Each state machine has two states, and hangovers are used in each state to prevent frequent transitions.
- the hangover may consist of six frames, for example. Let the hangover variable be hang_sp in the voice state machine and hang_mus in the music state machine; each decreases by 1 for every frame in which no state change occurs. A state change can only occur once the hangover has been reduced to zero.
- Each state machine may use a correction parameter generated by combining at least one feature extracted from the audio signal.
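- The two-state machine with hangover described above can be sketched as follows. This is an illustrative model, not the patent's code; the 6-frame hangover matches the example in the text, and the `set_flag`/`clear_flag` inputs stand in for the condition pairs (conditions 1/2 for the voice machine, conditions 3/4 for the music machine).

```python
class StateMachine:
    """Two-state machine whose transitions are gated by a hangover counter."""

    HANGOVER_FRAMES = 6  # exemplary hangover length from the text

    def __init__(self):
        self.state = 0
        self.hangover = 0

    def update(self, set_flag: bool, clear_flag: bool) -> int:
        """Process one frame; return the (possibly updated) state."""
        if self.hangover == 0:
            # transitions are only allowed once the hangover has expired;
            # every transition re-arms the hangover
            if self.state == 0 and set_flag:
                self.state, self.hangover = 1, self.HANGOVER_FRAMES
            elif self.state == 1 and clear_flag:
                self.state, self.hangover = 0, self.HANGOVER_FRAMES
        else:
            self.hangover -= 1  # no transition while the hangover is active
        return self.state
```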
- FIG. 2 is a block diagram showing a configuration of an audio signal classification apparatus according to another embodiment.
- the audio signal classification apparatus 200 illustrated in FIG. 2 may include a signal classification unit 210, a correction unit 230, and a fine classifier 250.
- it differs from the audio signal classification apparatus 100 of FIG. 1 in that it further includes the fine classifier 250; since the functions of the signal classifier 210 and the correction unit 230 are the same as in FIG. 1, a detailed description thereof is omitted.
- the fine classifier 250 may further classify, based on a fine classification parameter, the classification result modified or maintained by the correction unit 230.
- the fine classifier 250 determines whether an audio signal classified as a music signal is suitable to be encoded by the CELP/transform hybrid coder, that is, the GSC, and corrects the result accordingly. As a correction method, a specific parameter or flag may be changed so that the transform coder is not selected. If the classification result output from the correction unit 230 is a music signal, the fine classifier 250 may perform fine classification to again determine whether the signal is a music signal or a voice signal.
- when the classification result of the fine classifier 250 is a music signal, the signal may be encoded using the transform coder as it is; when the classification result of the fine classifier 250 is a voice signal, the signal may be encoded using the CELP/transform hybrid coder as the second encoding mode. On the other hand, when the classification result output from the correction unit 230 is a voice signal, the signal may be encoded using a CELP type coder as the first encoding mode.
- examples of the fine classification parameter may include features such as tonality, voicing, correlation, pitch gain, and pitch difference, but are not limited thereto.
- FIG. 3 is a block diagram illustrating a configuration of an audio encoding apparatus according to an embodiment.
- the audio encoding apparatus 300 illustrated in FIG. 3 may include an encoding mode determiner 310 and an encoding module 330.
- the encoding mode determiner 310 may include components of the audio signal classification apparatus 100 of FIG. 1 or the audio signal classification apparatus 200 of FIG. 2.
- the encoding module 330 may include first to third encoding units 331, 333, 335.
- the first encoder 331 may correspond to a CELP type coder
- the second encoder 333 may correspond to a CELP / transform hybrid coder
- the third encoder 335 may correspond to a transform coder.
- the encoding module 330 may include first and third encoders 331 and 335.
- the encoding module 330 and the first encoder 331 may have various configurations according to the bit rate or the bandwidth.
- the encoding mode determiner 310 may classify whether an audio signal is a music signal or a voice signal based on signal characteristics, and determine an encoding mode according to the classification result.
- the determination of the encoding mode may be performed in superframe units, frame units, or band units.
- alternatively, the determination of the encoding mode may be performed in units of a plurality of superframe groups, a plurality of frame groups, or a plurality of band groups.
- two examples of the encoding mode may be a transform domain mode and a linear prediction domain mode, but are not limited thereto.
- the linear prediction domain mode may include UC, VC, TC, and GC modes.
- the GSC mode may be classified as a separate encoding mode or may be included as a detailed mode of the linear prediction domain mode.
- the encoding mode may be further subdivided, and the encoding scheme may be further subdivided according to the encoding mode.
- the encoding mode determiner 310 may classify the audio signal into one of a music signal and a voice signal based on an initial classification parameter.
- based on the correction parameter, the encoding mode determiner 310 may modify a classification result of music signal to a voice signal or maintain it, or modify a classification result of voice signal to a music signal or maintain it.
- the encoding mode determiner 310 may classify the modified or maintained classification result into, for example, one of a music signal and a voice signal based on the detailed classification parameter.
- the encoding mode determiner 310 may determine encoding modes using the final classification result.
- the encoding mode determiner 310 may determine an encoding mode based on at least one of a bit rate and a bandwidth.
- the first encoding unit 331 may operate when the classification result of the correction unit 130 or 230 corresponds to a voice signal.
- the second encoder 333 may operate when the classification result of the correction unit 130 corresponds to a music signal or the classification result of the fine classifier 250 corresponds to a voice signal.
- the third encoder 335 may operate when the classification result of the correction unit 130 corresponds to a music signal or the classification result of the fine classifier 250 corresponds to a music signal.
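- The routing of classification results to the three encoding units in FIG. 3 can be sketched as follows (the function name, string labels, and the two-argument interface are hypothetical; the correction unit's result selects the CELP type coder directly for voice, while for music the fine classification chooses between the hybrid and transform coders):

```python
def route_to_encoder(corrected_class, fine_class=None):
    """Pick one of the three encoding units from the classification results.

    corrected_class: "voice" or "music", the correction unit's output.
    fine_class: the fine classifier's second-pass result for music signals.
    """
    if corrected_class == "voice":
        return "encoder_331_celp"        # first encoder: CELP type coder
    if fine_class == "voice":
        return "encoder_333_hybrid"      # second encoder: CELP/transform hybrid
    return "encoder_335_transform"       # third encoder: transform coder
```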
- FIG. 4 is a flowchart illustrating a signal classification correction method in a CELP core according to an embodiment, which may be performed by the correction units 130 and 230 of FIG. 1 or 2.
- correction parameters such as condition 1 and condition 2 may be received.
- the hangover information of the voice state machine may be received.
- an initial classification result may be received.
- the initial classification result may be provided from the signal classification unit 110 or 210 of FIG. 1 or 2.
- in step 420, it may be determined whether the voice state is 0, condition 1 (f_A) is 1, and the hangover hang_sp of the voice state machine is 0.
- if it is determined in step 420 that the voice state is 0, condition 1 is 1, and the hangover hang_sp of the voice state machine is 0, then in step 430 the voice state may be changed to 1 and the hangover hang_sp may be initialized to 6.
- the initialized hangover value may be provided to step 460.
- if, in step 420, the voice state is not 0, condition 1 is not 1, or the hangover hang_sp of the voice state machine is not 0, the process may proceed to step 440.
- in step 440, it may be determined whether the voice state is 1, condition 2 (f_B) is 1, and the hangover hang_sp of the voice state machine is 0.
- if it is determined in step 440 that the voice state is 1, condition 2 is 1, and the hangover hang_sp of the voice state machine is 0, then in step 450 the voice state may be changed to 0 and the hangover hang_sp may be initialized to 6.
- the initialized hangover value may be provided to step 460.
- otherwise, the process may proceed to step 460, where the hangover is decreased by 1.
- FIG. 5 is a flowchart illustrating a signal classification correction method in an HQ core according to an embodiment, which may be performed by the correction units 130 and 230 of FIG. 1 or 2.
- correction parameters such as condition 3 and condition 4 may be received.
- the hangover information of the music state machine may be received.
- an initial classification result may be received.
- the initial classification result may be provided from the signal classification unit 110 or 210 of FIG. 1 or 2.
- in step 520, it may be determined whether the music state is 1, condition 3 (f_C) is 1, and the hangover hang_mus of the music state machine is 0. If it is determined in step 520 that the music state is 1, condition 3 is 1, and the hangover hang_mus of the music state machine is 0, then in step 530 the music state may be changed to 0 and the hangover hang_mus may be initialized to 6. The initialized hangover value may be provided to step 560. In contrast, when the music state is not 1, condition 3 is not 1, or the hangover hang_mus of the music state machine is not 0 in step 520, the process may proceed to step 540.
- in step 540, it may be determined whether the music state is 0, condition 4 (f_D) is 1, and the hangover hang_mus of the music state machine is 0. If it is determined in step 540 that the music state is 0, condition 4 is 1, and the hangover hang_mus of the music state machine is 0, then in step 550 the music state may be changed to 1 and the hangover hang_mus may be initialized to 6. The initialized hangover value may be provided to step 560. On the other hand, if the music state is not 0, condition 4 is not 1, or the hangover hang_mus of the music state machine is not 0 in step 540, the process may proceed to step 560, where the hangover is decreased by 1.
- FIG. 6 illustrates a state machine for context-based signal classification modification in a state suitable for a CELP core, that is, a voice state, according to an embodiment, and may correspond to FIG. 4.
- a correction for the classification result may be applied according to the music state determined at the music state machine and the voice state determined at the voice state machine.
- the initial classification result when the initial classification result is set to the music signal, it may be changed to the voice signal based on the correction parameter.
- when the classification result of the first step among the initial classification results is a music signal and the voice state becomes 1, both the classification result of the first step and the classification result of the second step may be changed to the voice signal. In this case, it is determined that an error exists in the initial classification result, so the classification result can be corrected.
- FIG. 7 illustrates a state machine for modifying a context-based signal classification in a state suitable for a high quality (HQ) core, that is, a music state, according to an embodiment, and may correspond to FIG. 5.
- HQ high quality
- a correction for the classification result may be applied according to the music state determined by the music state machine and the voice state determined by the voice state machine.
- the initial classification result when the initial classification result is set to the voice signal, it may be changed to the music signal based on the correction parameter.
- when the classification result of the first step among the initial classification results is a voice signal and the music state becomes 1, both the classification result of the first step and the classification result of the second step may be changed to the music signal.
- when the initial classification result is set to the music signal, it can be changed to the voice signal based on the correction parameter. In this case, it is determined that an error exists in the initial classification result, so the classification result can be corrected.
- FIG. 8 is a block diagram illustrating a configuration of an encoding mode determining apparatus according to an embodiment.
- the encoding mode determiner illustrated in FIG. 8 may include an initial encoding mode determiner 810 and a correction unit 830.
- the initial encoding mode determiner 810 may determine whether an audio signal has a voice characteristic, and may determine the first encoding mode as the initial encoding mode when the audio signal has the voice characteristic.
- the audio signal may be encoded by a CELP type coder.
- the initial encoding mode determiner 810 may determine the second encoding mode as the initial encoding mode when the audio signal does not have speech characteristics.
- the audio signal may be encoded by a transform coder.
- the initial encoding mode determiner 810 may determine one of the second encoding mode and the third encoding mode as the initial encoding mode according to the bit rate.
- the audio signal may be encoded by a CELP / transform hybrid coder.
- the initial encoding mode determiner 810 may use a three-way method.
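The three-way initial mode decision can be sketched as below. The bit-rate threshold and the mode labels are illustrative assumptions; the source only states that the choice between the second and third encoding modes depends on the bit rate, without giving a direction or value.

```python
# Illustrative sketch of the three-way initial encoding mode decision.
# Mode labels and the bit-rate threshold are assumed for this example.
CELP_MODE = "CELP"                      # first encoding mode (speech)
TRANSFORM_MODE = "transform"            # second encoding mode (non-speech)
HYBRID_MODE = "CELP/transform hybrid"   # third encoding mode (non-speech)

HYBRID_BITRATE_THRESHOLD = 24_000  # bits/s; assumed value for illustration

def initial_encoding_mode(has_speech_characteristic, bit_rate):
    if has_speech_characteristic:
        # Speech-like audio goes to the CELP-type coder.
        return CELP_MODE
    # Non-speech audio: choose between the transform coder and the
    # CELP/transform hybrid coder according to the bit rate.
    if bit_rate >= HYBRID_BITRATE_THRESHOLD:
        return TRANSFORM_MODE
    return HYBRID_MODE
```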
- when the initial encoding mode is determined as the first encoding mode, the correction unit 830 may change it to the second encoding mode based on the correction parameter. For example, if the initial classification result is a voice signal but the frame has a music characteristic, the initial classification result may be modified to a music signal. Meanwhile, when the initial encoding mode is determined as the second encoding mode, the correction unit 830 may change it to the first encoding mode or the third encoding mode based on the correction parameter. For example, if the initial classification result is a music signal but the frame has a voice characteristic, the initial classification result may be modified to a voice signal.
- FIG. 9 is a flowchart illustrating an audio signal classification method, according to an exemplary embodiment.
- an audio signal may be classified into one of a music signal and a voice signal.
- the current frame may be classified as a music signal or a voice signal based on signal characteristics. Operation 910 may be performed by the signal classification units 110 and 210 of FIG. 1 or 2.
- in operation 930, it may be determined whether an error exists in the classification result of operation 910, based on the correction parameter.
- in operation 950, when it is determined in operation 930 that an error exists in the classification result, the classification result may be corrected.
- in operation 970, when it is determined in operation 930 that no error exists in the classification result, the classification result may be maintained as it is. Operations 930 to 970 may be performed by the correction units 130 and 230 of FIG. 1 or 2.
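The flow of operations 910 through 970 can be summarized in a short sketch. The helper callables `classify_frame` and `has_error` are placeholders (assumptions) standing in for the signal classification unit and the correction-parameter error test, respectively.

```python
# Illustrative sketch of the classification flow of FIG. 9.
# classify_frame and has_error are assumed stand-ins for the signal
# classification unit and the correction-parameter error check.
def classify_audio_frame(frame, classify_frame, has_error):
    result = classify_frame(frame)   # operation 910: "speech" or "music"
    if has_error(frame, result):     # operation 930: error in result?
        # operation 950: correct the classification result.
        return "music" if result == "speech" else "speech"
    return result                    # operation 970: maintain result
```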
- FIG. 10 is a block diagram illustrating a configuration of a multimedia apparatus according to an embodiment.
- the multimedia apparatus 1000 illustrated in FIG. 10 may include a communication unit 1010 and an encoding module 1030.
- the multimedia apparatus 1000 may further include a storage unit 1050 that stores the audio bitstream obtained as a result of the encoding, depending on how the bitstream is to be used.
- the multimedia apparatus 1000 may further include a microphone 1070. That is, the storage unit 1050 and the microphone 1070 may be provided as an option.
- the multimedia apparatus 1000 illustrated in FIG. 10 may further include an arbitrary decoding module (not shown), for example, a decoding module for performing a general decoding function or a decoding module according to an embodiment of the present invention.
- the encoding module 1030 may be integrated with other components (not shown) included in the multimedia apparatus 1000 and implemented as at least one or more processors (not shown).
- the communication unit 1010 may receive at least one of audio and an encoded bitstream provided from the outside, or may transmit at least one of reconstructed audio and the audio bitstream obtained as a result of encoding by the encoding module 1030.
- the communication unit 1010 is configured to transmit and receive data to and from an external multimedia device or server through a wireless network such as a wireless Internet, a wireless intranet, a wireless telephone network, a wireless LAN, Wi-Fi, Wi-Fi Direct (WFD), 3G, 4G, Bluetooth, Infrared Data Association (IrDA), Radio Frequency Identification (RFID), Ultra WideBand (UWB), ZigBee, or Near Field Communication (NFC), or through a wired network such as a wired telephone network or the wired Internet.
- the encoding module 1030 may perform encoding on a time-domain audio signal provided through the communication unit 1010 or the microphone 1070.
- the encoding process may be implemented using the apparatus or method shown in FIGS. 1 to 9.
- the storage unit 1050 may store various programs necessary for operating the multimedia apparatus 1000.
- the microphone 1070 may provide a user or an external audio signal to the encoding module 1030.
- FIG. 11 is a block diagram illustrating a configuration of a multimedia apparatus according to another embodiment.
- the multimedia device 1100 illustrated in FIG. 11 may include a communication unit 1110, an encoding module 1120, and a decoding module 1130.
- the multimedia device 1100 may further include a storage unit 1140 that stores the audio bitstream obtained as a result of the encoding or the restored audio signal obtained as a result of the decoding, depending on how it is to be used.
- the multimedia device 1100 may further include a microphone 1150 or a speaker 1160.
- the encoding module 1120 and the decoding module 1130 may be integrated with other components (not shown) included in the multimedia device 1100 and implemented as at least one processor (not shown).
- the decoding module 1130 may receive a bitstream provided through the communication unit 1110 and perform decoding on an audio spectrum included in the bitstream.
- the decoding module 1130 may be implemented corresponding to the encoding module 330 of FIG. 3.
- the speaker 1160 may output the restored audio signal generated by the decoding module 1130 to the outside.
- the multimedia apparatuses 1000 and 1100 may include a voice communication terminal such as a telephone or a mobile phone, a broadcast or music dedicated device such as a TV or an MP3 player, or a terminal device combining a voice communication terminal with a broadcast or music dedicated device, but are not limited thereto.
- the multimedia devices 1000 and 1100 may be used as a client, a server, or a transcoder disposed between a client and a server.
- when the multimedia device 1000 or 1100 is, for example, a mobile phone, it may further include (although not shown) a user input unit such as a keypad, a display unit for displaying information processed in the user interface or the mobile phone, and a processor for controlling the overall functions of the mobile phone.
- the mobile phone may further include a camera unit having an imaging function and at least one component that performs a function required by the mobile phone.
- when the multimedia apparatus 1000 or 1100 is, for example, a TV, it may further include (although not shown) a user input unit such as a keypad, a display unit for displaying received broadcast information, and a processor for controlling the overall functions of the TV.
- the TV may further include at least one or more components that perform a function required by the TV.
- the methods according to the embodiments can be written as computer-executable programs and can be implemented on a general-purpose digital computer that runs the programs using a computer-readable recording medium.
- data structures, program instructions, or data files that can be used in the above-described embodiments of the present invention can be recorded on a computer-readable recording medium through various means.
- the computer-readable recording medium may include all kinds of storage devices in which data readable by a computer system is stored. Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory.
- the computer-readable recording medium may also be a transmission medium for transmitting a signal specifying a program command, a data structure, or the like.
- Examples of program instructions include machine code such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Claims (15)
- A signal classification method comprising: classifying a current frame as one of a speech signal and a music signal; determining whether an error exists in the classification result of the current frame, based on feature parameters obtained from a plurality of frames; and correcting the classification result of the current frame in response to a result of the determining.
- The signal classification method of claim 1, wherein the correcting is performed based on a plurality of independent state machines.
- The signal classification method of claim 2, wherein the plurality of independent state machines include a music state machine and a voice state machine.
- The signal classification method of claim 1, wherein the feature parameters are obtained from the current frame and a plurality of previous frames.
- The signal classification method of claim 1, wherein the determining determines that an error exists in the classification result when the classification result of the current frame is a music signal and the current frame is determined to have a speech characteristic.
- The signal classification method of claim 1, wherein the determining determines that an error exists in the classification result when the classification result of the current frame is a speech signal and the current frame is determined to have a music characteristic.
- The signal classification method of claim 2, wherein each state machine uses a hangover corresponding to a plurality of frames to prevent frequent state transitions.
- The signal classification method of claim 1, wherein the correcting corrects the classification result to a speech signal when the classification result of the current frame is a music signal and the current frame is determined to have a speech characteristic.
- The signal classification method of claim 1, wherein the correcting corrects the classification result to a music signal when the classification result of the current frame is a speech signal and the current frame is determined to have a music characteristic.
- A computer-readable recording medium having recorded thereon a program for executing: classifying a current frame as one of a speech signal and a music signal; determining whether an error exists in the classification result of the current frame, based on feature parameters obtained from a plurality of frames; and correcting the classification result of the current frame in response to a result of the determining.
- An audio encoding method comprising: classifying a current frame as one of a speech signal and a music signal; determining whether an error exists in the classification result of the current frame, based on feature parameters obtained from a plurality of frames; correcting the classification result of the current frame in response to a result of the determining; and encoding the current frame based on the classification result or the corrected classification result of the current frame.
- The audio encoding method of claim 12, wherein the encoding is performed using one of a CELP-type coder and a transform coder.
- The audio encoding method of claim 12, wherein the encoding is performed using one of a CELP-type coder, a transform coder, and a CELP/transform hybrid coder.
- A signal classification apparatus comprising at least one processor configured to: classify a current frame as one of a speech signal and a music signal; determine whether an error exists in the classification result of the current frame, based on feature parameters obtained from a plurality of frames; and correct the classification result of the current frame in response to a result of the determination.
- An audio encoding apparatus comprising at least one processor configured to: classify a current frame as one of a speech signal and a music signal; determine whether an error exists in the classification result of the current frame, based on feature parameters obtained from a plurality of frames; correct the classification result of the current frame in response to a result of the determination; and encode the current frame based on the classification result or the corrected classification result of the current frame.
Priority Applications (11)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP15751981.0A EP3109861B1 (en) | 2014-02-24 | 2015-02-24 | Signal classifying method and device, and audio encoding method and device using same |
KR1020167023217A KR102354331B1 (ko) | 2014-02-24 | 2015-02-24 | 신호 분류 방법 및 장치, 및 이를 이용한 오디오 부호화방법 및 장치 |
SG11201607971TA SG11201607971TA (en) | 2014-02-24 | 2015-02-24 | Signal classifying method and device, and audio encoding method and device using same |
KR1020227036099A KR102552293B1 (ko) | 2014-02-24 | 2015-02-24 | 신호 분류 방법 및 장치, 및 이를 이용한 오디오 부호화방법 및 장치 |
US15/121,257 US10090004B2 (en) | 2014-02-24 | 2015-02-24 | Signal classifying method and device, and audio encoding method and device using same |
ES15751981T ES2702455T3 (es) | 2014-02-24 | 2015-02-24 | Procedimiento y dispositivo de clasificación de señales, y procedimiento y dispositivo de codificación de audio que usan los mismos |
CN201911345336.0A CN110992965B (zh) | 2014-02-24 | 2015-02-24 | 信号分类方法和装置以及使用其的音频编码方法和装置 |
CN201580021378.2A CN106256001B (zh) | 2014-02-24 | 2015-02-24 | 信号分类方法和装置以及使用其的音频编码方法和装置 |
KR1020227001823A KR102457290B1 (ko) | 2014-02-24 | 2015-02-24 | 신호 분류 방법 및 장치, 및 이를 이용한 오디오 부호화방법 및 장치 |
JP2016570753A JP6599368B2 (ja) | 2014-02-24 | 2015-02-24 | 信号分類方法及びその装置、並びにそれを利用したオーディオ符号化方法及びその装置 |
US16/148,708 US10504540B2 (en) | 2014-02-24 | 2018-10-01 | Signal classifying method and device, and audio encoding method and device using same |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201461943638P | 2014-02-24 | 2014-02-24 | |
US61/943,638 | 2014-02-24 | ||
US201462029672P | 2014-07-28 | 2014-07-28 | |
US62/029,672 | 2014-07-28 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/121,257 A-371-Of-International US10090004B2 (en) | 2014-02-24 | 2015-02-24 | Signal classifying method and device, and audio encoding method and device using same |
US16/148,708 Continuation US10504540B2 (en) | 2014-02-24 | 2018-10-01 | Signal classifying method and device, and audio encoding method and device using same |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015126228A1 true WO2015126228A1 (ko) | 2015-08-27 |
Family
ID=53878629
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2015/001783 WO2015126228A1 (ko) | 2014-02-24 | 2015-02-24 | 신호 분류 방법 및 장치, 및 이를 이용한 오디오 부호화방법 및 장치 |
Country Status (8)
Country | Link |
---|---|
US (2) | US10090004B2 (ko) |
EP (1) | EP3109861B1 (ko) |
JP (1) | JP6599368B2 (ko) |
KR (3) | KR102457290B1 (ko) |
CN (2) | CN106256001B (ko) |
ES (1) | ES2702455T3 (ko) |
SG (1) | SG11201607971TA (ko) |
WO (1) | WO2015126228A1 (ko) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
NO2780522T3 (ko) * | 2014-05-15 | 2018-06-09 | ||
CN111177454B (zh) * | 2019-12-11 | 2023-05-30 | 广州荔支网络技术有限公司 | 一种音频节目分类的修正方法 |
US20240038258A1 (en) * | 2020-08-18 | 2024-02-01 | Dolby Laboratories Licensing Corporation | Audio content identification |
CN115881138A (zh) * | 2021-09-29 | 2023-03-31 | 华为技术有限公司 | 解码方法、装置、设备、存储介质及计算机程序产品 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009110751A2 (ko) * | 2008-03-04 | 2009-09-11 | Lg Electronics Inc. | 오디오 신호 처리 방법 및 장치 |
WO2010008179A1 (ko) * | 2008-07-14 | 2010-01-21 | 한국전자통신연구원 | 음성/음악 통합 신호의 부호화/복호화 방법 및 장치 |
WO2010008173A2 (ko) * | 2008-07-14 | 2010-01-21 | 한국전자통신연구원 | 오디오 신호의 상태결정 장치 |
US20110046965A1 (en) * | 2007-08-27 | 2011-02-24 | Telefonaktiebolaget L M Ericsson (Publ) | Transient Detector and Method for Supporting Encoding of an Audio Signal |
US20120069899A1 (en) * | 2002-09-04 | 2012-03-22 | Microsoft Corporation | Entropy encoding and decoding using direct level and run-length/level context-adaptive arithmetic coding/decoding modes |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6453285B1 (en) | 1998-08-21 | 2002-09-17 | Polycom, Inc. | Speech activity detector for use in noise reduction system, and methods therefor |
JP3616307B2 (ja) * | 2000-05-22 | 2005-02-02 | 日本電信電話株式会社 | 音声・楽音信号符号化方法及びこの方法を実行するプログラムを記録した記録媒体 |
CA2388439A1 (en) * | 2002-05-31 | 2003-11-30 | Voiceage Corporation | A method and device for efficient frame erasure concealment in linear predictive based speech codecs |
EP2458588A3 (en) * | 2006-10-10 | 2012-07-04 | Qualcomm Incorporated | Method and apparatus for encoding and decoding audio signals |
KR100883656B1 (ko) * | 2006-12-28 | 2009-02-18 | 삼성전자주식회사 | 오디오 신호의 분류 방법 및 장치와 이를 이용한 오디오신호의 부호화/복호화 방법 및 장치 |
CN101025918B (zh) * | 2007-01-19 | 2011-06-29 | 清华大学 | 一种语音/音乐双模编解码无缝切换方法 |
CN101393741A (zh) * | 2007-09-19 | 2009-03-25 | 中兴通讯股份有限公司 | 一种宽带音频编解码器中的音频信号分类装置及分类方法 |
EP2259253B1 (en) * | 2008-03-03 | 2017-11-15 | LG Electronics Inc. | Method and apparatus for processing audio signal |
US8428949B2 (en) * | 2008-06-30 | 2013-04-23 | Waves Audio Ltd. | Apparatus and method for classification and segmentation of audio content, based on the audio signal |
MX2011000370A (es) * | 2008-07-11 | 2011-03-15 | Fraunhofer Ges Forschung | Un aparato y un metodo para decodificar una señal de audio codificada. |
PL2301011T3 (pl) * | 2008-07-11 | 2019-03-29 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Sposób i dyskryminator do klasyfikacji różnych segmentów sygnału audio zawierającego segmenty mowy i muzyki |
KR101230183B1 (ko) | 2008-07-14 | 2013-02-15 | 광운대학교 산학협력단 | 오디오 신호의 상태결정 장치 |
KR101381513B1 (ko) * | 2008-07-14 | 2014-04-07 | 광운대학교 산학협력단 | 음성/음악 통합 신호의 부호화/복호화 장치 |
KR101073934B1 (ko) * | 2008-12-22 | 2011-10-17 | 한국전자통신연구원 | 음성/음악 판별장치 및 방법 |
CN102044244B (zh) * | 2009-10-15 | 2011-11-16 | 华为技术有限公司 | 信号分类方法和装置 |
CN102237085B (zh) * | 2010-04-26 | 2013-08-14 | 华为技术有限公司 | 音频信号的分类方法及装置 |
RU2010152225A (ru) * | 2010-12-20 | 2012-06-27 | ЭлЭсАй Корпорейшн (US) | Обнаружение музыки с использованием анализа спектральных пиков |
CN102543079A (zh) * | 2011-12-21 | 2012-07-04 | 南京大学 | 一种实时的音频信号分类方法及设备 |
US9111531B2 (en) * | 2012-01-13 | 2015-08-18 | Qualcomm Incorporated | Multiple coding mode signal classification |
WO2014010175A1 (ja) * | 2012-07-09 | 2014-01-16 | パナソニック株式会社 | 符号化装置及び符号化方法 |
SG11201503788UA (en) | 2012-11-13 | 2015-06-29 | Samsung Electronics Co Ltd | Method and apparatus for determining encoding mode, method and apparatus for encoding audio signals, and method and apparatus for decoding audio signals |
-
2015
- 2015-02-24 US US15/121,257 patent/US10090004B2/en active Active
- 2015-02-24 CN CN201580021378.2A patent/CN106256001B/zh active Active
- 2015-02-24 SG SG11201607971TA patent/SG11201607971TA/en unknown
- 2015-02-24 KR KR1020227001823A patent/KR102457290B1/ko active IP Right Grant
- 2015-02-24 JP JP2016570753A patent/JP6599368B2/ja active Active
- 2015-02-24 CN CN201911345336.0A patent/CN110992965B/zh active Active
- 2015-02-24 WO PCT/KR2015/001783 patent/WO2015126228A1/ko active Application Filing
- 2015-02-24 KR KR1020227036099A patent/KR102552293B1/ko active IP Right Grant
- 2015-02-24 EP EP15751981.0A patent/EP3109861B1/en active Active
- 2015-02-24 ES ES15751981T patent/ES2702455T3/es active Active
- 2015-02-24 KR KR1020167023217A patent/KR102354331B1/ko active IP Right Grant
-
2018
- 2018-10-01 US US16/148,708 patent/US10504540B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120069899A1 (en) * | 2002-09-04 | 2012-03-22 | Microsoft Corporation | Entropy encoding and decoding using direct level and run-length/level context-adaptive arithmetic coding/decoding modes |
US20110046965A1 (en) * | 2007-08-27 | 2011-02-24 | Telefonaktiebolaget L M Ericsson (Publ) | Transient Detector and Method for Supporting Encoding of an Audio Signal |
WO2009110751A2 (ko) * | 2008-03-04 | 2009-09-11 | Lg Electronics Inc. | 오디오 신호 처리 방법 및 장치 |
WO2010008179A1 (ko) * | 2008-07-14 | 2010-01-21 | 한국전자통신연구원 | 음성/음악 통합 신호의 부호화/복호화 방법 및 장치 |
WO2010008173A2 (ko) * | 2008-07-14 | 2010-01-21 | 한국전자통신연구원 | 오디오 신호의 상태결정 장치 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3109861A4 * |
Also Published As
Publication number | Publication date |
---|---|
US10504540B2 (en) | 2019-12-10 |
US20170011754A1 (en) | 2017-01-12 |
KR20160125397A (ko) | 2016-10-31 |
KR20220148302A (ko) | 2022-11-04 |
CN106256001B (zh) | 2020-01-21 |
KR20220013009A (ko) | 2022-02-04 |
EP3109861A1 (en) | 2016-12-28 |
KR102457290B1 (ko) | 2022-10-20 |
CN110992965B (zh) | 2024-09-03 |
CN106256001A (zh) | 2016-12-21 |
JP6599368B2 (ja) | 2019-10-30 |
US20190103129A1 (en) | 2019-04-04 |
EP3109861A4 (en) | 2017-11-01 |
CN110992965A (zh) | 2020-04-10 |
SG11201607971TA (en) | 2016-11-29 |
KR102354331B1 (ko) | 2022-01-21 |
JP2017511905A (ja) | 2017-04-27 |
EP3109861B1 (en) | 2018-12-12 |
US10090004B2 (en) | 2018-10-02 |
ES2702455T3 (es) | 2019-03-01 |
KR102552293B1 (ko) | 2023-07-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6790029B2 (ja) | 音声プロファイルを管理し、発話信号を生成するためのデバイス | |
US9218820B2 (en) | Audio fingerprint differences for end-to-end quality of experience measurement | |
JP5319704B2 (ja) | オーディオ信号の処理方法及び装置 | |
WO2015126228A1 (ko) | 신호 분류 방법 및 장치, 및 이를 이용한 오디오 부호화방법 및 장치 | |
WO2016024853A1 (ko) | 음질 향상 방법 및 장치, 음성 복호화방법 및 장치와 이를 채용한 멀티미디어 기기 | |
WO2015065137A1 (ko) | 광대역 신호 생성방법 및 장치와 이를 채용하는 기기 | |
US20140365212A1 (en) | Receiver Intelligibility Enhancement System | |
US20210089863A1 (en) | Method and apparatus for recurrent auto-encoding | |
US10854212B2 (en) | Inter-channel phase difference parameter modification | |
US8868418B2 (en) | Receiver intelligibility enhancement system | |
US11463833B2 (en) | Method and apparatus for voice or sound activity detection for spatial audio | |
JP2013537325A (ja) | ピッチサイクルエネルギーを判断し、励起信号をスケーリングすること | |
WO2021139772A1 (zh) | 一种音频信息处理方法、装置、电子设备以及存储介质 | |
RU2648632C2 (ru) | Классификатор многоканального звукового сигнала | |
CN117711420B (en) | Target human voice extraction method, electronic device and storage medium | |
WO2015152666A1 (ko) | Hoa 신호를 포함하는 오디오 신호를 디코딩하는 방법 및 장치 | |
US20240233741A9 (en) | Controlling local rendering of remote environmental audio | |
KR20230141251A (ko) | 성도 및 여기 신호 정보를 이용한 자동 음성 인식 방법 및 장치 | |
CN117711420A (zh) | 目标人声提取方法、电子设备及存储介质 | |
CN114927127A (zh) | 一种多媒体音频分析和处理方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15751981 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2016570753 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20167023217 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REEP | Request for entry into the european phase |
Ref document number: 2015751981 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2015751981 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15121257 Country of ref document: US |