CN1794887A - Audio processing method for enhancing three-dimensional sound effects - Google Patents


Info

Publication number
CN1794887A
CN1794887A (application CN200510102181A); granted as CN100539741C
Authority
CN
China
Prior art keywords
signal
audio
binaural
synthesis
weighted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200510102181
Other languages
Chinese (zh)
Other versions
CN100539741C (en)
Inventor
罗发龙
胡胜发
万享
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Ankai Microelectronics Co.,Ltd.
Original Assignee
ANKAI (GUANGZHOU) SOFTWARE TECHN Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ANKAI (GUANGZHOU) SOFTWARE TECHN Co Ltd filed Critical ANKAI (GUANGZHOU) SOFTWARE TECHN Co Ltd
Priority to CNB200510102181XA priority Critical patent/CN100539741C/en
Publication of CN1794887A publication Critical patent/CN1794887A/en
Application granted granted Critical
Publication of CN100539741C publication Critical patent/CN100539741C/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Stereophonic System (AREA)

Abstract

This invention discloses an audio processing method for enhancing three-dimensional (3D) sound effects. A two-channel input first undergoes two stages of principal-component enhancement in the time-frequency domain, followed by multiple stages of binaural-synthesis enhancement in the spatial domain. Mono or stereo signals are thereby given an enhanced 3D sound field and converted into a stereo output, so that the listener perceives the enhanced audio through headphones or two loudspeakers.

Description

Audio processing method for enhancing three-dimensional sound effects
Technical field
The present invention relates to an audio processing method, and in particular to an audio processing method for enhancing three-dimensional sound effects.
Background technology
There are currently many feasible methods for producing three-dimensional (3D) sound enhancement. 3D-enhanced audio is usually realized with binaural synthesis and related digital signal processing methods. Binaural synthesis, as used in 3D enhancement, filters the two channels with the head-related transfer functions (HRTFs) that describe how sound arriving from different directions in the listener's surrounding three-dimensional space reaches the ears; in this way sound images can be constructed in any direction through headphones or loudspeakers, so that the perceived sound sources appear to lie beyond the actual loudspeakers. When loudspeakers are used, crosstalk cancellation is essential: crosstalk arises because the signal intended for the left ear leaks into the right ear (and vice versa) on its way to the listener, and this leakage destroys the correct directional cues. Beyond the existing methods, there is a strong need for a new technique that delivers better 3D-enhanced audio from two-channel or mono signals without introducing any binaural crosstalk.
Summary of the invention
The object of the present invention is to provide an audio processing method for enhancing three-dimensional sound effects that eliminates binaural crosstalk, so that through headphones or two loudspeakers the listener experiences an enhanced sound field with greater immersion and spaciousness and a convincing sense of presence.
The above object is achieved by the following technical measures: an audio processing method for enhancing three-dimensional sound effects, in which the two-channel input first undergoes principal-component enhancement in the time-frequency domain and then binaural-synthesis enhancement in the spatial domain.
The principal-component enhancement in the time-frequency domain has two stages. The first stage proceeds as follows: the left and right channel signals are averaged, and the averaged signal S is passed through filter F1 to obtain a once-filtered signal; that signal, after an absolute-value operation, is passed through a second filter F2 to obtain a twice-filtered signal. The once-filtered signal is multiplied by a gain coefficient and summed with the original left-channel input, and the twice-filtered signal, multiplied by another gain coefficient, is summed with that weighted result to give the left-channel output L1 of this stage. Symmetrically, the once-filtered signal times the gain coefficient is summed with the original right-channel input, and the twice-filtered signal times its gain coefficient is summed with that weighted result to give the right-channel output R1 of this stage.
The second stage proceeds as follows: the average of the original left and right channel inputs is passed through filter F3; the filtered signal, multiplied by a gain coefficient, is summed with the first-stage left-channel output to give the second-stage left-channel output L2, and is summed with the first-stage right-channel output to give the second-stage right-channel output R2.
The binaural-synthesis enhancement in the spatial domain requires at least one stage. The number of stages is determined by how many sound-source images are to be placed at different orientations: each additional set of image positions requires one more binaural-synthesis stage, and every stage uses the same procedure. The first binaural-synthesis stage proceeds as follows:
The left-channel output L2 is passed through filter F4 to obtain a filtered signal; the right-channel output R2 is first delayed and then passed through filter F5 to obtain a second filtered signal; the two filtered signals are summed to give signal T4. T4, multiplied by a gain coefficient, is summed with L2 to give the left-channel output L3 of this binaural-synthesis stage.
Symmetrically, the right-channel output R2 is passed through filter F4, the left-channel output L2 is first delayed and then passed through filter F5, and the two filtered signals are summed to give signal T5. T5, multiplied by the gain coefficient, is summed with R2 to give the right-channel output R3 of this binaural-synthesis stage.
The binaural-synthesis stages may be connected serially, in parallel, or in a serial-parallel mixture.
The process of the enhancement process that described alliteration is synthetic is a two-stage, and the first order and the second level are two continuous tupes, and the process of the tupe that it is continuous is the output of the synthetic input of the filtering ears of next stage from upper level.
The process of the enhancement process that described alliteration is synthetic is a two-stage, and the first order and the second level are two parallel tupes, and the synthetic input of the filtering ears of next stage and upper level is all from the output of enhancement process in the frequency time range of the second level.
In the time-frequency enhancement, the frequency responses of the three filters are determined by the spectral characteristics of the averaged signal and by the third-order components that the nonlinear operation generates from it.
In the binaural-synthesis enhancement, the frequency responses of the two filters are determined by the interaural level difference (ILD) and interaural time difference (ITD) of the predetermined sound-image position.
In the binaural-synthesis enhancement, the unit delay time is set to the time difference produced as sound travels from one ear to the other for the predetermined sound-image position.
After the binaural-synthesis enhancement, a volume-regulation stage may be added to keep the audio output smooth and to minimize the audible artifacts introduced by the enhancement.
After the first and second stages of principal-component enhancement in the time-frequency domain and the third and fourth stages of binaural-synthesis enhancement in the spatial domain, multiple virtual sound sources are positioned at different places in three-dimensional space, so that the two-channel or mono output produces the intended audio effect and the enhanced sound field has greater immersion and spaciousness.
Description of drawings
Fig. 1 is a block diagram of a specific embodiment of the invention;
Fig. 2 is a flow diagram of the first-stage time-frequency enhancement in Fig. 1;
Fig. 3 is a flow diagram of the second-stage time-frequency enhancement in Fig. 1;
Fig. 4 is a flow diagram of the third-stage binaural-synthesis enhancement, in the spatial domain, in Fig. 1;
Fig. 5 is a flow diagram of the serial mode of the fourth-stage binaural-synthesis enhancement in Fig. 1;
Fig. 6 is a flow diagram of the parallel mode of the fourth-stage binaural-synthesis enhancement in Fig. 1.
Embodiment
As shown in Fig. 1, this embodiment uses four enhancement stages: the first and second are principal-component-based enhancement in the time-frequency domain, and the third and fourth are binaural-synthesis-based enhancement in the spatial domain. The stages are discussed in turn below; for simplicity, the input is assumed to be a stereo left/right signal and the playback system is assumed to be headphones.
As shown in Fig. 2, the first-stage time-frequency enhancement proceeds as follows: the two channel signals L0 and R0 are averaged, and the averaged signal S is passed through filter F1 to obtain a once-filtered signal. That signal, after an absolute-value operation, is passed through a second filter F2 to obtain a twice-filtered signal. The once-filtered signal, multiplied by gain coefficient G1, is summed with the original left input L0 to give T1; the twice-filtered signal, multiplied by gain coefficient G2, is summed with T1 to give the left-channel output L1 of this stage. Symmetrically, the once-filtered signal times G1 is summed with the original right input R0 to give T2, and the twice-filtered signal times G2 is summed with T2 to give the right-channel output R1 of this stage.
As shown in Fig. 3, the second-stage time-frequency enhancement proceeds as follows: the average S of the original inputs L0 and R0 is passed through filter F3; the filtered signal, multiplied by gain coefficient G3, is summed with the first-stage left output L1 to give the second-stage left output L2, and is summed with the first-stage right output R1 to give the second-stage right output R2.
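The two time-frequency stages above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the patent's implementation: the one-pole low-pass stand-ins for F1-F3 and the example gain values are placeholders (the patent derives the real responses from the spectrum of S and from the third-order components created by the absolute-value nonlinearity), and whether G3 also scales the right-channel sum is ambiguous in the text, so it is applied to both channels here.

```python
def one_pole_lowpass(x, a=0.5):
    """Placeholder filter: y[n] = a*x[n] + (1-a)*y[n-1]."""
    y, prev = [], 0.0
    for s in x:
        prev = a * s + (1.0 - a) * prev
        y.append(prev)
    return y

def stage1(l0, r0, g1=0.5, g2=0.3):
    """Stage 1: S -> F1 -> |.| -> F2, then weighted sums per channel."""
    s = [(a + b) / 2.0 for a, b in zip(l0, r0)]       # averaged signal S
    p1 = one_pole_lowpass(s)                          # once-filtered: F1(S)
    p2 = one_pole_lowpass([abs(v) for v in p1])       # twice-filtered: F2(|F1(S)|)
    l1 = [x + g1 * u + g2 * v for x, u, v in zip(l0, p1, p2)]
    r1 = [x + g1 * u + g2 * v for x, u, v in zip(r0, p1, p2)]
    return l1, r1

def stage2(l0, r0, l1, r1, g3=0.4):
    """Stage 2: F3 of the input average, scaled by G3, added to L1/R1."""
    s = [(a + b) / 2.0 for a, b in zip(l0, r0)]
    p3 = one_pole_lowpass(s)                          # F3(S)
    l2 = [u + g3 * v for u, v in zip(l1, p3)]
    r2 = [u + g3 * v for u, v in zip(r1, p3)]         # G3 assumed on both channels
    return l2, r2
```

Note that for identical left and right inputs both stages produce identical outputs, which is exactly the mono degenerate case discussed later in the embodiment.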
The gain coefficients G1, G2, and G3 should all be positive and less than 1.
The three filters F1, F2, and F3 serve to strengthen the principal components of the input audio. Their frequency responses are determined by the spectral characteristics of the averaged signal S and by the third-order components produced by the nonlinear operation (the absolute-value operation of the first stage), so that components at different frequencies are treated by different filter responses. The enhancement in these two stages takes place mainly in the time-frequency domain.
As shown in Fig. 4, the third-stage binaural-synthesis enhancement in the spatial domain proceeds as follows: the left output L2 is passed through filter F4 to obtain one filtered signal; the right output R2 is first delayed by D1 and then passed through filter F5 to obtain a second filtered signal; the two filtered signals are summed to give T4. T4, multiplied by gain coefficient G4, is summed with L2 to give the left output L3 of this stage. Symmetrically, R2 is passed through F4, L2 is first delayed by D1 and then passed through F5, and the two filtered signals are summed to give T5; T5 times G4 is summed with R2 to give the right output R3 of this stage.
This stage performs binaural synthesis with filters F4 and F5, whose frequency responses are determined by the interaural level difference (ILD) and interaural time difference (ITD) of the predetermined image position S1. Taking the direction directly ahead of the listener as 0°, positions to the left are negative (−180° to 0°) and positions to the right are positive (0° to 180°). The unit delay time D1 is likewise determined by S1, as the time difference for sound to travel from one ear to the other. After the input signals L2 and R2 are processed, the left and right ears receive the binaural outputs L3 and R3 respectively; the listener then perceives sound coming from S1, −S1, or positions outside the head, which can be controlled by adjusting the frequency responses of F4 and F5. Only when the binaural composites T4 and T5 are scaled by the gain coefficient G4 (positive and less than 1) and added back to the respective inputs L2 and R2, yielding the binaural outputs L3 and R3, is the full 3D-enhancement effect obtained. This combination lets the listener experience an enhanced 3D sound field with noticeably more immersion and spaciousness.
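One binaural-synthesis stage as described above can be sketched as follows. The flat-gain lambdas standing in for F4/F5, the one-sample delay, and the gain value are hypothetical placeholders; in the patent these are all derived from the ILD and ITD of the target image position S1.

```python
def delay(x, d):
    """Delay by d samples (zero-padded at the front, same length out)."""
    return [0.0] * d + x[:len(x) - d]

def binaural_stage(l_in, r_in, f_same, f_cross, d, g):
    """T = f_same(own channel) + f_cross(delayed other channel); out = in + g*T."""
    t_l = [a + b for a, b in zip(f_same(l_in), f_cross(delay(r_in, d)))]  # T4
    t_r = [a + b for a, b in zip(f_same(r_in), f_cross(delay(l_in, d)))]  # T5
    l_out = [x + g * t for x, t in zip(l_in, t_l)]                        # L3
    r_out = [x + g * t for x, t in zip(r_in, t_r)]                        # R3
    return l_out, r_out

# Hypothetical stand-ins for F4/F5: flat gains approximating an ILD.
f4 = lambda x: [0.7 * v for v in x]
f5 = lambda x: [0.3 * v for v in x]
l3, r3 = binaural_stage([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], f4, f5, 1, 0.5)
```

An impulse on the left input leaks into the right output one sample later and attenuated, which is the ITD/ILD cue pair the stage is meant to synthesize.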
The spatial-domain binaural-synthesis enhancement can have multiple stages. The number of stages is determined by how many sound-source images are to be placed at different orientations: each additional set of image positions requires one more binaural-synthesis stage, and every stage uses the same procedure.
The process of the enhancement process that the present embodiment alliteration is synthetic is a two-stage, and this secondary is two continuous tupes, and the synthetic input of the filtering ears of next stage is from the output of upper level.As shown in Figure 5, identical with the third level shown in Fig. 4 based on the synthetic enhancement process detailed process of alliteration in the fourth stage spatial dimension, this level can further strengthen the 3D audio frequency effect.The same with the third level shown in Fig. 4, this one-level also has two different filter F 6 and F7 to be respectively applied for generation new dual track output T6 and T7.The method of the frequency response of decision F6 and F7 is the same with F5 with decision filter F 4.The difference at these the two poles of the earth only be back one-level sound source image will be distributed in different orientation S_2 and-S_2.Equally, simultaneously, unit delay processing time value D2 is being by determining to place orientation, the place S2 of acoustic image in advance, and sound passes to the another ear and time difference of producing and determining from an ear.In this one-level, the input signal that two dual tracks are handled is respectively L3 and R3, and two the input signal L2 and the R2 of the first order shown in this and Fig. 
4 are different.In other words, by new synthetic ears composite signal T6 and T7, the hearer not only can feel sound be positioned at S2 and-S2, and sound field expanded to respectively S1 and-position of S1.After ears composite signal T6 and T7 are revised by gain coefficient G5 (gain coefficient G5 for just and less than 1), be increased to respectively again that respective input signals L3 and R3 go up and when obtaining corresponding alliteration output L4 and R4, such one has six sound sources is positioned at three-dimensional different place, to form the output of final complete system.Have four to be virtual in these six sound sources, lay respectively at S_1 ,-S_1, S_2 and-S_2, two other sound source is a frequency time range enhanced stereo sound signal.
As shown in Fig. 6, the third and fourth stages may instead run in parallel. The procedure is the same as in Fig. 5, except that the filtered binaural-synthesis inputs of both stages come from the output of the second-stage time-frequency enhancement: in parallel mode the two binaural inputs of this stage are the second-stage outputs L2 and R2, the same as the inputs of the third stage in Fig. 4. After T6 and T7 are scaled by G5 (positive and less than 1) and added to L3 and R3 respectively, the system output achieves the same kind of effect.
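The serial (Fig. 5) and parallel (Fig. 6) topologies can be contrasted in a few lines. `stage_a` and `stage_b` are assumed to be binaural stages with signature `(l, r) -> (l, r)`; the mixing shown in `parallel` is one plausible reading of Fig. 6, in which the second stage's contribution is scaled and added to the first stage's output.

```python
def serial(stage_a, stage_b, l2, r2):
    """Fig. 5 topology: the later stage consumes the earlier stage's output."""
    l3, r3 = stage_a(l2, r2)
    return stage_b(l3, r3)

def parallel(stage_a, stage_b, l2, r2, mix=0.5):
    """Fig. 6 topology: both stages consume the stage-2 output; results are mixed."""
    l3, r3 = stage_a(l2, r2)
    lb, rb = stage_b(l2, r2)
    l4 = [u + mix * v for u, v in zip(l3, lb)]
    r4 = [u + mix * v for u, v in zip(r3, rb)]
    return l4, r4
```

The design choice the text describes is exactly this: serial chaining compounds the image positions (S1 then S2), while parallel chaining renders both image pairs from the same cleaner stage-2 signal.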
More enhancement stages can be added after the two binaural-synthesis stages; each added stage follows the same principle as above, and the stages may be connected serially, in parallel, or in a serial-parallel mixture. With extra stages, additional image positions such as S3 and S4 are added to the final system output and the sound field is extended further, at the cost of a more complex method.
A volume-regulation stage can also be added after the last enhancement stage, to keep the audio output smooth and to minimize the audible artifacts introduced by the enhancement, which is especially useful when the method is applied in the digital domain.
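The patent does not specify the volume-regulation stage in detail; one common way to change level "smoothly" is to glide the gain with a one-pole smoother, sketched below. The smoothing constant `alpha` is an assumption, not a value from the patent.

```python
def smooth_volume(x, target_gain, alpha=0.05):
    """Apply a gain that glides toward target_gain, avoiding audible steps."""
    y, g = [], 0.0
    for s in x:
        g += alpha * (target_gain - g)   # one-pole gain smoother
        y.append(g * s)
    return y
```

Because the gain changes by at most `alpha` of the remaining distance per sample, abrupt level changes from the enhancement stages are converted into short fades rather than clicks.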
When the input is mono, the averaged signal S of the first and second stages in the embodiment above equals the mono input, i.e. S = left input L0 = right input R0. In that case the first-stage outputs satisfy L1 = R1 and the second-stage outputs satisfy L2 = R2, and the third stage produces a stereo output through the following operations:
L3=L2+G4*F4(L2)
R3=R2+G4*F5(D1(R2))
The filters F4 and F5 and the delay D1 here are determined by the specific image position S1. The fourth stage likewise produces a stereo output:
L4=L3+G5*F6(L3)
R4=R3+G5*F7(D2(R3))
Here the filters F6 and F7 and the delay D2 are determined by the image position S2, further extending the sound field to S2. In the mono case, because L2 = R2, no separate processing is performed for the positions −S1 and −S2. An alternative way to handle mono input is first to generate new left and right channels with an I-Q quadrature method and then to process them exactly as in the stereo-input case above (Figs. 1-5).
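The mono-path equations above (L3 = L2 + G4·F4(L2), R3 = R2 + G4·F5(D1(R2)), and likewise for L4/R4) can be transcribed directly. The `scale` lambdas are trivial placeholder filters standing in for F4-F7 (the real ones depend on S1 and S2), and the delay lengths and gains are assumed values.

```python
def apply_stage(l, r, f_l, f_r, d, g):
    """L_out = L + g*f_l(L);  R_out = R + g*f_r(D(R)), with an integer delay D."""
    delayed = [0.0] * d + r[:len(r) - d]             # D(R)
    l_out = [x + g * y for x, y in zip(l, f_l(l))]
    r_out = [x + g * y for x, y in zip(r, f_r(delayed))]
    return l_out, r_out

scale = lambda k: (lambda x: [k * v for v in x])     # placeholder filter

l2 = r2 = [1.0, 0.0, 0.0, 0.0]                       # mono input: L2 == R2
l3, r3 = apply_stage(l2, r2, scale(0.7), scale(0.6), 1, 0.5)   # S1 stage
l4, r4 = apply_stage(l3, r3, scale(0.5), scale(0.4), 2, 0.5)   # S2 stage
```

Even though L2 == R2, the delay applied only on the right-channel path makes L3 ≠ R3, which is precisely how the mono input acquires a stereo image in these two stages.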
When the playback system is a pair of loudspeakers, crosstalk cancellation in the third and fourth stages is essential. The frequency responses of the crosstalk-cancellation filters in these stages can be predetermined from F4, F5, F6, F7, D1, and D2. Because crosstalk corrupts binaural listening, and cancellation cannot avoid all losses to the binaural cues, the 3D enhancement is less pronounced over loudspeakers than over headphones. The listener should also keep a distance of at least about 1 meter from the two loudspeakers.
The method supports any sample rate, including 96 kHz, 48 kHz, 44.1 kHz, 32 kHz, 16 kHz, and 8 kHz. At different sample rates, the frequency responses of all the filters involved differ. At relatively low sample rates, however, the reduced spatial and frequency resolution weakens the 3D enhancement.
The filters used in the invention may be either IIR or FIR filters; the choice of IIR order versus FIR tap length is a balance of performance, speed, and complexity. To simplify implementation, several second-order IIR filters can be cascaded in place of a long-tap FIR filter or a high-order IIR filter.
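The suggestion above, replacing a long FIR or high-order IIR with cascaded second-order IIR sections, can be sketched as follows; any coefficients used with it are placeholders, not the patent's filters.

```python
def biquad(x, b0, b1, b2, a1, a2):
    """Direct-form-I second-order section:
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for s in x:
        out = b0 * s + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, s
        y2, y1 = y1, out
        y.append(out)
    return y

def cascade(x, sections):
    """Run the signal through a chain of second-order sections in series."""
    for coeffs in sections:
        x = biquad(x, *coeffs)
    return x
```

Cascading second-order sections is numerically better conditioned than a single high-order direct-form filter, which is the usual reason for the substitution the text recommends.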
The method supports any existing mono or stereo audio source, such as MP3, WMA, MIDI, digital TV, digital broadcasting, and network audio. It can be implemented in any software or hardware and can also be built into audio players.

Claims (10)

1. An audio processing method for enhancing three-dimensional sound effects, characterized in that a mono or two-channel (left/right) audio input first undergoes principal-component enhancement in the time-frequency domain and then binaural-synthesis enhancement in the spatial domain.
2. The audio processing method for enhancing three-dimensional sound effects according to claim 1, characterized in that the principal-component enhancement in the time-frequency domain has two stages, the first stage being: the left and right channel signals are averaged, and the averaged signal S is passed through filter F1 to obtain a once-filtered signal; that signal, after an absolute-value operation, is passed through a second filter F2 to obtain a twice-filtered signal; the once-filtered signal, multiplied by a gain coefficient, is summed with the original left-channel input, and the twice-filtered signal, multiplied by another gain coefficient, is summed with that weighted result to give the first-stage left-channel output L1; symmetrically, the once-filtered signal times the gain coefficient is summed with the original right-channel input, and the twice-filtered signal times its gain coefficient is summed with that weighted result to give the first-stage right-channel output R1;
and the second stage being: the average of the original left and right inputs is passed through filter F3; the filtered signal, multiplied by a gain coefficient, is summed with the first-stage left-channel output to give the second-stage left-channel output L2, and is summed with the first-stage right-channel output to give the second-stage right-channel output R2.
3. The audio processing method for enhancing three-dimensional sound effects according to claim 1 or 2, characterized in that the binaural-synthesis enhancement in the spatial domain has at least one stage, the number of stages being determined by how many sound-source images are placed at different orientations, each additional set of image positions requiring one more binaural-synthesis stage, and every stage using the same procedure; the first binaural-synthesis stage being:
the left-channel output L2 is passed through filter F4 to obtain a filtered signal, the right-channel output R2 is first delayed and then passed through filter F5 to obtain a second filtered signal, and the two filtered signals are summed to give T4; T4, multiplied by a gain coefficient, is summed with L2 to give the left-channel output L3 of this stage;
symmetrically, R2 is passed through F4, L2 is first delayed and then passed through F5, and the two filtered signals are summed to give T5; T5, multiplied by the gain coefficient, is summed with R2 to give the right-channel output R3 of this stage.
4. The audio processing method for enhancing three-dimensional sound effects according to claim 3, characterized in that the binaural-synthesis stages are connected serially, in parallel, or in a serial-parallel mixture.
5. The audio processing method for enhancing three-dimensional sound effects according to claim 3, characterized in that the binaural-synthesis enhancement has two stages connected serially, the filtered binaural-synthesis input of the later stage being taken from the output of the earlier stage.
6. The audio processing method for enhancing three-dimensional sound effects according to claim 3, characterized in that the binaural-synthesis enhancement has two stages connected in parallel, the filtered binaural-synthesis inputs of both stages being taken from the output of the second-stage time-frequency enhancement.
7. The audio processing method for enhancing three-dimensional sound effects according to claim 2, characterized in that the frequency responses of the three filters in the time-frequency enhancement are determined by the spectral characteristics of the averaged signal and by the third-order components that the nonlinear operation generates from it.
8. The audio processing method for enhancing three-dimensional sound effects according to claim 3, characterized in that the frequency responses of the two filters in the binaural-synthesis enhancement are determined by the interaural level difference (ILD) and interaural time difference (ITD) of the predetermined sound-image position.
9. The audio processing method for enhancing three-dimensional sound effects according to claim 3, characterized in that the unit delay time in the binaural-synthesis enhancement is determined by the time difference for sound to travel from one ear to the other for the predetermined sound-image position.
10. The audio processing method for enhancing three-dimensional sound effects according to claim 1, characterized in that a volume-regulation stage is added after the binaural-synthesis enhancement.
CNB200510102181XA 2005-12-09 2005-12-09 Audio processing method for enhancing three-dimensional sound effects Active CN100539741C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB200510102181XA CN100539741C (en) 2005-12-09 2005-12-09 Audio processing method for enhancing three-dimensional sound effects

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB200510102181XA CN100539741C (en) 2005-12-09 2005-12-09 Audio processing method for enhancing three-dimensional sound effects

Publications (2)

Publication Number Publication Date
CN1794887A (application publication) 2006-06-28
CN100539741C (grant publication) 2009-09-09

Family

ID=36806091

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB200510102181XA Active CN100539741C (en) 2005-12-09 2005-12-09 Audio processing method for enhancing three-dimensional sound effects

Country Status (1)

Country Link
CN (1) CN100539741C (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI554943B (en) * 2015-08-17 2016-10-21 李鵬 Method for audio signal processing and system thereof

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104754241A (en) * 2013-12-31 2015-07-01 广州励丰文化科技股份有限公司 Panoramic multichannel audio frequency control method based on variable domain acoustic images
CN104754241B (en) * 2013-12-31 2017-10-13 广州励丰文化科技股份有限公司 Panorama multi-channel audio control method based on variable domain acoustic image
WO2016197478A1 (en) * 2015-06-12 2016-12-15 青岛海信电器股份有限公司 Method and system for eliminating crosstalk
CN105049975A (en) * 2015-07-13 2015-11-11 青岛歌尔声学科技有限公司 Earphone and design method of earphone
WO2017215237A1 (en) * 2016-06-12 2017-12-21 深圳奥尼电子股份有限公司 3d sound effect processing circuit for earphone or earplug, and processing method therefor
CN107889044A (en) * 2017-12-19 2018-04-06 维沃移动通信有限公司 The processing method and processing device of voice data
CN107889044B (en) * 2017-12-19 2019-10-15 维沃移动通信有限公司 The processing method and processing device of audio data
WO2022088425A1 (en) * 2020-10-28 2022-05-05 歌尔股份有限公司 Control method for audio component and intelligent head-mounted device

Also Published As

Publication number Publication date
CN100539741C (en) 2009-09-09

Similar Documents

Publication Publication Date Title
CN1875656B (en) Audio frequency reproduction system and method for producing surround sound from front located loudspeakers
CN102165797B (en) Apparatus and method for determining spatial output multi-channel audio signal
Jot et al. Digital signal processing issues in the context of binaural and transaural stereophony
CN100539741C (en) Audio processing method for enhancing three-dimensional sound effects
US8571232B2 (en) Apparatus and method for a complete audio signal
CN1863416A (en) Audio device and method for generating surround sound
CN1135904C (en) Sound image localizing device
CN1901761A (en) Method and apparatus to reproduce wide mono sound
CN1129346C (en) Method and device for producing multi-way sound channel from single sound channel
CN1976546A (en) Apparatus and method for reproducing expanded sound using mono speaker
US8259960B2 (en) Phase layering apparatus and method for a complete audio signal
CN107534825A (en) Audio signal processor and method
CN109089203A (en) The multi-channel signal conversion method and car audio system of car audio system
CN1126431C (en) A mono-stereo conversion device, an audio reproduction system using such a device and a mono-stereo conversion method
CN1929698A (en) Sound reproduction apparatus and method of enhancing low frequency component
EP1558061A2 (en) Sound Feature Positioner
CN100364367C (en) Dynamic 3D stereo effect processing system
US7502477B1 (en) Audio reproducing apparatus
Jot et al. Binaural concert hall simulation in real time
CN114363793B (en) System and method for converting double-channel audio into virtual surrounding 5.1-channel audio
JP3686989B2 (en) Multi-channel conversion synthesizer circuit system
CN1528105A (en) Method of generating a left modified and a right modified audio signal for a stereo system
JP2003179999A (en) Multichannel conversion synthesize circuit system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C56 Change in the name or address of the patentee

Owner name: ANKAI (GUANGZHOU) MICROELECTRONICS TECHNOLOGY CO., LTD.

Free format text: FORMER NAME: ANKAI (GUANGZHOU) SOFTWARE TECHNOLOGY CO., LTD.

CP03 Change of name, title or address

Address after: 301-303, 401-402, Zone C1, No. 182 Science Avenue, Science City, Guangzhou High-tech Industrial Development Zone, Guangdong

Patentee after: Anyka (Guangzhou) Microelectronics Technology Co., Ltd.

Address before: Building 6, No. 1033 Gaotang New District, Tianhe Software Park, Science and Technology Park, Guangzhou, Guangdong Province

Patentee before: Anyka (Guangzhou) Software Technology Co., Ltd.

CP01 Change in the name or title of a patent holder

Address after: 510663 301-303, 401-402, zone C1, 182 science Avenue, Science City, Guangzhou high tech Industrial Development Zone, Guangdong Province

Patentee after: Guangzhou Ankai Microelectronics Co.,Ltd.

Address before: 510663 301-303, 401-402, zone C1, 182 science Avenue, Science City, Guangzhou high tech Industrial Development Zone, Guangdong Province

Patentee before: ANYKA (GUANGZHOU) MICROELECTRONICS TECHNOLOGY Co.,Ltd.

CP02 Change in the address of a patent holder

Address after: 510555 No. 107 Bowen Road, Huangpu District, Guangzhou, Guangdong

Patentee after: Guangzhou Ankai Microelectronics Co.,Ltd.

Address before: 510663 301-303, 401-402, zone C1, 182 science Avenue, Science City, Guangzhou high tech Industrial Development Zone, Guangdong Province

Patentee before: Guangzhou Ankai Microelectronics Co.,Ltd.