CN106255031A - Virtual sound field generator and virtual sound field production method - Google Patents
- Publication number
- CN106255031A (application CN201610597728.6A)
- Authority
- CN
- China
- Prior art keywords
- sound field
- user
- space
- loudspeaker array
- sound
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S3/00—Systems employing more than two channels, e.g. quadraphonic
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Stereophonic System (AREA)
Abstract
This application discloses a virtual sound field generation device and a virtual sound field production method. The device includes: a sound source input unit for receiving media sound from an external source; a positioning system for locating the position of each user's face and of the loudspeaker array in a space, and for computing and calibrating an acoustic model of the space; a sound field controller for controlling the sound based on the acoustic model of the space and a specific sound field mode; and a sound field output unit, connected to the loudspeaker array, for delivering the sound controlled by the sound field controller to the loudspeaker array. With the device according to the above embodiments, multiple virtual sound fields that automatically follow each user's position can be produced in a space, for users to select and use. The sound field follows the user automatically, without head-worn equipment such as earphones, enabling unconstrained listening.
Description
Technical field
The disclosure relates generally to the technical field of sound fields, and specifically to a virtual sound field generation device and a virtual sound field production method.
Background technology
With the development of audio-visual entertainment technology, audiovisual activities face increasingly diverse demands. For example, while one person watches a television programme, other people in the same space may be engaged in other activities, such as resting or sleeping. If the sound is too loud it disturbs them; if it is too quiet, the viewer's own experience suffers or the programme cannot be heard clearly. Wearing earphones avoids disturbing others, but earphones press on the head and ears, degrading comfort.
In addition, although a television can display multiple pictures at once, when several people watch simultaneously and wish to see different channels, the prior art can split the picture so that different viewers see different content, but splitting the sound at the same time is difficult. In theory each viewer could wear earphones to separate the sound, but the inconvenience of multiple earphones and the length limits of their connecting cables degrade the listening experience.
Furthermore, during motion-sensing game activities, the virtual sound field created by the game fills the whole room; because the sound field is dispersed, the perceived effect is degraded.
Summary of the invention
In view of the above drawbacks and deficiencies of the prior art, it is desirable to provide a technical scheme that overcomes them.
In a first aspect of the present invention, a virtual sound field generation device is provided, including:
a sound source input unit for receiving media sound from an external source;
a positioning system for locating the position of each user's face and of the loudspeaker array in a space, and for computing and calibrating an acoustic model of the space;
a sound field controller for controlling the media sound based on the acoustic model of the space and a specific sound field mode;
a sound field output unit, connected to the loudspeaker array, for delivering the media sound controlled by the sound field controller to the loudspeaker array.
Preferably, the positioning system includes a camera and a microphone array. The microphone array receives calibration signals emitted by the loudspeaker array to determine the acoustic model of the space; the camera determines the position of each user's face and of the loudspeaker array based on images of the users, the loudspeaker array and the microphone array, together with the acoustic model of the space.
Preferably, the sound field modes include a panoramic mode, an exclusive mode and a comprehensive mode. In the panoramic mode, the loudspeaker array operates as a multi-channel audio system, and the sound field controller drives it to establish a sound field that fills the whole space, centred on the centre of the room. In the exclusive mode, the sound field controller drives the loudspeaker array to establish a separate sound field centred on each user in the space, and the sound fields established for different users do not interfere with one another. In the comprehensive mode, the sound field controller drives the loudspeaker array to establish a multi-dimensional sound field centred on one user in the space, for that user to select from.
Preferably, the virtual sound field generation device further includes a sound field mode input terminal for inputting either a sound field mode selected by the user or a sound field mode determined automatically from the media sound content being played.
Further, the default configuration of the specific sound field mode of the device is the panoramic mode.
Preferably, the acoustic model of the space is updated as the user's position changes.
Optionally, any of the above devices further includes the loudspeaker array itself, connected to the sound field output unit to play the media sound.
In a second aspect of the present invention, a virtual sound field production method is also provided, including:
receiving media sound from an external source;
locating the position of each user's face and of the loudspeaker array in a space, and computing and calibrating an acoustic model of the space;
controlling the media sound based on the acoustic model of the space and a specific sound field mode;
outputting the controlled media sound.
Preferably, the method further includes: determining the acoustic model of the space, and determining the position of each user's face and of the loudspeaker array based on that acoustic model and on images of the users, the loudspeaker array and the microphone array.
Preferably, the sound field modes include a panoramic mode, an exclusive mode and a comprehensive mode. In the panoramic mode, the loudspeaker array operates as a multi-channel audio system, and the sound field controller drives it to establish a sound field filling the whole space, centred on the centre of the room. In the exclusive mode, the sound field controller drives the loudspeaker array to establish a sound field centred on each user in the space, the sound fields established for different users not interfering with one another. In the comprehensive mode, the sound field controller drives the loudspeaker array to establish a multi-dimensional sound field centred on one user in the space, for that user to select from.
Preferably, the specific sound field mode is input by the user or determined automatically from the media sound content being played.
Preferably, the default configuration of the specific sound field mode is the panoramic mode.
Preferably, the acoustic model of the space is updated as the user's position changes.
With the virtual sound field generation device and production method according to embodiments of the present invention, multiple virtual sound fields that automatically follow each user's position can be produced in a space, for users to select and use. The sound field follows the user automatically, without head-worn equipment such as earphones, enabling unconstrained listening.
Brief description of the drawings
Other features, objects and advantages of the present application will become more apparent from the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:
Fig. 1 shows the application environment of a virtual sound field generation device according to an embodiment of the present invention;
Fig. 2 shows a block diagram of the virtual sound field generation device;
Fig. 3 shows the device of the above embodiment operating in the panoramic mode;
Fig. 4 shows the device of the above embodiment operating in the exclusive mode;
Fig. 5 shows the device of the above embodiment operating in the comprehensive mode;
Fig. 6 shows the device of the above embodiment performing a calibration operation;
Fig. 7 shows a flow chart of the virtual sound field production method according to the above embodiments.
Detailed description of the invention
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts relevant to the invention.
It should further be noted that, where no conflict arises, the embodiments in the application and the features within them may be combined with one another. The application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Referring to Fig. 1, which shows the application environment of a virtual sound field generation device according to an embodiment of the present invention: a virtual sound field generation device 1 is arranged in a space 100, which may be a room, a living room or another particular place. Fig. 1 shows the microphone array 6, composed of two microphones used for positioning, and a camera 5; the full composition of the device is not shown. The number of microphones 6 making up the microphone array and the number of cameras 5 are not limited to two and one respectively. Arranged on both sides of the device is the loudspeaker array, comprising a left-side array SPL1, SPL2...SPLN and a right-side array (likewise labelled SPL1, SPL2...SPLN in the figure), where N is a positive integer.
Fig. 2 shows a block diagram of the virtual sound field generation device 1. It includes a sound source input unit 102, a positioning system 104, a sound field controller 106 and a sound field output unit 108, composed as follows:
- Sound source input unit 102: connects to an external sound source through any existing connection, wired or wireless (for example an audio cable or WiFi), to input the external media sound to be played. The media sound may come, for example, from a television set, a disc player or a network audio source.
- Positioning system 104: locates the faces of the users and the position of the external loudspeaker array in the space, and from these computes and calibrates the acoustic model of the space. The acoustic model here comprises, for the current arrangement of the space, the matrix of analytic solutions describing how the calibration sound propagates from each loudspeaker of the loudspeaker array to each microphone of the microphone array 6, together with the analytic solutions describing how the calibration sound propagates from each loudspeaker to each user's face. When there are multiple users in the space, the positioning system 104 determines the face position of each user and the matrix of analytic solutions for propagation from each loudspeaker of the array to each user's head.
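As a concrete illustration, the "matrix of analytic solutions" just described can be pictured as a transfer matrix of per-path propagation delays and attenuations. This is only a minimal sketch under a free-field (direct-path, spherical-spreading) assumption; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def build_transfer_matrix(speaker_pos, receiver_pos, c=343.0):
    """Return (delays, gains): propagation delay in seconds and 1/r
    attenuation from every loudspeaker to every receiver (a microphone
    of the array or a user's face), as an (S, R) matrix each."""
    speakers = np.asarray(speaker_pos, dtype=float)    # shape (S, 3)
    receivers = np.asarray(receiver_pos, dtype=float)  # shape (R, 3)
    # Pairwise speaker-to-receiver distances, shape (S, R).
    dist = np.linalg.norm(speakers[:, None, :] - receivers[None, :, :], axis=-1)
    delays = dist / c                      # free-field propagation time
    gains = 1.0 / np.maximum(dist, 1e-6)   # spherical spreading loss
    return delays, gains

# Two loudspeakers symmetric about one microphone: equal delays.
delays, gains = build_transfer_matrix([(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
                                      [(0.0, 2.0, 0.0)])
```

A real acoustic model would also capture reflections of the room layout, which is why the text recalibrates the model whenever the arrangement of the space changes.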
- Sound field controller 106: controls the output of the sound source based on the above acoustic model of the space and a specific sound field mode.
- Sound field output unit 108: connects to the external loudspeaker array, by a wired or wireless connection, and outputs the controlled sound source to it so that the external loudspeaker array produces the virtual sound field.
With the device according to the above embodiment, multiple virtual sound fields that automatically follow each user's position can be produced in the space, for users to select and use. The sound field follows the user automatically, without head-worn equipment such as earphones, enabling unconstrained listening.
The sound field modes here include a panoramic mode, an exclusive mode and a comprehensive mode. Figs. 3, 4 and 5 respectively show the operation of the virtual sound field generation device of this embodiment under the three mode configurations.
As shown in Fig. 3, the panoramic mode is the following sound field mode: the loudspeaker array operates as ordinary multi-channel audio equipment, and the sound field controller 106 drives it to establish a sound field filling the whole space, centred on the centre of the space (e.g. of the room) or on a particular position in the space. This particular position is not tied to any user's position. The mode suits the situation where several users wish to share the same sound content.
As shown in Fig. 4, the exclusive mode is the following sound field mode: the sound field controller 106 drives the loudspeaker array to establish a separate sound field centred on each of the multiple users in the space, and the sound fields established for different users do not interfere with one another. In this mode the sound field controller 106 phase-modulates the media sound signal received through the sound source input unit 102: for example, among the sound signals sent to the loudspeaker array, the signals received by some loudspeakers are phase-shifted, so as to realize sound fields with specific centres distributed through the space. The centres are separated from one another and do not interfere. This can be achieved with various sound modulation techniques of the prior art. In this mode each user enjoys a sound field of their own, the fields do not interfere with one another, and each sound field centre moves with its user's position, avoiding the constraint that traditional headphones place on a person's position.
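One prior-art modulation technique consistent with the phase-shifting description is simple delay-and-sum focusing: each loudspeaker's feed is delayed so that all wavefronts arrive at a chosen user position in phase, adding coherently there and incoherently elsewhere. The sketch below assumes free-field propagation and ideal point sources; the names and the 48 kHz sample rate are illustrative, not from the patent.

```python
import numpy as np

def focus_delays(speaker_pos, target, c=343.0, fs=48000):
    """Per-speaker delays, in samples, that time-align every
    loudspeaker's wavefront at `target`."""
    d = np.linalg.norm(np.asarray(speaker_pos, float)
                       - np.asarray(target, float), axis=1)
    # Delay the closer speakers so all wavefronts arrive together.
    delay_s = (d.max() - d) / c
    return np.round(delay_s * fs).astype(int)

def render(signal, speaker_pos, target):
    """Return one delayed copy of `signal` per loudspeaker."""
    return [np.concatenate([np.zeros(k), signal])
            for k in focus_delays(speaker_pos, target)]

spk = [(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
n = focus_delays(spk, (1.0, 0.0, 0.0))   # focus on the second speaker's position
outs = render(np.ones(4), spk, (1.0, 0.0, 0.0))
```

Separate, non-interfering centres for several users would then superpose one such delayed feed set per user, which is one way to read the "phase modulation" the text attributes to the sound field controller.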
As shown in Fig. 5, the comprehensive mode is the following sound field mode: the sound field controller 106 drives the loudspeaker array to establish a multi-dimensional sound field centred on one user in the space, for that user to select from. For example, the controller may establish sound fields with five different effects centred on one user, and the user can select any one of them, or a superposition of several, to use. In this mode a single user has multiple sound field effects to choose from, creating a richer experience; for immersive motion-sensing games this provides more flexible choices.
The positioning system 104 described above includes a camera and a microphone array. The microphone array determines the acoustic model of the space by receiving the calibration sound signals emitted by the loudspeaker array. The camera identifies images of the users, the loudspeaker array and the microphone array 6. The positioning system 104 determines the acoustic model of the space by computing, for the current arrangement of the space, the analytic solutions for sound propagation from each loudspeaker of the array to each microphone of the microphone array 6, and the analytic solutions for propagation of the calibration sound from each loudspeaker to each user's face. Based on this acoustic model, it determines the positions of the users' faces, of the loudspeaker array and of the microphone array 6. The position of a face here may be the pointing direction of the face location obtained from the captured image, i.e. position is replaced by the two-dimensional pointing direction obtained from the image. For example, taking the centre of the loudspeaker array as the position reference, the position of each user's face can be characterized by the direction pointing from the loudspeaker array to that face. The calibration sound may also be ordinary audible sound.
Fig. 6 shows the device of the above embodiment performing a calibration operation. During calibration, the sound field controller 106 drives the loudspeaker array to emit a calibration sound signal, which propagates through the air of the space to the microphone array 6 (i.e. the multiple microphones 6) and is captured by it. From the propagation speed of sound and the propagation time of the signal, the sound propagation characteristics from each loudspeaker of the array to each microphone of the microphone array 6 under the current arrangement of the space can be obtained, i.e. the analytic solutions relating to the loudspeaker array.
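The speed-times-time computation just described can be sketched as a cross-correlation between the emitted calibration signal and the microphone capture: the lag of the correlation peak gives the propagation time, and multiplying by the speed of sound gives the loudspeaker-to-microphone distance. The noise-burst waveform, sample rate and names below are assumptions for illustration.

```python
import numpy as np

def estimate_distance(played, recorded, fs=48000, c=343.0):
    """Estimate the speaker-to-mic distance from the lag that maximizes
    the cross-correlation of the emitted and captured calibration signal."""
    corr = np.correlate(recorded, played, mode="full")
    lag = corr.argmax() - (len(played) - 1)   # delay in samples
    return max(lag, 0) / fs * c

fs = 48000
chirp = np.random.default_rng(0).standard_normal(fs // 10)  # 100 ms noise burst
true_delay = 140                       # samples: about 1 m at 343 m/s
recorded = np.concatenate([np.zeros(true_delay), chirp])    # idealized capture
d = estimate_distance(chirp, recorded, fs)
```

In practice the capture would contain noise and reflections, but the direct-path peak of the correlation carries the same time-of-flight information.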
In addition, the camera of the positioning system 104 (camera 5) photographs and recognizes the loudspeaker array and microphone array arranged in the space, and the images of the users' faces, so that the relative positions of the loudspeaker array, the microphone apparatus and the users' faces can be known. The image recognition of the loudspeaker array, microphone array and faces can be carried out with prior-art techniques. The positioning system 104 combines the spatial propagation information about the loudspeaker array obtained through the microphone array with the relative position information obtained through the camera, obtaining the analytic solutions for sound propagation from each loudspeaker of the array to each user's face, i.e. the analytic solutions relating to each user, and thereby determines the complete acoustic model of the space. This complete model comprises, for the current arrangement of the space, the matrix of analytic solutions for propagation from each loudspeaker to each microphone of the microphone array 6, and the matrix of analytic solutions for propagation from each loudspeaker to the users' heads. The acoustic model of the space is thus a multivariate model. When a change in the layout of the space alters the acoustic transmission paths, the model changes; when a user's position changes, the model changes too. That is, the acoustic model of the space is updated as the users' positions change. This makes it possible to update the sound field centre according to each user's real-time position, realizing sound field tracking.
Based on the above acoustic model of the space, the sound field controller regulates the output of the sound field output unit and, in combination with the specific sound field mode, controls the media sound delivered to the loudspeaker array.
The camera of the positioning system 104 identifies the number of users in the space and configures the sound field mode automatically. When only one user is identified in the space, the device defaults to the panoramic mode, i.e. a virtual sound field centred on the centre of the space or on a particular position in it. When two or more users are identified, the device defaults to the exclusive mode. In one embodiment, the sound field mode can also be selected by the user; preferably the device has a sound field mode input terminal through which the user inputs the selected mode, for example choosing the comprehensive or panoramic mode according to personal preference. The terminal can also be used to input a sound field mode determined automatically from the sound content to be played. For example, during a motion-sensing game, a game device may output, according to the game content, a sound field mode of exclusive (or panoramic). In that case the sound field mode input terminal of the generation device can be connected to the sound field mode output of the game device, receive the exclusive-mode (or panoramic-mode) input for the external media content, and replace the internal sound field mode control of the sound field controller 106; the controller then operates the loudspeaker array according to the spatial acoustic model and the externally input mode.
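The automatic configuration logic just described can be summarized in a small selection function. The precedence order below (external device mode over user choice over user-count default) is an assumption consistent with the text, which says an externally supplied mode replaces the internal control; the names are illustrative.

```python
def select_mode(num_users, user_choice=None, external_mode=None):
    """Pick a sound field mode: an externally supplied mode (e.g. from a
    game device) wins, then an explicit user choice; otherwise default by
    headcount: one detected user -> panoramic, two or more -> exclusive."""
    if external_mode is not None:
        return external_mode
    if user_choice is not None:
        return user_choice
    return "panoramic" if num_users <= 1 else "exclusive"
```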
In one embodiment, the virtual sound field generation device also includes the loudspeaker array, which receives the sound signals output by the sound field output unit 108.
Fig. 7 shows the virtual sound field production method according to an embodiment of the present invention, comprising the following steps:
S702: receive media sound from an external source;
S704: locate the positions of the users' faces and of the loudspeaker array in the space, and compute and calibrate the acoustic model of the space;
S706: control the media sound output based on the acoustic model of the space and the specific sound field mode;
S708: output the controlled media sound.
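Steps S702 to S708 can be tied together in a sketch of the overall flow. All component interfaces here are hypothetical stand-ins for the units of Fig. 2, not APIs from the patent.

```python
class _Stubs:
    """Hypothetical stand-ins for the units of Fig. 2, for illustration."""
    class Source:
        def receive(self):            # S702: media sound from outside
            return [1, 2, 3]
    class Positioning:
        def calibrate(self):          # S704: locate users, calibrate model
            return {"mode": "panoramic", "gain": 2}
    class Controller:
        def control(self, audio, model):  # S706: apply the sound field mode
            return [s * model["gain"] for s in audio]
    class Output:
        def emit(self, audio):        # S708: drive the loudspeaker array
            self.sent = audio

def produce_virtual_sound_field(source, positioning, controller, output):
    audio = source.receive()                       # S702
    model = positioning.calibrate()                # S704
    controlled = controller.control(audio, model)  # S706
    output.emit(controlled)                        # S708
    return controlled

out = _Stubs.Output()
result = produce_virtual_sound_field(_Stubs.Source(), _Stubs.Positioning(),
                                     _Stubs.Controller(), out)
```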
Preferably, the method further includes: determining the acoustic model of the space, and determining the positions of the users' faces, the loudspeaker array and the microphone array based on that acoustic model and on the images, captured by the camera 5, of the users and the loudspeaker array.
Preferably, the sound field modes include the panoramic, exclusive and comprehensive modes, whose operation is described above.
Other preferred embodiments of the virtual sound field production method can be obtained from the description above.
It should be noted that although the composition of the assemblies of the invention and the operations of the method are described in the drawings in a particular order, this does not require or imply that the operations must be performed in that particular order, or that all of the operations shown must be performed, in order to achieve the desired result. On the contrary, the steps depicted in the flow chart may change their order of execution; additionally or alternatively, several steps may be merged into one, one step may be decomposed into several, and the execution of some steps may be omitted.
Claims (13)
1. A virtual sound field generation device, characterized by comprising:
a sound source input unit for receiving media sound from an external source;
a positioning system for locating the position of each user's face and of the loudspeaker array in a space, and for computing and calibrating an acoustic model of the space;
a sound field controller for controlling the media sound based on the acoustic model of the space and a specific sound field mode; and
a sound field output unit, connected to the loudspeaker array, for delivering the media sound controlled by the sound field controller to the loudspeaker array.
2. The virtual sound field generation device according to claim 1, characterized in that the positioning system includes a camera and a microphone array; the microphone array receives calibration signals emitted by the loudspeaker array to determine the acoustic model of the space, and the camera determines the position of each user's face and of the loudspeaker array based on captured images of the users, the loudspeaker array and the microphone array, together with the acoustic model of the space.
3. The virtual sound field generation device according to claim 1 or 2, characterized in that the sound field modes include a panoramic mode, an exclusive mode and a comprehensive mode; in the panoramic mode, the loudspeaker array operates as a multi-channel audio system, and the sound field controller drives it to establish a sound field filling the whole space, centred on the centre of the room; in the exclusive mode, the sound field controller drives the loudspeaker array to establish a sound field centred on each user in the space, the sound fields established for different users not interfering with one another; in the comprehensive mode, the sound field controller drives the loudspeaker array to establish a multi-dimensional sound field centred on one user in the space, for that user to select from.
4. The virtual sound field generation device according to claim 2, characterized in that it further includes a sound field mode input terminal for inputting either a sound field mode selected by the user or a sound field mode determined automatically from the media sound content being played.
5. The virtual sound field generation device according to claim 1, characterized in that the default configuration of the specific sound field mode of the device is the panoramic mode.
6. The virtual sound field generation device according to claim 1, characterized in that the acoustic model of the space is updated as the user's position changes.
7. The virtual sound field generation device according to any one of claims 1-6, characterized in that it further includes the loudspeaker array, connected to the sound field output unit to play the sound.
8. A virtual sound field production method, characterized by comprising:
receiving sound from an external source;
locating the position of each user's face and of the loudspeaker array in a space, and computing and calibrating an acoustic model of the space;
controlling the sound based on the acoustic model of the space and a specific sound field mode; and
outputting the controlled sound.
9. The virtual sound field production method according to claim 8, characterized in that the method further includes: determining the acoustic model of the space, and determining the position of each user's face and of the loudspeaker array based on that acoustic model and on images of the users, the loudspeaker array and the microphone array.
10. The virtual sound field production method according to claim 8 or 9, characterized in that the sound field modes include a panoramic mode, an exclusive mode and a comprehensive mode; in the panoramic mode, the loudspeaker array operates as a multi-channel audio system, and the sound field controller drives it to establish a sound field filling the whole space, centred on the centre of the space; in the exclusive mode, the sound field controller drives the loudspeaker array to establish a sound field centred on each user in the space, the sound fields established for different users not interfering with one another; in the comprehensive mode, the sound field controller drives the loudspeaker array to establish a multi-dimensional sound field centred on one user in the space, for that user to select from.
11. The virtual sound field production method according to claim 9, characterized in that the specific sound field mode is input by the user or determined automatically from the media sound content being played.
12. The virtual sound field production method according to claim 8, characterized in that the default configuration of the specific sound field mode is the panoramic mode.
13. The virtual sound field production method according to claim 8, characterized in that the acoustic model of the space is updated as the user's position changes.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610597728.6A CN106255031B (en) | 2016-07-26 | 2016-07-26 | Virtual sound field generation device and virtual sound field production method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106255031A true CN106255031A (en) | 2016-12-21 |
CN106255031B CN106255031B (en) | 2018-01-30 |
Family
ID=57604788
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610597728.6A Active CN106255031B (en) | 2016-07-26 | 2016-07-26 | Virtual sound field generation device and virtual sound field production method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106255031B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102547533A (en) * | 2010-11-05 | 2012-07-04 | 索尼公司 | Acoustic control apparatus and acoustic control method |
GB2495623A (en) * | 2011-10-10 | 2013-04-17 | Korea Electronics Telecomm | Preventing excessive output of energy when reproducing a three dimensional (3D) sound field |
CN104125524A (en) * | 2013-04-23 | 2014-10-29 | 华为技术有限公司 | Sound effect adjustment method, apparatus and devices |
CN104335605A (en) * | 2012-06-06 | 2015-02-04 | 索尼公司 | Audio signal processing device, audio signal processing method, and computer program |
WO2015035492A1 (en) * | 2013-09-13 | 2015-03-19 | Mixgenius Inc. | System and method for performing automatic multi-track audio mixing |
CN104581604A (en) * | 2013-10-17 | 2015-04-29 | 奥迪康有限公司 | Method for reproducing acoustical sound field |
CN104756526A (en) * | 2012-11-02 | 2015-07-01 | 索尼公司 | Signal processing device, signal processing method, measurement method, and measurement device |
CN105325014A (en) * | 2013-05-02 | 2016-02-10 | 微软技术许可有限责任公司 | Sound field adaptation based upon user tracking |
- 2016-07-26: CN application CN201610597728.6A filed; granted as CN106255031B (status: Active)
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109151671A (en) * | 2017-06-15 | 2019-01-04 | 宏达国际电子股份有限公司 | Apparatus for processing audio, audio-frequency processing method and computer program product |
CN109151671B (en) * | 2017-06-15 | 2021-06-01 | 宏达国际电子股份有限公司 | Audio processing apparatus, audio processing method, and computer program product |
CN107464552A (en) * | 2017-08-24 | 2017-12-12 | 徐银海 | A kind of distributed locomotive active noise reduction system and method |
CN112119646A (en) * | 2018-05-22 | 2020-12-22 | 索尼公司 | Information processing apparatus, information processing method, and program |
CN112119646B (en) * | 2018-05-22 | 2022-09-06 | 索尼公司 | Information processing apparatus, information processing method, and computer-readable storage medium |
US11463836B2 (en) | 2018-05-22 | 2022-10-04 | Sony Corporation | Information processing apparatus and information processing method |
CN110677771A (en) * | 2019-11-05 | 2020-01-10 | 常州听觉工坊智能科技有限公司 | Wireless multi-channel sound system and automatic sound channel calibration method thereof |
CN111954146A (en) * | 2020-07-28 | 2020-11-17 | 贵阳清文云科技有限公司 | Virtual sound environment synthesizing device |
CN111954146B (en) * | 2020-07-28 | 2022-03-01 | 贵阳清文云科技有限公司 | Virtual sound environment synthesizing device |
WO2022153359A1 (en) * | 2021-01-12 | 2022-07-21 | 日本電信電話株式会社 | Microphone position presentation method, device therefor, and program |
GB2616073A (en) * | 2022-02-28 | 2023-08-30 | Audioscenic Ltd | Loudspeaker control |
Also Published As
Publication number | Publication date |
---|---|
CN106255031B (en) | 2018-01-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106255031B (en) | Virtual sound field generation device and virtual sound field production method | |
US9983846B2 (en) | Systems, methods, and apparatus for recording three-dimensional audio and associated data | |
US20220159380A1 (en) | Microphone for recording multi-dimensional acoustic effects | |
CN104604258B (en) | Bi-directional interconnect for communication between a renderer and an array of individually addressable drivers | |
CN104604256B (en) | The reflected sound of object-based audio is rendered | |
CN101681663B (en) | A device for and a method of processing audio data | |
JP4694763B2 (en) | Headphone device | |
JP2011515942A (en) | Object-oriented 3D audio display device | |
JP2015092695A (en) | Audio animation system | |
US9788134B2 (en) | Method for processing of sound signals | |
JP6111611B2 (en) | Audio amplifier | |
WO2022004421A1 (en) | Information processing device, output control method, and program | |
CN107211230A (en) | Sound reproduction system | |
US10419870B1 (en) | Applying audio technologies for the interactive gaming environment | |
JP2006287878A (en) | Portable telephone terminal | |
US20050122067A1 (en) | Action seating | |
KR20180127396A (en) | A method for reproducing sound in consideration of individual requirements | |
WO2022124084A1 (en) | Reproduction apparatus, reproduction method, information processing apparatus, information processing method, and program | |
US20230336916A1 (en) | Themed ornaments with internet radio receiver | |
JP6817282B2 (en) | Voice generator and voice generator | |
Neoran et al. | Virtual reality music in the real world | |
Digenis | Challenges of the headphone mix in games | |
CN102413398A (en) | Voice direction or stereophonic effect adjustable earphone | |
CN104618837B (en) | Loudspeaker box control method and system of film and television drop tower | |
CN102395086B (en) | Earphone with adjustable voice direction and stereo effect |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant ||
TR01 | Transfer of patent right ||
Effective date of registration: 2018-11-02
Address after: 100080 10-031 10, B block 3, Haidian street, Haidian District, Beijing
Co-patentee after: Nanjing Horizon Robot Technology Co., Ltd.
Patentee after: Beijing Horizon Information Technology Co., Ltd.
Address before: 100080 10-031 10, B block 3, Haidian street, Haidian District, Beijing
Patentee before: Beijing Horizon Information Technology Co., Ltd.