CN106060758B - The processing method of virtual reality sound field metadata - Google Patents

The processing method of virtual reality sound field metadata

Info

Publication number
CN106060758B
CN106060758B (application CN201610391252.0A)
Authority
CN
China
Prior art keywords
audio object
point
moment
audio
virtual reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610391252.0A
Other languages
Chinese (zh)
Other versions
CN106060758A (en)
Inventor
杨洪旭
孙学京
张晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Tuoling Inc
Original Assignee
Beijing Tuoling Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Tuoling Inc filed Critical Beijing Tuoling Inc
Priority to CN201610391252.0A (patent CN106060758B)
Publication of CN106060758A
Application granted
Publication of CN106060758B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04S — STEREOPHONIC SYSTEMS
    • H04S 5/00 — Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
    • H04S 5/005 — Pseudo-stereo systems of the pseudo five- or more-channel type, e.g. virtual surround

Abstract

The invention discloses a method for processing virtual reality sound field metadata, comprising the following steps: determine the motion pattern of an audio object; if the audio object moves along a straight line, set the motion-pattern parameter m = 0; if it moves along a curve, set m = 1. When m = 0, express the position information of the audio object in a rectangular coordinate system, and process the trajectory of the audio object in the rectangular coordinate system as well. When m = 1, express the position information of the audio object in a polar coordinate system, and process the trajectory of the audio object in the polar coordinate system as well. Then generate virtual surround sound from the position information corresponding to the trajectory of the audio object. When handling complex objects whose motion mixes straight-line and curved segments, the method reproduces the direction and trajectory of an audio object more faithfully, improving immersion in virtual reality: the result is more realistic, more precise and more vivid.

Description

The processing method of virtual reality sound field metadata
Technical field
The present invention relates to the field of signal processing, and in particular to a method for processing virtual reality sound field metadata.
Background technology
When content is presented to a user through a virtual reality head-mounted display (HMD), the audio content is played to the user through stereo headphones. The problem then faced is how to improve the virtual surround effect. In virtual reality applications, when audio content is played through stereo headphones, the aim of virtual 3D audio is to achieve an effect in which the user hears the sound as if through a loudspeaker array (such as 5.1 or 7.1), or even as if hearing real-world sound.
When producing virtual reality audio content, a number of sound sources are generally distributed at different positions and, together with the environment, form a sound field. The user's head movements are tracked (head tracking) and the sound is processed accordingly. For example, if the original sound is perceived by the user as coming from the front, then after the user turns the head 90 degrees to the left, the sound should be processed so that the user perceives it as coming from 90 degrees to the right.
The virtual reality device here may be of many types, for example a display device with head tracking, or simply a pair of stereo headphones fitted with a head-tracking sensor.
There are also several ways to implement head tracking. A relatively common one is to use multiple sensors. A motion-sensor package usually contains an accelerometer, a gyroscope and a magnetometer. Each kind of sensor has its own inherent strengths and weaknesses in motion tracking and absolute orientation. Good practice is therefore sensor fusion: the signals from the individual sensors are combined to produce a more accurate motion-detection result.
After the head rotation angle has been obtained, the sound must be transformed accordingly. There are several ways to generate a virtual reality sound field. One is to filter each audio object, according to its position information, with the corresponding HRTF (Head-Related Transfer Function) filter, obtaining virtual surround sound; the time-domain counterpart of the HRTF is called the HRIR (Head-Related Impulse Response). Another is to convert the sound into a B-format signal in the ambisonic domain, convert that B-format signal into virtual loudspeaker array signals, and filter the virtual loudspeaker array signals with HRTF filters to obtain virtual surround sound.
It can be seen that both approaches require the position information of an audio object in space to be defined, designed and processed. A common practice is to express a position in space in a rectangular coordinate system as (x, y, z). Suppose the entire sound field space is represented by a cube of side length 2 (2 × 2 × 2), and the listener's head is assumed to be at the centre of the cube, which is the origin (0, 0, 0) of the rectangular coordinate system, as shown in Fig. 1. If the coordinates of an audio object in this sound field are (xi, yi, zi), the following examples apply:
An audio object at the left edge of the sound field has coordinates (-1, 0, 0);
An audio object at the front edge of the sound field has coordinates (0, 1, 0);
An audio object at the top edge of the sound field has coordinates (0, 0, 1);
An audio object at the front-right edge of the sound field has coordinates (1, 1, 0);
As shown in Fig. 2, the three coordinate components must satisfy:
-1<=xi<=1
-1<=yi<=1
-1<=zi<=1
This way of defining an audio object's position information is well suited to objects that move along straight paths. For example, if an object moves linearly from (x1, y1, z1) at time t1 to (x2, y2, z2) at time t2, then the position Point_i of the object at any time ti between t1 and t2 can be expressed by linear interpolation:
Point_i = Point_1 + (Point_2 - Point_1) × (ti - t1)/(t2 - t1);
where Point_i denotes (xi, yi, zi), Point_1 denotes (x1, y1, z1), and Point_2 denotes (x2, y2, z2).
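The interpolation formula can be transcribed directly; the function name is illustrative, not from the patent:

```python
# Linear interpolation of an audio object's position between two keyframes,
# a direct transcription of the formula above.

def lerp_position(p1, p2, t1, t2, ti):
    """Return Point_i = Point_1 + (Point_2 - Point_1) * (ti - t1) / (t2 - t1)."""
    frac = (ti - t1) / (t2 - t1)
    return tuple(a + (b - a) * frac for a, b in zip(p1, p2))

# Object moving from the left edge to the right edge of the sound field:
mid = lerp_position((-1.0, 0.0, 0.0), (1.0, 0.0, 0.0), t1=0.0, t2=2.0, ti=1.0)
print(mid)  # (0.0, 0.0, 0.0)
```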
If, however, the audio object moves along a curve, this Cartesian definition of the position metadata and its interpolation method are far less suitable. Suppose, for example, that the audio object travels a full circle around the centre point, with four positions marked at the front, right, back and left (see Fig. 3):
front: Point_1 (0, 1, 0)
right: Point_2 (1, 0, 0)
back: Point_3 (0, -1, 0)
left: Point_4 (-1, 0, 0)
The trajectory actually obtained by interpolation is then not a circle but a sequence of line segments forming a diamond, as shown in Fig. 3. This deviation of the trajectory distorts the virtual reality audio content and degrades the audio effect.
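The diamond artifact is easy to verify numerically: interpolating halfway between two adjacent waypoints of the unit circle yields a point at distance √2/2 ≈ 0.707 from the centre rather than 1. A small sketch (names illustrative):

```python
import math

def lerp(p1, p2, frac):
    return tuple(a + (b - a) * frac for a, b in zip(p1, p2))

# Four waypoints of a circle of radius 1 (front, right, back, left):
pts = [(0.0, 1.0, 0.0), (1.0, 0.0, 0.0), (0.0, -1.0, 0.0), (-1.0, 0.0, 0.0)]

# Halfway between "front" and "right" the interpolated point falls inside
# the circle: its distance from the centre is sqrt(2)/2, not 1.
m = lerp(pts[0], pts[1], 0.5)
print(math.dist(m, (0.0, 0.0, 0.0)))  # ≈ 0.7071
```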
To solve this technical problem, the prior art samples the trajectory relatively densely in time so as to approximate the curved motion. The drawback is that the trajectory is only an approximation of the curve and remains imperfect, while the dense sampling makes the amount of metadata too large and occupies more bandwidth.
Summary of the invention
The object of the present invention is to provide a method for processing virtual reality sound field metadata that describes the curved trajectories of audio objects in a virtual reality sound field, making the metadata more efficient while producing smoother trajectories when an audio object moves along a curve in space; by further adopting a hybrid (mixed) mode, the method is compatible with both straight-line and curved motion of audio objects.
To achieve the above object, the method for processing virtual reality sound field metadata of the present invention comprises the following steps:
Determine the motion pattern of the audio object; if the audio object moves along a straight line, set the motion-pattern parameter m = 0; if it moves along a curve, set m = 1;
When m = 0, express the position information of the audio object in a rectangular coordinate system, and process the trajectory of the audio object in the rectangular coordinate system as well;
When m = 1, express the position information of the audio object in a polar coordinate system, and process the trajectory of the audio object in the polar coordinate system as well;
Generate virtual surround sound from the position information corresponding to the trajectory of the audio object.
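The hybrid dispatch in these steps can be sketched as follows; the function and parameter names are illustrative, not from the patent. The linear-interpolation formula is identical in both modes; the m flag only records how the coordinate triple is to be interpreted:

```python
# Sketch of the m-flag dispatch: m = 0 -> rectangular (x, y, z),
# m = 1 -> polar (rho, theta, phi). Names are illustrative.

def interpolate(point_1, point_2, t1, t2, ti, m):
    """Interpolate a position at time ti; the same linear formula applies in
    either coordinate system, m only documents how the triple is read."""
    assert m in (0, 1)
    frac = (ti - t1) / (t2 - t1)
    return tuple(a + (b - a) * frac for a, b in zip(point_1, point_2))

# m = 0: straight-line motion, rectangular keyframes
print(interpolate((0, 1, 0), (1, 1, 0), 0.0, 1.0, 0.5, m=0))  # (0.5, 1.0, 0.0)
# m = 1: circular motion, polar keyframes (rho, theta, phi)
print(interpolate((1, 0, 0), (1, 90, 0), 0.0, 1.0, 0.5, m=1))  # (1.0, 45.0, 0.0)
```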
When m = 0, obtain the rectangular coordinates Point_1 of the audio object at time t1 and the rectangular coordinates Point_2 of the audio object at time t2, and let Point_i be the rectangular coordinates of the audio object at time ti, where ti is any time between t1 and t2;
The specific position coordinates of Point_i are obtained by linear interpolation, with the formula:
Point_i = Point_1 + (Point_2 - Point_1) × (ti - t1)/(t2 - t1).
When m = 1, obtain the polar coordinates Point'_1 of the audio object at time t1 and the polar coordinates Point'_2 of the audio object at time t2, and let Point'_i be the polar coordinates of the audio object at time ti, where ti is any time between t1 and t2;
The specific position coordinates of Point'_i are obtained by linear interpolation, with the formula:
Point'_i = Point'_1 + (Point'_2 - Point'_1) × (ti - t1)/(t2 - t1).
The step of generating virtual surround sound from the position information corresponding to the trajectory of the audio object specifically comprises:
If m = 1, obtain the polar coordinates Point'_i of the audio object at time ti;
If m = 0, convert the rectangular coordinates Point_i obtained by linear interpolation for the audio object at time ti into the polar coordinates Point'_i of the audio object at time ti;
According to the polar coordinates Point'_i of the audio object at time ti, encode the audio object into a higher-order B-format signal, and convert the B-format signal into virtual loudspeaker array signals;
Transcode the virtual loudspeaker array signals of the audio object binaurally using binaural room impulse responses, obtaining a binaural virtual surround output for the audio object.
There may be one or more audio objects.
The binaural room impulse responses are generated offline, either from real measurements or by software.
The invention has the following advantages: when handling complex objects whose motion mixes straight-line and curved segments, the method reproduces the direction and trajectory of an audio object more faithfully. It improves immersion in virtual reality, giving a more realistic, more precise and more vivid result, and it makes objects with complex trajectories simpler and faster to handle, so that operating efficiency is improved along with immersion.
Brief description of the drawings
Fig. 1 is a schematic diagram of the sound field space represented in a rectangular coordinate system in the prior art.
Fig. 2 is a schematic diagram of the three coordinate components in the rectangular coordinate system of Fig. 1.
Fig. 3 is a schematic diagram of an audio object trajectory obtained by interpolation in a rectangular coordinate system in the prior art.
Fig. 4 is a schematic diagram of the sound field space represented in polar coordinates in the present invention.
Fig. 5 is a schematic diagram of the three coordinate components in the polar coordinate system of Fig. 4.
Fig. 6 is a schematic diagram of an audio object trajectory obtained by polar coordinate interpolation in the present invention.
Detailed description of the embodiments
The following embodiments illustrate the present invention but do not limit its scope.
As shown in Figs. 4-6, the present invention provides a method for processing virtual reality sound field metadata, comprising the following steps:
Determine the motion pattern of the audio object; if the audio object moves along a straight line, set the motion-pattern parameter m = 0; if it moves along a curve, set m = 1;
When m = 0, express the position information of the audio object in a rectangular coordinate system, and process the trajectory of the audio object in the rectangular coordinate system as well;
When m = 1, express the position information of the audio object in a polar coordinate system, and process the trajectory of the audio object in the polar coordinate system as well;
Generate virtual surround sound from the position information corresponding to the trajectory of the audio object.
When m = 0, obtain the rectangular coordinates Point_1 of the audio object at time t1 and the rectangular coordinates Point_2 of the audio object at time t2, and let Point_i be the rectangular coordinates of the audio object at time ti, where ti is any time between t1 and t2;
The specific position coordinates of Point_i are obtained by linear interpolation, with the formula:
Point_i = Point_1 + (Point_2 - Point_1) × (ti - t1)/(t2 - t1).
where the rectangular coordinates of Point_1 are (x1, y1, z1), those of Point_2 are (x2, y2, z2), and those of Point_i are (xi, yi, zi).
When m = 1, obtain the polar coordinates Point'_1 of the audio object at time t1 and the polar coordinates Point'_2 of the audio object at time t2, and let Point'_i be the polar coordinates of the audio object at time ti, where ti is any time between t1 and t2;
The specific position coordinates of Point'_i are obtained by linear interpolation, with the formula:
Point'_i = Point'_1 + (Point'_2 - Point'_1) × (ti - t1)/(t2 - t1).
where the polar coordinates of Point'_1 are (ρ1, θ1, φ1), those of Point'_2 are (ρ2, θ2, φ2), and those of Point'_i are (ρi, θi, φi).
In the polar coordinate system the centre of the sphere is at (0, 0, 0). ρ1 is the distance of the audio object from the centre at time t1, θ1 its horizontal angle relative to the centre, and φ1 its elevation angle relative to the centre; ρ2, θ2 and φ2 are the corresponding quantities at time t2, and ρi, θi and φi those at time ti.
The benefit of using polar coordinates can be seen by again letting the audio object travel a full circle around the centre point, with the four positions at the front, right, back and left marked in polar coordinates:
front: Point_1 (1, 0, 0)
right: Point_2 (1, 90, 0)
back: Point_3 (1, 180, 0)
left: Point_4 (1, 270, 0)
The trajectory obtained by interpolation is then a perfect circle, as shown in Fig. 6.
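The same numerical check as for the rectangular case, now on polar waypoints: linear interpolation of (ρ, θ, φ) keeps ρ = 1 throughout, so every intermediate point lies on the circle. A small sketch (names illustrative):

```python
def lerp(p1, p2, frac):
    return tuple(a + (b - a) * frac for a, b in zip(p1, p2))

# The four circle waypoints in polar form (rho, theta_degrees, phi):
waypoints = [(1, 0, 0), (1, 90, 0), (1, 180, 0), (1, 270, 0)]

# Halfway between "front" and "right" the interpolated point is (1, 45, 0):
rho, theta, phi = lerp(waypoints[0], waypoints[1], 0.5)
print(rho, theta)  # 1.0 45.0
# rho stays 1 for every intermediate point, so the trajectory is a true circle.
```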
When handling complex objects whose motion mixes straight-line and curved segments, the present invention reproduces the direction and trajectory of an audio object more faithfully. It improves immersion in virtual reality, giving a more realistic, more precise and more vivid result, and makes objects with complex trajectories simpler and faster to handle, improving operating efficiency along with immersion.
The definition specification of the audio object metadata in the virtual reality sound field designed by the present invention (taking two audio objects as an example) is as follows:
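The specification table itself is not reproduced in this text (it appears as a figure in the original). Purely as a hypothetical illustration of the fields the description defines (the motion-pattern flag m plus time-stamped coordinate triples), per-object metadata might look like this; all field names are invented:

```python
# Hypothetical illustration only -- the patent's actual metadata layout is in
# a table not reproduced here. Field names are invented for readability.
object_1 = {
    "id": 1,
    "m": 1,             # 1 = curvilinear -> polar coordinates
    "keyframes": [      # (time, rho, theta_degrees, phi_degrees)
        (0.0, 1.0, 0.0, 0.0),
        (1.0, 1.0, 90.0, 0.0),
    ],
}
object_2 = {
    "id": 2,
    "m": 0,             # 0 = linear -> rectangular coordinates
    "keyframes": [      # (time, x, y, z)
        (0.0, -1.0, 0.0, 0.0),
        (2.0, 1.0, 0.0, 0.0),
    ],
}
print(object_1["m"], object_2["m"])  # 1 0
```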
The step of generating virtual surround sound from the position information corresponding to the trajectory of the audio object specifically comprises:
If m = 1, obtain the polar coordinates Point'_i of the audio object at time ti;
If m = 0, convert the rectangular coordinates Point_i obtained by linear interpolation for the audio object at time ti into the polar coordinates Point'_i of the audio object at time ti;
The formulas converting rectangular coordinates (xi, yi, zi) into polar coordinates (ρi, θi, φi), consistent with the angle definitions used above (θ the horizontal angle measured from straight ahead, φ the elevation angle), are:
ρi = sqrt(xi·xi + yi·yi + zi·zi)
θi = arctan(xi/yi)
φi = arctan(zi/sqrt(xi·xi + yi·yi))
where sqrt denotes the square-root operation.
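A conversion consistent with the document's own worked examples (front (0, 1, 0) → (1, 0, 0); right (1, 0, 0) → (1, 90, 0)) can be sketched with atan2, which handles the zero-denominator cases that a plain arctan of a quotient cannot. The exact angle convention is inferred from those examples, so treat this as an assumption:

```python
import math

# Rectangular -> polar conversion, angle convention inferred from the
# document's examples: theta measured from straight ahead (+y) toward the
# right (+x), phi the elevation above the horizontal plane. Degrees.
def cart_to_polar(x, y, z):
    rho = math.sqrt(x * x + y * y + z * z)
    theta = math.degrees(math.atan2(x, y))               # horizontal angle
    phi = math.degrees(math.atan2(z, math.hypot(x, y)))  # elevation angle
    return rho, theta, phi

print(cart_to_polar(0.0, 1.0, 0.0))  # (1.0, 0.0, 0.0)    front
print(cart_to_polar(1.0, 0.0, 0.0))  # ≈ (1.0, 90.0, 0.0) right
```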
According to the polar coordinates Point'_i of the audio object at time ti, the audio object is encoded into a higher-order (preferably 2nd- or 3rd-order) B-format signal; the B-format signal is then converted into virtual loudspeaker array signals. Taking a first-order B-format signal [W1 X1 Y1 Z1]^T as an example (where X1 = cosθ1·cosφ1, Y1 = sinθ1·cosφ1, Z1 = sinφ1), converting it into the virtual loudspeaker array signals [L1 L2 … LN]^T amounts to the operation:
[L1 L2 … LN]^T = G · [W1 X1 Y1 Z1]^T
where N is the number of virtual loudspeakers in the virtual loudspeaker topology, and G is the ambisonic decoding matrix, which can be obtained by taking a pseudo-inverse.
The virtual loudspeaker array signals of the audio object are then transcoded binaurally using binaural room impulse responses (BRIRs), which are typically 3-dimensional, i.e. they include elevation information, yielding the binaural virtual surround output of the audio object. Specifically: the two-channel BRIR matrix that maps the virtual loudspeaker signals to the two headphone signals is applied, as a matrix multiplication, to the virtual loudspeaker array signals, giving the virtual surround sound.
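The pseudo-inverse decoding step can be sketched as follows; the four-loudspeaker square layout, the FuMa-style 1/√2 weight on the W channel, and the use of NumPy's pseudo-inverse are illustrative assumptions, not details fixed by the patent text:

```python
import numpy as np

# Sketch of first-order ambisonic decoding to N virtual loudspeakers via a
# pseudo-inverse. Speaker layout and normalisation are assumptions.
az = np.radians([0.0, 90.0, 180.0, 270.0])   # N = 4 horizontal speakers
el = np.zeros(4)

# Re-encoding matrix C (4 B-format components x N speakers):
C = np.vstack([
    np.full(4, 1.0 / np.sqrt(2.0)),          # W
    np.cos(az) * np.cos(el),                 # X
    np.sin(az) * np.cos(el),                 # Y
    np.sin(el),                              # Z
])
G = np.linalg.pinv(C)                        # N x 4 decoding matrix

# A unit source encoded straight ahead (theta = 0, phi = 0):
b = np.array([1.0 / np.sqrt(2.0), 1.0, 0.0, 0.0])
gains = G @ b
print(np.round(gains, 3))  # gain is largest for the front loudspeaker
```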
There may be one or more audio objects.
The binaural room impulse responses are preferably generated offline, either from real measurements or by dedicated software, so that, unlike the online generation used in the prior art, large numbers of BRIRs need not be stored, which reduces memory consumption.
The following describes how audio objects are encoded into the ambisonic domain.
Encoding the audio objects into first-order ambisonic signals:
W = Σ si·(1/√2), X = Σ si·cosθi·cosφi, Y = Σ si·sinθi·cosφi, Z = Σ si·sinφi, summing over i = 1 … k.
Here si is the i-th audio object, i = 1 … k, and k is the number of audio objects; θi is the angle in the horizontal plane (azimuth) and φi the angle in the vertical direction. The W channel signal represents an omnidirectional sound wave, while the X, Y and Z channel signals represent sound waves along the three orthogonal spatial directions X, Y and Z respectively.
The first-order B-format signal is thus expressed as [W X Y Z]^T.
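A minimal sketch of this first-order encoding, following the component definitions above; the 1/√2 weight on the W channel is the conventional (FuMa) choice and is an assumption here, and the function name is illustrative:

```python
import math

# First-order encoding of k audio objects into W/X/Y/Z
# (theta = azimuth, phi = elevation, both in degrees).
def encode_first_order(objects):
    """objects: list of (sample_value, theta_deg, phi_deg) triples."""
    w = x = y = z = 0.0
    for s, theta_deg, phi_deg in objects:
        theta, phi = math.radians(theta_deg), math.radians(phi_deg)
        w += s / math.sqrt(2.0)              # assumed FuMa weighting
        x += s * math.cos(theta) * math.cos(phi)
        y += s * math.sin(theta) * math.cos(phi)
        z += s * math.sin(phi)
    return w, x, y, z

# Single unit-amplitude object straight ahead:
print(encode_first_order([(1.0, 0.0, 0.0)]))  # ≈ (0.7071, 1.0, 0.0, 0.0)
```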
Similarly, encoding an audio object into a 2nd- or 3rd-order B-format signal is preferably performed according to the following table:
If a trigonometric function in the table above is an even function of the azimuth θ, the corresponding B-format component is left-right symmetric; if it is an odd function of θ, the corresponding component is left-right antisymmetric. Taking the first-order B-format signal as an example, in terms of physical meaning and the coordinates, w, x and z do not distinguish left from right; so if the listener's position is symmetric, and the corresponding HRTF coefficients are assumed to be approximately symmetric as well, the binaural output components for w, x and z are identical in the left and right output channels. The y component, by contrast, is reversed between left and right, so its binaural output components are opposite in the two channels. For the components with this symmetry, a fast algorithm that exploits the symmetry in the computation can further reduce the amount of processing.
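The symmetry argument can be checked numerically: X uses cos θ (even in θ) and Y uses sin θ (odd in θ), so mirroring a source across the median plane leaves W, X and Z unchanged and negates Y. A small sketch (first-order components only, names illustrative):

```python
import math

# First-order components for a source at (theta, phi), unit amplitude.
def wxyz(theta_deg, phi_deg=0.0):
    t, p = math.radians(theta_deg), math.radians(phi_deg)
    return (1.0 / math.sqrt(2.0),
            math.cos(t) * math.cos(p),
            math.sin(t) * math.cos(p),
            math.sin(p))

# Mirror a source from +30 to -30 degrees of azimuth:
a, b = wxyz(30.0), wxyz(-30.0)
print(abs(a[1] - b[1]) < 1e-12, abs(a[2] + b[2]) < 1e-12)  # True True
```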
Directions referred to in the present invention, such as left, right, horizontal and vertical, are defined from the perspective of the listener (i.e. the user).
Although the present invention has been described in detail above by way of a general description and specific embodiments, modifications and improvements can be made on the basis of the invention, as will be apparent to those skilled in the art. Such modifications and improvements, made without departing from the spirit of the present invention, all fall within the claimed scope of protection of the invention.

Claims (5)

  1. A method for processing virtual reality sound field metadata, characterised in that the method comprises the following steps:
    determining the motion pattern of an audio object; if the audio object moves along a straight line, setting a motion-pattern parameter m = 0; if the audio object moves along a curve, setting m = 1;
    when m = 0, expressing the position information of the audio object in a rectangular coordinate system, and processing the trajectory of the audio object in the rectangular coordinate system as well;
    when m = 1, expressing the position information of the audio object in a polar coordinate system, and processing the trajectory of the audio object in the polar coordinate system as well;
    generating virtual surround sound from the position information corresponding to the trajectory of the audio object, which specifically comprises:
    if m = 1, obtaining the polar coordinates Point'_i of the audio object at time ti;
    if m = 0, converting the rectangular coordinates Point_i obtained by linear interpolation for the audio object at time ti into the polar coordinates Point'_i of the audio object at time ti;
    according to the polar coordinates Point'_i of the audio object at time ti, encoding the audio object into a higher-order B-format signal, and converting the B-format signal into virtual loudspeaker array signals;
    transcoding the virtual loudspeaker array signals of the audio object binaurally using binaural room impulse responses, to obtain a binaural virtual surround output of the audio object.
  2. The method for processing virtual reality sound field metadata as claimed in claim 1, characterised in that, when m = 0, the rectangular coordinates Point_1 of the audio object at time t1 and the rectangular coordinates Point_2 of the audio object at time t2 are obtained, and Point_i denotes the rectangular coordinates of the audio object at time ti, where ti is any time between t1 and t2;
    the specific position coordinates of Point_i are obtained by linear interpolation, with the formula:
    Point_i = Point_1 + (Point_2 - Point_1) × (ti - t1)/(t2 - t1).
  3. The method for processing virtual reality sound field metadata as claimed in claim 1, characterised in that, when m = 1, the polar coordinates Point'_1 of the audio object at time t1 and the polar coordinates Point'_2 of the audio object at time t2 are obtained, and Point'_i denotes the polar coordinates of the audio object at time ti, where ti is any time between t1 and t2;
    the specific position coordinates of Point'_i are obtained by linear interpolation, with the formula:
    Point'_i = Point'_1 + (Point'_2 - Point'_1) × (ti - t1)/(t2 - t1).
  4. The method for processing virtual reality sound field metadata as claimed in claim 1, characterised in that there may be one or more audio objects.
  5. The method for processing virtual reality sound field metadata as claimed in claim 4, characterised in that the binaural room impulse responses are generated offline, either from real measurements or by software.
CN201610391252.0A 2016-06-03 2016-06-03 The processing method of virtual reality sound field metadata Active CN106060758B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610391252.0A CN106060758B (en) 2016-06-03 2016-06-03 The processing method of virtual reality sound field metadata

Publications (2)

Publication Number Publication Date
CN106060758A CN106060758A (en) 2016-10-26
CN106060758B true CN106060758B (en) 2018-03-23






Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant