CN103347245B - Method and device for restoring sound source azimuth information in stereophonic sound system - Google Patents

Method and device for restoring sound source azimuth information in stereophonic sound system

Info

Publication number
CN103347245B
CN103347245B · CN201310273067.8A · CN201310273067A
Authority
CN
China
Prior art keywords
sound
sound field
module
signal
acoustic properties
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310273067.8A
Other languages
Chinese (zh)
Other versions
CN103347245A (en)
Inventor
胡瑞敏
张茂胜
王樱
涂卫平
王晓晨
李登实
姜林
王松
高丽
章佩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU
Priority to CN201310273067.8A
Publication of CN103347245A
Application granted
Publication of CN103347245B

Abstract

The invention discloses a method and device for restoring sound source azimuth information in a stereophonic sound system. The device comprises an acoustic attribute computing module, a signal pre-allocation module, a reconstruction sound field acoustic attribute computing module, an acoustic attribute matching module, a gain determination module and a signal distribution module. The method comprises: obtaining the particle velocity at the listening point in the original sound field; performing stereophonic reproduction in the reconstruction sound field through left and right independent playback channels and pre-allocating signals to the two loudspeakers in the reconstruction sound field; building a sound image azimuth restoration model from the requirement that the particle velocity produced by the stereophonic sound system at the listening point be consistent with the particle velocity at the listening point of the original sound field; solving the model to obtain the weighting factors of the loudspeaker signals; and distributing the corresponding signal to each loudspeaker, thereby completing the reconstruction of the original sound field. The method and device can accurately restore the sound image azimuth information of the original sound field, are simple to operate, and are highly stable.

Description

Method and apparatus for restoring sound source azimuth information in a stereophonic sound system
Technical field
The present invention relates to the field of sound source azimuth information restoration, and in particular to a method and apparatus for restoring sound source azimuth information in a stereophonic sound system.
Background technology
People learned very early to reproduce sound with a loudspeaker, but a single loudspeaker cannot rebuild the original sound field, and the sound it reproduces cannot give the listener a sense of being present at the original scene. With the appearance of equipment capable of electronic recording and playback, the pursuit of faithful audio reproduction rose to a new level. The appearance of stereophonic sound largely met the demand for sound quality. Compared with the sound field produced by a monophonic system, a stereophonic sound field not only lets the listener perceive the direction of the sound source, but also conveys a sensation of being surrounded by sound and of the sound spreading away in all directions, enhancing the sense of depth, presence and space; the listener can perceive sounds arriving from the front, rear, left and right, and can also follow the movement of sound events.
Human perception of auditory perspective comprises the perception of both the direction and the distance of a sound signal. How to restore sound source azimuth information accurately is therefore a key technology for realizing stereophonic sound. In traditional approaches, in order to reproduce the original sound field in the reconstruction sound field, the signals are usually adjusted by amplitude panning or time panning so that the perceived effect approaches that of the original sound source; however, the reconstruction quality of this approach is mediocre, and the sound image direction produced at the listening point of the reconstruction sound field deviates from the sound image direction at the listening point of the original sound field.
Summary of the invention
To overcome the deficiency that existing stereophonic sound field reconstruction deviates in sound source azimuth information, the present invention provides a method and apparatus for restoring sound source azimuth information in a stereophonic sound system that can restore the stereophonic sound image azimuth accurately.
The technical scheme adopted by the method of the present invention is a method for restoring sound source azimuth information in a stereophonic sound system, characterized by comprising the following steps:
Step 1: in the original sound field, with the input sound signal S(t) and the position of the sound source known, calculate the particle velocity V_s(t) at the listening point r;
Step 2: transform the particle velocity V_s(t) from the time domain to the frequency domain by Fourier transform to obtain the particle velocity V_s(ω) at the listening point in the frequency domain;
Step 3: in the reconstruction sound field, perform stereophonic reproduction with left and right independent playback channels, and pre-allocate signals q_1(ω), q_2(ω) to the two loudspeakers in the reconstruction sound field;
Step 4: analyse the reconstruction sound field formed by the two loudspeakers, and calculate in the frequency domain the particle velocity V_r(ω) produced by the stereophonic sound system at the listening point;
Step 5: establish a sound image azimuth restoration model from the requirement that the particle velocity V_r(ω) be consistent with the particle velocity V_s(ω) at the listening point of the original sound field;
Step 6: solve the sound image azimuth restoration model, determine the gain factors, and obtain the weight coefficients w_1, w_2 of the signals allocated to the two loudspeakers;
Step 7: according to the weight coefficients w_1, w_2, calculate the signal allocated to each loudspeaker; after the corresponding signals have been allocated to the loudspeakers, the reconstruction of the original sound field is complete.
The technical scheme adopted by the apparatus of the present invention is an apparatus for restoring sound source azimuth information in a stereophonic sound system, characterized by comprising: an acoustic attribute computing module, a signal pre-allocation module, a reconstruction sound field acoustic attribute computing module, an acoustic attribute matching module, a gain determination module and a signal distribution module;
the acoustic attribute computing module is used to calculate, in the original sound field, the particle velocity at the listening point, transform it from the time domain to the frequency domain, and feed the result to the acoustic attribute matching module;
the signal pre-allocation module is used to pre-allocate signals to the two loudspeakers and feed the pre-allocated signals to the reconstruction sound field acoustic attribute computing module;
the reconstruction sound field acoustic attribute computing module is used to calculate, in the reconstruction sound field, the particle velocity produced by the stereophonic sound system at the listening point in the frequency domain, and feed the result to the acoustic attribute matching module;
the acoustic attribute matching module is used to establish a model from the requirement that the particle velocity at the listening point calculated by the reconstruction sound field acoustic attribute computing module be consistent with the particle velocity at the listening point of the original sound field calculated by the acoustic attribute computing module;
the gain determination module is used to obtain, from the model established in the acoustic attribute matching module, the weight coefficients for allocating the signals, and feed the result to the signal distribution module;
the signal distribution module is used to distribute the signals to the loudspeakers of the stereophonic sound system in the reconstruction sound field according to the weight coefficients obtained from the gain determination module.
Compared with the prior art, the present invention can restore the sound image azimuth information of the original sound field accurately, is simple to operate, and offers high stability.
Brief description of the drawings
Fig. 1: workflow diagram of the apparatus according to an embodiment of the present invention.
Detailed description of the invention
The technical scheme of the present invention is described further below with reference to the accompanying drawing and a specific embodiment.
The technical scheme adopted by the method of the present invention is a method for restoring sound source azimuth information in a stereophonic sound system, comprising the following steps:
Step 1: In the original sound field, with the input sound signal S(t) and the position of the sound source known, calculate the particle velocity V_s(t) at the listening point r.
Step 2: Transform the particle velocity V_s(t) from the time domain to the frequency domain by Fourier transform to obtain the particle velocity V_s(ω) at the listening point in the frequency domain:
$$ V_s(\omega) = G\,\frac{e^{-ik|r-\epsilon|}}{|r-\epsilon|}\left(1+\frac{1}{ik|r-\epsilon|}\right)\frac{1}{|r-\epsilon|}\begin{pmatrix} x-\epsilon_x \\ y-\epsilon_y \\ z-\epsilon_z \end{pmatrix} s(\omega) $$
where G is a constant parameter related to the room acoustic attributes, i is the imaginary unit, k is the wave number, r = (x, y, z)^T is the spatial position of the listening point in the original sound field, ε = (ε_x, ε_y, ε_z)^T is the spatial position of the sound source in the original sound field, and s(ω) is the frequency-domain form of the sound signal.
Since the distance between the sound source and the listening point is generally greater than 1 m, it is much larger than the reciprocal of the wave number, so k|r − ε| ≫ 1 and the term 1/(ik|r − ε|) can be neglected; the formula above therefore simplifies to:
$$ V_s(\omega) = G\,\frac{e^{-ik|r-\epsilon|}}{|r-\epsilon|^2}\begin{pmatrix} x-\epsilon_x \\ y-\epsilon_y \\ z-\epsilon_z \end{pmatrix} s(\omega). $$
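Purely as an illustration (not part of the patent disclosure), the simplified formula above can be evaluated numerically. The sketch below assumes SI units, a speed of sound c = 343 m/s so that the wave number is k = ω/c, and treats the room-dependent constant G as 1; the function name and signature are my own.

```python
import numpy as np

def particle_velocity(r, eps, s_omega, omega, G=1.0, c=343.0):
    """Far-field particle velocity vector V_s(omega) at listening point r
    produced by a point source at eps driven by the spectrum value s_omega."""
    r, eps = np.asarray(r, float), np.asarray(eps, float)
    d = np.linalg.norm(r - eps)     # source-to-listener distance |r - eps|
    k = omega / c                   # wave number (c and G are assumed values)
    return G * np.exp(-1j * k * d) / d**2 * (r - eps) * s_omega
```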
Step 3: In the reconstruction sound field, perform stereophonic reproduction with left and right independent playback channels, pre-allocating signals q_1(ω), q_2(ω) to the two loudspeakers in the reconstruction sound field,
where q_1(ω) = w_1 · s(ω), q_2(ω) = w_2 · s(ω), and w_1, w_2 are the weight coefficients of the signals.
Step 4: Analyse the reconstruction sound field formed by the two loudspeakers, and calculate in the frequency domain the particle velocity V_r(ω) produced by the stereophonic sound system at the listening point:
$$ V_r(\omega) = G\sum_{j=1}^{2}\frac{e^{-ik|r-\epsilon^{(j)}|}}{|r-\epsilon^{(j)}|^2}\begin{pmatrix} x-\epsilon_x^{(j)} \\ y-\epsilon_y^{(j)} \\ z-\epsilon_z^{(j)} \end{pmatrix} q_j(\omega) $$
where ε^(j) = (ε_x^(j), ε_y^(j), ε_z^(j))^T, j = 1, 2, is the spatial position of the j-th loudspeaker.
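Because the loudspeaker kernel in Step 4 is identical to the source kernel of Step 2, the reconstructed-field velocity can be sketched by reusing the hypothetical particle_velocity helper above; again this is illustrative code, not code from the patent.

```python
def reconstructed_velocity(r, speaker_positions, q, omega, G=1.0, c=343.0):
    """V_r(omega): superposition at listening point r of the two pre-allocated
    loudspeaker signals q = (q1, q2) radiated from speaker_positions."""
    return sum(particle_velocity(r, eps_j, q_j, omega, G, c)
               for eps_j, q_j in zip(speaker_positions, q))
```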
Step 5: Establish the sound image azimuth restoration model from the requirement that the particle velocity V_r(ω) be consistent with the particle velocity V_s(ω) at the listening point of the original sound field:
$$ V_s(\omega) = G\,\frac{e^{-ik|r-\epsilon|}}{|r-\epsilon|^2}\begin{pmatrix} x-\epsilon_x \\ y-\epsilon_y \\ z-\epsilon_z \end{pmatrix} s(\omega) = G\sum_{j=1}^{2}\frac{e^{-ik|r-\epsilon^{(j)}|}}{|r-\epsilon^{(j)}|^2}\begin{pmatrix} x-\epsilon_x^{(j)} \\ y-\epsilon_y^{(j)} \\ z-\epsilon_z^{(j)} \end{pmatrix} q_j(\omega) = V_r(\omega) $$
Step 6: Solve the sound image azimuth restoration model and determine the gain factors to obtain the weight coefficients w_1, w_2 of the signals allocated to the two loudspeakers. The specific implementation comprises the following sub-steps:
Step 6.1: Since
$$ r = (x, y, z)^T,\quad \epsilon = (\epsilon_x, \epsilon_y, \epsilon_z)^T,\quad \epsilon^{(j)} = (\epsilon_x^{(j)}, \epsilon_y^{(j)}, \epsilon_z^{(j)})^T,\ j = 1, 2 $$
and s(ω) are known, let
$$ \frac{e^{-ik|r-\epsilon|}}{|r-\epsilon|^2} = A,\quad x-\epsilon_x = a_1,\quad y-\epsilon_y = a_2,\quad z-\epsilon_z = a_3, $$
$$ \frac{e^{-ik|r-\epsilon^{(1)}|}}{|r-\epsilon^{(1)}|^2} = B,\quad x-\epsilon_x^{(1)} = b_1,\quad y-\epsilon_y^{(1)} = b_2,\quad z-\epsilon_z^{(1)} = b_3, $$
$$ \frac{e^{-ik|r-\epsilon^{(2)}|}}{|r-\epsilon^{(2)}|^2} = C,\quad x-\epsilon_x^{(2)} = c_1,\quad y-\epsilon_y^{(2)} = c_2,\quad z-\epsilon_z^{(2)} = c_3. $$
The matching equation then reduces to:
$$ A\begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} s(\omega) = B\begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix} s(\omega)\,w_1 + C\begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} s(\omega)\,w_2, $$
which gives the following system of equations:
$$ \begin{aligned} A a_1 &= B b_1 w_1 + C c_1 w_2 \qquad (1)\\ A a_2 &= B b_2 w_1 + C c_2 w_2 \qquad (2)\\ A a_3 &= B b_3 w_1 + C c_3 w_2 \qquad (3) \end{aligned} $$
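Since the system (1)-(3) has three equations and only two unknowns, a convenient numerical cross-check is a least-squares solve of the stacked 3-by-2 system; this is an alternative added for illustration only, not the patent's procedure, which continues in closed form in Step 6.2.

```python
import numpy as np

def solve_weights_lstsq(A, a, B, b, C, c):
    """Least-squares solution of equations (1)-(3): a, b, c are the 3-vectors
    (a1,a2,a3), (b1,b2,b3), (c1,c2,c3); A, B, C are the scalar kernels of Step 6.1."""
    M = np.column_stack((B * np.asarray(b), C * np.asarray(c)))  # 3x2 coefficient matrix
    v = A * np.asarray(a)                                        # right-hand side
    w, *_ = np.linalg.lstsq(M, v, rcond=None)
    return w[0], w[1]                                            # (w1, w2)
```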
Step 6.2: Solve the system of equations. Dividing equation (2) by equation (1) gives:
$$ \frac{a_2}{a_1} = \frac{B b_2 w_1 + C c_2 w_2}{B b_1 w_1 + C c_1 w_2} \qquad (4) $$
from which
$$ \frac{w_1}{w_2} = \frac{a_1 C c_2 - a_2 C c_1}{a_2 B b_1 - a_1 B b_2} \qquad (5) $$
Substituting (5) into equation (3) yields the values of w_1 and w_2, i.e., the weight coefficients of the signals allocated to the loudspeakers.
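A minimal sketch of this closed-form route, with assumed helper names: it forms A, B, C and the coordinate differences of Step 6.1, takes the ratio from equation (5), and fixes the scale by substituting into equation (3). Note the assumption that the z-components are non-zero; for a purely planar layout equation (3) degenerates and the scale would have to be taken from (1) or (2) instead.

```python
import numpy as np

def speaker_weights(r, src, spk1, spk2, omega, c=343.0):
    """Weight coefficients w1, w2 of Step 6; the constant G cancels from both
    sides of the matching model, so it does not appear here."""
    r, src, spk1, spk2 = (np.asarray(p, float) for p in (r, src, spk1, spk2))
    k = omega / c

    def kernel(eps):                      # e^{-ik|r - eps|} / |r - eps|^2
        d = np.linalg.norm(r - eps)
        return np.exp(-1j * k * d) / d**2

    A, (a1, a2, a3) = kernel(src),  r - src
    B, (b1, b2, b3) = kernel(spk1), r - spk1
    C, (c1, c2, c3) = kernel(spk2), r - spk2

    rho = (a1 * C * c2 - a2 * C * c1) / (a2 * B * b1 - a1 * B * b2)  # equation (5)
    w2 = A * a3 / (B * b3 * rho + C * c3)   # equation (5) substituted into (3)
    w1 = rho * w2
    return w1, w2
```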
Step 7: According to the weight coefficients w_1, w_2, calculate the signals allocated to the loudspeakers, q_1(ω) = w_1 · s(ω) and q_2(ω) = w_2 · s(ω). After the corresponding signals have been allocated to each loudspeaker, the reconstruction of the original sound field is complete, and the sound azimuth perceived at the listening point remains consistent with the azimuth of the original sound source.
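A hypothetical end-to-end use of the sketches above for a single frequency bin; all coordinates and the 1 kHz bin are made up for illustration and do not come from the patent. Repeating the computation for every bin and inverse-transforming would give the time-domain loudspeaker feeds of Step 7.

```python
import numpy as np

listener = np.array([0.0, 0.0, 1.2])    # listening point r
source   = np.array([-1.0, 1.7, 1.6])   # original source position eps
spk_1    = np.array([-1.4, 1.4, 1.5])   # left loudspeaker eps^(1)
spk_2    = np.array([ 1.4, 1.4, 1.5])   # right loudspeaker eps^(2)

omega = 2 * np.pi * 1000.0              # analyse the 1 kHz bin
s_omega = 1.0 + 0.0j                    # spectrum of the input signal at this bin

w1, w2 = speaker_weights(listener, source, spk_1, spk_2, omega)
q1, q2 = w1 * s_omega, w2 * s_omega     # Step 7: signals sent to the two loudspeakers
```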
Referring to Fig. 1, the technical scheme adopted by the apparatus of the present invention is an apparatus for restoring sound source azimuth information in a stereophonic sound system, comprising: an acoustic attribute computing module 1, a signal pre-allocation module 2, a reconstruction sound field acoustic attribute computing module 3, an acoustic attribute matching module 4, a gain determination module 5 and a signal distribution module 6.
The acoustic attribute computing module 1 calculates, in the original sound field, the particle velocity at the listening point, transforms it from the time domain to the frequency domain, and feeds the result to the acoustic attribute matching module 4.
The signal pre-allocation module 2 pre-allocates signals to the two loudspeakers and feeds the pre-allocated signals to the reconstruction sound field acoustic attribute computing module 3.
The reconstruction sound field acoustic attribute computing module 3 calculates, in the reconstruction sound field, the particle velocity produced by the stereophonic sound system at the listening point in the frequency domain, and feeds the result to the acoustic attribute matching module 4.
The acoustic attribute matching module 4 establishes the model from the requirement that the particle velocity at the listening point calculated by the reconstruction sound field acoustic attribute computing module 3 be consistent with the particle velocity at the listening point of the original sound field calculated by the acoustic attribute computing module 1.
The gain determination module 5 obtains, from the model established in the acoustic attribute matching module 4, the weight coefficients for allocating the signals, and feeds the result to the signal distribution module 6.
The signal distribution module 6 distributes the signals to the loudspeakers of the stereophonic sound system in the reconstruction sound field according to the weight coefficients obtained from the gain determination module 5.
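Under the same assumptions and helper names as the sketches above, the data flow of Fig. 1 can be summarised in one function; this reflects my reading of the module chain, not code from the patent.

```python
def restore_azimuth(r, src, spk1, spk2, s_omega, omega):
    """Module chain of Fig. 1: acoustic attribute computation (module 1),
    pre-allocation and matching (modules 2-4), gain determination (module 5)
    and signal distribution (module 6)."""
    v_s = particle_velocity(r, src, s_omega, omega)         # module 1
    w1, w2 = speaker_weights(r, src, spk1, spk2, omega)     # modules 2-5 (closed form)
    q1, q2 = w1 * s_omega, w2 * s_omega                     # module 6
    v_r = reconstructed_velocity(r, (spk1, spk2), (q1, q2), omega)
    return (q1, q2), (v_s, v_r)                             # v_r approximates v_s
```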

Claims (2)

1. A method for restoring sound source azimuth information in a stereophonic sound system, characterized by comprising the following steps:
Step 1: in the original sound field, with the input sound signal S(t) and the position of the sound source known, calculating the particle velocity V_s(t) at the listening point r;
Step 2: transforming the particle velocity V_s(t) from the time domain to the frequency domain by Fourier transform to obtain the particle velocity V_s(ω) at the listening point in the frequency domain;
Step 3: in the reconstruction sound field, performing stereophonic reproduction with left and right independent playback channels, and pre-allocating signals q_1(ω), q_2(ω) to the two loudspeakers in the reconstruction sound field;
Step 4: analysing the reconstruction sound field formed by the two loudspeakers, and calculating in the frequency domain the particle velocity V_r(ω) produced by the stereophonic sound system at the listening point;
Step 5: establishing a sound image azimuth restoration model from the requirement that the particle velocity V_r(ω) be consistent with the particle velocity V_s(ω) at the listening point of the original sound field;
Step 6: solving the sound image azimuth restoration model, determining the gain factors, and obtaining the weight coefficients w_1, w_2 of the signals allocated to the two loudspeakers;
Step 7: according to the weight coefficients w_1, w_2, calculating the signals allocated to the two loudspeakers; after the corresponding signals have been allocated to the two loudspeakers, the reconstruction of the original sound field is complete.
2. An apparatus for restoring sound source azimuth information in a stereophonic sound system, characterized by comprising: an acoustic attribute computing module (1), a signal pre-allocation module (2), a reconstruction sound field acoustic attribute computing module (3), an acoustic attribute matching module (4), a gain determination module (5) and a signal distribution module (6);
the acoustic attribute computing module (1) is configured to calculate, in the original sound field, the particle velocity at the listening point, transform it from the time domain to the frequency domain, and feed the result to the acoustic attribute matching module (4);
the signal pre-allocation module (2) is configured to pre-allocate signals to the two loudspeakers and feed the pre-allocated signals to the reconstruction sound field acoustic attribute computing module (3);
the reconstruction sound field acoustic attribute computing module (3) is configured to calculate, in the reconstruction sound field, the particle velocity produced by the stereophonic sound system at the listening point in the frequency domain, and feed the result to the acoustic attribute matching module (4);
the acoustic attribute matching module (4) is configured to establish a model from the requirement that the particle velocity at the listening point calculated by the reconstruction sound field acoustic attribute computing module (3) be consistent with the particle velocity at the listening point of the original sound field calculated by the acoustic attribute computing module (1);
the gain determination module (5) is configured to obtain, from the model established in the acoustic attribute matching module (4), the weight coefficients for allocating the signals, and feed the result to the signal distribution module (6);
the signal distribution module (6) is configured to distribute the signals to the loudspeakers of the stereophonic sound system in the reconstruction sound field according to the weight coefficients obtained from the gain determination module (5).
CN201310273067.8A 2013-07-01 2013-07-01 Method and device for restoring sound source azimuth information in stereophonic sound system Expired - Fee Related CN103347245B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310273067.8A CN103347245B (en) 2013-07-01 2013-07-01 Method and device for restoring sound source azimuth information in stereophonic sound system

Publications (2)

Publication Number Publication Date
CN103347245A CN103347245A (en) 2013-10-09
CN103347245B true CN103347245B (en) 2015-03-25

Family

ID=49282016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310273067.8A Expired - Fee Related CN103347245B (en) 2013-07-01 2013-07-01 Method and device for restoring sound source azimuth information in stereophonic sound system

Country Status (1)

Country Link
CN (1) CN103347245B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103826194B (en) * 2014-02-28 2015-06-03 武汉大学 Method and device for rebuilding sound source direction and distance in multichannel system
CN104363555A (en) * 2014-09-30 2015-02-18 武汉大学深圳研究院 Method and device for reconstructing directions of 5.1 multi-channel sound sources
CN109474882A (en) * 2018-12-04 2019-03-15 武汉轻工大学 Sound field rebuilding method, equipment, storage medium and device based on audition point tracking

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007187758A (en) * 2006-01-11 2007-07-26 Yamaha Corp Sound reproducing system
JP2008301205A (en) * 2007-05-31 2008-12-11 Toshiba Corp Sound output device and sound output method
JP2010139476A (en) * 2008-12-15 2010-06-24 Nittobo Acoustic Engineering Co Ltd Calculation method of sound impedance, and system
JP2010252220A (en) * 2009-04-20 2010-11-04 Nippon Hoso Kyokai <Nhk> Three-dimensional acoustic panning apparatus and program therefor

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004058059A2 (en) * 2002-12-30 2004-07-15 Koninklijke Philips Electronics N.V. Audio reproduction apparatus, feedback system and method
JP4551652B2 (en) * 2003-12-02 2010-09-29 ソニー株式会社 Sound field reproduction apparatus and sound field space reproduction system
JP4625671B2 (en) * 2004-10-12 2011-02-02 ソニー株式会社 Audio signal reproduction method and reproduction apparatus therefor

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007187758A (en) * 2006-01-11 2007-07-26 Yamaha Corp Sound reproducing system
JP2008301205A (en) * 2007-05-31 2008-12-11 Toshiba Corp Sound output device and sound output method
JP2010139476A (en) * 2008-12-15 2010-06-24 Nittobo Acoustic Engineering Co Ltd Calculation method of sound impedance, and system
JP2010252220A (en) * 2009-04-20 2010-11-04 Nippon Hoso Kyokai <Nhk> Three-dimensional acoustic panning apparatus and program therefor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A brand-new sound reproduction technique: wave front synthesis (一种全新的声还原技术――波前合成); Zhu Xiaotian (朱晓天); Audio Engineering (《电声技术》); 2006-01-17 (No. 01); pp. 14-17 *

Also Published As

Publication number Publication date
CN103347245A (en) 2013-10-09

Similar Documents

Publication Publication Date Title
CN103826194B (en) Method and device for rebuilding sound source direction and distance in multichannel system
CN106454685B (en) A kind of sound field rebuilding method and system
CN102972047B (en) Method and apparatus for reproducing stereophonic sound
JP5449330B2 (en) Angle-dependent motion apparatus or method for obtaining a pseudo-stereoscopic audio signal
CN105120418B (en) Double-sound-channel 3D audio generation device and method
CN105392102B (en) Three-dimensional sound signal generation method and system for aspherical loudspeaker array
CN107820158B (en) Three-dimensional audio generation device based on head-related impulse response
Gálvez et al. Dynamic audio reproduction with linear loudspeaker arrays
CN104363555A (en) Method and device for reconstructing directions of 5.1 multi-channel sound sources
CN105120421A (en) Method and apparatus of generating virtual surround sound
CN103888889A (en) Multi-channel conversion method based on spherical harmonic expansion
CN106303843B (en) A kind of 2.5D playback methods of multizone different phonetic sound source
CN103347245B (en) Method and device for restoring sound source azimuth information in stereophonic sound system
Madmoni et al. Beamforming-based binaural reproduction by matching of binaural signals
WO2015017914A1 (en) Media production and distribution system for custom spatialized audio
US20120101609A1 (en) Audio Auditioning Device
Geronazzo et al. Auditory navigation with a tubular acoustic model for interactive distance cues and personalized head-related transfer functions: An auditory target-reaching task
Xie et al. Report on research projects on head-related transfer functions and virtual auditory displays in China
EP3530006B1 (en) Apparatus and method for weighting stereo audio signals
Kunchur 3D imaging in two-channel stereo sound: Portrayal of elevation
US11388540B2 (en) Method for acoustically rendering the size of a sound source
CN103052018A (en) Audio-visual distance information recovery method
CN103402158B (en) Dimensional sound extension method for handheld playing device
Zheng et al. A linear robust binaural sound reproduction system with optimal source distribution strategy
Gutierrez-Parera et al. On the influence of headphone quality in the spatial immersion produced by Binaural Recordings

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150325

Termination date: 20210701