EP2874412A1 - Signal processing circuit - Google Patents

Signal processing circuit

Info

Publication number
EP2874412A1
EP2874412A1 (application EP13193367A / EP20130193367)
Authority
EP
European Patent Office
Prior art keywords
signal
distance
received
microphone
acoustic transmitter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP20130193367
Other languages
English (en)
French (fr)
Inventor
Christophe Macours
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NXP BV
Original Assignee
NXP BV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NXP BV
Priority to EP20130193367
Publication of EP2874412A1
Legal status: Withdrawn (current)

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30 Control circuits for electronic adaptation of the sound field
    • H04S 7/302 Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S 7/303 Tracking of listener position or orientation
    • H04S 7/304 For headphones

Definitions

  • This disclosure relates to a signal processing circuit, in particular a signal processing circuit that can perform head tracking in order to provide output audio signalling representative of a sound field.
  • a signal processing circuit comprising:
  • the signal processing circuit can advantageously use the left acoustic transmitter and the right acoustic transmitter to perform head-tracking using the ultrasound signals, such that additional head-tracking components and sensors may not be required.
  • the sound engine may be configured to:
  • the signal processing circuit may further comprise an offset calculator configured to determine an offset value for the position signalling in accordance with the determined left distance and the determined right distance.
  • the sound engine may be configured to add the offset value to the received position input signalling in order to provide offset position input signalling; and determine the output audio signalling in accordance with the audio input signalling and the offset position input signalling.
  • the offset calculator may be an angle calculator that is configured to calculate an azimuth angle offset value.
  • the output signalling may comprise a left audio signal for the left acoustic transmitter and a right audio signal for the right acoustic transmitter.
  • the left acoustic transmitter and the right acoustic transmitter can be used for both head tracking and providing audio signals to a user.
  • the left reference signal and the left received signal may be representative of a first continuous signal containing a first set of ultrasound frequency components.
  • the right reference signal and the right received signal may be representative of a second continuous signal containing a second set of ultrasound frequency components.
  • the first set of ultrasound frequency components may be at different frequencies from the second set of ultrasound frequency components.
  • the distance calculator may be configured to:
  • the left reference signal and the left received signal may comprise a first series of time multiplexed ultrasound pulses.
  • the right reference signal and the right received signal may comprise a second series of time multiplexed ultrasound pulses.
  • the distance calculator may be configured to determine the left distance by calculating a time-of-flight of the first series of time multiplexed ultrasound pulses between the left acoustic transmitter and the first microphone; and determine the right distance by calculating a time-of-flight of the second series of time multiplexed ultrasound pulses between the right acoustic transmitter and the first microphone.
  • the distance calculator may be further configured to:
  • the distance calculator may be configured to:
  • the distance calculator may be configured to: apply an averaging method or triangulation method to the left received signal and the second left received signal in order to determine the left distance; and apply an averaging method or triangulation method to the right received signal and the second right received signal in order to determine the right distance.
  • the left and right acoustic transmitters may comprise a pair of headphones or earphones.
  • the first microphone may comprise a microphone assembly comprising one or more individual microphones.
  • the second microphone may comprise a microphone assembly comprising one or more individual microphones.
  • a surround sound system comprising:
  • a method of signal processing comprising:
  • a mobile computing device comprising any signal processing circuit disclosed herein.
  • the computer program may be a software implementation, and the computer may be considered as any appropriate hardware, including a digital signal processor, a microcontroller, and an implementation in read only memory (ROM), erasable programmable read only memory (EPROM) or electronically erasable programmable read only memory (EEPROM), as non-limiting examples.
  • the software may be an assembly program.
  • the computer program may be provided on a computer readable medium, which may be a physical computer readable medium such as a disc or a memory device, or may be embodied as a transient signal.
  • a transient signal may be a network download, including an internet download.
  • One or more examples disclosed herein can enable the simultaneous playback of audio signals and ultrasound signals for head tracking or head localization, for example from speakers / transmitters associated with headphones. This can enable the modification of a sound field that is represented by the audio signals to accommodate the movements of a user's head. In this way, additional head tracking or head localization sensors may not be required as one or more speakers can be used to both provide the audio signals to the user and provide signalling for head tracking or head localization.
  • Figure 1 illustrates the layout of a surround sound television 100 and associated loudspeakers 104 - 114.
  • a user 102 is illustrated with his head directed towards the television 100.
  • a sound field may be created by driving a plurality of physically separated loudspeakers that are remote from the user 102; for example there may be a left speaker 104, a central speaker 106, a right speaker 108, a left surround speaker 110, a central surround speaker 112 and a right surround speaker 114 distributed around the user 102 as shown in figure 1 .
  • the sound field generated by the plurality of speakers may provide the user 102 with the perception that different components of an audio signal originate from different locations relative to the user 102.
  • the different components could, for example, correspond to different musical instruments or different sound effects.
  • other arrangements of a larger or smaller number of loudspeakers are possible and may comprise a surround sound system.
  • other types of surround sound devices exist, for example, Hi-Fi systems, mobile telephones, personal computers and many other devices.
  • the television 100 may also provide sound to the user via a pair of headphones 116.
  • the headphones 116 may be configured to give the user 102 the same impression as a plurality of separately located loudspeakers by means of appropriate processing of the audio signals supplied to the left and right channels of the headphones 116.
  • the processing of left and right audio signals is intended to ensure that the sound waves supplied to the user's 102 eardrums are essentially the same as those that would be provided by the plurality of remote speakers 104 - 114 shown in figure 1 .
  • This provides the user 102 with the perception that a virtual array of loudspeakers exists.
  • references herein to headphones may equivalently be to earphones or other suitable acoustic devices.
  • Figure 2 shows how the surround sound field changes when a user 202 is listening to a pair of headphones 216 and turns his head away from a surround sound television 200.
  • Figure 2 illustrates the configuration of six virtual loudspeakers 204 - 214. It will be appreciated that these virtual loudspeakers 204 - 214 do not exist, but are representative of the locations, relative to the user 202, of an array of loudspeakers capable of providing a surround sound field to the user 202, via the headphones 216, that corresponds to the surround sound field provided by the array of loudspeakers shown in figure 1.
  • the surround sound field corresponding to the virtual array of loudspeakers 204 - 214 appears to rotate with the user 202.
  • This apparent rotation of the sound field can be undesirable because the spatial characteristics of the sound field are usually fixed relative to the position of the television 200.
  • the sound field may simulate sounds coming from different locations depicted by the television screen.
  • Figure 3 shows a surround sound television 300 that has a head-tracking capability.
  • the system tracks the orientation of the user's head 302 in real time.
  • Left and right audio signals supplied to headphones 316 are adapted to account for the movement of the user's head 302. In this way, it is possible to provide the user 302 with the impression that a virtual array of loudspeakers 304 - 314, and correspondingly a surround sound field, have not moved relative to the television 300 when he turns his head.
  • FIG. 4 illustrates a system for tracking a user's 402 head orientation relative to a mobile telephone 400.
  • the telephone 400 comprises a microphone 408, and in this example also a display screen 414.
  • the system also comprises a left headphone 404 and a right headphone 406, both of which are moveable relative to the telephone 400.
  • the telephone 400 provides a left audio signal to the left headphone 404 and a right audio signal to the right headphone 406.
  • the telephone 400 also provides a left ultrasound signal to the left headphone 404 and a right ultrasound signal to the right headphone 406.
  • the ultrasound signals will be used for head-tracking.
  • the audio signals may comprise components with frequencies in the range of 20 Hz to 20,000 Hz, for example.
  • Ultrasound signals may comprise components with frequencies greater than 20,000 Hz, or greater than 30,000 Hz, 40,000 Hz, 50,000 Hz or more.
  • the microphone 408 is configured to detect the ultrasound signals transmitted by the left headphone 404 and the right headphone 406.
  • a processor (not shown) associated with the microphone 408 can process the detected ultrasound signals in order to determine how the left headphone 404 and right headphone 406 have been moved relative to the microphone 408.
  • the telephone 400 can calculate left distance data 410 representative of the distance between the left headphone 404 and the microphone 408 and can calculate right distance data 412 representative of the distance between the right headphone 406 and the microphone 408. This is described in more detail below with reference to figure 6 .
  • the telephone 400 can perform a geometric analysis to determine the orientation of the user's 402 head relative to the telephone 400.
  • the signals provided to the left headphone 404 and the right headphone 406 can be adapted accordingly to maintain a consistent configuration of the surround sound field relative to the telephone 400. This may be particularly valuable in cases where the telephone 400 comprises a display screen 414, such that the apparent points of origin of sounds can be maintained in the appropriate positions relative to events depicted by the display screen 414.
  • the head-tracking system illustrated in figure 4 may be implemented in a surround sound device that does not comprise any display screen.
  • maintaining the configuration of the sound field relative to a Hi-Fi audio system may be desirable such that the apparent position of origin of sounds produced by different instruments in an orchestra remains consistent, despite the user changing the orientation of his head.
  • the head-tracking system illustrated in figure 4 may also be implemented in a surround sound device in which the microphone is not co-located with the display screen.
  • a surround sound television may have a screen in one location and a unit comprising the microphone in a physically separate location. Such examples may utilise a known location of the microphone relative to the screen to accurately compensate for movements of the user's head.
  • one or more of the head-tracking systems disclosed herein use existing sensors and apparatus in order to perform head-tracking, thereby reducing the cost, complexity, size and weight of the systems concerned.
  • This can be preferable to head-tracking systems that instead use additional sensors and associated hardware that is not otherwise present in a surround sound device.
  • For example, an alternative head-tracking system may use gyroscopes, accelerometers or magnetic sensors to measure a user's head orientation.
  • Figure 5 illustrates operation of the system of figure 4 .
  • the user has rotated his head 502 through an angle Φ 520 relative to the mobile telephone 500.
  • the consequence of this rotation is that the left distance 510 between the left headphone 504 and the microphone 508 has become smaller, whereas the right distance 512 between the right headphone 506 and the microphone 508 has become greater.
  • the surround sound device can calculate the new orientation from the updated left distance 510 and right distance 512 and then adapt the signals supplied to the left headphone 504 and the right headphone 506 accordingly, as discussed below.
  • the angle of rotation Φ 520 of the user's 502 head may be expressed as follows.
  • the angle of rotation may be the angle of intersection of a headphone axis / line 530 that connects the left headphone 504 to the right headphone 506 with a reference axis / line 532 that extends from a fixed reference point associated with the telephone 500 at a predetermined orientation.
  • the fixed reference point may correspond to the location of the microphone 508 and the predetermined orientation may be perpendicular to a longitudinal axis of the telephone 500.
  • when the user 502 rotates his head clockwise, the headphone line 530 and the reference line 532 shown in figure 5 will intersect at an angle of 90 degrees plus the angle Φ 520 through which the head has been rotated.
  • one or more of the circuits described below can use the determined left distance 510 and the determined right distance 512, along with a known value for the diameter of the user's head 502, to calculate both the absolute position of the user's head and the angle of the user's head. Therefore, one or more of the circuits disclosed herein can provide a user localization system as well as, or instead of, head tracking. Alternatively, if the diameter of the user's head is not known, then one or more of the circuits described in this document may assume that the user's head does not change position during use; therefore any change in the determined left distance 510 and the determined right distance 512 is attributed to a rotation of the user's head 502. It will be appreciated that references to head tracking in this document can also be intended to refer to head localization. A sketch of this geometric calculation is given after this list.
  • Figure 6 illustrates a block diagram of a signal processing circuit 600 for use in head-tracking with a surround sound system.
  • the signal processing circuit 600 comprises a sound engine 602 and a distance calculator 604.
  • the sound engine 602 receives an audio input signal 606 and a position input signal 608.
  • the audio input signal 606 is an example of audio input signalling.
  • the position input signal 608 is an example of associated position input signalling.
  • the position input signal 608 comprises information that describes an apparent position of origin, within a sound field, of different components of the audio input signal 606. The position of origin of a particular sound relative to a user is customarily described by an azimuthal angle and an elevation angle relative to the user.
  • the distance calculator 604 is configured to receive a left reference signal 610 and a right reference signal 612.
  • the reference signals may also be referred to as driver signals.
  • the left reference signal 610 and the right reference signal 612 are representative of ultrasound signals supplied to a left headphone and a right headphone, respectively (not shown).
  • the left headphone and right headphone are examples of a left acoustic transmitter and a right acoustic transmitter, respectively.
  • the distance calculator 604 is further configured to receive a left received signal 614 and a right received signal 616.
  • the left received signal 614 and right received signal 616 are representative of ultrasound signals transmitted by the left and right headphones respectively and received at a microphone (not shown), which is an example of a first microphone.
  • the distance calculator 604 can calculate left distance data 618, which represents the distance between the left headphone and the microphone, from analysis of the left reference signal 610 and the left received signal 614. For example, if the left reference signal 610 and the left received signal 614 represent a first continuous ultrasound signal, then it is possible to track the position of the peak of the cross-correlation of the left reference signal 610 with the left received signal 614. The position of the peak provides the time delay between the left reference signal 610 and the left received signal 614, from which the left distance data 618 can be determined using the known speed of sound in air. A sketch of this cross-correlation approach is given after this list.
  • If the left reference signal 610 comprises a first, optionally discrete, series of time multiplexed ultrasound pulses, then it is possible to determine the time of flight of the pulses, because each pulse in the left received signal 614 can be identified as relating to a corresponding pulse within the left reference signal 610.
  • the left distance data 618 can be computed from the time difference between the corresponding pulses and using the known speed of sound in air.
  • The distance calculator 604 can similarly calculate the right distance data 620, which represents the distance between the right headphone and the microphone, from the right reference signal 612 and the right received signal 616. That is, the right reference signal 612 and the right received signal 616 may be representative of a second continuous ultrasound signal. Alternatively, the right reference signal 612 and the right received signal 616 may comprise a second series of time multiplexed ultrasound pulses.
  • If both the left reference signal 610 and the right reference signal 612 are continuous ultrasound signals, then it may be necessary for the distance calculator 604 to discriminate between them.
  • the left reference signal 610 may contain a first set of ultrasound frequency components and the right reference signal 612 may contain a second set of ultrasound frequency components, wherein the first set of ultrasound frequency components are at different frequencies from the second set of ultrasound frequency components. A sketch of such frequency-based separation is given after this list.
  • Each set of frequency components may comprise a single frequency.
  • the sets of frequency components may comprise different non-overlapping ultrasound frequency ranges.
  • If both the left reference signal 610 and the right reference signal 612 comprise a series of discrete ultrasound pulses, then different pulses may be distinguished either by using different frequencies or by using different time multiplexing for the left reference signal 610 compared with the right reference signal 612.
  • the distance calculator 604 can then supply the calculated left distance data 618 and the calculated right distance data 620 to the sound engine 602.
  • the sound engine 602 is configured to process the audio input signal 606, the position input signal 608, the received left distance data 618 and right distance data 620 to provide a left audio output signal 622 and a right audio output signal 624.
  • When the left audio output signal 622 and the right audio output signal 624 are provided to a left headphone and a right headphone respectively, the sound signal created will provide the user with a surround sound field that adapts according to the orientation of the user's head relative to the surround sound device. Therefore, the left audio output signal 622 and the right audio output signal 624 can be considered as examples of output audio signalling representative of a sound field that are determined in accordance with the determined left distance and the determined right distance.
  • one adaptation is to rotate the sound field about the azimuth by an offset corresponding to the angle of rotation of the user's head.
  • the adaptation may take into account both the absolute position of the user's head and the angle of the user's head. For example, the head rotation angle can be taken relative to the angle between the device (microphone) and the listener. In this case, if the user moves to the left and rotates his head to the right to keep looking at the screen, then no compensation would be applied.
  • the amplitude of part of, or all of, the sound field may be increased or decreased as the distance values represented by the left distance data 618 and the right distance data 620 decrease or increase in order to account for a user moving further away from or closer to the microphone. Modifications to the elevation of components of the sound field are also possible.
  • Figure 7 illustrates a block diagram comprising a surround sound system that includes a signal processing circuit 700 for head-tracking, a pair of headphones comprising a left acoustic transmitter / headphone 740 and a right acoustic transmitter / headphone 746, a first microphone 750 and a second microphone 752.
  • the signal processing circuit 700 comprises a 3D sound engine 702, a distance calculator 704 and an angle calculator 706.
  • the angle calculator 706 is one example of an offset calculator.
  • the 3D sound engine 702 receives audio input signals 708. In this case a pair of audio input signals 708 corresponding to a stereo signal is illustrated. It will be appreciated that the audio input signals 708 could alternatively comprise a monaural signal or any other known form of multichannel audio signal.
  • the 3D sound engine 702 also receives sound position information 710 which comprises information relating to the azimuth and elevation of each component of the audio input 708. Of course, the audio input signals 708 and sound position information 710 could be received at a common input terminal.
  • the distance calculator 704 receives a left reference signal 712, a right reference signal 714, a left received signal 716 and a right received signal 718.
  • the distance calculator 704 is configured to calculate distance information 720 comprising left distance data and right distance data as described above in relation to figure 6 .
  • the distance calculator then provides this distance information 720 to the angle calculator 706.
  • the angle calculator 706 is configured to determine the orientation angle of a user's head as described in relation to figures 4 and 5 .
  • the angle calculator 706 then provides the orientation angle 722 to the 3D sound engine 702, which can use the orientation angle 722 as an offset value.
  • the offset value is an azimuth angle offset value.
  • the 3D sound engine 702 is configured to provide a left audio output signal 724 and a right audio output signal 726 suitable for providing a surround sound field to a pair of headphones 740, 746, adapted in accordance with the user's head orientation angle 722.
  • the 3D sound engine 702 can add the orientation angle 722 to the received sound position information 710 in order to provide offset position input signalling.
  • the 3D sound engine 702 can then determine the right audio output signal 726 and the left audio output signal 724 in accordance with the audio input signals 708 and the offset position input signalling.
  • the offset calculator may be an angle calculator that is configured to calculate an azimuth angle offset value.
  • a left mixer 727 mixes the left audio output signal 724 with a left ultrasound signal 728 received from a left ultrasound signal generator 730 in order to provide a combined left audio and ultrasound signal 736. A sketch of this mixing step is given after this list.
  • the left ultrasound signal generator 730 also provides the left reference signal 712, which is representative of the left ultrasound signal 728, to the distance calculator 704.
  • a right mixer 725 mixes the right audio output signal 726 with a right ultrasound signal 732 received from a right ultrasound signal generator 734 in order to provide a combined right audio and ultrasound signal 742.
  • the right ultrasound signal generator 734 also provides the right reference signal 714, which is representative of the right ultrasound signal 732, to the distance calculator 704.
  • the combined left audio and ultrasound signal 736 is provided to a left amplifier 738 which provides an amplified signal to the left headphone 740.
  • the combined right audio and ultrasound signal 742 is provided to a right amplifier 744 which provides an amplified signal to the right headphone 746.
  • the ultrasound signals transmitted by the left headphone 740 and the right headphone 746 are detected by a microphone assembly 748.
  • the microphone assembly 748 comprises at least the first microphone 750 and the second microphone 752.
  • the first microphone 750 provides the left received signal 716 and the right received signal 718 to the distance calculator 704.
  • the left received signal 716 and the right received signal 718 may be provided to the distance calculator 704 as a combined signal, as illustrated in figure 7 .
  • the distance calculator 704 will then process the combined signal in order to determine the left received signal 716 and right received signal 718, for example by distinguishing different frequency components or different time multiplexing as described above in relation to figure 6 .
  • the second microphone 752 provides a second left received signal 717 and a second right received signal 719 to the distance calculator 704.
  • the microphone assembly 748 may comprise a processor (not shown) configured to separate the left received signal 716 and the right received signal 718 from the signal received at the first microphone 750, such that the microphone assembly 748 may provide the left received signal 716 and the right received signal 718, separately, to the distance calculator 704.
  • the processor may be configured to separate the second left received signal 717 and the second right received signal 719 from the signal received at the second microphone 752.
  • the distance calculator 704 can then determine the left distance in accordance with the left reference signal 712, the left received signal 716 and the second left received signal 717.
  • the distance calculator 704 can also determine the right distance in accordance with the right reference signal 714, the right received signal 718 and the second right received signal 719.
  • the microphone assembly 748 may comprise a single microphone or an array / plurality of microphones. Where a plurality of microphones is present, the microphone assembly 748 may provide a plurality of left received signals and a plurality of right received signals to the distance calculator 704, wherein each such signal is associated with a particular microphone. The distance calculator 704 may then use the plurality of left received signals and right received signals in conjunction with the left reference signal 712 and the right reference signal 714 to determine the left distance and the right distance. The use of a plurality of left received signals and right received signals may advantageously enable a more accurate determination of the left distance and the right distance to be made.
  • the distance calculator 704 may apply averaging methods or triangulation methods to the received plurality of left received signals and the plurality of right received signals, in cases where the microphones are located at positions remote from one another. A triangulation sketch is given after this list.
  • an array of microphones may provide the advantage of improving the signal to noise ratio of the left received signals and the right received signals.
  • the first microphone 750 may itself comprise a microphone assembly comprising one or more individual microphones. Also, the second microphone 752 may itself comprise a microphone assembly comprising one or more individual microphones.
  • Figure 8 shows a block diagram of a sound engine 800.
  • the sound engine 800 receives an audio input signal 802 and a position input signal 804.
  • the position input signal 804 may comprise head-tracking information determined by systems such as those described in relation to figures 3 - 7 above.
  • the sound engine 800 generates signals that enable the simulation of sounds originating from locations remote from a user's headphones. Therefore, it is desirable to adapt the audio input signal 802 to account for the changes to the sounds that would occur if those sounds had actually originated from those remote locations.
  • the position input signal 804 is provided to an interaural gain and delay block 806.
  • the interaural gain and delay block 806 determines the appropriate delay 807 required to form a left delayed audio signal 810 and a right delayed audio signal 814.
  • the delay 807 corresponds to the time difference that a user would perceive in the arrival of a sound at the user's left ear compared to the arrival of the sound at the user's right ear if the sound had originated from a particular location remote from the user.
  • An interaural delay block 808 processes the required delay 807 and the audio input signal 802 to provide the left delayed audio signal 810 and the right delayed audio signal 814.
  • the interaural gain and delay block 806 also determines the appropriate left gain 812 and right gain 815 to apply to the left delayed audio signal 810 and to the right delayed audio signal 814 respectively.
  • the respective left gain 812 and right gain 815 correspond to a difference in volume of the sound that the user would perceive in the sound arriving at the user's left ear compared to the sound arriving at the user's right ear, if the sound had originated from the particular location remote from the user.
  • the left delayed audio signal 810 and the left gain 812 are provided to a left amplifier 811.
  • the left amplifier 811 amplifies the left delayed audio signal 810 in accordance with the left gain 812 to provide a left amplified audio signal 813.
  • the right delayed audio signal 814 and the right gain 815 are provided to a right amplifier 816.
  • the right amplifier 816 amplifies the right delayed audio signal 814 in accordance with the right gain 815 to provide a right amplified audio signal 817.
  • the sound engine 800 also comprises an HRTF library block 818.
  • the HRTF library block 818 comprises a record of Head Related Transfer Functions (HRTFs). These enable the transformation of audio signals to adapt the signals to the form appropriate for supply to a user's ears. This adaptation is necessary because a sound supplied from a position remote from a user will be altered by interacting with the user's body before it arrives at the user's eardrum. For example, the sound will arrive directly at the user's left ear, but it will also arrive indirectly by reflecting from the user's left shoulder into the user's left ear. The sound will also be modified by, for example, interacting with the user's left pinna before it arrives at the user's left eardrum.
  • HRTFs: Head Related Transfer Functions
  • The Head Related Transfer Functions comprise the information, in the form of filter coefficients, required to achieve these adaptations. A sketch of the interaural delay, gain and HRTF filtering chain is given after this list.
  • the HRTF library block 818 selects a set of left filter coefficients 820 suitable to transform the left amplified audio signal 813.
  • the left amplified audio signal 813 and the left filter coefficients are provided to a left finite impulse response filter 822.
  • the left finite impulse response filter 822 filters the left amplified audio signal 813 in accordance with the left filter coefficients 820 to provide a left output signal 824.
  • the HRTF library block 818 selects a set of right filter coefficients 826 suitable to transform the right amplified audio signal 817.
  • the right amplified audio signal 817 and the right filter coefficients 826 are provided to a right finite impulse response filter 828.
  • the right finite impulse response filter 828 filters the right amplified audio signal 817 in accordance with the right filter coefficients 826 to provide a right output signal 830.
  • the left output signal 824 and the right output signal 830 will be suitable for supply, directly or indirectly, to a left headphone and a right headphone respectively (not shown). The headphones will thereby provide an appropriate surround sound field to the user.
  • Figure 9 illustrates a method of signal processing. The method comprises the following steps.
  • the first step 902 comprises receiving a left reference signal representative of an ultrasound signal supplied to a left acoustic transmitter.
  • the second step 904 comprises receiving, from a first microphone, a left received signal representative of an ultrasound signal transmitted by the left acoustic transmitter.
  • the ultrasound signal will have travelled through the air between the left acoustic transmitter and the first microphone.
  • the third step 906 comprises determining a left distance between the left acoustic transmitter and the first microphone in accordance with the received left reference signal and the received left received signal.
  • the left distance can be determined by analysing the left reference signal and the left received signal as discussed above in relation to figure 6 .
  • the fourth step 908 comprises receiving a right reference signal representative of an ultrasound signal supplied to a right acoustic transmitter.
  • the fifth step 910 comprises receiving, from the first microphone, a right received signal representative of an ultrasound signal transmitted by the right acoustic transmitter.
  • the ultrasound signal will have travelled through the air between the right acoustic transmitter and the first microphone.
  • the sixth step 912 comprises determining a right distance between the right acoustic transmitter and the first microphone in accordance with the received right reference signal and the received right received signal.
  • the right distance can be determined by analysing the right reference signal and the right received signal as discussed above in relation to figure 6.
  • the seventh step 914 comprises providing output audio signalling representative of a sound field in accordance with the determined left distance and the determined right distance.
  • the sound field may be supplied to a user, such that the sound field remains consistent despite movements of the user's head.
  • the audio output signalling may be provided to the left acoustic transmitter and the right acoustic transmitter, thereby avoiding the need to have separate components to provide the audio output signalling to a user and to provide the ultrasound signals required for head-tracking.
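The cross-correlation approach described above with reference to figure 6 can be illustrated with a short sketch. The Python code below is illustrative only: the function name, the sample rate argument and the assumed speed of sound are not taken from the disclosure, and a practical implementation would additionally band-limit the signals and smooth the estimate over time.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed value)

def estimate_distance(reference, received, sample_rate):
    """Estimate the transmitter-to-microphone distance from the cross-correlation peak.

    `reference` is the ultrasound drive signal (e.g. the left reference signal 610)
    and `received` is the corresponding signal captured at the microphone (e.g. the
    left received signal 614), both sampled at `sample_rate` Hz.
    """
    # Full cross-correlation; the index of the peak gives the lag (in samples)
    # at which the received signal best aligns with the reference signal.
    correlation = np.correlate(received, reference, mode="full")
    lag_samples = int(np.argmax(np.abs(correlation))) - (len(reference) - 1)
    # A negative lag is not physical for a propagation delay, so clamp at zero.
    lag_samples = max(lag_samples, 0)
    delay_seconds = lag_samples / sample_rate
    return delay_seconds * SPEED_OF_SOUND
```

The pulse-based variant uses the same final step: the measured time of flight of each identified pulse, multiplied by the speed of sound, gives the distance.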
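Where the left and right reference signals occupy non-overlapping ultrasound frequency ranges, the combined microphone signal can be split back into a left received signal and a right received signal with band-pass filters. The sketch below assumes illustrative band edges of 24-26 kHz and 28-30 kHz and a sample rate above 60 kHz; these values are examples, not part of the disclosure.

```python
import numpy as np
from scipy import signal

def separate_left_right(mic_signal, sample_rate,
                        left_band=(24_000.0, 26_000.0),
                        right_band=(28_000.0, 30_000.0)):
    """Split the combined microphone signal into left and right received signals.

    `sample_rate` must exceed twice the highest band edge (e.g. 96 kHz here);
    the band edges themselves are illustrative, non-overlapping choices.
    """
    def bandpass(band):
        sos = signal.butter(6, band, btype="bandpass", fs=sample_rate, output="sos")
        # Zero-phase filtering avoids adding extra group delay that would bias
        # the subsequent time-of-flight / cross-correlation estimate.
        return signal.sosfiltfilt(sos, mic_signal)

    return bandpass(left_band), bandpass(right_band)
```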
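The angle calculation discussed with reference to figures 5 and 7 can be approximated, when the head position itself is assumed fixed, by treating the microphone as far away compared with the headphone separation. The sketch below uses this far-field approximation; the 0.18 m headphone separation is an assumed typical value, and the sign convention (positive when the left headphone moves towards the microphone, as in figure 5) is chosen for illustration. The second function corresponds to the step of adding the orientation angle to the position input signalling described for the 3D sound engine 702.

```python
import numpy as np

def head_rotation_angle(left_distance, right_distance, headphone_separation=0.18):
    """Far-field estimate of the head rotation angle, in radians.

    `left_distance` and `right_distance` are the calculated distances from the left
    and right acoustic transmitters to the microphone; `headphone_separation` is an
    assumed typical spacing between the two transmitters in metres.
    """
    path_difference = right_distance - left_distance
    # Clamp to the physically meaningful range before taking the arcsine.
    ratio = np.clip(path_difference / headphone_separation, -1.0, 1.0)
    return float(np.arcsin(ratio))

def apply_azimuth_offset(source_azimuths, rotation_angle):
    """Add the head-rotation offset to each azimuth in the position input signalling,
    so that the rendered sound field stays fixed relative to the device."""
    return [azimuth + rotation_angle for azimuth in source_azimuths]
```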
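The left and right mixers 727 and 725 combine an audible output with an ultrasound signal so that a single headphone transducer carries both. A minimal sketch, assuming a 25 kHz pilot tone, a low mixing level and a sample rate high enough (for example 96 kHz) to represent the ultrasound component; none of these values are specified by the disclosure.

```python
import numpy as np

def add_ultrasound_pilot(audio, sample_rate, pilot_frequency=25_000.0, pilot_level=0.05):
    """Mix a low-level ultrasound pilot tone into one audio channel.

    Returns the combined signal (to be amplified and sent to the headphone) and the
    pilot itself, which doubles as the reference signal for the distance calculator.
    """
    t = np.arange(len(audio)) / sample_rate
    pilot = pilot_level * np.sin(2.0 * np.pi * pilot_frequency * t)
    return audio + pilot, pilot
```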
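When a second microphone is available at a known position, the two distance estimates for one transmitter can be combined by triangulation rather than simple averaging. The sketch below intersects two circles in a plane; the coordinate convention and the choice of returning the candidate on the assumed user side of the microphone axis are illustrative assumptions.

```python
import numpy as np

def triangulate(mic_a, mic_b, dist_a, dist_b):
    """Estimate a transmitter position from two microphone positions and two distances.

    `mic_a` and `mic_b` are 2-element coordinates (metres) in the plane of interest;
    `dist_a` and `dist_b` are the distances estimated from the ultrasound signals
    received at each microphone.  Returns None if the estimates are inconsistent.
    """
    mic_a = np.asarray(mic_a, dtype=float)
    mic_b = np.asarray(mic_b, dtype=float)
    baseline = np.linalg.norm(mic_b - mic_a)
    if baseline == 0.0 or baseline > dist_a + dist_b or baseline < abs(dist_a - dist_b):
        return None  # the two circles do not intersect
    # Distance along the microphone axis to the foot of the chord of intersection.
    along = (dist_a ** 2 - dist_b ** 2 + baseline ** 2) / (2.0 * baseline)
    offset = np.sqrt(max(dist_a ** 2 - along ** 2, 0.0))
    direction = (mic_b - mic_a) / baseline
    perpendicular = np.array([-direction[1], direction[0]])
    foot = mic_a + along * direction
    candidate_1 = foot + offset * perpendicular
    candidate_2 = foot - offset * perpendicular
    # Pick the candidate on the assumed user side of the microphone axis (larger y).
    return candidate_1 if candidate_1[1] >= candidate_2[1] else candidate_2
```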
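The processing chain of figure 8 (the interaural delay block 808, the left and right gains, and the HRTF finite impulse response filters 822 and 828) can be sketched for a single sound component as below. The function signature, the whole-sample delay and the convention that a positive interaural delay delays the right ear are assumptions made for the illustration; the HRTF coefficient arrays stand in for entries selected from the HRTF library block 818.

```python
import numpy as np
from scipy import signal

def render_binaural(mono_audio, sample_rate, interaural_delay_s,
                    left_gain, right_gain,
                    left_hrtf_coefficients, right_hrtf_coefficients):
    """Apply interaural delay, interaural gains and HRTF FIR filtering to one component."""
    delay_samples = int(round(abs(interaural_delay_s) * sample_rate))
    delayed = np.concatenate([np.zeros(delay_samples), mono_audio])
    undelayed = np.concatenate([mono_audio, np.zeros(delay_samples)])
    if interaural_delay_s >= 0:
        left_in, right_in = undelayed, delayed   # right ear delayed (assumed convention)
    else:
        left_in, right_in = delayed, undelayed

    # Interaural gains (cf. left gain 812 and right gain 815).
    left_amplified = left_gain * left_in
    right_amplified = right_gain * right_in

    # FIR filtering with the selected HRTF coefficients (cf. filters 822 and 828).
    left_out = signal.lfilter(left_hrtf_coefficients, [1.0], left_amplified)
    right_out = signal.lfilter(right_hrtf_coefficients, [1.0], right_amplified)
    return left_out, right_out
```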

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)
EP20130193367 2013-11-18 2013-11-18 Signal processing circuit Withdrawn EP2874412A1 (de)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP20130193367 EP2874412A1 (de) 2013-11-18 2013-11-18 Signal processing circuit

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP20130193367 EP2874412A1 (de) 2013-11-18 2013-11-18 Signal processing circuit

Publications (1)

Publication Number Publication Date
EP2874412A1 (de) 2015-05-20

Family

ID=49626824

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20130193367 Withdrawn EP2874412A1 (de) 2013-11-18 2013-11-18 Signalverarbeitungsschaltung

Country Status (1)

Country Link
EP (1) EP2874412A1 (de)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0438281A2 (de) * 1990-01-19 1991-07-24 Sony Corporation Sound signal reproducing apparatus
US20050226437A1 (en) * 2002-05-27 2005-10-13 Sonicemotion Ag Method and device for generating information relating to relative position of a set of at least three acoustic transducers (as amended)
US20060045294A1 (en) * 2004-09-01 2006-03-02 Smyth Stephen M Personalized headphone virtualization
US20110293129A1 (en) * 2009-02-13 2011-12-01 Koninklijke Philips Electronics N.V. Head tracking

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
C. PHILLIP BROWN; RICHARD O. DUDA: "A structural model for binaural sound synthesis", IEEE TRANSACTIONS ON SPEECH AND AUDIO PROCESSING, vol. 6, no. 5, September 1998 (1998-09-01)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116320880A (zh) * 2023-05-25 2023-06-23 Honor Device Co., Ltd. Audio processing method and apparatus
CN116320880B (zh) * 2023-05-25 2023-10-20 Honor Device Co., Ltd. Audio processing method and apparatus

Similar Documents

Publication Publication Date Title
US20220116723A1 (en) Filter selection for delivering spatial audio
CN106576203B (zh) Determining and using room-optimized transfer functions
EP3253078B1 (de) Wearable electronic device and virtual reality system
AU2001239516B2 (en) System and method for optimization of three-dimensional audio
EP2503800B1 (de) Spatially constant surround sound
EP2953383B1 (de) Signal processing circuit
US10341799B2 (en) Impedance matching filters and equalization for headphone surround rendering
US10652686B2 (en) Method of improving localization of surround sound
US9769585B1 (en) Positioning surround sound for virtual acoustic presence
US11546703B2 (en) Methods for obtaining and reproducing a binaural recording
EP3225039B1 (de) System and method for generating head-externalized 3D audio through headphones
KR20140126788A (ko) Position measurement system using a hidden time synchronization signal and position measurement method using the same
CN104853283A (zh) Audio signal processing method and apparatus
US11678111B1 (en) Deep-learning based beam forming synthesis for spatial audio
KR102283964B1 (ko) Multi-channel, multi-object sound source processing apparatus for improving the communication intelligibility of an intercom system
US10659903B2 (en) Apparatus and method for weighting stereo audio signals
EP1796427A1 (de) Hearing device with a virtual sound source
EP2874412A1 (de) Signal processing circuit
US20070127750A1 (en) Hearing device with virtual sound source
JP2011259299A (ja) Head-related transfer function generation device, head-related transfer function generation method, and audio signal processing device
KR101071895B1 (ko) Adaptive sound generator based on listener position tracking
US20240163630A1 (en) Systems and methods for a personalized audio system
US20240089687A1 (en) Spatial audio adjustment for an audio device
CN112954579A (zh) Method and device for reproducing a live listening effect
CN115706895A (zh) Immersive sound reproduction using multiple transducers

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20140327

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20151121