AU2021101916A4 - A method and system for determining an orientation of a user


Info

Publication number
AU2021101916A4
Authority
AU
Australia
Prior art keywords
acoustic signal
electronic device
received
user
transducer
Prior art date
Legal status
Ceased
Application number
AU2021101916A
Inventor
Tong Chen
Paul ZRNA
Current Assignee
Idearlabs Pty Ltd
Original Assignee
Idearlabs Pty Ltd
Priority date
Filing date
Publication date
Priority claimed from PCT/AU2019/050988 (WO2020077389A1)
Application filed by Idearlabs Pty Ltd
Priority to AU2021101916A
Application granted
Publication of AU2021101916A4
Anticipated expiration


Abstract

A method (200) for determining an orientation of an anatomical feature of a user in relation to an electronic device. A wearable device is located relative to the anatomical feature of the user, the wearable device comprising a first transducer and a second transducer, and the electronic device comprising a third transducer. The method comprises transmitting and receiving (210) a first acoustic signal between the first transducer and the third transducer. The method further comprises transmitting and receiving (220) a second acoustic signal between the second transducer and the third transducer. The method further comprises determining (230) a first received time associated with the first acoustic signal and determining (240) a second received time associated with the second acoustic signal. The method also comprises determining (250) the orientation of the anatomical feature of the user in relation to the electronic device based on the first and second received times.
(Fig. 2 shows the method 200 as a flowchart.)

Description

Fig. 2 (drawing sheet 2/10) is a flowchart of method 200, comprising steps 210 to 250 described below.
"A method and system for determining an orientation of a user"
Incorporation by Reference
[0000] This application is a divisional application of International patent application PCT/AU2019/050988 filed on 13 September 2019, which claims the benefit of Australian patent application 2018903881 filed on 15 October 2018, the disclosures of which are incorporated herein by reference in their entirety.
Technical Field
[0001] The present disclosure relates to a method for determining an orientation of an anatomical feature of a user in relation to an electronic device. The anatomical feature may, in some examples, be a head of a user.
Background
[0002] Knowledge of the orientation of a user's anatomical feature may be useful to allow a device to focus audible, visual or other information based on the orientation. For example, knowledge of the user's head orientation may assist in optimising the delivery of audio to the user based on the orientation.
[0003] Throughout this specification the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps.
[0004] Any discussion of documents, acts, materials, devices, articles or the like which has been included in the present specification is not to be taken as an admission that any or all of these matters form part of the prior art base or were common general knowledge in the field relevant to the present disclosure as it existed before the priority date of each claim of this application.
Summary
[0005] A method for determining an orientation of a head of a user in relation to an electronic device comprising a microphone, wherein a wearable device is located relative to the head of the user, the wearable device comprising a first speaker to transmit a first acoustic signal and a second speaker to transmit a second acoustic signal, the method comprising: transmitting, from the first speaker, the first acoustic signal and receiving the first acoustic signal with the microphone; transmitting, from the second speaker, the second acoustic signal and receiving the second acoustic signal with the microphone; determining a first received time associated with the first acoustic signal received at the microphone; determining a second received time associated with the second acoustic signal received at the microphone; and determining the orientation of the head of the user in relation to the electronic device based on the first and second received times.
[0006] The first received time may indicate a time when the first acoustic signal is received by the microphone and the second received time may indicate a time when the second acoustic signal is received by the microphone.
[0007] In the method, determining the orientation of the head of the user may be further based on a first time difference and a second time difference. The first time difference may be based on a difference between a first time reference and the first received time, wherein the first time reference indicates a time that the first acoustic signal was transmitted. The second time difference may be based on a difference between a second time reference and the second received time, wherein the second time reference indicates a time that the second acoustic signal was transmitted.
[0008] In the method, the microphone may comprise two or more microphones. The method may further comprise: receiving the first and second acoustic signals at the two or more microphones; and processing the first and second acoustic signals received at the two or more microphones to determine a first location and a second location associated with the first and second speakers respectively.
[0009] The at least two microphones may form at least one microphone cluster in the electronic device.
[0010] The first acoustic signal and the second acoustic signal may comprise an ultrasonic signal. The ultrasonic signal may act as a carrier wave.
[0011] The first acoustic signal and the second acoustic signal may be acoustic leak signals.
[0012] The first speaker may be associated with a first ear of the user and the second speaker may be associated with a second ear of the user.
[0013] The electronic device may be located on the user.
[0014] The first speaker and the second speaker of the wearable device may be located approximately symmetrically about a central axis associated with the user. The electronic device may be approximately located on the central axis.
[0015] A system for determining an orientation of a head of a user in relation to an electronic device, the system comprising: a wearable device located relative to the head of the user, the wearable device comprising a first speaker to transmit a first acoustic signal and a second speaker to send a second acoustic signal; the electronic device comprising a microphone; wherein the first acoustic signal is transmitted from the first speaker and received at the microphone; and wherein the second acoustic signal is transmitted from the second speaker and received at the microphone; a processor configured to: determine a first received time associated with the first acoustic signal received at the microphone; determine a second received time associated with the second acoustic signal received at the microphone; and determine the orientation of the head of the user in relation to the electronic device based on the first and second received times.
[0016] In the system, the microphone may comprise two or more microphones.
[0017] In the system, the at least two microphones may form at least one microphone cluster in the electronic device.
[0018] In the system, the first acoustic signal and the second acoustic signal may comprise an ultrasonic signal. The ultrasonic signal may act as a carrier wave.
[0019] In the system, the first acoustic signal and the second acoustic signal may be acoustic leak signals.
[0020] In the system, the first speaker may be associated with a first ear of the user and the second speaker may be associated with a second ear of the user.
Brief Description of Drawings
[0021] Fig. 1 illustrates a schematic diagram of an example system for determining an orientation of an anatomical feature of a user in relation to an electronic device;
[0022] Fig. 2 illustrates a method of determining an orientation of an anatomical feature of a user in relation to an electronic device;
[0023] Fig. 3 illustrates an example configuration of a wearable device and electronic device;
[0024] Fig. 4 illustrates an anatomical feature of the head of a user turned in a left direction;
[0025] Fig. 5 illustrates an anatomical feature of the head of a user turned in a right direction;
[0026] Fig. 6 illustrates an example of a linear microphone array in an electronic device;
[0027] Fig. 7 illustrates an example of microphone clusters in an electronic device;
[0028] Fig. 8 illustrates an example processing device;
[0029] Fig. 9 illustrates results of simulations; and
[0030] Fig. 10 illustrates an example hardware implementation.
Description of Embodiments
Overview of the system 100
[0031] Fig. 1 illustrates a system 100 for determining an orientation of an anatomical feature of a user 110 in relation to an electronic device 120. The system 100 further comprises a wearable device 130 that is located relative to the anatomical feature of the user 110. The wearable device 130 comprises a first transducer 140 and a second transducer 150.
[0032] The first transducer 140 and second transducer 150 may be speakers. In other examples the first transducer 140 and second transducer 150 are microphones. The first transducer 140 and second transducer 150 may be transducers capable of acting as both speakers and microphones. In some examples the wearable device 130 may comprise headphones.
[0033] The electronic device 120 comprises a third transducer 160. The third transducer 160 may be a microphone. The third transducer 160 may be a speaker. In other examples the third transducer 160 may be capable of acting as both a speaker and microphone.
[0034] In other examples the electronic device 120 may comprise additional transducers, such as two or more microphones or speakers. In the system 100 a first acoustic signal is transmitted and received between the first transducer 140 and the third transducer 160 (or the additional transducers), and a second acoustic signal is transmitted and received between the second transducer 150 and the third transducer 160 (or additional transducers). In some examples the first acoustic signal and second acoustic signal comprise an ultrasonic signal.
[0035] The system 100 also comprises a processor 170 configured to determine a first received time associated with the first acoustic signal and a second received time associated with the second acoustic signal. In one example, the first received time may indicate a time when the first acoustic signal is received by the electronic device 120 and the second received time may indicate a time when the second acoustic signal is received by the electronic device 120. In some examples the processor may be provided at the electronic device 120.
[0036] The method 200 as illustrated by Fig. 2 includes transmitting and receiving 210 a first acoustic signal between the first transducer 140 and the third transducer 160. The method 200 further comprises transmitting and receiving 220 a second acoustic signal between the second transducer 150 and the third transducer 160. In some examples the first acoustic signal may be transmitted from the first transducer 140, and the second acoustic signal may be transmitted from the second transducer 150.
[0037] The method 200 further comprises determining 230 a first received time associated with the first acoustic signal. The method 200 further comprises determining 240 a second received time associated with the second acoustic signal.
[0038] The method 200 further comprises determining 250 the orientation of the anatomical feature of the user 110 in relation to the electronic device 120 based on the first and second received times.
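By way of illustration, the flow of method 200 can be sketched in Python as below. The timing helpers and the shared-clock assumption are hypothetical placeholders, not features prescribed by this specification; the decision in step 250 is expanded in the paragraphs and sketch that follow.

```python
def method_200(get_first_received_time, get_second_received_time,
               first_time_reference=0.0, second_time_reference=0.0):
    """Illustrative flow of method 200 (steps 210-250). The two callables
    are assumed to return the times, in seconds on a shared clock, at
    which the first and second acoustic signals were received; the time
    references are the corresponding transmit times."""
    first_received = get_first_received_time()      # steps 210 and 230
    second_received = get_second_received_time()    # steps 220 and 240
    # Step 250: the orientation follows from the two time differences
    # (paragraphs [0070]-[0077]; a decision sketch is given there).
    first_difference = first_received - first_time_reference
    second_difference = second_received - second_time_reference
    return first_difference - second_difference     # sign/magnitude encode the orientation
```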
[0039] A detailed example of the method 200 will now be described.
Method 200
[0040] As described above the method 200 determines an orientation of an anatomical feature of a user 110 in relation to an electronic device 120. A wearable device 130 is located relative to the anatomical feature of the user 110. As described above the wearable device 130 may comprise headphones. In some examples the anatomical feature of the user 110 is a head of the user. In this way, the method 200 determines the orientation of the user's head. More specifically the method 200 determines the orientation of the head in an azimuth direction in relation to the electronic device 120.
[0041] The first transducer 140 and second transducer 150 of the wearable device 130 may be associated with a first ear of the user and a second ear of the user respectively. In one example the first transducer 140 and the second transducer 150 of the wearable device 130 are located approximately symmetrically about a central axis 310 associated with the user 110. This is illustrated in Fig. 3.
[0042] The electronic device 120 may be located on the user 110, for example centrally on the body of the user 110. The electronic device 120 may be approximately located on the central axis 310. In one example the electronic device 120 is located in a chest area of the user 110. The electronic device 120 may be located approximately 70 mm down from the base of a neck of the user 110. In other examples the electronic device 120 may be located in a range of 70 mm to 300 mm from the base of the neck of the user. The electronic device 120 may be located at other distances from the base of the neck. The electronic device 120 may extend from the user's body by approximately 50 mm. In other examples the electronic device 120 may extend from the user's body by more than 50 mm. The electronic device 120 may sit flush against the body. In other examples the electronic device 120 may be located in other places on the body.
[0043] The electronic device may comprise a hearing aid, mobile phone, audio player, video player, gaming device or radio. The electronic device 120 may be worn on a lanyard around the neck of the user 110. In other examples the electronic device 120 may be attached to clothing of a user.
Transmitting and receiving a first acoustic signal 210
[0044] As described above the method 200 includes transmitting and receiving 210 a first acoustic signal between the first transducer 140 and the third transducer 160. In one example the first transducer 140 of the wearable device 130 transmits the first acoustic signal. In this way, the third transducer 160 (and/or the additional transducers of the electronic device 120) receives the first acoustic signal from the first transducer 140.
[0045] In another example, the third transducer 160 (and/or the additional transducers of the electronic device) transmits the first acoustic signal and the first transducer 140 receives the first acoustic signal.
[0046] As described above the electronic device 120 may comprise additional transducers, such as two or more microphones. In this way, the first acoustic signal may be received at the two or more microphones of the electronic device 120. In some examples, the at least two microphones of the electronic device 120 may form at least one microphone cluster.
Transmitting and receiving a second acoustic signal 220
[0047] As described above the method 200 also includes transmitting and receiving 220 a second acoustic signal between the second transducer 150 and the third transducer 160. The second transducer 150 of the wearable device 130 may transmit the second acoustic signal. In this way, the third transducer 160 and/or additional transducers of the electronic device 120 receives the second acoustic signal from the second transducer 150.
[0048] In another example the third transducer 160 and/or additional transducers of the electronic device 120 transmits the second acoustic signal and the second transducer 150 receives the second acoustic signal.
[0049] The second acoustic signal may be received at the two or more microphones of the electronic device 120.
[0050] The first acoustic signal and second acoustic signal may be transmitted at the same instant from the first transducer 140 and the second transducer 150 respectively. Alternatively the first acoustic signal and second acoustic signal may be transmitted at different times from the first transducer 140 and the second transducer 150 respectively. In other examples the first acoustic signal and second acoustic signal may be transmitted at the same instant from the electronic device 120. Alternatively the first acoustic signal and second acoustic signal may be transmitted at different times from the electronic device 120.
[0051] The first acoustic signal and second acoustic signal may comprise an audible acoustic signal, such as music or audio content. In one example the first acoustic signal and second acoustic signal may be acoustic leak (or leakage) signals. This means that the first acoustic signal and second acoustic signal are not purposely emitted, but rather, are leakages from acoustic signals associated with the first transducer 140 and second transducer 150. In this way, the first acoustic signal and second acoustic signal may leak out from the first transducer 140 and second transducer 150 respectively and be detected by the third transducer 160. For example the third transducer 160 and/or additional transducers of the electronic device 120 may detect the first acoustic signal and second acoustic signal as acoustic leak signals.
[0052] In other examples the first acoustic signal and second acoustic signal may comprise a non-audible acoustic signal. In this example the first acoustic signal and second acoustic signal may have a frequency of less than 20 Hz or greater than 20 kHz. In one example the first acoustic signal and second acoustic signal comprise infrasonic signals. In another example the first acoustic signal and second acoustic signal are ultrasonic signals with a frequency greater than 20 kHz. In a further example the first acoustic signal and second acoustic signal are frequency modulated signals, and an ultrasonic signal may act as a carrier wave for the frequency modulated signals. The frequency of the carrier wave may be above 132 kHz.
[0053] In some examples the first acoustic signal and/or the second acoustic signal may be a speech signal in the spectrum from 20 Hz to 20 kHz. That is, the first acoustic signal and/or second acoustic signal may be a baseband signal (prior to modulation). In this way, the baseband signal may be a periodic wave such as a sinusoidal wave. In one example, the period of the baseband signal (that is, the first acoustic signal or second acoustic signal prior to modulation) may be greater than twice the time taken for acoustic sound to travel between ears of the user 110. That is, denoting the distance between ears of the user 110 as d and the speed of sound as v, the period T of the baseband signal (before modulation) may be computed as follows:
T > 2 × d / v
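For illustration, using the head width of 155 mm and the speed of sound of 340 m/s assumed later in the model section, the constraint evaluates to:

T > 2 × 0.155 / 340 ≈ 0.91 ms

so a baseband period of roughly one millisecond or longer would satisfy it.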
[0054] In one example digital modulation is used, so that the first acoustic signal and second acoustic signal are generated by modulating an ultrasound carrier wave by a discrete (baseband) signal. In one example the ultrasound carrier wave is modulated by symbol coded messages. In a further example the ultrasound carrier wave is modulated by orthogonal symbols. The frequency for the symbol coded messages may fall between 20.5 kHz and 21 kHz. A minimum of three symbols may be transmitted.
[0055] An advantage of using an ultrasonic signal as the carrier wave is that airborne ultrasound at frequencies above 20 kHz is not audible to the human ear. For a sound frequency of 20 kHz, the safety guideline for exposure limits on airborne ultrasound according to the Occupational Safety and Health Administration (OSHA) is 105 dB. For ultrasound signals with a frequency around 100 kHz the exposure limit increases to 115 dB. This exposure limit can be increased if there is no possibility that the ultrasound signal can be coupled with the human body. This means that for an ultrasound signal with a frequency over 132 kHz, the exposure limit is greater than 115 dB.
[0056] A further advantage of using an ultrasound signal is that ultrasound is more directional than signals within the human audible frequency range. This means that ultrasound signals may have a better defined flight path and spatial filtering techniques may be utilised to minimise multi-path propagation.
First acoustic signal and second acoustic signal transmission time
[0057] As described above the electronic device 120 may be located on the user 110 about a central axis 310, with the first transducer 140 and second transducer 150 located approximately symmetrical about the central axis 310. This is illustrated in Fig. 3.
[0058] In this configuration, when the user 110 is facing forward (i.e. facing in a direction of the central axis 310) the distances 320, 330 from the first transducer 140 and the second transducer 150 to the electronic device 120 respectively are approximately equal.
[0059] In this way, when the electronic device 120 transmits a first acoustic signal and second acoustic signal to the first transducer 140 and second transducer 150 respectively, the first transducer 140 and second transducer 150 may receive the first acoustic signal and second acoustic signal at approximately the same time. In other words, the first acoustic signal and second acoustic signal may require an approximately equal amount of time to arrive at the first transducer 140 and second transducer 150 respectively.
[0060] In other examples the first transducer 140 and second transducer 150 may receive the first acoustic signal and second acoustic signal within a delay of each other.
[0061] Similarly, when the first transducer 140 and second transducer 150 transmit the first acoustic signal and second acoustic signal respectively to the electronic device 120, the electronic device 120 receives the first acoustic signal and second acoustic signal at approximately the same time.
[0062] In other examples the electronic device 120 may receive the first acoustic signal and second acoustic signal within a delay of each other.
[0063] However, when the first transducer 140 and second transducer 150 are not located approximately symmetrically about the central axis 310 the time taken for the first acoustic signal and second acoustic signal to be transmitted and received between the transducers 140, 150 and the electronic device 120 may not be approximately equal. This means that there may be a delay between the times that the electronic device 120 receives the first acoustic signal and the second acoustic signal. Similarly, when the electronic device 120 transmits the first acoustic signal and second acoustic signal there may be a delay between the times that the first transducer 140 and second transducer 150 receive the first acoustic signal and second acoustic signal respectively. This may occur when the orientation of the user's head changes (for example, when the user turns his head left or right).
[0064] Referring to Fig. 4, consider the user 110 with the anatomical feature of the head turned in a left direction. In this way, the distance 420 from the first transducer 140 to the electronic device 120 is different to the distance 430 from the second transducer 150 to the electronic device 120. In this example the distance 420 may be less than the distance 430. In this way, the time for the first acoustic signal to transmit and receive between the first transducer 140 and the third transducer 160 (and/or additional transducers of the electronic device 120) may be shorter than the time for the second acoustic signal to transmit and receive between the second transducer 150 and the third transducer 160 (and/or additional transducers of the electronic device 120).
[0065] A similar situation is illustrated in Fig. 5, where the head of the user 110 is turned to the right. In this example the distance 530 from the second transducer 150 to the electronic device 120 may be less than the distance 520 from the first transducer 140 to the electronic device 120. In this way, the time for the second acoustic signal to be transmitted and received between the second transducer 150 and the third transducer 160 (and/or additional transducers of the electronic device 120) may be shorter than the time for the first acoustic signal to be transmitted and received between the first transducer 140 and the third transducer 160 (and/or additional transducers of the electronic device 120).
Determining a first received time associated with the first acoustic signal 230
[0066] As described above the method 200 also includes determining 230 a first received time associated with the first acoustic signal. In one example, the first transducer 140 transmits the first acoustic signal and the first received time indicates a time when the first acoustic signal is received by the third transducer 160 (and/or additional transducers) of the electronic device 120. In some examples the first transducer 140 transmits the first acoustic signal in accordance with a clock signal. The clock signal may be associated with the first transducer 140, the wearable device 130 or another device.
[0067] In another example, the third transducer 160 and/or additional transducers of the electronic device 120 transmits the first acoustic signal and the first received time indicates a time when the first acoustic signal is received by the first transducer 140. In some examples the third transducer 160 and/or additional transducers of the electronic device 120 transmits the first acoustic signal in accordance with a clock signal. The clock signal may be associated with the third transducer 160 and/or additional transducers of the electronic device 120, the electronic device 120 or another device.
Determining a second received time associated with the second acoustic signal 240
[0068] As described above the method 200 also includes determining 240 a second received time associated with the second acoustic signal. In one example, the second transducer 150 transmits the second acoustic signal and the second received time indicates a time when the second acoustic signal is received by the third transducer 160 and/or additional transducers of the electronic device 120. In some examples the second transducer 150 transmits the second acoustic signal in accordance with a clock signal. The clock signal may be associated with the second transducer 150, the wearable device 130 or another device.
[0069] In another example, the third transducer 160 and/or additional transducers of the electronic device 120 transmits the second acoustic signal and the second received time indicates a time when the second acoustic signal is received by the second transducer 150. In some examples the third transducer 160 and/or additional transducers of the electronic device 120 transmits the second acoustic signal in accordance with a clock signal. The clock signal may be associated with the third transducer 160 and/or additional transducers of the electronic device 120, the electronic device 120 or another device.
Determining the orientation of the anatomical feature of the user 250
[0070] As described above the method also includes determining 250 the orientation of the anatomical feature of the user in relation to the electronic device 120 based on the first and second received times.
[0071] In the present disclosure, determining 250 the orientation of the anatomical feature of the user 110 is based on the delay in the first acoustic signal and second acoustic signal being received. As described above the first acoustic signal and second acoustic signal may be received by the third transducer 160 (and/or additional transducers of the electronic device 120). In other examples the first acoustic signal and second acoustic signal may be received by the first transducer 140 and second transducer 150 respectively.
[0072] Determining 250 the orientation of the anatomical feature of the user may be based on a first time difference and a second time difference. The first time difference may be based on a difference between a first time reference and the first received time. Examples of the first received time are described above with respect to step 230 of method 200.
[0073] In one example the first time reference may indicate a time that the first acoustic signal is transmitted by the first transducer 140 or third transducer 160 (or additional transducers of the electronic device 120). In another example the first time reference may indicate a common time for transmission of the first acoustic signal and another acoustic signal, such as the second acoustic signal. In other examples the first time reference may indicate another event associated with the first acoustic signal.
[0074] In a similar way the second time difference may be based on a difference between a second time reference and the second received time. Examples of the second received time are described above with respect to step 240 of method 200.
[0075] The second time reference may indicate a time that the second acoustic signal is transmitted. In another example the second time reference may indicate a common time for transmission as described above. In other examples the second time reference may indicate another event.
[0076] In one example, determining 250 the orientation of the anatomical feature of the user 110 comprises determining the orientation of the user's head in the azimuth direction. In this example, determining 250 the orientation of the user's head may be based on the first time difference and the second time difference. For instance, if the first time difference is greater than the second time difference, determining 250 the orientation may comprise determining that the user's head is oriented in a left direction as illustrated in Fig. 4. If the first time difference is less than the second time difference, determining 250 the orientation may comprise determining that the user's head is oriented in a right direction as illustrated in Fig. 5.
[0077] In a similar way, if the first time difference and the second time difference are equal, determining 250 the orientation may comprise determining that the user's head is oriented approximately on the central axis 310. In other examples the orientation may be determined as on the central axis 310 if the values of the first time difference and second time difference are within a threshold of each other.
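A minimal Python sketch of the decision in paragraphs [0076] and [0077] follows. The time differences are assumed to be supplied by the earlier steps, and the threshold value is an assumed figure, not taken from this specification.

```python
def head_orientation(first_time_difference: float,
                     second_time_difference: float,
                     threshold: float = 50e-6) -> str:
    """Classify azimuth head orientation from the two time differences
    (seconds), per paragraphs [0076]-[0077]. The 50 microsecond
    threshold is an assumed value, not taken from the specification."""
    delta = first_time_difference - second_time_difference
    if abs(delta) <= threshold:
        return "on central axis"   # differences equal within the threshold
    if delta > 0:
        return "left"              # first difference greater -> head turned left (Fig. 4)
    return "right"                 # first difference smaller -> head turned right (Fig. 5)
```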
Third transducer 160 of the electronic device 120
[0078] As described above the electronic device 120 comprises the third transducer. The electronic device 120 may also comprise additional transducers, such as two or more microphones. The two or more microphones may be located in spatially dispersed locations of the electronic device 120. In one example the two or more microphones may form a linear microphone array 600. An example of the linear microphone array 600 is illustrated in Fig. 6.
[0079] The microphones 610, 620, 630, 640 in the linear microphone array 600 may be spaced to avoid spatial aliasing. This means that the microphones 610, 620, 630, 640 may be spaced so that a distance 660 between each microphone is less than half the wavelength of the first acoustic signal and the second acoustic signal. For instance, if the frequency of the first acoustic signal and the second acoustic signal is 132 kHz the corresponding wavelength will be around 2.5 mm. This means that the microphones 610, 620, 630, 640 in the array 600 may be located with a distance of less than 1.25 mm between each microphone.
[0080] In other examples the microphones 610, 620, 630, 640 in the array 600 are located at a distance greater than half the wavelength of the first acoustic signal and the second acoustic signal.
[0081] As also described above the at least two microphones may form at least one microphone cluster in the electronic device 120. There may be more than one microphone cluster in the electronic device 120. For example there may be two microphones per microphone cluster. The two microphones may be spaced at a distance from each other to avoid spatial aliasing.
[0082] An example of the microphone clusters is illustrated in Fig. 7. The microphones 730, 740, 750, 760, 770, 780 of the microphone clusters 710, 720 may be spaced at a distance from each other to avoid spatial aliasing. That is, the microphones 730, 740, 750, 760, 770, 780 are located at a distance 790 less than half the wavelength of the first acoustic signal and the second acoustic signal. In a further example the microphone clusters 710, 720 are placed as far apart from each other as allowable by the dimensions of the third transducer 160 and the electronic device 120. This is so that the received acoustic signals at the microphones 730, 740, 750, 760, 770, 780 are sampled with less correlation.
Microphone array for localisation
[0083] In some examples the method 200 further comprises processing the signals received at the two or more microphones of the electronic device 120 to determine a first location and a second location, where the first location and second location are associated with the first and second transducers respectively.
[0084] Determining the first location may comprise processing the signals received at the array 600 or clusters 710, 720 to determine the direction of arrival (DOA). This may be based on a time delay of arrival (TDOA) estimation. This may further be based on the delays measured from transmission of the first acoustic signal and second acoustic signal between the electronic device 120 and the transducers 140, 150. In other examples localisation may be based on other acoustic signals transmitted between the electronic device 120 and transducers 140, 150.
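One common way to obtain such a time delay estimate between two microphone channels is cross-correlation. The NumPy sketch below is illustrative only; the specification does not prescribe a particular estimator, and the sample rate and test signals are assumed values.

```python
import numpy as np

def tdoa_crosscorr(x: np.ndarray, y: np.ndarray, fs: float) -> float:
    """Estimate the delay of y relative to x (seconds) from the peak of
    their cross-correlation. A positive result means y lags x."""
    corr = np.correlate(y, x, mode="full")      # lags from -(N-1) to +(N-1)
    lag = np.argmax(corr) - (len(x) - 1)
    return lag / fs

# Tiny self-test: a noise burst delayed by 24 samples (0.5 ms at 48 kHz).
rng = np.random.default_rng(0)
x = rng.standard_normal(960)
delay_samples = 24
y = np.concatenate([np.zeros(delay_samples), x])[:len(x)]
print(tdoa_crosscorr(x, y, fs=48_000))          # ~0.0005 s
```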
[0085] In another example determining the first location comprises performing beamforming on the signals received at the microphones in the array 600 or clusters 710, 720. Examples of beamforming may comprise a delay and sum beamformer, or a minimum variance distortionless response beamformer.
[0086] In other examples other methods may be used such as Multiple Signal Classification (MUSIC), Estimation of Signal Parameters via Rotational Invariance Technique (ESPRIT) or Degenerate Unmixing Estimation Technique (DUET).
Processor 170
[0087] As described above the system 100 comprises a processor 170. Fig. 8 illustrates an example of a processing device. The processing device includes a processor 810, a memory 820 and an interface device 840 that communicate with each other via a bus 830. The memory 820 stores a computer software program comprising machine-readable instructions 824 and data 822 for implementing the method 200 described above, and the processor 810 performs the instructions from the memory 820 to implement the method 200.
[0088] The interface device 840 may include a communications module that facilitates communication with a communications network, and in some examples, with the user interface 840 and peripherals such as data store 822. It should be noted that although the processing device may be an independent network element, the processing device may also be part of another network element. Further, some functions performed by the processing device may be distributed between multiple network elements.
Model, simulations and implementation examples
Model
[0089] The model used in developing the algorithms is derived using estimated average dimensions. These dimensions are used as a guideline for evaluating the model and to build intuition; the model itself does not rely on these dimension measurements.
[0090] A rigid body model is assumed, with the neck as a stick leaning naturally forward at 15 degrees, the tip of the neck connected to a freely rotating vertical rigid stick, which in turn connects to a freely rotating horizontal stick whose ends denote the ear positions. The width of the head is assumed to be 155 mm, the length of the neck 110 mm, and the vertical displacement from the tip of the neck to the ear canal 30 mm. In this proposed model the head is free to rotate sideways by up to 45 degrees in all directions, pivoting on the tip of the neck; this is represented by the vertical stick pivoting on top of the neck stick.
[0091] The head is also allowed to rotate left and right by 80 degrees each side, which is represented by allowing the horizontal ear stick model to rotate. Neck movement is a lot more restricted, allowing 10 degrees backward, 50 degrees frontal, and 45 degrees sideways, with pivot angle allowance interpolated for directions in between.
[0092] The proposed electronic device 120 is located 70 mm down from the base of the neck and extends forward by 50 mm. Such a location is used as a general guideline only; the actual microphone plane can be of any orientation as long as its orientation does not change drastically during the life of operation. For example, the system can have a number of microphones on the receiving device in an orientation A. If the device were to assume an orientation B after wearer calibration, the system needs to be re-calibrated before measurements can be made with reasonable accuracy. For the sake of building a clear model, we assume that, horizontally, the ears sit behind the microphone receiver plane in its home position.
[0093] The system works on the principle of time of arrival differences at different microphones. Timing differences work best when the microphones are further apart, whereas directional information is best obtained from microphones that are closer together, i.e. less than half a wavelength apart. An example configuration which considers this positioning of microphones is illustrated in Fig. 7. The significance of placing microphones at different distances from each other is explained in the implementation section. Some implementations rely only on a minimal number of microphones, while others rely on clusters of microphones, which will give better performance in hostile environments. This model assumes two microphones placed on an arbitrary plane for a minimal setup; more microphones on more than one plane will generate less ambiguous results, as this gives less room for aliasing. Intuitively, a point in space can be determined by four or more reference points in space.
[0094] In the model, as spatial measurements are unreliable due to multipath effects or echoes, time differences of arrival at given microphones are relied on, assuming that signals emitted from both ear pieces follow similar pathways. From the simulation results it is clear that, in order to have unambiguous timing-difference to head-position measurements, at least three microphones are required, located in such a manner that their position vectors can span a 3-D space. The two microphone locations in the modelling are assumed to be located diagonally at opposite corners of the receiving device, i.e. 180-degree rotationally invariant. It is further assumed that sound travels in a homogeneous medium before reaching the receiving device, hence a constant speed is assumed. This speed is assumed to be 340 m/s.
[0095] Robotics kinematic methods are used to generate all possible ear locations, by encoding the above assumptions into a set of Denavit-Hartenberg (DH) matrices. DH matrices are four by four matrices taking both rotation and displacement into consideration to generate a new set of basis vectors for the next connected rigid frame. In this model, 5 DH matrices are used to find all points of interest, i.e. neck, head, left ear, right ear, and an arbitrary unit facing direction. The arbitrary unit facing direction is used to generate the corresponding head facing direction in relation to the left and right ear locations. It is a rigid frame connected to the horizontal ears frame on a fixed joint, coinciding with the tip of the vertical head position frame, pointing straight forward when the system is at its home position. The final head facing direction is calculated by subtracting the head position vector from the arbitrary unit facing direction position vector; this gives a unit vector pointing in the same direction as the head is facing.
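A DH transform can be written as a single four by four homogeneous matrix. The NumPy sketch below uses the standard DH convention; the example parameter values loosely follow the model dimensions above but are illustrative only, and the actual joint parameterisation used in the simulations is not given in this specification.

```python
import numpy as np

def dh_matrix(theta: float, d: float, a: float, alpha: float) -> np.ndarray:
    """Standard Denavit-Hartenberg homogeneous transform: rotation theta
    about z, translation d along z, translation a along x, rotation
    alpha about x. Chaining such matrices maps points from one rigid
    frame (e.g. neck) to the next (e.g. head, then ears)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Example: chain two frames and express the last frame's origin in the
# base frame (numbers echo the model dimensions but are illustrative).
T = dh_matrix(np.deg2rad(15), 0.110, 0.0, 0.0) @ dh_matrix(0.0, 0.030, 0.0775, 0.0)
ear_in_base = T @ np.array([0.0, 0.0, 0.0, 1.0])
```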
[0096] Location simulation can be analysed in two ways, using forward and/or reverse kinematics. Forward kinematic formulation was used to generate all possible ear locations with reasonable resolution. Time differences between sounds emitted from both ears are then calculated at both proposed microphones. It will be seen that the left and right ear piece sound emissions are coded so that the calculated time differences can be either positive or negative depending on the reference ear selected.
[0097] It is clear from forward kinematic simulations that there exists a definitive relationship between head movement on a given plane and the time differences received at the microphones, denoted by f(HD, TD1, TD2, ...), where HD denotes the head facing direction and TDx denotes the time delay at the microphone indexed by x. However, due to the limitation of the microphone locations, which were assumed to be on the same plane, aliasing of measurements on one plane does occur against measurements on other planes, i.e. head directions on different planes can produce the same time-delay values (TD1, TD2, ...).
[0098] Fig. 9 illustrates a simulation result 910 with the neck at its natural/home position with head performing both azimuth and altitude rotations. The points on the chart are labelled as (azimuth, altitude) angle pairs.
[0099] From Fig. 9 it is clear that there is much overlapping between the measurements at different angles. As discussed above, such aliasing can be reduced by the inclusion of multiple microphones. Yet a two-microphone setup does generate a usable set of measurements if the system is calibrated properly. As can be seen from Fig. 9, all vertical curves at different altitudes are similar but with different offsets. A set of time difference pairs is obtained by unskewing such curves, remapping them onto their PCA vectors. Values on different curves are then averaged to a single curve, which in turn is fitted to a polynomial function HD = f(TD1, TD2, ...). There will be more TD components in the fitting if more than two microphones or microphone clusters are used; this now becomes the model of the system.
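The unskew-and-fit step can be sketched as follows (NumPy). The simulated (TD1, TD2) pairs and azimuth angles are assumed inputs from the forward kinematic simulation, the projection onto a single principal axis is a simplification of the remapping described above, and the polynomial degree is an arbitrary choice.

```python
import numpy as np

def fit_head_direction_model(td_pairs: np.ndarray, azimuth: np.ndarray, degree: int = 3):
    """td_pairs: (N, 2) simulated (TD1, TD2) pairs; azimuth: (N,) head
    azimuth angles from the forward-kinematic simulation. Returns the
    TD-cloud mean, its principal component vectors, and polynomial
    coefficients for HD = f(unskewed TD)."""
    mean = td_pairs.mean(axis=0)
    centred = td_pairs - mean
    _, _, vt = np.linalg.svd(centred, full_matrices=False)   # rows of vt: PCA vectors
    unskewed = centred @ vt[0]       # coordinate along the azimuth-related axis
    coeffs = np.polyfit(unskewed, azimuth, degree)           # HD = f(TD1, TD2, ...)
    return mean, vt, coeffs
```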
Calibration of model and system
[0100] Before using the proposed model it is recommended to use the following calibration sequence:
1. Prompt user to rotate head left and right while holding a steady elevation;
2. Prompt user to look up and then down while facing straight ahead;
3. Prompt user to look straight ahead and rock head side to side;
4. Calculate the average vector differences of time delay, represented using the (left sensor time difference, right sensor time difference) format. This is done separately on samples obtained from step 1 and step 2. As an example, let h1, h2, ... be samples taken from step 1, and v1, v2, ... be samples taken from step 2. One set of vector differences is found by taking differences between all pairs of h1, h2, ..., with the difference vectors all made to face the same direction. Another set of vector differences is found using the same method but on v1, v2, ... .
[0101] The resulting vectors are then normalised. These normalised vectors are approximately the Principal Component Vectors (but may be 180 degrees out of phase, as direction is not certain) in the model.
[0102] After obtaining the vectors, the principal axis associated with altitude changes can then be used to map new sample points back onto the zero-altitude curve. After this, HD = f(TD1, TD2, ...) can be used to obtain the desired head direction vector.
[0103] Calibration of the proposed electronic device 120 may occur in the following way. At implementation, the electronic device 120 may store two pieces of information:
1. Principal Component Vectors obtained from the calibration process described above; and
2. HD = f(TD1, TD2, ...), the mapping function. This mapping function may be stored as a polynomial so that precise calculations may be carried out during run time, or as a look-up table with missing values interpolated at run-time.
At run time, the head orientation of the user 110 may be calculated as follows (a sketch of this procedure is given after the list):
1. obtaining the time difference measurements at both microphones (assuming a dual microphone setup) after pre-processing;
2. using the stored principal vectors to map the given points onto the zero elevation curve; and
3. using the stored mapping function HD = f(TD1, TD2, ...), either in polynomial or table form, to generate an estimate of the final head position.
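A sketch of this run-time procedure, assuming a dual microphone setup and the two stored calibration artefacts described above, might look like the following; the variable names are illustrative only.

```python
import numpy as np

def estimate_head_direction(td_pair, td_mean, principal_vectors, poly_coeffs):
    """Run-time estimate. td_pair: measured (TD1, TD2) after
    pre-processing; td_mean, principal_vectors, poly_coeffs: values
    stored at calibration time. Projecting onto the first principal
    vector maps the point onto the zero elevation curve; the stored
    polynomial HD = f(TD1, TD2, ...) then yields the head direction."""
    centred = np.asarray(td_pair) - td_mean
    on_zero_elevation = centred @ principal_vectors[0]
    return np.polyval(poly_coeffs, on_zero_elevation)
```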
Implementation
[0104] Fig. 10 illustrates a possible hardware implementation 1000 of the proposed electronic device 120. In this implementation the microphone array 1030 is connected to an audio coder-decoder (CODEC) 1020 which transfers the audio signal to a digital signal processor 1010 via multiple I2S buses 1070. In one example there may be three I2S buses.
[0105] In this example implementation 1000 the wearable device 130 may be connected via a physical connection 1040 to the coder-decoder 1020 or via a suitable wireless connection through a radio frequency processing unit 1080. The digital signal processor 1010 may be substituted by a micro-controller unit (MCU) that is able to perform the calculations used in the proposed system in a synchronous manner.
[0106] During operation of the electronic device 120, an additional stereo waveform may be either added onto the normally transmitted audio signal and sent out on the same speakers as normal audio, or sent separately on dedicated speakers on the wearable device (such as the first transducer 140 and second transducer 150). The stereo waveform may be designed with the guidelines detailed later. With open-fit headphones as the wearable device 130, it is adequate to have the stereo waveform energising the same set of speakers as the normally transmitted audio signals. Depending on the distance between the speakers/transducers and the microphone pick-up, sound leakage through the open-fit headphones can be adequate for proper functioning of the proposed system. If the headphones of the wearable device 130 are designed as closed fit, i.e. letting out the minimum possible amount of sound leakage through the speakers/transducers, a separate set of speakers may need to be added on top of the headphones, in an exposed position from which the sound output is able to be picked up by the designated microphones of the proposed system.
[0107] In one example, the stereo waveform may be generated directly using the digital signal processor 1010. A combination of the stereo waveform and audio signals may be computed either on the digital signal processor 1010 or using the coder-decoder 1020. When the combination is performed on the digital signal processor 1010 before output, the signals of interest must be re-sampled so that the output waveform is not distorted or aliased. Audio communication buses, such as PCM or I2S buses, if used, also need to be configured with bit clocks and frame clocks according to the re-sampled sample rate. If the signal combination is performed on the coder-decoder 1020, the signals of interest are transmitted over separate buses, allowing signals with different sampling rates to be transferred in synchronisation. The signals are then combined on the coder-decoder using its analog mixer. For a lower cost solution, the signals to be combined may be re-sampled to the same sampling rate. Addition is then performed on the digital signal processor 1010 before being transmitted to either an on-board or external digital-to-analog converter (DAC), which is then output to the final stage of signal conditioning before energising the speakers. In another embodiment, the stereo waveform is generated via the signal-generation-enabled coder-decoder 1020, and addition of the waveform with the processed audio signal is done using the coder-decoder's internal analog audio multiplexers (MUX).
Transmitter design
[0108] A good transmitter design is essential to the working of the proposed system. Key design requirements on the transmitter side include the ability to provide just enough information so that the receivers (microphones) can differentiate the left and right sound sources, yet not strain the receivers so much that dedicated hardware/software is required to decode such information.
[0109] The following rules are recommended:
1. Frequency of the waveform for the carrier wave for modulation should be outside the human audible range, i.e. 20 Hz to 20 kHz, preferably in the lower ultrasonic ranges for easier signal processing.
2. The shape of the waveform of the acoustic signals (the baseband signal) is periodically pulsated, with a period greater than twice the time taken for sound to travel between the two ears. Let T denote the period, d the distance between the ears, and v the speed of sound, which in free air approximately equals 340 m/s. We require:

T > 2 × d / v

Since T is greater than twice the maximum possible delay, the maximum possible delay is always less than T/2. Given two sequences of waveforms (i.e. one from the left ear and one from the right ear), we can always determine their relative alignment as the alignment that produces the minimum delay. Following this constraint greatly simplifies periodicity detection.
3. The generated waveform can be of any form, but preferably a frequency modulated signal with orthogonal symbols. Such signals can be easily differentiated from a naturally occurring sound.
[0110] In one example (as described above) simple symbol coded messages over the carrier wave are recommended. Since low-range ultrasound frequencies are preferred, as discussed in the previous section, one possible frequency range for such coded messages would be the [20.5 kHz, 21 kHz] range. The benefits of using this range are discussed later in the receiver section. At least three symbols are recommended to be transmitted, as a symbol count of less than three cannot be guaranteed to be differentiated when the transmission delay time differences are longer than one symbol length. It is good practice to design single symbol transmission times to be longer than the anticipated delay time differences.
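A sketch of one possible transmitter waveform that follows these rules is given below (NumPy). The specific symbol frequencies, symbol duration and sample rate are assumed values chosen to sit in the recommended [20.5 kHz, 21 kHz] band; they are not prescribed by this specification.

```python
import numpy as np

FS = 96_000                 # assumed DAC sample rate, high enough for ~21 kHz tones
SYMBOL_FREQS = [20_500.0, 20_750.0, 21_000.0]   # three symbols in the recommended band
SYMBOL_LEN = 0.002          # 2 ms per symbol, longer than any anticipated inter-ear delay

def symbol_coded_burst(symbol_indices):
    """Concatenate tone-coded symbols in the low ultrasonic band.
    Each symbol is a short windowed sinusoid at one of SYMBOL_FREQS."""
    t = np.arange(int(FS * SYMBOL_LEN)) / FS
    window = np.hanning(t.size)                  # soften symbol edges
    chunks = [window * np.sin(2 * np.pi * SYMBOL_FREQS[i] * t) for i in symbol_indices]
    return np.concatenate(chunks)

# Left and right earpieces transmit differently ordered symbol sequences
# so the receiver can tell the two sources apart (at least three symbols).
left_burst = symbol_coded_burst([0, 1, 2])
right_burst = symbol_coded_burst([2, 1, 0])
```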
Receiver (microphone) design
[0111] As described above the electronic device 120 may comprise at least one microphone. A single microphone with the ability to receive sound in the lower ultrasonic range is adequate for normal operation of the system. Multiple microphones may be used when improved performance is desired. Multiple microphones allow the system to perform direction of arrival (DOA) estimates as well as beamforming to improve the signal-to-noise ratio (SNR). Microphone placements for performing these algorithms need to be designed to avoid spatial aliasing, i.e. the distance d between the microphones should be smaller than half of the shortest wavelength λ, or simply d < λ/2. This restriction may be relaxed when the expected angle of arrival and the beamforming angles are restricted to a given range. For example, for a simple two-sensor (microphone) array, if the arriving angles are between (θ, π − θ), the restriction can be relaxed to d < λ / (2 cos(θ)).
[0112] Received analog signals are first converted into the digital domain by sampling. Commonly used sampling methods include successive approximation and sigma-delta analog to digital converters (ADC), etc. The signal can be either oversampled or undersampled, depending on available processing restrictions. In a processing-restrained system, undersampling is able to unwrap the higher frequency component without aliasing, and as an extra advantage sampling can be done essentially on a small micro-controller using an on-board ADC module, which bears minimal hardware cost.
[0113] The disadvantages of undersampling include loss of accuracy in phase difference measurement between the arriving signals. This means the direction of arrival estimate may become unstable due to frequency wrapping. Oversampling on the other hand is the method of choice when the system signal processing is not as restricted. Oversampling offers greater resolution, where high frequency components are directly available for processing, hence enables algorithms such as direction of arrival (DOA) and beamforming to be more effectively utilized. Details of both embodiments will be discussed in detail below.
[0114] In signal processing, sampling is carried out at the Nyquist sampling rate or above to avoid aliasing of the wanted signal component. The Nyquist sampling theorem states that sampling should be carried out at a rate that is at least double the highest frequency component of interest; in other words, when sampling at frequency f the highest frequency component that can be obtained without aliasing is f/2. In comparison, undersampling and oversampling are defined as sampling below and above this Nyquist sampling rate respectively.
Undersampling pre-processing
[0115] Undersampling is generally avoided unless the system processing cost is constrained, as undersampling will always lose some frequency information, and will generally have a lower SNR due to frequency wrapping. In one example undersampling is used to condition the signal before determining the head position in the core algorithm.
[0116] In this example, bandpass sampling, which is a subclass of undersampling, is used. Bandpass sampling takes advantage of frequency aliasing to sample a frequency region of interest. It is used to reduce the overhead of oversampling in instances where high frequency sampling is not desired (or is difficult to implement), for example on a power constrained embedded system. Bandpass sampling allows the frequency at which the warp occurs, and the position of the warped frequencies, to be chosen arbitrarily.
[0117] The first step in bandpass sampling is to design for an aliasing frequency. This aliasing frequency will be a reference around which all sampled frequencies warp. One choice for such a frequency warping range is the 500 to 1,000 Hz range, with warping occurring at 1,000 Hz and the headphones emitting an ultrasound signal around 20.75 kHz.
[0118] This is a preferred warping frequency in the undersampling example. This may provide the following advantages:
1. Operating in this range only requires a sampling rate of no less than 11,000 samples per second compared to at least 50,000 samples per second, which is a saving on processing power. This eliminates the need for a dedicated high speed ADC and should be easily accomplished by many small micro-controllers.
2. Doing so may also avoid potential constant noise sources such as car and lawn mower noise, which mostly generate noise below 500 Hz. If the waveform frequency is not carefully chosen and coincides with a constant noise source during operation, the resulting reduction in SNR will cause a disruption in the head movement estimation routine.
[0119] The disadvantage of such a low sampling rate is slightly more memory-consuming post-processing. Once the bandpass signal is obtained, the phase difference rather than the cycle difference needs to be examined, because the time delay is less than the period of the designed baseband signal. As a result, the signal needs to be interpolated to a sample rate at which the time difference between samples is at most double the final required resolution.
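As a rough illustration of the bandpass-sampling design in paragraphs [0117] to [0119], the following Python sketch computes where a high-frequency carrier folds to for a candidate sample rate. It assumes, purely for illustration, the 11,000 samples-per-second figure from the list above; in a real design the rate would be tuned so the alias lands inside the chosen warping band, and none of the helper names below are part of this disclosure.

def aliased_frequency(f_signal_hz: float, f_sample_hz: float) -> float:
    """Frequency at which a real tone appears after sampling at f_sample_hz.

    The sampled spectrum repeats every f_sample_hz, so the tone wraps into
    [0, f_sample_hz) and then folds into [0, f_sample_hz / 2].
    """
    f = abs(f_signal_hz) % f_sample_hz
    return min(f, f_sample_hz - f)


if __name__ == "__main__":
    carrier_hz = 20_750.0      # ultrasound emitted by the headphones ([0117])
    sample_rate_hz = 11_000.0  # assumed rate for illustration only
    alias_hz = aliased_frequency(carrier_hz, sample_rate_hz)
    # Prints 1250 Hz for these values; a real design would adjust the rate
    # so the alias lands in the chosen 500 to 1,000 Hz warping band.
    print(f"{carrier_hz:.0f} Hz sampled at {sample_rate_hz:.0f} Hz appears at {alias_hz:.0f} Hz")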
Oversampling pre-processing
[0120] When adequate hardware is available for oversampling pre-processing, the received signals are first oversampled and anti-alias filtered at each sensor of the microphone array 1030. Depending on the hardware budget and the required accuracy, two possible examples are proposed for the initial sampling and filtering stage. In applications requiring high accuracy and with no processing constraints, a multi-cluster microphone array setup is preferred; in a hardware-constrained embodiment, a system can be made operational with as few as two microphones.
[0121] In a multi-cluster microphone array setup, at least two microphone arrays are needed, with at least two microphones in each array. The distance between microphones within an array needs to be less than half the wavelength of the highest expected incoming signal frequency; for the preferred operating frequency range, this is just below 1 cm. In this example, each microphone array in the microphone array cluster 1030 is set up to beamform towards the ears, i.e. the sources of the emitted waveforms. Beamforming is an effective way to combat multi-path reflections in a reverberant environment. As the proposed electronic device 120 is worn in a relatively stationary position with respect to the source of waveform emission, a simple way to design such a beamformer is to use the delay-sum technique, in which the received signal at each microphone is delayed and added to produce the final output.
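As a quick check of the spacing figure in paragraph [0121], the maximum element spacing is half the wavelength of the highest signal frequency. The snippet below computes it for the 20.8 kHz component, assuming a nominal speed of sound of 343 m/s; both values are illustrative assumptions.

SPEED_OF_SOUND = 343.0   # m/s, assumed nominal value at room temperature

def max_mic_spacing(f_max_hz: float) -> float:
    """Half-wavelength spacing limit for a microphone array (metres)."""
    return SPEED_OF_SOUND / (2.0 * f_max_hz)

print(round(max_mic_spacing(20_800.0) * 1000, 1), "mm")   # ~8.2 mm, i.e. just below 1 cm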
[0122] Given the location of each microphone, represented by its coordinates m_i = (m_i_x, m_i_y, m_i_z), and the steering vector w = (1, φ, θ), with φ denoting the angle of azimuth and θ denoting the angle of elevation, the delay needed for the i-th microphone can be calculated as follows:

delay_i = (cos φ cos θ, sin φ cos θ, sin θ) · m_i / (speed of sound)
[0123] All microphone signals can now be summed with their respective delays to generate a beamformed signal towards azimuth φ and elevation θ:

beamformer(φ, θ) = Σ_{i=0}^{N−1} s_i(t − delay_i)

where N is the number of microphones in the array.
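A minimal numerical sketch of the delay-sum beamformer described in paragraphs [0122] and [0123] is given below, with fractional delays implemented by linear interpolation. The array geometry, sample rate, speed of sound and function names are illustrative assumptions only, not values fixed by this disclosure.

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, assumed

def steering_delays(mic_positions, azimuth, elevation):
    """Per-microphone delays (seconds) for a steering direction (azimuth, elevation)."""
    direction = np.array([
        np.cos(azimuth) * np.cos(elevation),
        np.sin(azimuth) * np.cos(elevation),
        np.sin(elevation),
    ])
    return mic_positions @ direction / SPEED_OF_SOUND

def delay_sum(signals, mic_positions, azimuth, elevation, fs):
    """Delay-sum beamformer: shift each channel by its steering delay and add.

    signals: array of shape (num_mics, num_samples), one row per microphone.
    mic_positions: array of shape (num_mics, 3), coordinates in metres.
    """
    n = signals.shape[1]
    t = np.arange(n) / fs
    delays = steering_delays(mic_positions, azimuth, elevation)
    out = np.zeros(n)
    for sig, d in zip(signals, delays):
        # Fractional delay via linear interpolation of the sampled waveform.
        out += np.interp(t - d, t, sig, left=0.0, right=0.0)
    return out

For the two beam directions mentioned in paragraph [0124], this sketch would be called with an elevation of 67 degrees and azimuths of +90 and −90 degrees (converted to radians).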
[0124] The microphones in each array are ideally designed to create two separate beamforming patterns, one for each ear. In this example, both beamformers are designed to point towards 67 degrees elevation, but one with an azimuth angle of +90 degrees and the other of -90 degrees. Beamforming is effective in reducing the multipath effect in a reverberant environment.
[0125] The more microphones in an array, the narrower the main lobe of the beamformer. With only two microphones, fixed time-delay beamforming is adequate to cover the complete range of head movement. In a more hostile environment, a beamformer with more microphones may be required to obtain a signal with higher SNR. In such scenarios, multiple microphones per array are recommended, as well as adaptive beamforming. Including more microphones gives the system the ability to beamform on the signal direction. This is possible because the signals are predefined and deterministic. In one embodiment, the signal is designed as interleaved 20.5 kHz and 20.8 kHz sinusoids, resulting in a new periodic signal whose period differs from either of its two components. The signal remains deterministic and, when processed carefully, provides a good reference signal for the beamformer to beamform onto. There are two general ways of carrying out this process: either by beamforming directly onto the reference signal after filtering, or by first finding the direction of arrival of the signal and then beamforming onto that direction.
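One possible reading of the interleaved two-tone signal mentioned in paragraph [0125] is sketched below: alternating fixed-length bursts of a 20.5 kHz and a 20.8 kHz sinusoid. The sample rate, burst length and function name are assumptions for illustration only, and burst-to-burst phase continuity is not enforced in this sketch.

import numpy as np

def interleaved_reference(fs=96_000, f1=20_500.0, f2=20_800.0,
                          segment_ms=2.0, n_segments=8):
    """One interpretation of the interleaved reference signal:
    alternating fixed-length bursts of the f1 and f2 sinusoids."""
    seg_samples = int(round(fs * segment_ms / 1000.0))
    t_seg = np.arange(seg_samples) / fs
    bursts = []
    for k in range(n_segments):
        f = f1 if k % 2 == 0 else f2
        bursts.append(np.sin(2 * np.pi * f * t_seg))
    return np.concatenate(bursts)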
[0126] In the former method, two possible implementations are recommended. In the first implementation, the signals received from all microphones are first filtered through a bandpass filter that enhances signals present in the desired frequency band. Beamforming is then performed with the weight vector w that minimises E{|w^H x − y_d|²}, where H denotes the Hermitian transpose, x is a matrix of stacked received signal vectors, and y_d is the time-shifted reference signal. y_d is found by first extracting from x the received signal vector, say x_0, corresponding to the first element of the weight vector w, and then finding the time-shifted reference signal with the maximum correlation with x_0. In the second preferred implementation, a beamscan method is used to determine the direction corresponding to the most prominent signal in the given pass-band. The weight vector of the beamscan method can be reused directly to beamform onto the calculated direction, which reduces the computation cost of the system.
[0127] In the second approach, the signal is again passed through a bandpass filter. A DOA method is then used to estimate the direction of the most prominent signal in the filtered frequency band. The estimated direction is then used as an input to the beamformer, which in turn extracts sound from the desired direction. A number of common DOA methods, such as beamscan, MUSIC and ESPRIT, can be used, depending on the computational complexity the system can handle.
[0128] Using separate DOA and beamforming stages allows more accurate adaptive beamformer methods to be used, including the minimum variance beamformer, the linearly constrained minimum variance beamformer and generalised sidelobe canceller beamformers. As these beamforming methods require a known direction to beamform onto, it is essential to first estimate the DOA. These beamforming methods also perform better than naive reference-signal beamforming, as more spatial constraints can be defined with the latter methods.
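A simple beamscan DOA estimate, as mentioned in paragraphs [0126] to [0128], can be obtained by evaluating the beamformer output power over a grid of candidate directions and picking the maximum. The sketch below fixes the elevation and scans azimuth only, and takes the beamformer as a callable (for example the hypothetical delay_sum sketch given earlier); the function names and grid are assumptions, not part of this disclosure.

import numpy as np

def beamscan_azimuth(signals, mic_positions, fs, elevation, beamform,
                     azimuth_grid=None):
    """Return the azimuth with the highest beamformer output power.

    beamform: callable(signals, mic_positions, azimuth, elevation, fs) -> 1-D array,
              e.g. the delay_sum sketch shown earlier.
    """
    if azimuth_grid is None:
        azimuth_grid = np.deg2rad(np.arange(-90, 91, 5))
    powers = np.array([
        np.mean(beamform(signals, mic_positions, az, elevation, fs) ** 2)
        for az in azimuth_grid
    ])
    return azimuth_grid[int(np.argmax(powers))], powers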
Decoding and time difference estimates
[0129] Once the signals are received and pre-processed, they are pushed onto a circular buffer. The buffer size is designed to be larger than four times the symbol length. This is adequate given that the symbol length is assumed to be longer than the inter-aural time difference, as discussed in the section above. Cross-correlation is then calculated between the received signals and the time-shifted known symbol sequences S1(t − τ1) and S2(t − τ2) corresponding to the left-ear and right-ear source encoding. Both S1 and S2 are correlated with all incoming signal streams. Two outcomes are achieved from this:
1. Determining the source of a given signal stream. By calculating the cross-correlation between S1, S2 and the given stream, one cross-correlation results in a higher maximum value than the other, indicating the correct signal source.
2. Determining the time difference. Once the signal source is determined, the cross-correlation results are reused to find the lag τ corresponding to the correct signal source at the peak of its cross-correlation spectrum. The time difference is then calculated by taking the difference of the τ values that correspond to the same signal source but at different receivers.
[0130] Time difference at the different receivers is then used as model input to generate head position estimates.
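The decoding and time-difference step of paragraphs [0129] and [0130] can be sketched as follows, assuming two pre-processed receiver streams and known symbol sequences held as NumPy arrays. The helper names and the way the dominant source is picked (sum of correlation peaks) are illustrative assumptions, not the method fixed by this disclosure.

import numpy as np

def best_lag(stream, symbol, fs):
    """Lag (seconds) and peak magnitude of the cross-correlation of stream with symbol."""
    corr = np.correlate(stream, symbol, mode="full")
    lag_samples = int(np.argmax(np.abs(corr))) - (len(symbol) - 1)
    return lag_samples / fs, float(np.max(np.abs(corr)))

def identify_and_time_difference(stream_a, stream_b, s_left, s_right, fs):
    """Decide which source (left/right) dominates, then return the arrival-time
    difference of that source between the two receiver streams."""
    results = {}
    for name, symbol in (("left", s_left), ("right", s_right)):
        lag_a, peak_a = best_lag(stream_a, symbol, fs)
        lag_b, peak_b = best_lag(stream_b, symbol, fs)
        results[name] = (lag_a, lag_b, peak_a + peak_b)
    # The source whose symbol correlates most strongly is taken as the origin.
    source = max(results, key=lambda k: results[k][2])
    lag_a, lag_b, _ = results[source]
    return source, lag_a - lag_b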
SNR enhancement
[0131] There is always noise associated with the received signals, as well as quantisation noise associated with the DOA, beamforming and cross-correlation stages. Hence, a smoothing filter is recommended at the output stage. This also makes sense physically, as head movements are smooth in nature. There are two ways to perform such a task: one is a simple low-pass smoothing filter, which may or may not be adaptive; an SNR improvement technique using adaptive filtering is also discussed.
[0132] In a simple smoothing filter setup, the filter can be formulated as a low-pass smoothing of the head-direction estimates,
[0133] where x and y are the estimated head direction in (x, y) and α is the damping factor on x and y as well as the cross damping between them. The direct damping factor is bounded by the reasonable head rotation speed at a given angle, and the cross damping factor is dictated by the physical correlation between x and y under a spherical constraint. All α values can be made adaptive by using the physical limits of a head movement model.
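As one concrete (assumed) instance of the simple smoothing filter of paragraphs [0132] and [0133], an exponentially weighted update with separate direct and cross damping terms could look like the sketch below. The update form, coefficient values and function name are illustrative assumptions, not the filter defined in this disclosure.

def smooth_head_direction(prev, meas, a_direct=0.3, a_cross=0.05):
    """One smoothing step for the (x, y) head-direction estimate.

    prev: (x, y) previous smoothed estimate.
    meas: (x, y) new raw estimate from the timing model.
    a_direct: damping applied to each coordinate's own innovation.
    a_cross: cross damping coupling the x and y innovations.
    """
    px, py = prev
    mx, my = meas
    ex, ey = mx - px, my - py            # innovations
    x = px + a_direct * ex + a_cross * ey
    y = py + a_direct * ey + a_cross * ex
    return x, y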
[0134] An SNR enhancement technique can also be used at the time estimation stage, using a Kalman filter setup, if prior knowledge of the noise environment is available. In designing the Kalman filter, the following is assumed:
1. The timing difference obtained previously is the actual timing difference plus a small amount of noise due to quantisation errors and noise-induced quantisation errors.
2. Noise-induced errors in timing are assumed to be Gaussian distributed, and the 3 dB attenuation time band of the auto-correlation main lobe is assumed to sit within three standard deviations of that distribution.
[0135] From the above assumptions there is the following:

X ~ N(μx, σx²) = N(timing of x, (autocorr(sL) mainlobe@3dB / 3)²)
Y ~ N(μy, σy²) = N(timing of y, (autocorr(sR) mainlobe@3dB / 3)²)
[0136] This gives the final time difference measurement distribution as:
Z ~ N(μy − μx, σx² + σy²)
[0137] Hence standard deviation of the observation error for Kalman filtering is:
σ_z = √(σx² + σy²)
[0138] As this is a one-dimensional Kalman filter, all variables reduce to numerical values:

x_k = x_(k−1) + u_k
z_k = x_k + v_k
[0139] where x and z are the actual and observed values respectively, and u and v are the system noise and observation noise respectively. The time update reduces to:

x̂_k⁻ = x̂_(k−1)
P_k⁻ = P_(k−1)
[0140] The measurement update becomes:

K_k = P_k⁻ / (P_k⁻ + R)
x̂_k = x̂_k⁻ + K_k (z_k − x̂_k⁻)
P_k = (1 − K_k) P_k⁻
[0141] where x̂_k is the smoothed timing estimate.
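A compact sketch of this one-dimensional Kalman smoother is shown below, following the time and measurement updates of paragraphs [0138] to [0141]. The observation variance R would come from the standard deviation in [0137]; the small process-noise term Q is an added assumption so the gain does not collapse to zero, and setting it to zero recovers the reduction in [0139] exactly. The class name and default values are illustrative only.

class TimingKalman1D:
    """One-dimensional Kalman filter smoothing a stream of timing-difference
    observations z_k."""

    def __init__(self, r_obs, q_process=1e-10, x0=0.0, p0=1.0):
        self.R = r_obs          # observation noise variance (sigma_x^2 + sigma_y^2)
        self.Q = q_process      # process noise variance (assumed, may be zero)
        self.x = x0             # smoothed timing estimate
        self.P = p0             # estimate variance

    def update(self, z):
        # Time update (random-walk model: prior equals previous estimate).
        x_prior = self.x
        p_prior = self.P + self.Q
        # Measurement update.
        k_gain = p_prior / (p_prior + self.R)
        self.x = x_prior + k_gain * (z - x_prior)
        self.P = (1.0 - k_gain) * p_prior
        return self.x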
[0142] It will be appreciated by persons skilled in the art that numerous variations and/or modifications may be made to the above-described embodiments, without departing from the broad general scope of the present disclosure.
[0143] It should be understood that the techniques of the present disclosure might be implemented using a variety of technologies.
[0144] The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive.

Claims (19)

CLAIMS:
1. A method (200) for determining an orientation of a head of a user in relation to an electronic device comprising a microphone, wherein a wearable device is located relative to the head of the user, the wearable device comprising a first speaker to transmit a first acoustic signal and a second speaker to transmit a second acoustic signal, the method comprising:
transmitting, from the first speaker, the first acoustic signal and receiving the first acoustic signal with the microphone;
transmitting, from the second speaker, the second acoustic signal and receiving the second acoustic signal with the microphone;
determining (230) a first received time associated with the first acoustic signal received at the microphone;
determining (240) a second received time associated with the second acoustic signal received at the microphone; and
determining (250) the orientation of the head of the user in relation to the electronic device based on the first and second received times.
2. The method of claim 1 wherein the first received time indicates a time when the first acoustic signal is received by the microphone and the second received time indicates a time when the second acoustic signal is received by the microphone.
3. The method of claim 1 or 2, wherein determining the orientation of the head of the user is further based on a first time difference and a second time difference, the first time difference being based on a difference between a first time reference and the first received time, wherein the first time reference indicates a time that the first acoustic signal was transmitted; the second time difference being based on a difference between a second time reference and the second received time, wherein the second time reference indicates a time that the second acoustic signal was transmitted.
4. The method of any one of the preceding claims, wherein the microphone comprises two or more microphones, the method further comprising:
receiving the first and second acoustic signal at the two or more microphones; and
processing the first and second acoustic signals received at the two or more microphones to determine a first location and a second location associated with the first and second speakers respectively.
5. The method of claim 4 wherein the at least two microphones form at least one microphone cluster in the electronic device.
6. The method of any one of the preceding claims wherein the first acoustic signal and the second acoustic signal comprise an ultrasonic signal.
7. The method of claim 6, wherein the ultrasonic signal acts as a carrier wave.
8. The method of any one of the preceding claims wherein the first acoustic signal and the second acoustic signal are acoustic leak signals.
9. The method of any one of the preceding claims wherein the first speaker is associated with a first ear of the user and the second speaker is associated with a second ear of the user.
10. The method of any one of the preceding claims wherein the electronic device is located on the user.
11. The method of any one of the preceding claims wherein the first speaker and the second speaker of the wearable device are located approximately symmetrically about a central axis associated with the user.
12. The method of claim 11 wherein the electronic device is approximately located on the central axis.
13. A system (100) for determining an orientation of a head of a user (110) in relation to an electronic device (120), the system comprising:
a wearable device (130) located relative to the head of the user (110), the wearable device comprising a first speaker (140) to transmit a first acoustic signal and a second speaker (150) to send a second acoustic signal;
the electronic device (120) comprising a microphone (160), wherein the first acoustic signal is transmitted from the first speaker and received at the microphone (160), and wherein the second acoustic signal is transmitted from the second speaker and received at the microphone (160); and
a processor (170) configured to:
determine a first received time associated with the first acoustic signal received at the microphone;
determine a second received time associated with the second acoustic signal received at the microphone; and
determine the orientation of the head of the user (110) in relation to the electronic device (120) based on the first and second received times.
14. The system of claim 13 wherein the microphone comprises two or more microphones.
15. The system of claim 14 wherein the at least two microphones form at least one microphone cluster in the electronic device.
16. The system according to any one of claims 13 to 15, wherein the first acoustic signal and the second acoustic signal comprise an ultrasonic signal.
17. The system according to claim 16, wherein the ultrasonic signal acts as a carrier wave.
18. The system according to any one of claims 13 to 17, wherein the first acoustic signal and the second acoustic signal are acoustic leak signals.
19. The system according to any one of claims 13 to 18, wherein the first speaker is associated with a first ear of the user and the second speaker is associated with a second ear of the user.
Fig. 1: System showing the user (110), a wearable device with a first transducer (140) and a second transducer (150), and an electronic device (120) comprising a third transducer (160) and a processor (170).
