GB2618326A - Electronic device for use with an audio device - Google Patents
Electronic device for use with an audio device
- Publication number
- GB2618326A (application GB2206376.2A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- signal
- audio
- location
- output
- electronic device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/008—Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/305—Electronic adaptation of stereophonic audio signals to reverberation of the listening space
-
- G—PHYSICS
- G08—SIGNALLING
- G08C—TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
- G08C23/00—Non-electrical signal transmission systems, e.g. optical systems
- G08C23/02—Non-electrical signal transmission systems, e.g. optical systems using infrasonic, sonic or ultrasonic waves
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/018—Audio watermarking, i.e. embedding inaudible data in the audio signal
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/11—Positioning of individual sound objects, e.g. moving airplane, within a sound field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/15—Aspects of sound capture and related signal processing for recording or reproduction
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- Stereophonic System (AREA)
Abstract
An electronic device receives an analogue audio signal 165 from an audio device 160, such as a microphone, generates a digital location signal based on information from a sensor 130, and combines the two into a single analogue output signal 155. The digital location signal (e.g. orientation or geographic position) is modulated 140 into a high frequency analogue signal. The embedded location signal may be used for virtual reality 3D audio or spatial convolution reverberation.
Description
ELECTRONIC DEVICE FOR USE WITH AN AUDIO DEVICE
TECHNICAL FIELD
The present disclosure relates to an electronic device for use with an audio device. Aspects of the invention relate to an electronic device for use with an audio device, to an audio device, to a system, and to a method for generating an output signal including an audio signal and a location signal of an audio device.
BACKGROUND
It is known to provide electronic audio devices such as microphones or electronic instruments including guitars to detect, generate and/or provide audio signals comprising audio data. Such audio signals may be recorded during live performances and played back. For example, a recording of a musical performance or a presentation may be generated by an electronic microphone. The audio signal may be stored and later played back.
It is also known to provide virtual or extended reality applications to simulate pre-recorded or live-streamed audio performances, presentations or events including musical concerts, shows or presentations. A user of the virtual or extended reality application may optionally be provided with a visual representation of a musical artist, a speaker or performer alongside a pre-recorded or live-streamed audio signal of an event.
SUMMARY OF THE INVENTION
Aspects and embodiments of the invention provide an electronic device for use with an audio device, an audio device, a system, and a method for generating an output signal including an audio signal and a location signal of an audio device as claimed in the appended claims.
According to an aspect of the present invention there is provided an electronic device for use with an audio device, the electronic device comprising: an input configured to receive an audio signal from the audio device; at least one sensor configured to determine location information of the audio device; a controller configured to: generate a location signal based on the location information of the audio device; and combine the audio signal and the location signal to generate a single output signal; and an output configured to output the output signal. Advantageously, the audio signal and the location signal are simultaneously generated and/or received and combined into a single output signal. The output signal therefore comprises location and audio data which are associated with a common time period, and a location of the audio device during the audio signal can be tracked.
In some examples, the controller is configured to generate the location signal during reception of the audio signal to correspond to a time period of the audio signal. Advantageously, the location information of the audio device included in the location signal corresponds to data received from the audio device, and the audio device can be tracked during transmission or generation of the audio signal.
In some examples, the input is configured to receive the audio signal and the at least one sensor is configured to determine the location information simultaneously.
In some examples, the input is configured to receive the audio signal as a continuous stream of audio data. Advantageously, no user input or configuration is required, and the electronic device may directly receive the audio signal from a conventional output of the audio device.
In some examples, the received audio signal is an analogue signal. Advantageously, the audio device may be a conventional audio device, and the audio signal may be combined with the location signal without requiring modification to the audio signal, and without user input required. The signal combination may be performed in the electronic device proximal to the audio device.
In some examples, the audio signal comprises high-resolution digital audio. In one example, the high-resolution digital audio has a sampling rate above approximately 44.1 kHz and at least 16-bit (CD-quality) resolution. Advantageously, the location signal may include data outside of the range of typical audible frequencies.
In some examples, the generated location signal is a digital signal; and the controller is configured to: convert the digital location signal into an analogue location signal; and combine the analogue location signal and the audio signal to generate the output signal as an analogue signal.
In some examples, the electronic device further comprises a modulator configured to convert the digital location signal into the analogue location signal; and the analogue location signal is a high frequency location signal. Advantageously, the location signal may be configured to have a frequency outside of the range of typical audible frequencies.
In some examples, the location information comprises at least one of orientation and/or geographic position of the audio device. Advantageously, the location of the audio device may be determined in more than one respect, and with respect to a plurality of axes. That is, a geographic position and an orientation of the audio device may be determined during the audio signal.
In some examples, the at least one sensor includes at least one of a gyroscope and/or a position sensor.
In some examples, the electronic device is configured to couple to the audio device. Advantageously, the electronic device is provided proximal to the audio device in use and may accurately determine the location information of the audio device. The electronic device may also operate on electrical power received from the audio device.
In some examples, the electronic device is configured to couple to the audio device via one of an XLR or a 1/4" TRS connection. Advantageously, the electronic device may couple to the audio device via conventional connection ports, and thus may "plug and play" with existing audio devices. The electronic device may require no user input or configuration.
In some examples, the controller is further configured to include an error-correcting code in the output signal to improve demodulation of the output signal.
In some examples, the error-correcting code comprises a Hamming code.
In some examples, the output is configured to output the output signal to at least one receiving device, so as to enable the receiving device to demodulate the output signal to separate the audio signal and the location signal. Advantageously, the receiving device is enabled to separate the audio signal and location signal and to determine the location information of the audio device during recording of the audio signal.
According to another aspect of the present invention, there is provided an audio device comprising the electronic device according to any example disclosed herein, and configured to generate the audio signal.
According to another aspect of the present invention, there is provided an audio device configured to output an output signal to an external device, the audio device comprising: an audio source configured to detect or generate audio and to generate an audio signal of the detected or generated audio; at least one sensor configured to determine location information of the audio device; a controller configured to: generate a location signal based on the location information of the audio device during reception of the audio signal; and combine the audio signal and the location signal to generate a single output signal; and an output configured to output the output signal. Advantageously, the audio device is enabled to determine the location information and output the combined output signal as a single integral device.
According to another aspect of the present invention, there is provided a system comprising the electronic device according to any example disclosed herein and/or the audio device disclosed herein, and a receiving device configured to: receive the output signal from the electronic device or the audio device; extract the audio signal and the location signal from the received signal; and output the audio signal and location signal of the audio device indicating location information of the audio device during a time period of the audio signal.
In some examples, the location information comprises at least one of an orientation and/or a geographic position of the audio device.
In some examples, the receiving device is further configured to: demodulate the location signal and remove high frequency data from the output signal to extract the audio signal and the location signal.
In some examples, the receiving device comprises an analogue-to-digital recorder.
In some examples, the receiving device is further configured to: output the audio signal and location information of the audio device to at least one of a virtual reality, extended reality, or virtual production, VP, application; or apply spatial convolution reverberation to the audio signal based on the location information of the audio device. Advantageously, the location information and the audio signal are synchronized with respect to the same time period, and the location of the audio device during recording or generation of the audio signal may be determined.
According to another aspect of the present invention, there is provided a method for generating an output signal including an audio signal and a location signal of an audio device, the method comprising: receiving the audio signal from the audio device; determining location information of the audio device; generating the location signal based on the location information of the audio device; combining the audio signal and the location signal to generate a single output signal; and outputting the output signal.
In some examples, the method comprises demodulating the location signal and removing high frequency data from the output signal to extract the audio signal and the location signal; and outputting the audio signal and the location signal to at least one of a virtual reality, extended reality, or virtual production, VP, application; or applying spatial convolution reverberation to the audio signal based on the location information of the audio device.
According to another aspect of the present invention, there is provided a computer readable storage medium comprising instructions thereon which when executed by a processor cause the processor to perform any method disclosed herein.
According to another aspect of the present invention, there is provided computer software which when executed by a processor cause the processor to perform any method disclosed herein.
Within the scope of this application it is expressly intended that the various aspects, embodiments, examples and alternatives set out in the preceding paragraphs, in the claims and/or in the following description and drawings, and in particular the individual features thereof, may be taken independently or in any combination. That is, all embodiments and/or features of any embodiment can be combined in any way and/or combination, unless such features are incompatible. The applicant reserves the right to change any originally filed claim or file any new claim accordingly, including the right to amend any originally filed claim to depend from and/or incorporate any feature of any other claim although not originally claimed in that manner.
BRIEF DESCRIPTION OF THE DRAWINGS
One or more embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which: Figure 1 shows a block diagram representing an electronic device according to an embodiment of the present invention; Figure 2 shows a block diagram representing an audio device according to an embodiment of the present invention; Figure 3 shows a first flow chart showing a method for generating an output signal including an audio signal and a location signal of an audio device according to an embodiment of the invention; Figure 4 shows a second flow chart showing a method for processing the output signal of Figure 3; and Figure 5 shows an example of audio and location signals in accordance with an embodiment of the invention.
DETAILED DESCRIPTION
As mentioned in the background section, a user of the virtual or extended reality application may be provided with a visual representation of a musical artist, a speaker or performer alongside a pre-recorded or live-streamed audio signal of an event. However, such applications may not typically provide an accurate or convincing representation of the event because a location of the performer or other source of the audio during the event may not be known.
Certain aspects of the present disclosure attempt to provide a more convincing virtual or extended environment when reproducing an audio signal.
The present disclosure relates to devices and methods for generating and outputting signals comprising audio signals and location signals of an audio device. In some examples, there is disclosed an electronic device for use with an audio device. The electronic device may receive an audio signal from the audio device, sense location information of the audio device indicative of a location and/or orientation of the audio device during a time period of the audio signal to generate a location signal, combine the audio signal and the location signal and output the combined signal. In other examples, there is disclosed an audio device which is configured to sense the location information, generate the location signal and the combined signal, and output the combined signal. The devices and methods disclosed herein combine audio data with location information of an audio device generating or producing the audio data and output a combined signal, which can subsequently be processed by a device receiving the output signal to extract the audio data and the location information. Consequently, a position or orientation of the audio device during generation of the audio signal can be associated with the audio signal to obtain corresponding location information of the audio device during the generation of the audio signal. That is, a real-time location of the audio device producing the audio signal can be obtained. The associated location information and audio signal can then be utilised by a virtual reality, extended reality, virtual production or similar application to replay movement of the audio device during a performance or audio event, or to apply techniques such as convolutional reverberation to the audio signal to improve the user experience.
With reference to Figure 1, there is illustrated an electronic device 100 for use with an audio device 160. The electronic device 100 comprises one or more controller 110, an input 120, at least one sensor 130, a digital to analogue converter 140, and an output 150. It should be understood that the electronic device 100 may comprise further parts, elements or components which are not shown in Figure 1. For example, the electronic device 100 may further comprise one or more of a power supply, a memory, a DC-DC convertor, a coil, or one or more microprocessors, although these are not shown in Figure 1. Further, it should be understood that various parts of the electronic device 100 shown in Figure 1 may be omitted. For example, although the electronic device 100 is shown to comprise the digital to analogue convertor 140, in some examples the digital to analogue convertor 140 may be unnecessary and may be omitted, or replaced with an analogue to digital convertor, as discussed below.
The electronic device 100 of Figure 1 is configured for use with an audio device 160. The audio device 160 may comprise any suitable electronic device which can detect or produce audio and can generate and transmit an audio signal 165 based on the detected or produced audio. In some examples, the audio device 160 may comprise one or more of a microphone or an electronic instrument such as an electric guitar. For example, an electronic microphone may detect audio data such as a speaking person's voice, and may generate an audio signal based on the detected audio data. In another example, an electronic instrument such as a guitar may generate audio data and an audio signal based on a musician's input on the electronic instrument. The audio signal 165 may be continuously generated and provided to the input 120.
The electronic device 100 and the audio device 160 may be configured to couple together in use. In some examples, the electronic device 100 and the audio device 160 may be physically and/or communicatively coupled together. For example, the electronic device 100 may physically couple to the audio device 160 via one or more physical connection ports. The one or more physical connection ports may include one or more of an XLR or TRS connection port, a USB connection port, or any other suitable physical connection port, although it should be understood that the present invention is not limited thereto. In one example, the audio device 160 may comprise an electronic microphone or guitar, and the electronic device 100 may couple to an existing conventional output of the audio device 160. In another example, as shown in Figure 2, the audio device 160 and the electronic device 100 may be formed as a single device. Although it is described that the electronic device 100 and the audio device 160 are connectable or able to couple together, it should be understood that the electronic device 100 and the audio device 160 may be separated and/or provided or manufactured separately. For example, the electronic device 100 may be provided separately and configured to connect with an existing audio device 160 via one or more connection ports provided on the audio device 160.
The electronic device 100 may receive power from the audio device 160 or may operate on a power supply of the electronic device 100 or a separate power supply (not shown). In one example, the electronic device 100 may operate using a phantom power supply at +12V or +48V received from the audio device 160. However, it should be understood that the electronic device 100 may operate based on alternative power supply means.
The input 120 of the electronic device may comprise circuitry or a connection port suitable for receiving the audio signal 165 from the audio device 160. In some examples, the input may comprise one or more physical connection port such as the XLR, TRS or USB connection ports referred to above, although the present invention is not limited thereto. For example, the input 120 may comprise communication means to wirelessly receive the audio signal 165 from the audio device 160. The input 120 may be configured to receive the audio signal 165 as an analogue signal or as a digital signal. In one example, the audio signal 165 may be received as an analogue output signal of a microphone or electronic instrument. In another example, the audio signal 165 may comprise a high-resolution digital audio signal.
For example, the high-resolution digital audio signal may be a digital signal with a sampling rate above approximately 44.1 kHz and at least 16-bit CD quality. The input 120 may be configured to provide the audio signal 165 to the controller 110. The audio signal 165 may be continuously received by the input 120 during transmission from the audio device 160. That is, the audio device 160 may provide a live stream of the audio signal 165 to the input 120 during generation or detection of the audio data of the audio signal 165, such that the input 120 receives live or almost live audio data from the audio device 160. It should be understood that a small delay or lag may exist between generation of the audio data or the audio signal at the audio device 160 and reception of the audio signal 165 at the input 120.
The electronic device 100 of Figure 1 further comprises at least one sensor 130. The at least one sensor 130 is configured to detect location information of the audio device 160. In some examples, the at least one sensor 130 comprises one or more of an orientation sensor and a geographic position sensor. The orientation sensor may comprise a gyroscope in some examples, and may be configured to detect an orientation of the audio device 160. The orientation of the audio device 160 refers to a rotation of the audio device 160 with respect to an axis such as a vertical axis corresponding to a direction of gravity. The orientation of the audio device 160 may be measured in one or more directions, such as in X, Y and Z directions, or pitch, yaw and roll directions. The geographic position sensor may comprise any sensor configured to detect a geographic position of the audio device 160. For example, a geographic position of the audio device 160 may be determined using GPS data or by measuring received signal strength indicators of signals received from nearby devices, such as devices connected via Bluetooth or Wi-Fi networks. The geographic position of the audio device 160 may also include information corresponding to one or more directions, such as along positional X, Y and Z axes, a location within known bounds such as a location on a stage, geographic position data such as a longitude and latitude, and/or a vertical height of the audio device 160.
The location information of the audio device 160 may therefore comprise one or more of an orientation and a geographic position of the audio device 160, and may be used to indicate a location of the audio device 160. However, it should be understood that the location information may comprise further additional information about a location of the audio device 160, or may include information relating to a movement of the audio device 160, such as a velocity of the audio device 160 during movement.
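As an informal illustration of the kind of location information described above, the sketch below shows one possible per-sample record combining a three-axis position with pitch, yaw and roll orientation. The `LocationSample` type, its field names and its units are purely hypothetical and are not taken from the disclosure.

```python
# Illustrative sketch only: one possible representation of a single location
# sample of the kind described above. All names and units are assumptions.
from dataclasses import dataclass

@dataclass
class LocationSample:
    timestamp_s: float   # time of the reading, relative to the audio stream
    x_m: float           # position along the X axis, metres
    y_m: float           # position along the Y axis, metres
    z_m: float           # vertical height along the Z axis, metres
    pitch_deg: float     # rotation about the lateral axis, degrees
    yaw_deg: float       # rotation about the vertical axis, degrees
    roll_deg: float      # rotation about the longitudinal axis, degrees

# Example: a microphone roughly one metre above the stage origin, turned 90 degrees.
sample = LocationSample(timestamp_s=12.34, x_m=0.5, y_m=2.0, z_m=1.1,
                        pitch_deg=-15.0, yaw_deg=90.0, roll_deg=0.0)
```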
The at least one sensor 130 may be configured to continuously detect the location information of the audio device 160. In one example, the at least one sensor 130 may be configured to periodically detect the location information. In another example, the at least one sensor 130 may be configured to only detect the location information during reception of an audio signal 165 from the audio device 160. That is, the at least one sensor 130 may cease detection of location information when no audio signal 165 is currently being received, or may continuously detect location information regardless of reception of the audio signal 165. The location information may be stored with time information, such that location information of the audio device 160 may be associated with the audio signal 165 produced at the same time or during the same time period.
The at least one sensor 130 may be configured to generate a location signal 135 based on the detected location information, and may transmit the location signal 135 to a digital to analogue convertor 140 or directly to the controller 110. The location signal 135 may be generated as a digital signal in some examples. In another example, the location signal 135 may be generated as an analogue signal. The at least one sensor 130 may continuously output the location signal 135 to the digital to analogue convertor 140 or to the controller, or may only output the location signal 135 during reception of the audio signal 165 at the input 120. That is, the location signal 135, 145 and the audio signal 165 may be simultaneously generated, and thus may each comprise data corresponding to a common period of time.
In some examples, the electronic device 100 comprises a signal convertor. In the example of Figure 1, the electronic device 100 comprises the digital to analogue convertor 140. The digital to analogue convertor 140 of Figure 1 is configured to receive the location signal 135 from the at least one sensor 130 and to convert the location signal 135 from digital to analogue. The digital to analogue convertor 140 may comprise any known form of digital to analogue convertor. The digital to analogue convertor 140 of Figure 1 may be configured to transmit the converted analogue location signal 145 to the controller 110. In some examples, the digital location signal 135 generated by the at least one sensor 130 is modulation encoded as a high frequency analogue signal by the digital to analogue convertor 140.
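The disclosure states only that the digital location signal is modulation encoded as a high frequency analogue signal; it does not name a modulation scheme. As a hedged sketch, the example below assumes a simple binary frequency-shift-keying (FSK) approach with carriers above the audible band. The sample rate, bit rate, carrier frequencies and amplitude are assumptions chosen purely for illustration.

```python
# Minimal FSK sketch (assumed scheme, not the disclosed circuit).
import numpy as np

FS = 96_000               # output sample rate in Hz (assumed)
BIT_RATE = 1_000          # location bits per second (assumed)
F0, F1 = 22_000, 24_000   # carriers above the typical audible range (assumed)

def modulate_location_bits(bits):
    """Encode a sequence of 0/1 location bits as a high-frequency analogue waveform."""
    samples_per_bit = FS // BIT_RATE
    t = np.arange(samples_per_bit) / FS
    chunks = [np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits]
    return 0.05 * np.concatenate(chunks)   # kept at a low level relative to the audio

location_carrier = modulate_location_bits([1, 0, 1, 1, 0, 0, 1, 0])
```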
The controller 110 comprises one or more processing means and is configured to control the operation of the electronic device 100 including controlling the operation of one or more of the input 120, the at least one sensor 130, the digital to analogue convertor 140, and the output 150.
The controller 110 may further be configured to receive the audio signal 165 from the input 120 and the location signal 135, 145 from the at least one sensor 130 or from the digital to analogue convertor 140. The controller 110 is configured to combine the audio signal 165 and the location signal 145 to form a single output signal 155, which is then output by the output 150. The controller 110 may be configured to combine the audio signal 165 and the location signal 145 by converting one or more of the audio signal 165 and the location signal 145 from analogue to digital or from digital to analogue. That is, the controller 110, and/or the digital to analogue convertor 140, may be configured to convert a format of one or more of the audio signal 165 and the location signal 145 such that both signals are digital or both signals are analogue. The controller 110 may be configured to continuously combine incoming audio signals 165 and location signals 145 and/or to synchronize the audio signal 165 and the location signal 145, such that location information detected at a time corresponding to a time of obtaining audio data of the audio signal 165 is associated with the audio signal 165. That is, the controller 110 may combine the audio signal 165 and the location signal 145 such that the audio signal 165 and the location signal 145 are synchronized in time. In other words, the audio signal 165 and the location signal 145 may both be simultaneously received or generated by the electronic device 100 as continuous data signals and may simultaneously be combined to form the output signal 155 as a continuous data signal. It should be understood that the term "continuous" does not exclude formation or transmission of signals as comprising a series of data packets, but instead refers to a live or almost-live processing of data over a period of time.
In the example of Figure 1, where the electronic device 100 includes the digital to analogue convertor 140, and the audio signal 165 is an analogue signal, the location signal 135 is converted into an analogue location signal 145 and the analogue location signal 145 and the audio signal 165 are combined. It should be understood that references to combining signals may also be understood as mixing or multiplexing signals using any known techniques.
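Assuming both the audio signal and the modulated location carrier are available as sample streams at a common rate, the combining step can be sketched as simple additive mixing, which keeps the two contributions aligned sample-for-sample in time. This is an illustrative interpretation under those assumptions, not the specific mixing arrangement of the disclosure.

```python
# Illustrative additive mixing of audio and the high-frequency location carrier.
import numpy as np

FS = 96_000   # common sample rate for both streams (assumed)

def combine(audio, location_carrier):
    """Mix the location carrier into the audio to form a single output signal."""
    n = min(len(audio), len(location_carrier))
    mixed = audio[:n] + location_carrier[:n]    # additive mixing preserves time alignment
    return np.clip(mixed, -1.0, 1.0)            # stay within full-scale range

# Placeholder blocks standing in for one second of live input:
t = np.arange(FS) / FS
audio_block = 0.5 * np.sin(2 * np.pi * 440 * t)           # dummy audible content
carrier_block = 0.05 * np.sin(2 * np.pi * 23_000 * t)     # dummy location carrier
output_block = combine(audio_block, carrier_block)
```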
The controller 110 is configured to provide the mixed signal to the output 150. The output 150 is configured to output the mixed signal as an analogue output signal 155 to a receiving device 170. However, it should be understood that the output signal 155 may be a digital signal when the audio signal 165 is received as a digital signal, the location signal 135 is also a digital signal, and the digital to analogue convertor 140 is omitted. The output 150 may comprise any wired or wireless communication means suitable for outputting the output signal 155.
The output 150 is configured to output the output signal 155 to the receiving device 170. The receiving device 170 may comprise one or more devices which receive the output signal 155 and process the output signal 155 to extract the audio signal 165 and the location signal 145. In one example, the receiving device 170 comprises an analogue to digital (ADC) recorder such as a conventional ADC recorder typically used in the music industry, but in another example the receiving device 170 may comprise one or more computing devices such as a server or client device for viewing virtual reality, extended reality or virtual production applications.
In some examples, the controller 110 and the output 150 may continually output the output signal 155, or in another example may only output the output signal 155 when the audio signal 165 comprises data having a magnitude exceeding a predetermined threshold. That is, the output signal 155 may be generated and output regardless of a magnitude of the audio signal 165, such that the audio device 160 may be tracked even when the audio device 160 does not produce significant audio data, for example during pauses in musical performances or other audio events, or the audio device 160 may only be tracked when the audio device 160 provides audio data indicating an active section of the audio event.
The receiving device 170 may comprise suitable processing means and software installed thereon to demodulate the output signal 155 to obtain the audio signal 165 and the location signal 145. The receiving device 170 may be configured to remove high frequency data from the output signal 155 to extract the location signal 145, and may construct a data sequence or timeline of audio data and location information of the audio device 160 over a corresponding time period.
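One plausible way for the receiving device to separate the two components, assuming the location carrier occupies frequencies above roughly 20 kHz, is complementary low-pass and high-pass filtering, as sketched below. The cut-off frequency, filter order and use of SciPy are assumptions for illustration only; the disclosure does not prescribe a particular demodulation chain.

```python
# Hedged sketch of splitting the combined signal back into audio and location carrier.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 96_000         # sample rate of the received output signal (assumed)
CUTOFF_HZ = 20_000  # assumed boundary between audible audio and the location carrier

def split_output_signal(output):
    """Recover the audio (low band) and the location carrier (high band)."""
    low = butter(8, CUTOFF_HZ, btype="low", fs=FS, output="sos")
    high = butter(8, CUTOFF_HZ, btype="high", fs=FS, output="sos")
    audio = sosfiltfilt(low, output)               # audible content
    location_carrier = sosfiltfilt(high, output)   # high-frequency location data
    return audio, location_carrier
```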
The receiving device 170 may in some examples output the associated audio data and location information of the audio device 160 to one or more of a virtual reality, extended reality or virtual production application. Said applications may use the associated audio data and location information of the audio device 160 to provide content to a user in which a location and/or orientation of the audio device 160 (and optionally a performer using the audio device 160) matching the location and/or orientation of the audio device 160 when producing the audio signal 165 can be shown. The associated audio data and location information of the audio device 160 may further be used to apply processing such as spatial convolutional reverberation to the audio signal 165 to improve a user experience of using the applications.
In some examples, the digital to analogue convertor 140 and/or the controller 110 may include error correction codes such as Hamming Codes into the converted location signal 145 and/or the output signal 155 to assist in subsequent demodulation of the output signal 155.
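A Hamming code is named only as an example of a suitable error-correcting code. The sketch below shows a minimal Hamming(7,4) encoder and single-error-correcting decoder of the kind that could protect the location bits before modulation; the (7,4) layout is a common choice assumed here for illustration, not a detail of the disclosure.

```python
# Minimal Hamming(7,4) sketch (assumed code layout, for illustration only).
def hamming74_encode(nibble):
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(code):
    """Correct a single-bit error, then return the four data bits."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3      # 1-based position of any flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

assert hamming74_decode(hamming74_encode([1, 0, 1, 1])) == [1, 0, 1, 1]
```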
As explained above, in the example of Figure 1 the audio signal 165 is an analogue signal and the at least one sensor 130 produces a digital location signal 135 which is converted into analogue by the digital to analogue convertor 140, and the two analogue signals are mixed to produce an analogue output signal 155. However, the present invention is not limited thereto. In another example, the audio signal 165 may be received as a digital signal and may be mixed with the digital location signal 135 to produce a digital output signal 155. In another example, the audio signal 165 may be received as a digital signal and both the audio signal 165 and the location signal 135 may be converted to analogue before being combined to form an analogue output signal 155. In the case of using a digital audio signal 165, a high resolution digital signal may be used to prevent the location signal 135 from leaking into audible frequency ranges.
The electronic device 100 as illustrated in Figure 1 comprises one controller 110, although it will be appreciated that this is merely illustrative. The controller 110 may comprise processing means and memory means. The processing means may be one or more electronic processing device which operably executes computer-readable instructions. The memory means may be one or more memory device. The memory means is electrically coupled to the processing means. The memory means is configured to store instructions, and the processing means is configured to access the memory means and execute the instructions stored thereon.
It should be understood that the electronic device 100 of Figure 1 may comprise one or more devices. In one example, the one or more controller 110, the input 120, the at least one sensor 130, the digital to analogue converter 140, and the output 150 may be provided as elements on a circuit board such as a printed circuit board (PCB), and for example may be provided as one or more integrated circuit (IC) modules, or one or more system on chip (SoC) components.
Figure 2 illustrates an audio device 200 according to an embodiment of the invention. The audio device 200 of Figure 2 comprises a controller 210, an audio source 220 configured to generate an audio signal 225, at least one sensor 230, a digital to analogue convertor 240 and an output 250 configured to output an output signal 255 to a receiving device 270. The controller 210, the at least one sensor 230, the digital to analogue convertor 240, the output 250 and the receiving device 270 are similar to those described in respect of Figure 1, and a detailed description thereof is therefore omitted. That is, the at least one sensor 230 is configured to determine location information of the audio device 200 and generate a location signal 235 which may be converted into an analogue signal 245 by the digital to analogue convertor 240. The controller 210 may combine the location signal 235, 245 and an audio signal 225 to produce a combined output signal 255 which may be output to the receiving device 270.
The audio source 220 of Figure 2 comprises an electronic component for generating or detecting audio data, and generating the audio signal 225 based on the audio data. For example, the audio source 220 may comprise a microphone configured to detect audio data, for example sounds generated by a speaker or singer, and may produce the audio signal 225 based on the detected audio data. In another example, the audio source 220 may comprise an electronic instrument such as a guitar, and the audio source 220 may generate audio data in response to user input such as a user playing the guitar.
It should be understood that one or more of the one or more controller 210, the audio source 220, the at least one sensor 230, the digital to analogue converter 240, and the output 250 may be provided as elements on a circuit board such as a printed circuit board (PCB), and for example may be provided as one or more integrated circuit (IC) modules, or one or more system on chip (SoC) components.
It should further be understood that, as with Figure 1, while the description of Figure 2 refers to the at least one sensor 230 generating a digital signal 235 which is converted to an analogue signal 245 by the digital to analogue convertor 240, this step may be omitted or replaced with an analogue to digital conversion. Similarly, while the audio source 220 generates an analogue audio signal 225 in some examples, the audio source 220 may alternatively generate a digital signal. Finally, the output signal 255 may comprise a combined digital signal or a combined analogue signal, as explained above in respect of Figure 1.
The audio device 200 of Figure 2 is therefore an audio device 200 which includes the function of the electronic device 100 of Figure 1, and further comprises the audio source 220 for producing the audio signal 225. That is, Figure 1 illustrates an electronic device 100 which may couple to an audio device 160, the electronic device 100 including at least one sensor 130 for determining location information of the audio device 160 and a controller 110 for combining a location signal 135 from the at least one sensor 130 and an audio signal 165 from a connected audio device 160 to produce an output signal 155. Figure 2 illustrates an alternative embodiment in which a single audio device 200 is provided to include the at least one sensor 230 for determining location information of the audio device 200 and a controller 210 for combining a location signal 235 from the at least one sensor 230 and an audio signal 225 from the audio source 220 to produce an output signal 255. That is, the audio device 200 of Figure 2 may perform all of the functions provided by the electronic device 100 and the audio device 160 of Figure 1 in combination in a single audio device 200.
Advantageously, while Figure 1 describes an embodiment where an electronic device 100 according to the present invention may be coupled to an existing audio device 160 to provide location tracking of the audio device 160 and to produce a combined output signal 155 including location signal 145 and audio signals 165 of the audio device 160, the audio device 200 of Figure 2 achieves the same output using a single integrated device including an audio source 220 which provides the audio signal 225. The electronic device 100 of Figure 1 therefore provides a flexible option for adding location information tracking and generation of combined output signals to existing audio devices, while the audio device 200 of Figure 2 provides the same function in a single integrated audio device which produces its own audio signal 225.
Therefore, both of the embodiments of Figures 1 and 2 provide a means for generating location information of an audio device 160, 200 and outputting the location information of the audio device 160, 200 alongside an audio signal from the audio device 160, 200 in a single mixed or combined signal. The output signal may be demodulated or otherwise processed at a receiving device 170, 270 to extract the location information and the audio data as associated data, such that a location of the audio device 160, 200 can be provided during a time in which the audio device 160, 200 generated or detected audio data. That is, the position and/or orientation of the audio device 160, 200 can be tracked or monitored across the same time period as the audio device 160, 200 produces an audio signal, and an accurate representation of the audio device's 160, 200 location during generation of the audio signal 225 can be generated based on the association between the location information and the audio data. The associated audio signal 225 and location signal 245 can then be used to generate an accurate replay of an audio event such as a concert or performance by one or more of a virtual reality, extended reality or virtual production application, including a representation of where the audio device 160, 200 was located at various points throughout the audio signal. By combining the audio signal and the location signal into the single output signal, an association between audio data and location information is created, and the audio data and location information may be constructed alongside one another in a data structure or timeline. That is, audio data and location information of the audio device 160, 200 may be synchronized such that a location of the audio device 160, 200 during generation of the audio signal may be determined.
Figure 3 shows a flow chart illustrating a method 300 according to an embodiment of the present invention. The method 300 of Figure 3 may be performed by the electronic device 100 of Figure 1 or the audio device 200 of Figure 2.
The method 300 at step 310 includes receiving an audio signal. The audio signal may comprise an analogue or digital signal including audio data representative of data generated or detected by an audio device, or an audio source of an audio device. When the audio signal comprises a digital signal, the audio signal may be a high-resolution digital signal with a sampling rate above approximately 44.1 kHz. In some examples, the audio data may comprise data detected by a microphone or generated by an electronic instrument. The audio signal may be received through a wireless or wired connection. The audio signal may be received continuously as a data stream including audio data.
At step 320, the method 300 comprises determining location information of an audio device.
The location information may be determined using one or more sensor. The one or more sensor may comprise one or more of an accelerometer, a gyroscopic sensor, a GPS position sensor or any other sensor for determining a position of the audio device. The location information may indicate a location of the audio device. The location information may comprise one or more of a geographic position and/or an orientation of the audio device. The location information may indicate one or more of the geographic position and/or the orientation of the audio device in one or more axes. In one example, the geographic position and/or the orientation of the audio device may each be measured in three axes. The location information may be continuously or periodically determined. The location information may be determined during reception of the audio signal. The location information may include time information associated with the detected location information.
At step 330, the method 300 comprises generating a location signal. The location signal is generated based on the location information determined at step 320. The location signal may be generated as a digital signal or an analogue signal. The generated location signal may optionally be converted from a digital signal to an analogue signal, or from an analogue signal to a digital signal. In some examples, the location signal is modulation encoded as high frequency analogue audio. For example, high frequency may refer to frequencies above typical audible frequencies which may constitute the audio signal. The location signal may be continuously generated and/or output during the same time as the audio signal is being received. The location signal and the audio signal may be simultaneously and continuously generated and/or received.
At step 340, the method 300 comprises combining the audio signal and the location signal to generate an output signal. Combining the audio signal and the location signal may also be understood as mixing or multiplexing the audio signal and the location signal. In one example, the high frequency analogue location signal may be mixed into the audio signal. Combining the audio signal and the location signal may comprise converting one or both of the audio signal and the location signal from analogue to digital or from digital to analogue such that the signals have a similar format. Combining the audio signal and the location signal may comprise combining corresponding parts of the audio signal and the location signal which are generated or received simultaneously. That is, the combined output signal comprises the audio signal and the location signal aligned with respect to time, such that location information of the audio device is associated with audio data of the audio device generated or detected at the same time as the location information is generated.
At step 350, the method 300 comprises outputting the output signal. The output signal may be output via a wireless or wired connection to a receiving device. The receiving device may be an external device, or may be one or more of an analogue to digital recorder or a computing device. The output signal may be continuously output as an analogue or digital signal. The output signal may be generated to include error correction codes such as Hamming Codes in some examples.
Figure 4 shows a flow chart illustrating a method 400 according to an embodiment of the present invention. The method 400 of Figure 4 may be performed by the receiving device 170, 270 of Figure 1 or 2. The method 400 of Figure 4 may follow from the method 300 of Figure 3.
At step 410, the method 400 comprises receiving the output signal output in step 350 of Figure 3. The output signal may be received through a wired or wireless connection. The output signal may be continuously received as an analogue or digital signal. The output signal may comprise a signal generated by mixing an audio signal and a location signal as explained in respect of Figures 1-3.
At step 420, the method 400 comprises demodulating the location signal. Demodulating the location signal may be understood to mean extracting or separating the location signal from the output signal. The demodulation may take any suitable form. In some examples, the method may optionally further comprise detecting and/or extracting error correction codes such as Hamming Codes. The method 400 may optionally further comprise correcting the location signal using the error correction codes.
At step 430, the method 400 comprises extracting the location signal and the audio signal.
The location signal and the audio signal may be extracted from the output signal. The location signal may be extracted from the output signal by demodulation as explained in step 420. Step 430 may further comprise removing high frequency data corresponding to a high frequency analogue location signal to separate the location signal and the audio signal.
At step 440, the method 400 comprises outputting the audio signal and the location signal.
The audio signal and the location signal may be output as separate signals. Step 440 may further comprise constructing a data sequence or timeline including the location signal and audio signal by extracting the audio signal and the location signal across the same time period. The method 400 may further comprise extracting the location information of the audio device from the location signal, and including the location information of the audio device in the constructed data sequence or timeline alongside the audio signal.
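As a hedged sketch of the data sequence or timeline described above, the example below pairs each decoded location sample (using the hypothetical LocationSample record introduced earlier) with the block of extracted audio covering the same instant. The block length and the dictionary structure are assumptions made for illustration.

```python
# Illustrative construction of a combined audio/location timeline.
import numpy as np

FS = 96_000     # sample rate of the extracted audio (assumed)
BLOCK_S = 0.1   # amount of audio associated with each location sample (assumed)

def build_timeline(audio, location_samples):
    """Pair each decoded location sample with the audio block covering its timestamp."""
    block = int(FS * BLOCK_S)
    timeline = []
    for loc in location_samples:
        start = int(loc.timestamp_s * FS)
        timeline.append({"time_s": loc.timestamp_s,
                         "location": loc,
                         "audio": audio[start:start + block]})
    return timeline
```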
The method 400 may further comprise outputting the audio signal and the location signal, and/or the audio data and the location information, to one or more applications. The one or more applications may include a virtual reality, extended reality or virtual production application. The method 400 may further comprise driving spatial convolutional reverberation using the location information and the audio signal to apply spatial convolutional reverberation to the audio signal to simulate audio properties of the audio event being recorded or captured to generate the audio signal, and to apply these audio properties to the audio signal. As a result, a user of the one or more applications may experience an improved representation of the audio event where the location information of the audio device during the audio event is taken into account. For example, the location of the audio device during the audio event may be visually expressed. Alternatively or in addition, audio properties such as acoustics, echo, or direction may be applied to the audio signal to simulate the user being present at the audio event. For example, acoustic properties of a room where the audio event occurs may be simulated.
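The disclosure does not specify how spatial convolution reverberation is driven from the location information; one plausible sketch is to select an impulse response measured near the device's extracted position and convolve it with the audio, as below. The impulse-response bank and the nearest-neighbour lookup are assumptions made purely for illustration.

```python
# Hedged sketch of position-driven convolution reverberation.
import numpy as np
from scipy.signal import fftconvolve

def reverb_for_location(audio, position_xyz, ir_bank):
    """Convolve the audio with the impulse response measured nearest to the given position.

    ir_bank maps (x, y, z) measurement points to impulse-response arrays; this
    lookup scheme is an illustrative assumption, not part of the disclosure.
    """
    nearest = min(ir_bank, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, position_xyz)))
    wet = fftconvolve(audio, ir_bank[nearest], mode="full")
    return wet[: len(audio)]
```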
Figure 5 illustrates example signals according to various embodiments of the present invention. The signals of Figure 5 illustrate signals obtained after processing the output signal of Figures 1 to 4 to extract the audio signal and the location signal. That is, Figure 5 illustrates restored signals after demodulating the location signal from the combined output signal received from the electronic device 100 or the audio device 200. Figure 5 illustrates an example audio signal (bottom, labelled "Audio Signal") received from an audio device or an audio source, as well as extracted location signals showing position in X, Y and Z axes, and rotation or orientation of the audio device in X, Y and Z axes. As shown in Figure 5, the audio signal and the location signal can be constructed across a common time period after being extracted from the output signal. Consequently, the resulting data indicates a position and/or orientation of the audio device during recording of the audio signal. That is, the movement of the audio device during generation or recording of the audio signal can be output alongside the audio data from the audio signal. The location of the audio device in terms of geographic position and/or orientation (or rotation) can be determined for a given time point in the audio signal. Movement of the audio device can be determined in one or more axes and in terms of position and/or orientation. This data may be used by one or more applications to improve a user experience or to apply audio effects such as convolutional reverberation to the audio signal. For example, the position and/or orientation information may be used to apply sound effects such as simulated acoustics to the audio signal, by considering the location information of the audio device at a given point in the audio signal. Alternatively or in addition, the audio signal can be provided to the user alongside a visual representation of the position and/or movement of the audio device during the period of the audio signal.
It will be appreciated that various changes and modifications can be made to the present invention without departing from the scope of the present application.
Claims (15)
- 1. An electronic device (100) for use with an audio device (160), the electronic device (100) comprising: an input (120) configured to receive an audio signal (165) from the audio device (160); at least one sensor (130) configured to determine location information of the audio device (160); a controller (110) configured to: generate a location signal based on the location information of the audio device (160); and combine the audio signal (165) and the location signal to generate a single output signal (155); and an output (150) configured to output the output signal (155).
- 2. The electronic device (100) according to claim 1, wherein the controller (110) is configured to generate the location signal during reception of the audio signal (165) to correspond to a time period of the audio signal (165).
- 3. The electronic device (100) according to any preceding claim, wherein the received audio signal (165) is an analogue signal.
- 4. The electronic device (100) according to any preceding claim, wherein the generated location signal is a digital signal; and wherein the controller (110) is configured to: convert the digital location signal (135) into an analogue location signal (145); and combine the analogue location signal (145) and the audio signal (165) to generate the output signal (155) as an analogue signal.
- 5. The electronic device (100) according to claim 4, wherein the electronic device (100) further comprises a modulator (140) configured to convert the digital location signal (135) into the analogue location signal (145); and wherein the analogue location signal (145) is a high frequency location signal.
- 6. The electronic device (100) according to any preceding claim, wherein the location information comprises at least one of orientation and/or geographic position of the audio device (160).
- 7. The electronic device (100) according to any preceding claim, wherein the electronic device (100) is configured to couple to the audio device (160).
- 8. The electronic device (100) according to any preceding claim, wherein the controller (110) is further configured to include an error-correcting code in the output signal (155) to improve demodulation of the output signal (155).
- 9. The electronic device (100) according to any preceding claim, wherein the output (150) is configured to output the output signal (155) to at least one receiving device (170), so as to enable the receiving device (170) to demodulate the output signal to separate the audio signal (165) and the location signal.
- 10. An audio device (200) configured to output an output signal (255) to a receiving device (270), the audio device (200) comprising: an audio source (220) configured to detect or generate audio and to generate an audio signal (225) of the detected or generated audio; at least one sensor (230) configured to determine location information of the audio device (200); a controller (210) configured to: generate a location signal based on the location information of the audio device (200); and combine the audio signal (225) and the location signal to generate a single output signal (255); and an output (250) configured to output the output signal (255).
- 11. A system (180, 280) comprising the electronic device (100) according to any of claims 1 to 9 and/or the audio device (200) according to claim 10, and a receiving device (170) configured to: receive the output signal (155, 255) from the electronic device (100) or the audio device (200); extract the audio signal (165, 225) and the location signal (145, 245) from the received signal (155, 255); and output the audio signal (165, 225) and location signal (145, 245) of the audio device (160, 200) indicating location information of the audio device (160, 200) during a time period of the audio signal (165, 225).
- 12. The system (180, 280) according to claim 11, wherein the receiving device (170, 270) is further configured to: demodulate the location signal (145, 245) and remove high frequency data from the output signal (155, 255) to extract the audio signal (165, 225) and the location signal (145, 245).
- 13. The system (180, 280) according to any of claims 11 to 12, wherein the receiving device (170, 270) is further configured to: output the audio signal (165, 225) and location information of the audio device (160, 200) to at least one of a virtual reality, extended reality, or virtual production, VP, application; or apply spatial convolution reverberation to the audio signal (165, 225) based on the location information of the audio device (160, 200).
- 14. A method (300) for generating an output signal including an audio signal and a location signal of an audio device, the method comprising: receiving (310) the audio signal from the audio device; determining (320) location information of the audio device; generating (330) the location signal based on the location information of the audio device; combining (340) the audio signal and the location signal to generate a single output signal; and outputting (350) the output signal.
- 15. The method (300, 400) according to claim 14, further comprising: demodulating (420) the location signal and removing high frequency data from the output signal to extract (430) the audio signal and the location signal; and outputting (440) the audio signal and the location signal to at least one of a virtual reality, extended reality, or virtual production, VP, application; or applying spatial convolution reverberation to the audio signal based on the location information of the audio device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2206376.2A GB2618326A (en) | 2022-05-02 | 2022-05-02 | Electronic device for use with an audio device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2206376.2A GB2618326A (en) | 2022-05-02 | 2022-05-02 | Electronic device for use with an audio device |
Publications (2)
Publication Number | Publication Date |
---|---|
GB202206376D0 GB202206376D0 (en) | 2022-06-15 |
GB2618326A (en) | 2023-11-08
Family
ID=81943824
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB2206376.2A Pending GB2618326A (en) | 2022-05-02 | 2022-05-02 | Electronic device for use with an audio device |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2618326A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140088975A1 (en) * | 2012-09-21 | 2014-03-27 | Kerry L. Davis | Method for Controlling a Computing Device over Existing Broadcast Media Acoustic Channels |
US20210385608A1 (en) * | 2018-10-24 | 2021-12-09 | Otto Engineering, Inc. | Directional awareness audio communications system |
Also Published As
Publication number | Publication date |
---|---|
GB202206376D0 (en) | 2022-06-15 |