EP3917160A1 - Capturing content - Google Patents
Capturing content
- Publication number
- EP3917160A1 (application EP21173836.4A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- audio device
- audio
- orientation
- capturing
- respect
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/005—Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/20—Arrangements for obtaining desired frequency or directional characteristics
- H04R1/32—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
- H04R1/40—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
- H04R1/406—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2499/00—Aspects covered by H04R or H04S not otherwise provided for in their subgroups
- H04R2499/10—General applications
- H04R2499/11—Transducers incorporated or for use in hand-held devices, e.g. mobile phones, PDA's, camera's
Definitions
- the present application relates generally to capturing content. More specifically, the present application relates to controlling at least one functionality of an apparatus for capturing content.
- the amount of multimedia content grows continuously. Users create and consume multimedia content, which plays a significant role in modern society.
- an apparatus comprising means for performing: receiving orientation information relating to an orientation of an audio device operatively connected to an apparatus, determining, based on the orientation information, an orientation of the audio device with respect to the apparatus, determining, based on the orientation of the audio device with respect to the apparatus, a direction of capturing content by the apparatus, and controlling at least one functionality of the apparatus for capturing content in the determined direction.
- a method comprising receiving orientation information relating to an orientation of an audio device operatively connected to an apparatus, determining, based on the orientation information, an orientation of the audio device with respect to the apparatus, determining, based on the orientation of the audio device with respect to the apparatus, a direction of capturing content by the apparatus, and controlling at least one functionality of the apparatus for capturing content in the determined direction.
- a computer program comprising instructions for causing an apparatus to perform at least the following: receiving orientation information relating to an orientation of an audio device operatively connected to an apparatus, determining, based on the orientation information, an orientation of the audio device with respect to the apparatus, determining, based on the orientation of the audio device with respect to the apparatus, a direction of capturing content by the apparatus, and controlling at least one functionality of the apparatus for capturing content in the determined direction.
- an apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to with the at least one processor, cause the apparatus at least to: receive orientation information relating to an orientation of an audio device operatively connected to the apparatus, determine, based on the orientation information, an orientation of the audio device with respect to the apparatus, determine, based on the orientation of the audio device with respect to the apparatus, a direction for capturing content by the apparatus, and control at least one functionality of the apparatus for capturing content in the determined direction.
- a non-transitory computer readable medium comprising program instructions for causing an apparatus to perform at least the following: receiving orientation information relating to an orientation of an audio device operatively connected to an apparatus, determining, based on the orientation information, an orientation of the audio device with respect to the apparatus, determining, based on the orientation of the audio device with respect to the apparatus, a direction of capturing content by the apparatus, and controlling at least one functionality of the apparatus for capturing content in the determined direction.
- a computer readable medium comprising program instructions for causing an apparatus to perform at least the following: receiving orientation information relating to an orientation of an audio device operatively connected to an apparatus, determining, based on the orientation information, an orientation of the audio device with respect to the apparatus, determining, based on the orientation of the audio device with respect to the apparatus, a direction of capturing content by the apparatus, and controlling at least one functionality of the apparatus for capturing content in the determined direction.
- Example embodiments relate to an apparatus configured to receive orientation information relating to an orientation of an audio device operatively connected to the apparatus, determine, based on the orientation information, an orientation of the audio device with respect to the apparatus, determine, based on the orientation of the audio device with respect to the apparatus, a direction for capturing content by the apparatus, and control at least one functionality of the apparatus for capturing content in the determined direction.
- Some example embodiments relate to controlling content capture options with an audio device such as an ear bud. Some example embodiments relate to capturing spatial audio using an audio device such as an ear bud.
- Spatial audio may comprise a full sphere surround-sound to mimic the way people perceive audio in real life.
- Spatial audio may comprise audio that appears from a user's position to be assigned to a certain direction and/or distance. Therefore, the perceived audio may change with the movement of the user or with the user turning.
- Spatial audio may comprise audio created by sound sources, ambient audio or a combination thereof.
- Ambient audio may comprise audio that might not be identifiable in terms of a sound source such as traffic humming, wind or waves, for example.
- the full sphere surround-sound may comprise a spatial audio field and the position of the user or the position of the capturing device may be considered as a reference point in the spatial audio field. According to an example embodiment, a reference point comprises the centre of the audio field.
- Spatial audio may be captured with, for example, a capturing device comprising a plurality of microphones configured to capture audio signals around the capturing device.
- the capturing device may also be configured to capture different types of information such as one or more parameters relating to the captured audio signals and/or visual information.
- the captured parameters may be stored with the captured audio or in a separate file.
- a capturing device may be, for example, a camera, a video recorder or a smartphone.
- Spatial audio may comprise one or more parameters such as an audio focus parameter and/or an audio zoom parameter.
- An audio parameter may comprise a parameter value with respect to a reference point such as the position of the user or the position of the capturing device. Modifying a spatial audio parameter value may cause a change in spatial audio perceived by a listener.
- An audio focus feature allows a user to focus on audio in a desired direction with respect to other directions when capturing content and/or playing back content. Therefore, an audio focus feature also allows a user to at least partially eliminate background noises.
- a direction of sound may be defined with respect to a reference point.
- a direction of sound may comprise an angle with respect to a reference point or a discrete direction such as front, back, left, right, up and/or down with respect to a reference point, or a combination thereof.
- the reference point may correspond to, for example, a value of 0 degrees or no audio focus direction in which case, at the reference point, the audio comprises surround sound with no audio focus.
- An audio focus parameter may also comprise one or more further levels of detail such as horizontal focus direction and/or vertical focus direction.
- An audio zoom feature allows a user to zoom in on a sound. Zooming in on a sound comprises adjusting an amount of audio gain associated with a particular direction. Therefore, an audio zoom parameter corresponds to sensitivity to a direction of sound. Audio zoom may be performed using audio beamforming with which a user may be able to control, for example, the size, shape and/or direction of the audio beam. Performing audio zooming may comprise controlling audio signals coming from a particular direction while attenuating audio signals coming from other directions. For example, an audio zoom feature may allow controlling audio gain. Audio gain may comprise an amount of gain set to audio input signals coming from a certain direction. An audio zoom parameter value may be defined with respect to a reference point.
- an audio zoom parameter may be a percentage value and the reference point may correspond to, for example, a value of 0 % in which case, at the reference point, the audio comprises surround sound with no audio zooming.
- an audio zoom feature may allow delaying different microphone signals differently and then summing the signals up, thereby enabling spatial filtering of audio.
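- As an illustration of the delay-and-sum idea described above, the following sketch delays each microphone signal so that sound arriving from a chosen focus direction is aligned across channels before the channels are summed. The microphone positions, sample rate and focus angle are illustrative assumptions, not values taken from the text.

```python
import numpy as np

SPEED_OF_SOUND_M_S = 343.0

def delay_and_sum(signals, mic_positions, focus_azimuth_deg, sample_rate):
    """signals: (num_mics, num_samples) array of simultaneously captured audio.
    mic_positions: (num_mics, 2) microphone x/y positions in metres.
    Returns one channel in which sound from the focus direction is aligned
    across microphones before summing, emphasising that direction."""
    azimuth = np.radians(focus_azimuth_deg)
    direction = np.array([np.cos(azimuth), np.sin(azimuth)])  # unit vector towards the source
    # Relative arrival times of a far-field plane wave from the focus direction.
    delays_s = mic_positions @ direction / SPEED_OF_SOUND_M_S
    delays = np.round((delays_s - delays_s.min()) * sample_rate).astype(int)

    num_mics, num_samples = signals.shape
    out = np.zeros(num_samples)
    for m in range(num_mics):
        d = delays[m]
        # Delay each channel so the focus-direction wavefront lines up.
        out[d:] += signals[m, : num_samples - d]
    return out / num_mics

# Illustrative use: two microphones 10 cm apart, focusing 30 degrees to the right.
fs = 48_000
mics = np.array([[0.0, 0.0], [0.10, 0.0]])
captured = np.random.randn(2, fs)  # stand-in for one second of captured audio
zoomed = delay_and_sum(captured, mics, 30.0, fs)
```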
- Audio zooming may be associated with zooming visual information. For example, if a user records a video and zooms in on an object, the audio may also be zoomed in on the object such that, for example, sound generated by the object is emphasized and other sounds are attenuated. In other words, spatial audio parameters may be controlled by controlling the video zoom.
- Mobile phones are increasingly used for professional audio/video capture due to their improved capabilities. In some situations, it would be useful to have additional microphones, such as a close-up microphone that is close to a sound source. However, using a plurality of microphones may be challenging in terms of organizing the capture, for example choosing which microphone to use or in which ratio the microphones are used.
- FIG. 1 is a block diagram depicting an apparatus 100 operating in accordance with an example embodiment of the invention.
- the apparatus 100 may be, for example, an electronic device such as a chip or a chipset.
- the apparatus 100 comprises control circuitry, such as at least one processor 110 and at least one memory 160 including one or more algorithms such as computer program code 120, wherein the at least one memory 160 and the computer program code 120 are configured, with the at least one processor 110, to cause the apparatus 100 to carry out any of the example functionalities described below.
- the processor 110 is a control unit operatively connected to read from and write to the memory 160.
- the processor 110 may also be configured to receive control signals via an input interface and/or the processor 110 may be configured to output control signals via an output interface.
- the processor 110 may be configured to convert the received control signals into appropriate commands for controlling functionalities of the apparatus 100.
- the at least one memory 160 stores computer program code 120 which, when loaded into the processor 110, controls the operation of the apparatus 100 as explained below.
- the apparatus 100 may comprise more than one memory 160 or different kinds of storage devices.
- Computer program code 120 for enabling implementations of example embodiments of the invention or a part of such computer program code may be loaded onto the apparatus 100 by the manufacturer of the apparatus 100, by a user of the apparatus 100, or by the apparatus 100 itself based on a download program, or the code can be pushed to the apparatus 100 by an external device.
- the computer program code 120 may arrive at the apparatus 100 via an electromagnetic carrier signal or be copied from a physical entity such as a computer program product, a memory device or a record medium such as a Compact Disc (CD), a Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disk (DVD) or a Blu-ray disk.
- FIG. 2 is a block diagram depicting an apparatus 200 in accordance with an example embodiment of the invention.
- the apparatus 200 may be an electronic device such as a hand-portable device, a mobile phone or a Personal Digital Assistant (PDA), a Personal Computer (PC), a laptop, a desktop, a tablet computer, a wireless terminal, a communication terminal, a game console, a music player, an electronic book reader (e-book reader), a positioning device, a digital camera, a household appliance, a CD-, DVD or Blu-ray player, or a media player.
- the apparatus 200 comprises a mobile computing device.
- the apparatus 200 comprises a part of a mobile communication device.
- the mobile computing device may comprise, for example, a mobile phone, a tablet computer, or the like. In the examples below it is assumed that the apparatus 200 is a mobile computing device or a part of it.
- the apparatus 200 is illustrated as comprising the apparatus 100, a microphone array 210, one or more loudspeakers 230 and a user interface 220 for interacting with the apparatus 200 (e.g. a mobile computing device).
- the apparatus 200 may also comprise a display configured to act as a user interface 220.
- the display may be a touch screen display.
- the display and/or the user interface 220 may be external to the apparatus 200, but in communication with it.
- the microphone array 210 comprises a plurality of microphones for capturing audio.
- the plurality of microphones may be configured to work together to capture audio, for example, at different sides of a device.
- a microphone array comprising two microphones may be configured to capture audio from a right side and a left side of a device.
- the apparatus 200 is configured to apply one or more audio focus operations to emphasize audio signals arriving from a particular direction and/or attenuate sounds coming from other directions and/or one or more audio zoom operations to switch between focused and non-focused audio, for example, in conjunction with a camera.
- the apparatus 200 is further configured to perform one or more spatial filtering methods for achieving audio focus and/or audio zoom.
- the one or more spatial filtering methods may comprise, for example, beamforming and/or parametric spatial audio.
- Beamforming may comprise forming an audio beam by selecting a particular microphone arrangement for capturing spatial audio information from a first direction and/or attenuating sounds coming from a second direction and processing the received audio information.
- a microphone array may be used to form a spatial filter which is configured to extract a signal from a specific direction and/or reduce contamination of signals from other directions.
- Parametric spatial audio processing comprises analysing a spatial audio field into a directional component with a direction-of-arrival parameter and an ambient component without a direction-of-arrival parameter, and changing the direction of arrival at which directional signal components are enhanced.
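- A rough sketch of the parametric approach is given below: for each analysis frame, the directional component is weighted according to how close its estimated direction of arrival is to the desired focus direction, while the ambient component is passed through unchanged. The weighting function and gain limits are illustrative assumptions rather than the method defined in the text.

```python
import numpy as np

def parametric_focus(directional, ambient, doa_deg, focus_deg,
                     max_gain=2.0, min_gain=0.5):
    """directional, ambient: decomposed signal parts for one analysis frame.
    doa_deg: estimated direction of arrival of the directional part.
    focus_deg: direction in which audio capture should be focused.
    Directional components near the focus direction are boosted, those far
    from it are attenuated; the ambient part is left unchanged."""
    # Angular distance between DOA and focus direction, folded to [0, 180] degrees.
    diff = abs((doa_deg - focus_deg + 180.0) % 360.0 - 180.0)
    # Raised-cosine weight: 1 when the DOA matches the focus, 0 when opposite.
    w = 0.5 * (1.0 + np.cos(np.radians(diff)))
    gain = min_gain + (max_gain - min_gain) * w
    return gain * directional + ambient

# Example: a talker at 20 degrees while focusing at 0 degrees is boosted,
# a noise source behind the device (180 degrees) is attenuated.
frame = np.ones(4)
print(parametric_focus(frame, 0.1 * frame, doa_deg=20.0, focus_deg=0.0))
print(parametric_focus(frame, 0.1 * frame, doa_deg=180.0, focus_deg=0.0))
```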
- the apparatus 200 is configured to control a direction of audio focus.
- Controlling a direction of audio focus may comprise, for example, changing a direction of an audio beam with respect to a reference point in a spatial audio field.
- changing a direction of an audio beam may comprise changing the direction of the audio beam from a first direction to a second direction.
- when the audio beam is directed to a first direction, audio signals from the first direction are emphasized, and when the audio beam is directed to a second direction, audio signals from the second direction are emphasized.
- the apparatus 200 may be configured to control a direction of an audio beam by switching from a first microphone arrangement to a second microphone arrangement, by processing the captured audio information using an algorithm with different parameters and/or using a different algorithm for processing the captured audio information.
- the beam direction steering can be accomplished by adjusting the values of steering delays so that signals arriving from a particular direction are aligned before they are summed.
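- The steering-delay adjustment mentioned above can be sketched as follows: steering the beam from a first direction to a second direction amounts to recomputing the per-microphone delay set for the new direction, using the same plane-wave alignment rule as in the earlier delay-and-sum sketch. The four-microphone linear geometry, spacing and angles are illustrative assumptions.

```python
import numpy as np

SPEED_OF_SOUND_M_S = 343.0

def steering_delays(mic_positions, beam_azimuth_deg, sample_rate):
    """Per-microphone steering delays (in samples) that align a far-field
    wavefront arriving from beam_azimuth_deg before the channels are summed."""
    a = np.radians(beam_azimuth_deg)
    direction = np.array([np.cos(a), np.sin(a)])
    delays_s = mic_positions @ direction / SPEED_OF_SOUND_M_S
    return np.round((delays_s - delays_s.min()) * sample_rate).astype(int)

# Steering the beam from a first direction to a second direction is just a
# matter of swapping in a new delay set (4-microphone linear array, 3 cm spacing).
mics = np.array([[0.0, 0.0], [0.03, 0.0], [0.06, 0.0], [0.09, 0.0]])
first_direction = steering_delays(mics, 0.0, 48_000)    # illustrative first beam direction
second_direction = steering_delays(mics, 60.0, 48_000)  # illustrative second beam direction
print(first_direction, second_direction)
```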
- the user interface 220 may also comprise a manually operable control such as a button, a key, a touch pad, a joystick, a stylus, a pen, a roller, a rocker, a keypad, a keyboard or any suitable input mechanism for inputting and/or accessing information.
- Further examples include a camera, a speech recognition system, eye movement recognition system, acceleration-, tilt- and/or movement-based input systems. Therefore, the apparatus 200 may also comprise different kinds of sensors such as one or more gyro sensors, accelerometers, magnetometers, position sensors and/or tilt sensors.
- the apparatus 200 is configured to establish radio communication with another device using, for example, a Bluetooth, WiFi, radio frequency identification (RFID), or a near field communication (NFC) connection.
- the apparatus 200 may be configured to establish radio communication with an audio device 250 such as a wireless headphone, augmented/virtual reality device or the like.
- the apparatus 200 is operatively connected to an audio device 250.
- the apparatus 200 is wirelessly connected to the audio device 250.
- the apparatus 200 may be connected to the audio device 250 over a Bluetooth connection, or the like.
- the audio device 250 may comprise at least one microphone array for capturing audio signals and at least one loudspeaker for playing back received audio signals.
- the audio device 250 may further be configured to filter out background noise and/or detect in-ear placement.
- the audio device 250 may comprise a headphone such as a wireless headphone.
- a headphone may comprise a single headphone such as an ear bud or a pair of headphones such as a pair of ear buds configured to function as a pair.
- the audio device 250 may comprise a first wireless headphone and a second wireless headphone such that the first wireless headphone and the second wireless headphone are configured to function as a pair. Functioning as a pair may comprise, for example, providing stereo output for a user using the first wireless headphone and the second wireless headphone.
- the first wireless headphone and the second wireless headphone may also be configured such that they may be used separately and/or independently of each other. For example, the same or different audio information may be provided to the first wireless headphone and the second wireless headphone, or audio information may be directed to one wireless headphone while the other wireless headphone acts as a microphone.
- the apparatus 200 is configured to communicate with the audio device 250.
- Communicating with the audio device 250 may comprise providing to and/or receiving information from the audio device 250.
- communicating with the audio device 250 comprises providing audio signals and/or receiving audio signals.
- the apparatus 200 may be configured to provide audio signals to the audio device 250 and receive audio signals from the audio device 250.
- the apparatus 200 is configured to determine an orientation and/or a position of the audio device 250 with respect to the apparatus 200.
- the apparatus 200 is configured to determine the orientation and/or position of the audio device 250 using Bluetooth technology such as Bluetooth Low Energy (BLE).
- the audio device 250 comprises at least one Bluetooth antenna for transmitting data and the apparatus 200 comprises an array of phased antennas for receiving and/or transmitting data.
- the array of phased antennas may be configured to measure the angle-of-departure (AoD) or angle-of-arrival (AoA) comprising measuring both azimuth and elevation angles.
- the apparatus 200 may be configured to execute antenna switching when receiving an AoA packet from the audio device 250.
- the apparatus 200 may then utilize the amplitude and phase samples together with its own antenna array information to estimate the AoA of a packet received from the audio device 250.
- Performing an AoD measurement may be based on the apparatus 200 broadcasting AoD signals together with the location and properties of a Bluetooth beacon, enabling the audio device 250 to calculate its own position.
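- A simplified sketch of the AoA principle is shown below: the wrapped phase difference of the same received tone measured at two adjacent antennas of the phased array maps to the arrival angle. The 2.4 GHz wavelength and half-wavelength antenna spacing are assumptions for illustration; a real implementation would also handle IQ sampling, antenna switching and multipath.

```python
import numpy as np

BLE_WAVELENGTH_M = 0.125  # roughly the 2.4 GHz carrier wavelength

def estimate_aoa_deg(phase_a_rad, phase_b_rad,
                     antenna_spacing_m=BLE_WAVELENGTH_M / 2):
    """Angle of arrival (degrees from array broadside) estimated from the
    phase of the same received tone measured at two adjacent antennas."""
    # Wrap the phase difference into [-pi, pi).
    dphi = (phase_b_rad - phase_a_rad + np.pi) % (2 * np.pi) - np.pi
    # Path-length difference between antennas: dphi = 2*pi*d*sin(theta)/lambda
    sin_theta = dphi * BLE_WAVELENGTH_M / (2 * np.pi * antenna_spacing_m)
    return float(np.degrees(np.arcsin(np.clip(sin_theta, -1.0, 1.0))))

# Example: a 90 degree phase lead at the second antenna with lambda/2 spacing
# corresponds to a source roughly 30 degrees off broadside.
print(estimate_aoa_deg(0.0, np.pi / 2))
```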
- the apparatus 200 is configured to determine the orientation and/or position of the audio device 250 using acoustic localization.
- Acoustic localization may comprise receiving microphone signals comprising recorded signals from the surroundings and determining a time difference of arrival (TDoA) between microphones.
- TDoA may be determined from inter-microphone delays, which may be obtained through correlations, together with the geometry of the microphone array.
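- The following sketch illustrates the correlation-based TDoA estimate and the corresponding far-field arrival angle for a two-microphone baseline. The microphone spacing, sample rate and simple peak-picking cross-correlation are illustrative assumptions.

```python
import numpy as np

SPEED_OF_SOUND_M_S = 343.0

def tdoa_seconds(sig_a, sig_b, sample_rate):
    """Delay of sig_b relative to sig_a (positive => b lags a), taken from
    the peak of their cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)
    return lag / sample_rate

def direction_from_tdoa(tdoa_s, mic_distance_m):
    """Far-field arrival angle (degrees) relative to the broadside of a
    two-microphone baseline of length mic_distance_m."""
    sin_theta = np.clip(SPEED_OF_SOUND_M_S * tdoa_s / mic_distance_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

# Example: the same burst reaches microphone B 25 samples after microphone A.
fs = 48_000
burst = np.random.randn(1024)
a = np.concatenate([burst, np.zeros(100)])
b = np.concatenate([np.zeros(25), burst, np.zeros(75)])
tau = tdoa_seconds(a, b, fs)                       # ~25 / 48000 s
print(direction_from_tdoa(tau, mic_distance_m=0.2))
```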
- the apparatus 200 is configured to determine a position of the audio device 250 using Global Positioning System (GPS) coordinates and Bluetooth technology to determine a relative location of the audio device 250.
- the apparatus 200 is configured to receive orientation information relating to an orientation of the audio device 250 operatively connected to the apparatus 200.
- the orientation information may comprise measurement data indicating the orientation of the audio device 250 or a signal based on which the orientation of the audio device 250 may be determined.
- the orientation information may comprise information received from one or more orientation sensors comprised by the audio device 250.
- the orientation information may comprise raw data or pre-processed data.
- the orientation information may comprise a Bluetooth signal.
- the apparatus 200 is configured to determine, based on the orientation information, an orientation of the audio device 250 with respect to the apparatus 200.
- Determining the orientation of the audio device 250 with respect to the apparatus 200 may comprise comparing the orientation of the audio device 250 with an orientation of the apparatus 200.
- the apparatus 200 may be configured to compare measurement data indicating an orientation of the audio device 250 with measurement data indicating an orientation of the apparatus 200.
- the apparatus 200 may be configured to determine the orientation of the audio device 250 with respect to the apparatus 200 based on characteristics of a wireless connection between the apparatus 200 and the audio device 250. For example, assuming the apparatus 200 and the audio device are wirelessly connected using a Bluetooth connection, the apparatus 200 may be configured to determine the orientation of the audio device 250 with respect to the apparatus 200 based on a Bluetooth signal using a Bluetooth Angle of Arrival (AoA) or a Bluetooth Angle of Departure (AoD) algorithm.
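- The comparison of orientations described above can be sketched as follows, using rotation matrices (or, in a simplified one-dimensional case, compass headings) reported by each device's orientation sensors. The frame conventions are assumptions made for illustration.

```python
import numpy as np

def relative_rotation(r_world_from_device, r_world_from_apparatus):
    """Orientation of the audio device expressed in the apparatus' frame.
    Each argument is a 3x3 rotation matrix mapping that device's local
    frame to a shared world frame, as reported by its orientation sensors."""
    return r_world_from_apparatus.T @ r_world_from_device

def relative_yaw_deg(device_yaw_deg, apparatus_yaw_deg):
    """Simplified one-dimensional variant: difference of compass headings,
    wrapped to [-180, 180) degrees."""
    return (device_yaw_deg - apparatus_yaw_deg + 180.0) % 360.0 - 180.0

# Example: headings of 300 and 330 degrees give a relative yaw of -30 degrees.
print(relative_yaw_deg(device_yaw_deg=300.0, apparatus_yaw_deg=330.0))
```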
- the apparatus 200 is configured to determine a direction of the audio device 250 with respect to the apparatus 200.
- the apparatus 200 may be configured to determine the direction of the audio device with respect to the apparatus 200 based on the orientation information relating to the audio device 250 and the apparatus 200 or based on characteristics of a wireless connection between the apparatus 200 and the audio device 250.
- determining the orientation of the audio device 250 with respect to the apparatus 200 comprises determining a pointing direction of the audio device 250.
- a pointing direction comprises a direction to which the audio device 250 points.
- the audio device 250 may be associated with a reference point that is used for determining the pointing direction. For example, assuming the audio device 250 comprises a wireless ear bud having a pointy end that is determined as a reference point, the pointing direction may be determined based on the direction to which the ear bud points.
- the apparatus 200 is configured to determine, based on the orientation of the audio device 250 with respect to the apparatus 200, a direction for capturing content by the apparatus 200.
- the direction for capturing content by the apparatus 200 may comprise an absolute direction, a direction relative to the apparatus 200, an approximate direction or a direction within predefined threshold values.
- a direction for capturing content by the apparatus 200 comprises a direction corresponding to the orientation of the audio device 250.
- a direction corresponding to the orientation of the audio device 250 comprises a pointing direction of the audio device 250.
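- Building on the previous sketch, the pointing direction of the audio device can be mapped to a capture direction expressed as an azimuth in the apparatus' frame. The choice of the device-local axis that represents the 'pointy end' is an illustrative assumption.

```python
import numpy as np

def capture_azimuth_deg(r_rel, pointing_axis=(1.0, 0.0, 0.0)):
    """Azimuth, in the apparatus' frame, of the direction the audio device
    points towards. r_rel is the device-to-apparatus rotation from the
    previous sketch; pointing_axis is the device-local axis through the
    ear bud's 'pointy end' (an illustrative assumption)."""
    v = np.asarray(r_rel, dtype=float) @ np.asarray(pointing_axis, dtype=float)
    return float(np.degrees(np.arctan2(v[1], v[0])))

# Example: with no relative rotation, the device points along the apparatus'
# own x axis, i.e. at an azimuth of 0 degrees.
print(capture_azimuth_deg(np.eye(3)))
```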
- the apparatus 200 may be configured to determine a direction for capturing content by the apparatus 200 in response to receiving information on an activation of the audio device 250 or in response to receiving information on a change of orientation of the audio device.
- An activation of the audio device 250 may comprise a change of mode of the audio device 250.
- activation of the audio device based on a change of mode of the ear bud may comprise removing the ear bud from an in-ear position.
- a change of orientation may comprise a change of orientation that is above a predefined threshold value.
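- The trigger conditions above can be sketched as a simple check; the threshold value and the activation flag are illustrative assumptions.

```python
def should_redetermine_direction(device_activated, orientation_change_deg,
                                 threshold_deg=15.0):
    """Re-determine the capture direction when the audio device is activated
    (e.g. an ear bud is taken out of the ear) or when its orientation has
    changed by more than a predefined threshold. The 15 degree default is
    an illustrative value, not one given in the text."""
    return device_activated or abs(orientation_change_deg) > threshold_deg

print(should_redetermine_direction(False, 3.0))    # False: small drift only
print(should_redetermine_direction(False, 40.0))   # True: large re-orientation
print(should_redetermine_direction(True, 0.0))     # True: device activated
```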
- the apparatus 200 is configured to control at least one functionality of the apparatus 200 for capturing content in the determined direction.
- capturing content comprises capturing content using the apparatus 200.
- capturing content comprises capturing video content comprising audio and visual information.
- capturing content comprises capturing audio content.
- capturing content comprises capturing visual content.
- Controlling a functionality relating to capturing content may comprise controlling capturing audio and/or controlling capturing visual information.
- controlling a functionality relating to capturing content may comprise controlling one or more microphones and/or one or more cameras.
- the apparatus 200 may be configured to control at least one functionality of the apparatus 200 by controlling a component of the apparatus 200.
- controlling the at least one functionality of the mobile computing device comprises activating a camera.
- the apparatus 200 may comprise a plurality of cameras.
- the camera comprises a first camera located on a first side of the apparatus 200.
- the camera comprises a second camera located at a second side of the apparatus 200.
- the first camera and the second camera may be configured to record images/video at opposite sides of the apparatus 200.
- the first camera may comprise, for example, a front camera and the second camera may comprise, for example, a back camera.
- activating the camera comprises activating a camera comprising a field of view in a direction corresponding to the direction for capturing content.
- a field of view of a camera may comprise a scene that is visible through the camera at a particular position and orientation when taking a picture or recording video. Objects outside the field of view when the picture is taken may not be recorded.
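- A minimal sketch of selecting the camera whose field of view corresponds to the capture direction is given below; the angle convention and the field-of-view widths are illustrative assumptions.

```python
def select_camera(capture_azimuth_deg, back_fov_deg=70.0, front_fov_deg=70.0):
    """Pick the camera whose field of view contains the capture direction.
    Convention (an assumption): 0 degrees points out of the back camera,
    180 degrees out of the front camera; field-of-view widths are examples."""
    a = abs((capture_azimuth_deg + 180.0) % 360.0 - 180.0)  # fold to [0, 180]
    if a <= back_fov_deg / 2.0:
        return "back_camera"
    if a >= 180.0 - front_fov_deg / 2.0:
        return "front_camera"
    return None  # direction outside both fields of view; keep the current camera

print(select_camera(10.0))    # 'back_camera'
print(select_camera(-170.0))  # 'front_camera'
```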
- controlling the at least one functionality of the mobile computing device comprises controlling at least one microphone array.
- Controlling a microphone array may comprise controlling the microphone array to capture audio in a particular direction. Capturing audio in a particular direction may comprise performing an audio focus operation by, for example, forming a directional beam pattern towards the particular direction. Beamforming in terms of using a particular beam pattern enables a microphone array to be more sensitive to sound coming from one or more particular directions than sound coming from other directions.
- controlling the at least one microphone array comprises controlling the at least one microphone array to focus audio in the direction for capturing content.
- Focusing audio in the direction for capturing content may comprise performing spatial filtering such as performing beamforming or parametric spatial audio processing.
- the at least one microphone may comprise at least one microphone array comprised by the apparatus 200 or at least one microphone array comprised by the audio device 250.
- the at least one microphone array comprises a microphone array of the audio device 250 or a microphone array of the apparatus 200.
- the apparatus 200 may be configured to select a microphone array to be controlled.
- the apparatus 200 may be configured to select a microphone array closest to a capturing target such as a sound source. Therefore, the apparatus 200 may be configured to select the microphone array to be controlled based on the respective positions of at least a first microphone array and a second microphone array.
- the apparatus 200 is configured to receive position information relating to a position of the audio device 250.
- Position information relating to the position of the audio device 250 may comprise, for example, measurement data indicating the position of the audio device 250, coordinates indicating the position of the audio device and/or a signal such as a Bluetooth signal based on which the position of the audio device 250 may be determined.
- the apparatus 200 is configured to determine, based on the position information, a position of the audio device 250 with respect to the apparatus 200.
- an advantage of determining a position of the audio device 250 with respect to the apparatus 200 is that the apparatus 200 may determine which of the audio device 250 and the apparatus 200 is closer to a capturing target, thereby enabling better audio quality.
- the apparatus 200 is configured to determine the microphone array closest to the capturing target based on the position of the audio device 250 with respect to the apparatus 200. According to an example embodiment, the apparatus 200 is configured to control the microphone array closest to the capturing target.
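- Selecting the microphone array closest to the capturing target can be sketched as a simple distance comparison in a shared coordinate frame; the function and argument names are illustrative.

```python
import numpy as np

def select_microphone_array(target_pos, apparatus_pos, audio_device_pos):
    """Choose the microphone array of whichever device is closer to the
    capturing target. Positions are coordinates in a shared frame, e.g.
    derived from the BLE and/or GPS based positioning described above."""
    d_apparatus = np.linalg.norm(np.asarray(target_pos) - np.asarray(apparatus_pos))
    d_audio = np.linalg.norm(np.asarray(target_pos) - np.asarray(audio_device_pos))
    return "audio_device_array" if d_audio < d_apparatus else "apparatus_array"

# Example coordinates (metres): the ear bud lies closer to the interviewee
# than the phone does, so its microphone array is selected.
print(select_microphone_array(target_pos=(2.0, 0.0),
                              apparatus_pos=(0.0, 0.0),
                              audio_device_pos=(1.5, 0.2)))
```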
- the apparatus 200 may be configured to allow a user to control audio capture using at least one microphone array. According to an example embodiment, the apparatus is configured to provide a user interface on the mobile computing device for controlling the capturing of content.
- the apparatus 200 comprises means for performing the features of the claimed invention, wherein the means for performing comprises at least one processor 110, at least one memory 160 including computer program code 120, the at least one memory 160 and the computer program code 120 configured to, with the at least one processor 110, cause the performance of the apparatus 200.
- the means for performing the features of the claimed invention may comprise means for receiving orientation information relating to an orientation of an audio device operatively connected to an apparatus, means for determining, based on the orientation information, an orientation of the audio device with respect to the apparatus, means for determining, based on the orientation of the audio device with respect to the apparatus, a direction of capturing content by the apparatus, and means for controlling at least one functionality of the apparatus for capturing content in the determined direction.
- the apparatus 200 may further comprise means for receiving position information relating to a position of the audio device, means for determining, based on the position information, a position of the audio device with respect to the apparatus 200 and means for determining the microphone array closest to the capturing target based on the position of the audio device with respect to the apparatus 200.
- the apparatus 200 may further comprise means for providing a user interface on the apparatus to control capturing content.
- Figure 3 illustrates an example of capturing content.
- the apparatus 200 is a mobile computing device configured to communicate with the audio device 250 such as an ear bud and both the apparatus 200 and the audio device 250 comprise at least one microphone array for capturing audio.
- the apparatus 200 is configured to determine a position and/or orientation of the audio device 250 using Bluetooth technology, acoustic localization and/or GPS coordinates together with Bluetooth technology.
- a first user 301 interviews a second user 302 that is a capturing target.
- the first user 301 records the interview by capturing video of the interview using the apparatus 200 and the audio device 250.
- the audio device 250 is configured to function as a pair with another audio device 304.
- the apparatus 200 receives position information from the audio device 250 and determines a position of the audio device 250 with respect to the apparatus 200.
- the apparatus 200 further determines, based on the position of the audio device 250 with respect to the apparatus 200, which of the microphone array of the apparatus 200 and the microphone array of the audio device is the closest to the capturing target (second user 302).
- the audio device 250 is closer to the capturing target (second user 302) than the apparatus 200.
- the apparatus 200 determines that the microphone array of the audio device 250 is to be used for capturing audio such that the microphone array of the audio device 250 is controlled to focus audio capturing towards the second user 302.
- the audio focus is illustrated with dashed line 305.
- the apparatus further receives orientation information relating to the audio device 250 and determines, based on the orientation information, the orientation of the audio device 250 with respect to the apparatus 200.
- the apparatus 200 further determines, based on the orientation of the audio device 250 with respect to the apparatus 200, a direction for capturing content by the apparatus 200.
- the orientation of the audio device 250 is such that the audio device 250 points towards the second user 302.
- the apparatus 200 determines that the orientation of the audio device 250 corresponds to the field of view of the back camera 303 and activates the back camera 303.
- the apparatus 200 further provides a user interface 306 for the first user 301 for controlling the capturing of content.
- the user interface 306 comprises an illustration of the audio focus 307 provided by the microphone array of the audio device 250 for enabling the first user 301 to control the audio focus parameters.
- Figure 4 illustrates an example of capturing content.
- the apparatus 200 is a mobile computing device configured to communicate with audio device 250.
- both the apparatus 200 and the audio device 250 comprise a microphone array for capturing audio.
- the apparatus 200 is configured to determine a position and/or orientation of the audio device 250 using Bluetooth technology, acoustic localization and/or GPS coordinates together with Bluetooth technology.
- a first user 301 interviews a second user 302 that is a capturing target.
- the first user 301 records the interview by capturing video of the interview using the apparatus 200 and the audio device 250.
- the audio device 250 is configured to function as a pair with another audio device 304.
- the apparatus 200 receives position information from the audio device 250 and determines a position of the audio device 250 with respect to the apparatus 200. The apparatus 200 further determines, based on the position of the audio device 250 with respect to the apparatus 200, which of the microphone array of the apparatus 200 and the microphone array of the audio device 250 is the closest to the second user 302. In the example of Figure 4 , the apparatus 200 is closer to the second user 302 than the audio device 250.
- the apparatus 200 determines that the microphone array of the apparatus 200 is to be used for capturing audio such that the microphone array of the apparatus 200 is controlled to focus audio capturing towards the second user 302.
- the audio focus is illustrated with dashed line 405.
- the apparatus further receives orientation information relating to the audio device 250 and determines, based on the orientation information, the orientation of the audio device 250 with respect to the apparatus 200.
- the apparatus 200 further determines, based on the orientation of the audio device 250 with respect to the apparatus 200, a direction for capturing content by the apparatus 200.
- the orientation of the audio device 250 is such that the audio device 250 points towards the second user 302.
- the apparatus 200 determines that the orientation of the audio device 250 corresponds to the field of view of the back camera 303 and activates the back camera 303.
- the apparatus 200 further provides a user interface 306 for the first user 301 for controlling the capturing of content.
- the user interface 306 comprises an illustration of the audio focus 407 provided by the microphone array of the apparatus 200 for enabling the first user 301 to control the audio focus parameters.
- Figure 5 illustrates a further example of capturing content.
- the apparatus 200 is a mobile computing device configured to communicate with audio device 250.
- both the apparatus 200 and the audio device 250 comprise a microphone array for capturing audio.
- the apparatus 200 is configured to determine a position and/or orientation of the audio device 250 using Bluetooth technology, acoustic localization and/or GPS coordinates together with Bluetooth technology.
- a first user 301 interviews a second user 302 such that the first user 301 is a capturing target.
- the first user 301 records the interview by capturing video of the interview using the apparatus 200 and the audio device 250.
- the audio device 250 is configured to function as a pair with another audio device 304.
- the apparatus 200 receives position information from the audio device 250 and determines a position of the audio device 250 with respect to the apparatus 200. The apparatus 200 further determines, based on the position of the audio device 250 with respect to the apparatus 200, which of the microphone array of the apparatus 200 and the microphone array of the audio device is the closest to the first user 301. In the example of Figure 5 , the audio device 250 is closer to the first user 301 than the apparatus 200.
- the apparatus 200 determines that the microphone array of the audio device 250 is to be used for capturing audio such that the microphone array is controlled to focus audio capturing towards the capturing target (first user 301).
- the apparatus further receives orientation information relating to the audio device 250 and determines, based on the orientation information, the orientation of the audio device 250 with respect to the apparatus 200.
- the apparatus 200 further determines, based on the orientation of the audio device 250 with respect to the apparatus 200, a direction for capturing content by the apparatus 200.
- the orientation of the audio device 250 is such that the audio device 250 points towards the first user 301.
- the apparatus 200 determines that the orientation of the audio device 250 corresponds to the field of view of a front camera and activates the front camera.
- the apparatus 200 further provides a user interface 306 for the first user 301 for controlling the capturing of content.
- the user interface 306 comprises an illustration of the audio focus 507 provided by the microphone array of the audio device 250 for enabling the first user 301 to control the audio focus parameters.
- Figure 6 illustrates a yet further example of capturing content.
- the apparatus 200 is a mobile computing device configured to communicate with audio device 250.
- both the apparatus 200 and the audio device 250 comprise a microphone array for capturing audio.
- the apparatus 200 is configured to determine a position and/or orientation of the audio device 250 using Bluetooth technology, acoustic localization and/or GPS coordinates together with Bluetooth technology.
- a first user 301 interviews a second user 302 such that the first user 301 is a capturing target.
- the first user 301 records the interview by capturing video of the interview using the apparatus 200 and the audio device 250.
- the audio device 250 is configured to function as a pair with another audio device 304.
- the apparatus 200 receives position information from the audio device 250 and determines a position of the audio device 250 with respect to the apparatus 200. The apparatus 200 further determines, based on the position of the audio device 250 with respect to the apparatus 200, which of the microphone array of the apparatus 200 and the microphone array of the audio device is the closest to the capturing target (first user 301). In the example of Figure 6 , the apparatus 200 is closer to the capturing target (first user 301) than the audio device 250.
- the apparatus 200 determines that the microphone array of the apparatus 200 is to be used for capturing audio such that the microphone array is controlled to focus audio capturing towards the capturing target (first user 301).
- the apparatus further receives orientation information relating to the audio device 250 and determines, based on the orientation information, the orientation of the audio device 250 with respect to the apparatus 200.
- the apparatus 200 further determines, based on the orientation of the audio device 250 with respect to the apparatus 200, a direction for capturing content by the apparatus 200.
- the orientation of the audio device 250 is such that the audio device 250 points towards the first user 301.
- the apparatus 200 determines that the orientation of the audio device 250 corresponds to the field of view of a front camera and activates the front camera.
- the apparatus 200 further provides a user interface 306 for the first user 301 for controlling the capturing of content.
- the user interface 306 comprises an illustration of the audio focus 607 provided by the microphone array of the apparatus 200 for enabling the first user 301 to control the audio focus parameters.
- Figure 7 illustrates an example method 700 incorporating aspects of the previously disclosed embodiments. More specifically the example method 700 illustrates controlling at least one functionality of the apparatus 200 for capturing content in a determined direction. The method may be performed by the apparatus 200 such as a mobile computing device.
- the apparatus 200 is configured to determine a position and/or orientation of the audio device 250 using Bluetooth technology, acoustic localization and/or GPS coordinates together with Bluetooth technology.
- the method starts with receiving 705 orientation information relating to an orientation of an audio device 250 operatively connected to the apparatus 200.
- the orientation information may comprise measurement data indicating the orientation of the audio device 250 or a signal based on which the orientation of the audio device 250 may be determined.
- the method continues with determining 710, based on the orientation information, an orientation of the audio device 250 with respect to the apparatus 200.
- Determining the orientation of the audio device 250 with respect to the apparatus 200 may comprise comparing the orientation of the audio device 250 with an orientation of the apparatus 200.
- determining the orientation of the audio device 250 with respect to the apparatus 200 may comprise determining a direction of the audio device 250.
- the method further continues with determining 715, based on the orientation of the audio device 250 with respect to the apparatus 200, a direction for capturing content by the apparatus 200.
- a direction for capturing content by the apparatus 200 comprises a direction corresponding to the orientation of the audio device 250.
- Capturing content may comprise, for example, capturing video content comprising audio and visual information.
- Figure 8 illustrates another example method 800 incorporating aspects of the previously disclosed embodiments. More specifically the example method 800 illustrates selecting a microphone array for capturing audio and controlling at least one functionality of the apparatus 200 for capturing content in a determined direction. The method may be performed by the apparatus 200 such as a mobile computing device.
- the apparatus 200 is configured to determine a position and/or orientation of the audio device 250 using Bluetooth technology, acoustic localization and/or GPS coordinates together with Bluetooth technology.
- the method starts with receiving 805 position information relating to the audio device 250 and determining 810 a position of the audio device 250 with respect to the apparatus 200.
- the method continues with determining 815 a microphone array closest to a capturing target.
- the microphone array closest to the capturing target is selected as a microphone array for capturing audio.
- the method further continues with receiving 820 orientation information relating to an orientation of an audio device 250 operatively connected to the apparatus 200.
- the orientation information may comprise measurement data indicating the orientation of the audio device 250 or a signal based on which the orientation of the audio device 250 may be determined.
- the method further continues with determining 825, based on the orientation information, an orientation of the audio device 250 with respect to the apparatus 200.
- Determining the orientation of the audio device 250 with respect to the apparatus 200 may comprise comparing the orientation of the audio device 250 with an orientation of the apparatus 200.
- determining the orientation of the audio device 250 with respect to the apparatus 200 may comprise determining a direction of the audio device 250.
- the method further continues with determining 830, based on the orientation of the audio device 250 with respect to the apparatus 200, a direction of capturing content by the apparatus 200.
- a direction for capturing content by the apparatus 200 comprises a direction corresponding to the orientation of the audio device 250.
- Capturing content may comprise, for example, capturing video content comprising audio and visual information.
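- An illustrative orchestration of the Figure 8 flow (which also covers the Figure 7 flow) is sketched below. The apparatus and audio device objects and their methods are hypothetical stand-ins for platform APIs, not an interface defined by the text.

```python
def method_800(apparatus, audio_device, capturing_target):
    """Illustrative orchestration of steps 805-830 of Figure 8. 'apparatus',
    'audio_device' and their methods are hypothetical stand-ins."""
    position_info = apparatus.receive_position_info(audio_device)            # 805
    device_position = apparatus.determine_relative_position(position_info)   # 810
    mic_array = apparatus.select_closest_microphone_array(                   # 815
        device_position, capturing_target)

    orientation_info = apparatus.receive_orientation_info(audio_device)      # 820
    relative_orientation = apparatus.determine_relative_orientation(         # 825
        orientation_info)
    capture_direction = apparatus.determine_capture_direction(               # 830
        relative_orientation)

    # Finally, control at least one functionality for capturing content in
    # the determined direction, e.g. steer the selected microphone array
    # and activate the camera facing that direction.
    apparatus.control_capture(mic_array, capture_direction)
```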
- an advantage of controlling at least one functionality of an apparatus based on an orientation of an audio device is that a direction of a capturing target may be indicated using the audio device.
- Another advantage is that audio may be captured using different microphone arrays in a controlled manner.
- a further advantage is that visual information may be captured using different cameras in a controlled manner.
- a technical effect of one or more of the example embodiments disclosed herein is that high-quality audio/video capture may be provided using distributed content capturing.
- Another technical effect may be dynamic control of content capturing.
- circuitry may refer to one or more or all of the following: (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry), (b) combinations of hardware circuits and software, such as (as applicable): (i) a combination of analog and/or digital hardware circuit(s) with software/firmware and (ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions, and (c) hardware circuit(s) and/or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g., firmware) for operation, but the software may not be present when it is not needed for operation.
- circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware.
- circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
- Embodiments of the present invention may be implemented in software, hardware, application logic or a combination of software, hardware and application logic.
- the software, application logic and/or hardware may reside on the apparatus, a separate device or a plurality of devices. If desired, part of the software, application logic and/or hardware may reside on the apparatus, part of the software, application logic and/or hardware may reside on a separate device, and part of the software, application logic and/or hardware may reside on a plurality of devices.
- the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media.
- a 'computer-readable medium' may be any media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer, with one example of a computer described and depicted in FIGURE 2 .
- a computer-readable medium may comprise a computer-readable storage medium that may be any media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
- the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined.
Landscapes
- Health & Medical Sciences (AREA)
- Otolaryngology (AREA)
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- General Health & Medical Sciences (AREA)
- Circuit For Audible Band Transducer (AREA)
Abstract
Description
- The present application relates generally to capturing content. More specifically, the present application relates to controlling at least one functionality of an apparatus for capturing content.
- The amount of multimedia content grows continuously. Users create and consume multimedia content, which plays a significant role in modern society.
- Various aspects of examples of the invention are set out in the claims. The scope of protection sought for various embodiments of the invention is set out by the independent claims. The examples and features, if any, described in this specification that do not fall under the scope of the independent claims are to be interpreted as examples useful for understanding various embodiments of the invention.
- According to a first aspect of the invention, there is provided an apparatus comprising means for performing: receiving orientation information relating to an orientation of an audio device operatively connected to an apparatus, determining, based on the orientation information, an orientation of the audio device with respect to the apparatus, determining, based on the orientation of the audio device with respect to the apparatus, a direction of capturing content by the apparatus, and controlling at least one functionality of the apparatus for capturing content in the determined direction.
- According to a second aspect of the invention, there is provided a method comprising receiving orientation information relating to an orientation of an audio device operatively connected to an apparatus, determining, based on the orientation information, an orientation of the audio device with respect to the apparatus, determining, based on the orientation of the audio device with respect to the apparatus, a direction of capturing content by the apparatus, and controlling at least one functionality of the apparatus for capturing content in the determined direction.
- According to a third aspect of the invention, there is provided a computer program comprising instructions for causing an apparatus to perform at least the following: receiving orientation information relating to an orientation of an audio device operatively connected to an apparatus, determining, based on the orientation information, an orientation of the audio device with respect to the apparatus, determining, based on the orientation of the audio device with respect to the apparatus, a direction of capturing content by the apparatus, and controlling at least one functionality of the apparatus for capturing content in the determined direction.
- According to a fourth aspect of the invention, there is provided an apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to with the at least one processor, cause the apparatus at least to: receive orientation information relating to an orientation of an audio device operatively connected to the apparatus, determine, based on the orientation information, an orientation of the audio device with respect to the apparatus, determine, based on the orientation of the audio device with respect to the apparatus, a direction for capturing content by the apparatus, and control at least one functionality of the apparatus for capturing content in the determined direction.
- According to a fifth aspect of the invention, there is provided a non-transitory computer readable medium comprising program instructions for causing an apparatus to perform at least the following: receiving orientation information relating to an orientation of an audio device operatively connected to an apparatus, determining, based on the orientation information, an orientation of the audio device with respect to the apparatus, determining, based on the orientation of the audio device with respect to the apparatus, a direction of capturing content by the apparatus, and controlling at least one functionality of the apparatus for capturing content in the determined direction.
- According to a sixth aspect of the invention, there is provided a computer readable medium comprising program instructions for causing an apparatus to perform at least the following: receiving orientation information relating to an orientation of an audio device operatively connected to an apparatus, determining, based on the orientation information, an orientation of the audio device with respect to the apparatus, determining, based on the orientation of the audio device with respect to the apparatus, a direction of capturing content by the apparatus, and controlling at least one functionality of the apparatus for capturing content in the determined direction.
- Some example embodiments will now be described with reference to the accompanying drawings:
- Figure 1 shows a block diagram of an example apparatus in which examples of the disclosed embodiments may be applied;
- Figure 2 shows a block diagram of another example apparatus in which examples of the disclosed embodiments may be applied;
- Figure 3 illustrates an example of capturing content;
- Figure 4 illustrates another example of capturing content;
- Figure 5 illustrates a further example of capturing content;
- Figure 6 illustrates a yet further example of capturing content;
- Figure 7 illustrates an example method; and
- Figure 8 illustrates another example method.
- The following embodiments are exemplifying. Although the specification may refer to "an", "one", or "some" embodiment(s) in several locations of the text, this does not necessarily mean that each reference is made to the same embodiment(s), or that a particular feature only applies to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments.
- Example embodiments relate to an apparatus configured to receive orientation information relating to an orientation of an audio device operatively connected to the apparatus, determine, based on the orientation information, an orientation of the audio device with respect to the apparatus, determine, based on the orientation of the audio device with respect to the apparatus, a direction for capturing content by the apparatus, and control at least one functionality of the apparatus for capturing content in the determined direction.
- Some example embodiments relate to controlling content capture options with an audio device such as an ear bud. Some example embodiments relate to capturing spatial audio using an audio device such as an ear bud.
- Spatial audio may comprise a full sphere surround-sound to mimic the way people perceive audio in real life. Spatial audio may comprise audio that appears from a user's position to be assigned to a certain direction and/or distance. Therefore, the perceived audio may change with the movement of the user or with the user turning. Spatial audio may comprise audio created by sound sources, ambient audio or a combination thereof. Ambient audio may comprise audio that might not be identifiable in terms of a sound source such as traffic humming, wind or waves, for example. The full sphere surround-sound may comprise a spatial audio field and the position of the user or the position of the capturing device may be considered as a reference point in the spatial audio field. According to an example embodiment, a reference point comprises the centre of the audio field.
- Spatial audio may be captured with, for example, a capturing device comprising a plurality of microphones configured to capture audio signals around the capturing device. In addition to capturing audio signals, the capturing device may also be configured to capture different types of information such as one or more parameters relating to the captured audio signals and/or visual information. The captured parameters may be stored with the captured audio or in a separate file. A capturing device may be, for example, a camera, a video recorder or a smartphone.
- Spatial audio may comprise one or more parameters such as an audio focus parameter and/or an audio zoom parameter. An audio parameter may comprise a parameter value with respect to a reference point such as the position of the user or the position of the capturing device. Modifying a spatial audio parameter value may cause a change in spatial audio perceived by a listener.
- An audio focus feature allows a user to focus on audio in a desired direction with respect to other directions when capturing content and/or playing back content. Therefore, an audio focus feature also allows a user to at least partially eliminate background noises. When capturing content, in addition to capturing audio, also the direction of sound is captured. A direction of sound may be defined with respect to a reference point. For example, a direction of sound may comprise an angle with respect to a reference point or a discrete direction such as front, back, left, right, up and/or down with respect to a reference point, or a combination thereof. The reference point may correspond to, for example, a value of 0 degrees or no audio focus direction in which case, at the reference point, the audio comprises surround sound with no audio focus. An audio focus parameter may also comprise one or more further levels of detail such as horizontal focus direction and/or vertical focus direction.
- An audio zoom feature allows a user to zoom in on a sound. Zooming in on a sound comprises adjusting an amount of audio gain associated with a particular direction. Therefore, an audio zoom parameter corresponds to sensitivity to a direction of sound. Audio zoom may be performed using audio beamforming with which a user may be able to control, for example, the size, shape and/or direction of the audio beam. Performing audio zooming may comprise controlling audio signals coming from a particular direction while attenuating audio signals coming from other directions. For example, an audio zoom feature may allow controlling audio gain. Audio gain may comprise an amount of gain set to audio input signals coming from a certain direction. An audio zoom parameter value may be defined with respect to a reference point. For example, an audio zoom parameter may be a percentage value and the reference point may correspond to, for example, a value of 0 % in which case, at the reference point, the audio comprises surround sound with no audio zooming. As another example, an audio zoom feature may allow delaying different microphone signals differently and then summing the signals up, thereby enabling spatial filtering of audio.
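- By way of a non-limiting illustration of the audio zoom parameter described above, the following sketch blends a focused (beamformed) signal with the unfocused surround capture according to a zoom percentage; the function name, the linear blending rule and the placeholder signals are assumptions made for this example only.
```python
import numpy as np

def apply_audio_zoom(focused: np.ndarray, surround: np.ndarray, zoom_percent: float) -> np.ndarray:
    """Blend a focused (beamformed) signal with the surround mix.

    zoom_percent = 0   -> surround sound only, i.e. the reference point with no zoom
    zoom_percent = 100 -> fully focused audio in the zoom direction
    """
    weight = np.clip(zoom_percent, 0.0, 100.0) / 100.0
    return weight * focused + (1.0 - weight) * surround

# Example with one second of placeholder audio at 48 kHz, zoomed halfway in.
fs = 48000
surround = 0.1 * np.random.randn(fs)   # stand-in for the unfocused capture
focused = 0.1 * np.random.randn(fs)    # stand-in for the beamformed capture
zoomed = apply_audio_zoom(focused, surround, zoom_percent=50.0)
```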
- Audio zooming may be associated with zooming visual information. For example, if a user records a video and zooms in on an object, the audio may also be zoomed in on the object such that, for example, sound generated by the object is emphasized and other sounds are attenuated. In other words, spatial audio parameters may be controlled by controlling the video zoom.
- Mobile phones are increasingly used in professional audio/video capture due to their increased capabilities. In some situations, it would be useful to have additional microphones, such as a close-up microphone that is close to a sound source. However, using a plurality of microphones might be challenging in terms of organizing the capture so that it is easy to choose which microphone to use, or in which ratio the microphones are used.
- Figure 1 is a block diagram depicting an apparatus 100 operating in accordance with an example embodiment of the invention. The apparatus 100 may be, for example, an electronic device such as a chip or a chipset. The apparatus 100 comprises one or more control circuitry, such as at least one processor 110 and at least one memory 160, including one or more algorithms such as computer program code 120, wherein the at least one memory 160 and the computer program code 120 are configured, with the at least one processor 110, to cause the apparatus 100 to carry out any of the example functionalities described below.
- In the example of Figure 1, the processor 110 is a control unit operatively connected to read from and write to the memory 160. The processor 110 may also be configured to receive control signals via an input interface and/or the processor 110 may be configured to output control signals via an output interface. In an example embodiment the processor 110 may be configured to convert the received control signals into appropriate commands for controlling functionalities of the apparatus 100.
- The at least one memory 160 stores computer program code 120 which, when loaded into the processor 110, controls the operation of the apparatus 100 as explained below. In other examples, the apparatus 100 may comprise more than one memory 160 or different kinds of storage devices.
- Computer program code 120 for enabling implementations of example embodiments of the invention, or a part of such computer program code, may be loaded onto the apparatus 100 by the manufacturer of the apparatus 100, by a user of the apparatus 100, or by the apparatus 100 itself based on a download program, or the code can be pushed to the apparatus 100 by an external device. The computer program code 120 may arrive at the apparatus 100 via an electromagnetic carrier signal or be copied from a physical entity such as a computer program product, a memory device or a record medium such as a Compact Disc (CD), a Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disk (DVD) or a Blu-ray disk. -
Figure 2 is a block diagram depicting anapparatus 200 in accordance with an example embodiment of the invention. Theapparatus 200 may be an electronic device such as a hand-portable device, a mobile phone or a Personal Digital Assistant (PDA), a Personal Computer (PC), a laptop, a desktop, a tablet computer, a wireless terminal, a communication terminal, a game console, a music player, an electronic book reader (e-book reader), a positioning device, a digital camera, a household appliance, a CD-, DVD or Blu-ray player, or a media player. - According to an example embodiment, the
apparatus 200 comprises a mobile computing device. According to another example embodiment, theapparatus 200 comprises a part of a mobile communication device. The mobile computing device may comprise, for example, a mobile phone, a tablet computer, or the like. In the examples below it is assumed that theapparatus 200 is a mobile computing device or a part of it. - In the example embodiment of
Figure 2 , theapparatus 200 is illustrated as comprising theapparatus 100, amicrophone array 210, one ormore loudspeakers 230 and a user interface 220 for interacting with the apparatus 200 (e.g. a mobile computing device). Theapparatus 200 may also comprise a display configured to act as a user interface 220. For example, the display may be a touch screen display. In an example embodiment, the display and/or the user interface 220 may be external to theapparatus 200, but in communication with it. - The
microphone array 210 comprises a plurality of microphones for capturing audio. The plurality of microphones may be configured to work together to capture audio, for example, at different sides of a device. For example, a microphone array comprising two microphones may be configured to capture audio from a right side and a left side of a device. - According to an example embodiment, the
apparatus 200 is configured to apply one or more audio focus operations to emphasize audio signals arriving from a particular direction and/or attenuate sounds coming from other directions and/or one or more audio zoom operations to switch between focused and non-focused audio, for example, in conjunction with a camera. - The
apparatus 200 is further configured to perform one or more spatial filtering methods for achieving audio focus and/or audio zoom. The one or more spatial filtering methods may comprise, for example, beamforming and/or parametric spatial audio. - Beamforming may comprise forming an audio beam by selecting a particular microphone arrangement for capturing spatial audio information from a first direction and/or attenuating sounds coming from a second direction and processing the received audio information. In other words, a microphone array may be used to form a spatial filter which is configured to extract a signal from a specific direction and/or reduce contamination of signals from other directions.
- Parametric spatial audio processing comprises analysing a spatial audio field into a directional component with a direction-of-arrival parameter and ambient component without a direction-of-arrival parameter and changing the direction-of-arrival parameter at which directional signal components are enhanced.
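- As a rough illustration only, a parametric frame could be re-synthesized as sketched below, where the directional component is amplitude-panned to a (possibly modified) direction-of-arrival and the ambient component is added back without a direction; the two-channel panning law and the 0-degrees-ahead convention are assumptions of this sketch rather than features of the embodiments.
```python
import numpy as np

def resynthesize_frame(directional: np.ndarray, ambient: np.ndarray,
                       doa_deg: float, doa_shift_deg: float = 0.0) -> np.ndarray:
    """Pan the directional component to doa_deg + doa_shift_deg and add ambience.

    Returns a (2, n) stereo frame; 0 degrees is assumed to be straight ahead and
    positive angles are assumed to be towards the right.
    """
    angle = np.deg2rad(np.clip(doa_deg + doa_shift_deg, -90.0, 90.0))
    # Constant-power amplitude panning between the left and right channels.
    left_gain = np.cos(angle / 2.0 + np.pi / 4.0)
    right_gain = np.sin(angle / 2.0 + np.pi / 4.0)
    stereo = np.vstack([left_gain * directional, right_gain * directional])
    return stereo + 0.5 * np.vstack([ambient, ambient])

frame = np.random.randn(1024)            # directional component of one frame
ambience = 0.1 * np.random.randn(1024)   # ambient component of the same frame
out = resynthesize_frame(frame, ambience, doa_deg=30.0, doa_shift_deg=-30.0)
```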
- According to an example embodiment, the
apparatus 200 is configured to control a direction of audio focus. Controlling a direction of audio focus may comprise, for example, changing a direction of an audio beam with respect to a reference point in a spatial audio field. For example, changing a direction of an audio beam may comprise changing the direction of the audio beam from a first direction to a second direction. When the audio beam is directed to a first direction, audio signals from that direction are emphasized and when the audio beam is directed to a second direction, audio signals from that direction are emphasized. - The
apparatus 200 may be configured to control a direction of an audio beam by switching from a first microphone arrangement to a second microphone arrangement, by processing the captured audio information using an algorithm with different parameters and/or using a different algorithm for processing the captured audio information. For example, in the case of a Delay-Sum beamformer, the beam direction steering can be accomplished by adjusting the values of steering delays so that signals arriving from a particular direction are aligned before they are summed.
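- For illustration only, the steering-delay idea for a Delay-Sum beamformer could be sketched as below for a uniform linear microphone array; the array geometry, the far-field assumption and the integer-sample delay approximation are simplifications made for this example and are not required by the embodiments.
```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_sum_beamform(mic_signals: np.ndarray, mic_x: np.ndarray,
                       steer_deg: float, fs: int) -> np.ndarray:
    """Align the signals of a linear array towards steer_deg and sum them.

    mic_signals: shape (num_mics, num_samples)
    mic_x: microphone positions along one axis in metres
    steer_deg: 0 = broadside, +90 = endfire towards positive x
    """
    steer = np.deg2rad(steer_deg)
    # Far-field steering delays in seconds, one per microphone.
    delays = mic_x * np.sin(steer) / SPEED_OF_SOUND
    delays -= delays.min()                      # keep all delays non-negative
    shifts = np.round(delays * fs).astype(int)  # integer-sample approximation
    out = np.zeros(mic_signals.shape[1])
    for signal, shift in zip(mic_signals, shifts):
        out += np.roll(signal, shift)
    return out / len(mic_signals)

fs = 48000
mics = np.random.randn(4, fs)                   # stand-in for four captured channels
positions = np.array([0.0, 0.02, 0.04, 0.06])   # 2 cm spacing
focused = delay_sum_beamform(mics, positions, steer_deg=30.0, fs=fs)
```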
- Referring back to Figure 2, the user interface 220 may also comprise a manually operable control such as a button, a key, a touch pad, a joystick, a stylus, a pen, a roller, a rocker, a keypad, a keyboard or any suitable input mechanism for inputting and/or accessing information. Further examples include a camera, a speech recognition system, an eye movement recognition system and acceleration-, tilt- and/or movement-based input systems. Therefore, the apparatus 200 may also comprise different kinds of sensors such as one or more gyro sensors, accelerometers, magnetometers, position sensors and/or tilt sensors. - According to an example embodiment, the
apparatus 200 is configured to establish radio communication with another device using, for example, a Bluetooth, WiFi, radio frequency identification (RFID), or a near field communication (NFC) connection. For example, theapparatus 200 may be configured to establish radio communication with anaudio device 250 such as a wireless headphone, augmented/virtual reality device or the like. - According to an example embodiment, the
apparatus 200 is operatively connected to anaudio device 250. According to an example embodiment, theapparatus 200 is wirelessly connected to theaudio device 250. For example, theapparatus 200 may be connected to theaudio device 250 over a Bluetooth connection, or the like. Similarly to theapparatus 200, theaudio device 250 may comprise at least one microphone array for capturing audio signals and at least one loudspeaker for playing back received audio signals. Theaudio device 250 may further be configured to filter out background noise and/or detect in-ear placement. Theaudio device 250 may comprise a headphone such as a wireless headphone. - A headphone may comprise a single headphone such as an ear bud or a pair of headphones such as a pair of ear buds configured to function as a pair. For example, the
audio device 250 may comprise a first wireless headphone and a second wireless headphone such that the first wireless headphone and the second wireless headphone are configured to function as a pair. Functioning as a pair may comprise, for example, providing stereo output for a user using the first wireless headphone and the second wireless headphone. The first wireless headphone and the second wireless headphone may also be configured such that the first wireless headphone and the second wireless headphone may be used separately and/or independently of each other. For example, same or different audio information may be provided to the first wireless headphone and the second wireless headphone, or audio information may be directed to one wireless headphone and the second wireless headphone may act as a microphone. - According to an example embodiment, the
apparatus 200 is configured to communicate with theaudio device 250. Communicating with theaudio device 250 may comprise providing to and/or receiving information from theaudio device 250. According to an example embodiment, communicating with theaudio device 250 comprises providing audio signals and/or receiving audio signals. For example, theapparatus 200 may be configured to provide audio signals to theaudio device 250 and receive audio signals from theaudio device 250. - According to an example embodiment, the
apparatus 200 is configured to determine an orientation and/or a position of theaudio device 250 with respect to theapparatus 200. - According to an example embodiment, the
apparatus 200 is configured to determine the orientation and/or position of theaudio device 250 using Bluetooth technology such as Bluetooth Low Energy (BLE). According to an example embodiment, theaudio device 250 comprises at least one Bluetooth antenna for transmitting data and theapparatus 200 comprises an array of phased antennas for receiving and/or transmitting data. The array of phased antennas may be configured to measure the angle-of-departure (AoD) or angle-of-arrival (AoA) comprising measuring both azimuth and elevation angles. - When performing AoA measurement, the
apparatus 200 may be configured to execute antenna switching when receiving an AoA packet from the audio device 250. The apparatus 200 may then utilize the amplitude and phase samples together with its own antenna array information to estimate the AoA of a packet received from the audio device 250.
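- A much simplified view of how an angle of arrival might follow from such phase samples is sketched below for a pair of antennas; the two-antenna geometry, the 2.4 GHz carrier and the synthetic IQ samples are assumptions of this illustration, and a practical Bluetooth direction-finding implementation involves considerably more processing.
```python
import numpy as np

def estimate_aoa_deg(iq_ant0: np.ndarray, iq_ant1: np.ndarray,
                     antenna_spacing_m: float, carrier_hz: float = 2.4e9) -> float:
    """Estimate the angle of arrival from IQ samples of two antennas.

    Uses the mean phase difference between the antennas; 0 degrees means the
    transmitter is broadside to the antenna pair.
    """
    wavelength = 3e8 / carrier_hz
    phase_diff = np.angle(np.mean(iq_ant1 * np.conj(iq_ant0)))
    sin_theta = np.clip(phase_diff * wavelength / (2 * np.pi * antenna_spacing_m), -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

# Synthetic example: a tone arriving from roughly 20 degrees.
true_deg = 20.0
spacing = 0.0625  # half a wavelength at 2.4 GHz
shift = 2 * np.pi * spacing * np.sin(np.radians(true_deg)) / (3e8 / 2.4e9)
samples = np.exp(1j * np.linspace(0, 10, 64))
print(estimate_aoa_deg(samples, samples * np.exp(1j * shift), spacing))  # ~20.0
```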
- Performing AoD measurement may be based on broadcasting by the apparatus 200 the AoD signals, location and properties of a Bluetooth beacon that enables the audio device 250 to calculate its own position. - According to another example embodiment, the
apparatus 200 is configured to determine the orientation and/or position of the audio device 250 using acoustic localization. Acoustic localization may comprise receiving microphone signals comprising recorded signals from the surroundings and determining a time difference of arrival (TDoA) between microphones. The TDoA may be determined based on inter-microphone delays that may be determined through correlations and the geometry of a microphone array.
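- A minimal sketch of this correlation-based idea for two microphones with a known spacing is given below; in practice a weighting such as GCC-PHAT and more than two microphones would typically be used, and the helper functions here are illustrative assumptions rather than the processing mandated by the embodiments.
```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def tdoa_seconds(mic_a: np.ndarray, mic_b: np.ndarray, fs: int) -> float:
    """Time difference of arrival of mic_b relative to mic_a via cross-correlation."""
    corr = np.correlate(mic_b, mic_a, mode="full")
    lag = np.argmax(corr) - (len(mic_a) - 1)
    return lag / fs

def doa_from_tdoa(tdoa_s: float, mic_spacing_m: float) -> float:
    """Direction of arrival (degrees from broadside) implied by a TDoA."""
    sin_theta = np.clip(SPEED_OF_SOUND * tdoa_s / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

# Synthetic check: delay a noise burst by 5 samples between the microphones.
fs = 48000
src = np.random.randn(fs // 10)
mic_a = src
mic_b = np.concatenate([np.zeros(5), src[:-5]])
print(doa_from_tdoa(tdoa_seconds(mic_a, mic_b, fs), mic_spacing_m=0.1))
```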
- According to a further example embodiment, the apparatus 200 is configured to determine a position of the audio device 250 using Global Positioning System (GPS) coordinates and Bluetooth technology to determine a relative location of the audio device 250. - According to an example embodiment, the
apparatus 200 is configured to receive orientation information relating to an orientation of theaudio device 250 operatively connected to theapparatus 200. The orientation information may comprise measurement data indicating the orientation of theaudio device 250 or a signal based on which the orientation of theaudio device 250 may be determined. For example, the orientation information may comprise information received from one or more orientation sensors comprised by theaudio device 250. The orientation information may comprise raw data or pre-processed data. As another example, the orientation information may comprise a Bluetooth signal. - According to an example embodiment, the
apparatus 200 is configured to determine, based on the orientation information, an orientation of theaudio device 250 with respect to theapparatus 200. - Determining the orientation of the
audio device 250 with respect to the apparatus 200 may comprise comparing the orientation of the audio device 250 with an orientation of the apparatus 200. For example, the apparatus 200 may be configured to compare measurement data indicating an orientation of the audio device 250 with measurement data indicating an orientation of the apparatus 200.
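- The comparison of measurement data could, purely as an illustration, be carried out with rotation matrices built from yaw, pitch and roll readings as sketched below; the sensor fusion that produces those readings and the axis conventions are assumptions of this example.
```python
import numpy as np

def rotation_matrix(yaw_deg: float, pitch_deg: float, roll_deg: float) -> np.ndarray:
    """Z-Y-X (yaw-pitch-roll) rotation matrix from Euler angles in degrees."""
    y, p, r = np.radians([yaw_deg, pitch_deg, roll_deg])
    rz = np.array([[np.cos(y), -np.sin(y), 0.0], [np.sin(y), np.cos(y), 0.0], [0.0, 0.0, 1.0]])
    ry = np.array([[np.cos(p), 0.0, np.sin(p)], [0.0, 1.0, 0.0], [-np.sin(p), 0.0, np.cos(p)]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, np.cos(r), -np.sin(r)], [0.0, np.sin(r), np.cos(r)]])
    return rz @ ry @ rx

def relative_orientation(device_rpy, apparatus_rpy) -> np.ndarray:
    """Orientation of the audio device expressed in the apparatus frame."""
    return rotation_matrix(*apparatus_rpy).T @ rotation_matrix(*device_rpy)

# Example: ear bud yawed 90 degrees, apparatus yawed 30 degrees in a shared world frame.
rel = relative_orientation((90.0, 0.0, 0.0), (30.0, 0.0, 0.0))
pointing_in_apparatus_frame = rel @ np.array([1.0, 0.0, 0.0])  # assumed pointing axis
```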
- As another example, the apparatus 200 may be configured to determine the orientation of the audio device 250 with respect to the apparatus 200 based on characteristics of a wireless connection between the apparatus 200 and the audio device 250. For example, assuming the apparatus 200 and the audio device 250 are wirelessly connected using a Bluetooth connection, the apparatus 200 may be configured to determine the orientation of the audio device 250 with respect to the apparatus 200 based on a Bluetooth signal using a Bluetooth Angle of Arrival (AoA) or a Bluetooth Angle of Departure (AoD) algorithm. - According to an example embodiment, the
apparatus 200 is configured to determine a direction of theaudio device 250 with respect to theapparatus 200. Theapparatus 200 may be configured to determine the direction of the audio device with respect to theapparatus 200 based on the orientation information relating to theaudio device 250 and theapparatus 200 or based on characteristics of a wireless connection between theapparatus 200 and theaudio device 250. - According to an example embodiment, determining the orientation of the
audio device 250 with respect to theapparatus 200 comprises determining a pointing direction of theaudio device 250. - A pointing direction comprises a direction to which the
audio device 250 points. Theaudio device 250 may be associated with a reference point that is used for determining the pointing direction. For example, assuming theaudio device 250 comprises a wireless ear bud comprising a pointy end at one end of the ear bud determined as a reference point, the pointing direction may be determined based on a direction to which the ear bud points. - According to an example embodiment, the
apparatus 200 is configured to determine, based on the orientation of theaudio device 250 with respect to theapparatus 200, a direction for capturing content by theapparatus 200. The direction for capturing content by theapparatus 200 may comprise an absolute direction, a direction relative to theapparatus 200, an approximate direction or a direction within predefined threshold values. - According to an example embodiment, a direction for capturing content by the
apparatus 200 comprises a direction corresponding to the orientation of theaudio device 250. According to an example embodiment, a direction corresponding to the orientation of theaudio device 250 comprises a pointing direction of theaudio device 250. - The
apparatus 200 may be configured to determine a direction for capturing content by the apparatus 200 in response to receiving information on an activation of the audio device 250 or in response to receiving information on a change of orientation of the audio device 250. An activation of the audio device 250 may comprise a change of mode of the audio device 250. For example, assuming the audio device 250 comprises an ear bud, activation of the audio device based on a change of mode of the ear bud may comprise removing the ear bud from an in-ear position. A change of orientation may comprise a change of orientation that is above a predefined threshold value.
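- The conditions above could be expressed, for example, as in the following sketch; the 15-degree threshold, the event name and the vector representation of the pointing direction are invented for this illustration and are not defined by the embodiments.
```python
import numpy as np

ORIENTATION_THRESHOLD_DEG = 15.0  # assumed threshold value

def angle_between_deg(a: np.ndarray, b: np.ndarray) -> float:
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def should_update_capture_direction(event: str, previous_pointing: np.ndarray,
                                    current_pointing: np.ndarray) -> bool:
    """Update when the ear bud is activated (e.g. removed from the ear) or when
    its pointing direction has changed by more than the threshold."""
    if event == "removed_from_ear":
        return True
    return angle_between_deg(previous_pointing, current_pointing) > ORIENTATION_THRESHOLD_DEG

print(should_update_capture_direction("none",
                                      np.array([1.0, 0.0, 0.0]),
                                      np.array([0.9, 0.5, 0.0])))  # ~29 degrees -> True
```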
- According to an example embodiment, the apparatus 200 is configured to control at least one functionality of the apparatus 200 for capturing content in the determined direction. According to an example embodiment, capturing content comprises capturing content using the apparatus 200. According to an example embodiment, capturing content comprises capturing video content comprising audio and visual information. According to another example embodiment, capturing content comprises capturing audio content. According to a further example embodiment, capturing content comprises capturing visual content. - Controlling a functionality relating to capturing content may comprise controlling capturing audio and/or controlling capturing visual information. For example, controlling a functionality relating to capturing content may comprise controlling one or more microphones and/or one or more cameras.
- The
apparatus 200 may be configured to control at least one functionality of the apparatus 200 by controlling a component of the apparatus 200. According to an example embodiment, controlling the at least one functionality of the mobile computing device comprises activating a camera. The apparatus 200 may comprise a plurality of cameras. According to an example embodiment, the camera comprises a first camera located on a first side of the apparatus 200. According to an example embodiment, the camera comprises a second camera located on a second side of the apparatus 200. The first camera and the second camera may be configured to record images/video at opposite sides of the apparatus 200. The first camera may comprise, for example, a front camera and the second camera may comprise, for example, a back camera. - According to an example embodiment, activating the camera comprises activating a camera comprising a field of view in a direction corresponding to the direction for capturing content. A field of view of a camera may comprise a scene that is visible through the camera at a particular position and orientation when taking a picture or recording video. Objects outside the field of view when the picture is taken may not be recorded.
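- One possible, purely illustrative selection rule for a camera whose field of view covers the determined direction is sketched below; the camera axes and the field-of-view half-angle are assumed values chosen for this example.
```python
from typing import Optional

import numpy as np

# Assumed camera axes in the apparatus frame: the back camera looks along +y,
# the front camera along -y; half of the horizontal field of view is 40 degrees.
CAMERAS = {"back": np.array([0.0, 1.0, 0.0]), "front": np.array([0.0, -1.0, 0.0])}
HALF_FOV_DEG = 40.0

def select_camera(capture_direction: np.ndarray) -> Optional[str]:
    """Return the camera whose field of view contains the capture direction."""
    direction = capture_direction / np.linalg.norm(capture_direction)
    for name, axis in CAMERAS.items():
        angle = np.degrees(np.arccos(np.clip(np.dot(direction, axis), -1.0, 1.0)))
        if angle <= HALF_FOV_DEG:
            return name
    return None  # no camera covers the determined direction

print(select_camera(np.array([0.2, 1.0, 0.0])))    # -> "back"
print(select_camera(np.array([-0.1, -1.0, 0.0])))  # -> "front"
```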
- According to an example embodiment, controlling the at least one functionality of the mobile computing device comprises controlling at least one microphone array.
- Controlling a microphone array may comprise controlling the microphone array to capture audio in a particular direction. Capturing audio in a particular direction may comprise performing an audio focus operation by, for example, forming a directional beam pattern towards the particular direction. Beamforming in terms of using a particular beam pattern enables a microphone array to be more sensitive to sound coming from one or more particular directions than sound coming from other directions.
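- To illustrate how a particular beam pattern makes an array more sensitive in one direction than in others, the sketch below evaluates the far-field response of a delay-and-sum beam formed with a uniform linear array at a single frequency; the array layout, the frequency and the plane-wave model are assumptions of this example.
```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def array_response_db(mic_x: np.ndarray, steer_deg: float, freq_hz: float,
                      look_deg: np.ndarray) -> np.ndarray:
    """Delay-and-sum response (dB) of a linear array steered to steer_deg,
    evaluated for plane waves arriving from the angles in look_deg."""
    k = 2 * np.pi * freq_hz / SPEED_OF_SOUND
    steer = np.exp(-1j * k * mic_x * np.sin(np.deg2rad(steer_deg)))      # steering weights
    arrivals = np.exp(1j * k * np.outer(np.sin(np.deg2rad(look_deg)), mic_x))
    response = np.abs(arrivals @ steer) / len(mic_x)
    return 20 * np.log10(np.maximum(response, 1e-6))

positions = np.array([0.0, 0.02, 0.04, 0.06])        # 2 cm spacing
angles = np.linspace(-90, 90, 181)
pattern = array_response_db(positions, steer_deg=30.0, freq_hz=4000.0, look_deg=angles)
print(f"gain at 30 deg: {pattern[120]:.1f} dB, at -60 deg: {pattern[30]:.1f} dB")
```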
- According to an example embodiment, controlling the at least one microphone array comprises controlling the at least one microphone array to focus audio in the direction for capturing content. Focusing audio in the direction for capturing content may comprise performing spatial filtering such as performing beamforming or parametric spatial audio processing.
- The at least one microphone may comprise at least one microphone array comprised by the
apparatus 200 or at least one microphone array comprised by theaudio device 250. According to an example embodiment, the at least one microphone array comprises a microphone array of theaudio device 250 or a microphone array of theapparatus 200. - The
apparatus 200 may be configured to select a microphone array to be controlled. For example, theapparatus 200 may be configured to select a microphone array closest to a capturing target such as a sound source. Therefore, theapparatus 200 may be configured to select the microphone array to be controlled based on the respective positions of at least a first microphone array and a second microphone array. - According to an example embodiment, the
apparatus 200 is configured to receive position information relating to a position of theaudio device 250. Position information relating to the position of theaudio device 250 may comprise, for example, measurement data indicating the position of theaudio device 250, coordinates indicating the position of the audio device and/or a signal such as a Bluetooth signal based on which the position of theaudio device 250 may be determined. - According to an example embodiment, the
apparatus 200 is configured to determine, based on the position information, a position of theaudio device 250 with respect to theapparatus 200. - Without limiting the scope of the claims, an advantage of determining a position of the
audio device 250 with respect to theapparatus 200 is that theapparatus 200 may determine which of theaudio device 250 and theapparatus 200 is closer to a capturing target, thereby enabling better audio quality. - According to an example embodiment, the
apparatus 200 is configured to determine the microphone array closest to the capturing target based on the position of the audio device 250 with respect to the apparatus 200. According to an example embodiment, the apparatus 200 is configured to control the microphone array closest to the capturing target.
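- The closest-array selection could be as simple as the following sketch; the coordinate frame and the position estimates are assumed inputs coming from the positioning techniques discussed earlier, and the labels returned here are invented for the example.
```python
import numpy as np

def select_closest_array(target_pos: np.ndarray, apparatus_pos: np.ndarray,
                         audio_device_pos: np.ndarray) -> str:
    """Pick the microphone array (apparatus or audio device) nearest to the target."""
    d_apparatus = np.linalg.norm(target_pos - apparatus_pos)
    d_audio_device = np.linalg.norm(target_pos - audio_device_pos)
    return "audio_device" if d_audio_device < d_apparatus else "apparatus"

# Example positions in metres, e.g. derived from the positioning techniques above.
print(select_closest_array(np.array([2.0, 0.0, 0.0]),
                           np.array([0.0, 0.0, 0.0]),
                           np.array([1.5, 0.2, 0.0])))  # -> "audio_device"
```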
- The apparatus 200 may be configured to allow a user to control audio capture using at least one microphone array. According to an example embodiment, the apparatus is configured to provide a user interface on the mobile computing device for controlling the capturing of content. - According to an example embodiment, the
apparatus 200 comprises means for performing the features of the claimed invention, wherein the means for performing comprises at least oneprocessor 110, at least onememory 160 includingcomputer program code 120, the at least onememory 160 and thecomputer program code 120 configured to, with the at least oneprocessor 110, cause the performance of theapparatus 200. The means for performing the features of the claimed invention may comprise means for receiving orientation information relating to an orientation of an audio device operatively connected to an apparatus, means for determining, based on the orientation information, an orientation of the audio device with respect to the apparatus, means for determining, based on the orientation of the audio device with respect to the apparatus, a direction of capturing content by the apparatus, and means for controlling at least one functionality of the apparatus for capturing content in the determined direction. - The
apparatus 200 may further comprise means for receiving position information relating to a position of the audio device, means for determining, based on the position information, a position of the audio device with respect to theapparatus 200 and means for determining the microphone array closest to the capturing target based on the position of the audio device with respect to theapparatus 200. Theapparatus 200 may further comprise means for providing a user interface on the apparatus to control capturing content. -
Figure 3 illustrates an example of capturing content. In the example ofFigure 3 , theapparatus 200 is a mobile computing device configured to communicate with theaudio device 250 such as an ear bud and both theapparatus 200 and theaudio device 250 comprise at least one microphone array for capturing audio. - The
apparatus 200 is configured to determine a position and/or orientation of theaudio device 250 using Bluetooth technology, acoustic localization and/or GPS coordinates together with Bluetooth technology. - In the example of
Figure 3, a first user 301 interviews a second user 302 that is a capturing target. The first user 301 records the interview by capturing video of the interview using the apparatus 200 and the audio device 250. The audio device 250 is configured to function as a pair with another audio device 304. - In the example of
Figure 3 , theapparatus 200 receives position information from theaudio device 250 and determines a position of theaudio device 250 with respect to theapparatus 200. Theapparatus 200 further determines, based on the position of theaudio device 250 with respect to theapparatus 200, which of the microphone array of theapparatus 200 and the microphone array of the audio device is the closest to the capturing target (second user 302). In the example ofFigure 3 , theaudio device 250 is closer to the capturing target (second user 302) than theapparatus 200. - As the
audio device 250 is determined as the closest to the capturing target (second user 302), theapparatus 200 determines that the microphone array of theaudio device 250 is to be used for capturing audio such that the microphone array of theaudio device 250 is controlled to focus audio capturing towards thesecond user 302. The audio focus is illustrated with dashedline 305. - The apparatus further receives orientation information relating to the
audio device 250 and determines, based on the orientation information, the orientation of theaudio device 250 with respect to theapparatus 200. Theapparatus 200 further determines, based on the orientation of theaudio device 250 with respect to theapparatus 200, a direction for capturing content by theapparatus 200. In the example ofFigure 3 , the orientation of theaudio device 250 is such that theaudio device 250 points towards thesecond user 302. Theapparatus 200 determines that the orientation of theaudio device 250 corresponds to the field of view of theback camera 303 and activates theback camera 303. - The
apparatus 200 further provides auser interface 306 for thefirst user 301 for controlling the capturing of content. Theuser interface 306 comprises an illustration of theaudio focus 307 provided by the microphone array of theaudio device 250 for enabling thefirst user 301 to control the audio focus parameters. -
Figure 4 illustrates an example of capturing content. In the example ofFigure 4 , theapparatus 200 is a mobile computing device configured to communicate withaudio device 250. Similarly toFigure 3 , both theapparatus 200 and theaudio device 250 comprise a microphone array for capturing audio. - Similarly to the example of
Figure 3 , theapparatus 200 is configured to determine a position and/or orientation of theaudio device 250 using Bluetooth technology, acoustic localization and/or GPS coordinates together with Bluetooth technology. - In the example of
Figure 4, a first user 301 interviews a second user 302 that is a capturing target. The first user 301 records the interview by capturing video of the interview using the apparatus 200 and the audio device 250. The audio device 250 is configured to function as a pair with another audio device 304. - In the example of
Figure 4 , theapparatus 200 receives position information from theaudio device 250 and determines a position of theaudio device 250 with respect to theapparatus 200. Theapparatus 200 further determines, based on the position of theaudio device 250 with respect to theapparatus 200, which of the microphone array of theapparatus 200 and the microphone array of theaudio device 250 is the closest to thesecond user 302. In the example ofFigure 4 , theapparatus 200 is closer to thesecond user 302 than theaudio device 250. - As the
apparatus 200 is determined as the closest to the capturing target (second user 302), theapparatus 200 determines that the microphone array of theapparatus 200 is to be used for capturing audio such that the microphone array of theapparatus 200 is controlled to focus audio capturing towards thesecond user 302. The audio focus is illustrated with dashedline 405. - The apparatus further receives orientation information relating to the
audio device 250 and determines, based on the orientation information, the orientation of theaudio device 250 with respect to theapparatus 200. Theapparatus 200 further determines, based on the orientation of theaudio device 250 with respect to theapparatus 200, a direction for capturing content by theapparatus 200. In the example ofFigure 4 , the orientation of theaudio device 250 is such that theaudio device 250 points towards thesecond user 302. Theapparatus 200 determines that the orientation of theaudio device 250 corresponds to the field of view of theback camera 303 and activates theback camera 303. - The
apparatus 200 further provides auser interface 306 for thefirst user 301 for controlling the capturing of content. Theuser interface 306 comprises an illustration of theaudio focus 407 provided by the microphone array of theapparatus 200 for enabling thefirst user 301 to control the audio focus parameters. -
Figure 5 illustrates a further example of capturing content. In the example ofFigure 5 , theapparatus 200 is a mobile computing device configured to communicate withaudio device 250. Similarly toFigures 3 and 4 , both theapparatus 200 and theaudio device 250 comprise a microphone array for capturing audio. - Similarly to the examples of
Figures 3 and 4 , theapparatus 200 is configured to determine a position and/or orientation of theaudio device 250 using Bluetooth technology, acoustic localization and/or GPS coordinates together with Bluetooth technology. - In the example of
Figure 5, a first user 301 interviews a second user 302 such that the first user 301 is a capturing target. The first user 301 records the interview by capturing video of the interview using the apparatus 200 and the audio device 250. The audio device 250 is configured to function as a pair with another audio device 304. - In the example of
Figure 5 , theapparatus 200 receives position information from theaudio device 250 and determines a position of theaudio device 250 with respect to theapparatus 200. Theapparatus 200 further determines, based on the position of theaudio device 250 with respect to theapparatus 200, which of the microphone array of theapparatus 200 and the microphone array of the audio device is the closest to thefirst user 301. In the example ofFigure 5 , theaudio device 250 is closer to thefirst user 301 than theapparatus 200. - As the
audio device 250 is determined as the closest to the capturing target (first user 301), theapparatus 200 determines that the microphone array of theaudio device 250 is to be used for capturing audio such that the microphone array is controlled to focus audio capturing towards the capturing target (first user 301). - The apparatus further receives orientation information relating to the
audio device 250 and determines, based on the orientation information, the orientation of theaudio device 250 with respect to theapparatus 200. Theapparatus 200 further determines, based on the orientation of theaudio device 250 with respect to theapparatus 200, a direction for capturing content by theapparatus 200. In the example ofFigure 5 , the orientation of theaudio device 250 is such that theaudio device 250 points towards thefirst user 301. Theapparatus 200 determines that the orientation of theaudio device 250 corresponds to the field of view of a front camera and activates the front camera. - The
apparatus 200 further provides auser interface 306 for thefirst user 301 for controlling the capturing of content. Theuser interface 306 comprises an illustration of theaudio focus 507 provided by the microphone array of theaudio device 250 for enabling thefirst user 301 to control the audio focus parameters. -
Figure 6 illustrates a yet further example of capturing content. In the example ofFigure 6 , theapparatus 200 is a mobile computing device configured to communicate withaudio device 250. In the example ofFigure 6 , both theapparatus 200 and theaudio device 250 comprise a microphone array for capturing audio. - Similarly to the examples of
Figures 3 to 5 , theapparatus 200 is configured to determine a position and/or orientation of theaudio device 250 using Bluetooth technology, acoustic localization and/or GPS coordinates together with Bluetooth technology. - In the example of
Figure 6, a first user 301 interviews a second user 302 such that the first user 301 is a capturing target. The first user 301 records the interview by capturing video of the interview using the apparatus 200 and the audio device 250. The audio device 250 is configured to function as a pair with another audio device 304. - In the example of
Figure 6 , theapparatus 200 receives position information from theaudio device 250 and determines a position of theaudio device 250 with respect to theapparatus 200. Theapparatus 200 further determines, based on the position of theaudio device 250 with respect to theapparatus 200, which of the microphone array of theapparatus 200 and the microphone array of the audio device is the closest to the capturing target (first user 301). In the example ofFigure 6 , theapparatus 200 is closer to the capturing target (first user 301) than theaudio device 250. - As the
apparatus 200 is determined as the closest to the capturing target (first user 301), theapparatus 200 determines that the microphone array of theapparatus 200 is to be used for capturing audio such that the microphone array is controlled to focus audio capturing towards the capturing target (first user 301). - The apparatus further receives orientation information relating to the
audio device 250 and determines, based on the orientation information, the orientation of theaudio device 250 with respect to theapparatus 200. Theapparatus 200 further determines, based on the orientation of theaudio device 250 with respect to theapparatus 200, a direction for capturing content by theapparatus 200. In the example ofFigure 6 , the orientation of theaudio device 250 is such that theaudio device 250 points towards thefirst user 301. Theapparatus 200 determines that the orientation of theaudio device 250 corresponds to the field of view of a front camera and activates the front camera. - The
apparatus 200 further provides auser interface 306 for thefirst user 301 for controlling the capturing of content. Theuser interface 306 comprises an illustration of theaudio focus 607 provided by the microphone array of theapparatus 200 for enabling thefirst user 301 to control the audio focus parameters. -
Figure 7 illustrates anexample method 700 incorporating aspects of the previously disclosed embodiments. More specifically theexample method 700 illustrates controlling at least one functionality of theapparatus 200 for capturing content in a determined direction. The method may be performed by theapparatus 200 such as a mobile computing device. - The
apparatus 200 is configured to determine a position and/or orientation of theaudio device 250 using Bluetooth technology, acoustic localization and/or GPS coordinates together with Bluetooth technology. - The method starts with receiving 705 orientation information relating to an orientation of an
audio device 250 operatively connected to theapparatus 200. The orientation information may comprise measurement data indicating the orientation of theaudio device 250 or a signal based on which the orientation of theaudio device 250 may be determined. - The method continues with determining 710, based on the orientation information, an orientation of the
audio device 250 with respect to theapparatus 200. Determining the orientation of theaudio device 250 with respect to theapparatus 200 may comprise comparing the orientation of theaudio device 250 with an orientation of theapparatus 200. The orientation of theaudio device 250 with respect to theapparatus 200 may comprise determining a direction of theaudio device 250. - The method further continues with determining 715, based on the orientation of the
audio device 250 with respect to theapparatus 200, a direction for capturing content by theapparatus 200. A direction for capturing content by theapparatus 200 comprises a direction corresponding to the orientation of theaudio device 250. - The method further continues with controlling 720 at least one functionality of the
apparatus 200 for capturing content in the determined direction. Capturing content may comprise, for example, capturing video content comprising audio and visual information.
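- Purely as an illustration of how the steps of the example method 700 could fit together, a high-level sketch is given below; the rotation-matrix representation of the orientation information, the assumed pointing axis and the callable hooks standing in for camera and microphone control are all assumptions of this sketch rather than a definitive implementation.
```python
import numpy as np

def capture_in_pointing_direction(device_orientation: np.ndarray,
                                  apparatus_orientation: np.ndarray,
                                  select_camera, focus_microphone_array):
    """One pass through steps 705-720 of the example method 700.

    Both orientations are assumed to be 3x3 rotation matrices in a shared frame;
    select_camera and focus_microphone_array stand for platform hooks.
    """
    # 710: orientation of the audio device with respect to the apparatus.
    relative = apparatus_orientation.T @ device_orientation
    # 715: capture direction corresponding to the pointing direction, assuming
    # the "pointy end" of the audio device lies along its local x axis.
    capture_direction = relative @ np.array([1.0, 0.0, 0.0])
    # 720: control at least one capture functionality in the determined direction.
    camera = select_camera(capture_direction)
    focus_microphone_array(capture_direction)
    return camera, capture_direction

# Minimal usage with stub hooks (705: the orientations are the received inputs).
camera, direction = capture_in_pointing_direction(
    np.eye(3), np.eye(3),
    select_camera=lambda d: "back",
    focus_microphone_array=lambda d: None)
```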
- Figure 8 illustrates another example method 800 incorporating aspects of the previously disclosed embodiments. More specifically the example method 800 illustrates selecting a microphone array for capturing audio and controlling at least one functionality of the apparatus 200 for capturing content in a determined direction. The method may be performed by the apparatus 200 such as a mobile computing device. - The
apparatus 200 is configured to determine a position and/or orientation of theaudio device 250 using Bluetooth technology, acoustic localization and/or GPS coordinates together with Bluetooth technology. - The method starts with receiving 805 position information relating to the
audio device 250 and determining 810 a position of theaudio device 250 with respect to theapparatus 200. - The method continues with determining 815 a microphone array closest to a capturing target. The microphone array closest to the capturing target is selected as a microphone array for capturing audio.
- The method further continues with receiving 820 orientation information relating to an orientation of an
audio device 250 operatively connected to theapparatus 200. The orientation information may comprise measurement data indicating the orientation of theaudio device 250 or a signal based on which the orientation of theaudio device 250 may be determined. - The method further continues with determining 825, based on the orientation information, an orientation of the
audio device 250 with respect to theapparatus 200. Determining the orientation of theaudio device 250 with respect to theapparatus 200 may comprise comparing the orientation of theaudio device 250 with an orientation of theapparatus 200. The orientation of theaudio device 250 with respect to theapparatus 200 may comprise determining a direction of theaudio device 250. - The method further continues with determining 830, based on the orientation of the
audio device 250 with respect to theapparatus 200, a direction of capturing content by theapparatus 200. A direction for capturing content by theapparatus 200 comprises a direction corresponding to the orientation of theaudio device 250. - The method further continues with controlling 835 at least one functionality of the
apparatus 200 for capturing content in the determined direction. Capturing content may comprise, for example, capturing video content comprising audio and visual information. - Without limiting the scope of the claims, an advantage of controlling at least one functionality of an apparatus based on an orientation of an audio device is that a direction of a capturing target may be indicated using the audio device. Another advantage is that audio may be captured using different microphone arrays in a controlled manner. A further advantage is that visual information may be captured using different cameras in a controlled manner.
- Without in any way limiting the scope, interpretation, or application of the claims appearing below, a technical effect of one or more of the example embodiments disclosed herein is that a high quality audio/video capture may be provided using a distributed content capturing. Another technical effect may be a dynamic control of content capturing.
- As used in this application, the term "circuitry" may refer to one or more or all of the following: (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry) and (b) combinations of hardware circuits and software, such as (as applicable): (i) a combination of analog and/or digital hardware circuit(s) with software/firmware and (ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions) and (c) hardware circuit(s) and or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g., firmware) for operation, but the software may not be present when it is not needed for operation.
- This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in server, a cellular network device, or other computing or network device.
- Embodiments of the present invention may be implemented in software, hardware, application logic or a combination of software, hardware and application logic. The software, application logic and/or hardware may reside on the apparatus, a separate device or a plurality of devices. If desired, part of the software, application logic and/or hardware may reside on the apparatus, part of the software, application logic and/or hardware may reside on a separate device, and part of the software, application logic and/or hardware may reside on a plurality of devices. In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a 'computer-readable medium' may be any media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer, with one example of a computer described and depicted in
FIGURE 2 . A computer-readable medium may comprise a computer-readable storage medium that may be any media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer. - If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined.
- Although various aspects of the invention are set out in the independent claims, other aspects of the invention comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.
- It will be obvious to a person skilled in the art that, as the technology advances, the inventive concept can be implemented in various ways. The invention and its embodiments are not limited to the examples described above but may vary within the scope of the claims.
Claims (15)
- An apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to with the at least one processor, cause the apparatus at least to:
receive orientation information relating to an orientation of an audio device operatively connected to the apparatus;
determine, based on the orientation information, an orientation of the audio device with respect to the apparatus;
determine, based on the orientation of the audio device with respect to the apparatus, a direction for capturing content by the apparatus; and
control at least one functionality of the apparatus for capturing content in the determined direction.
- The apparatus according to claim 1, wherein determining the orientation of the audio device with respect to the apparatus comprises determining a pointing direction of the audio device.
- The apparatus according to claim 1 or 2, wherein controlling the at least one functionality of the apparatus comprises activating a camera.
- The apparatus according to claim 3, wherein activating the camera comprises activating a camera comprising a field of view in a direction corresponding to the direction for capturing content.
- The apparatus according to any preceding claim, wherein controlling the at least one functionality of the apparatus comprises controlling at least one microphone array.
- The apparatus according to claim 5, wherein controlling the at least one microphone array comprises controlling the at least one microphone array to focus audio in the direction for capturing content.
- The apparatus according to claim 5 or 6, wherein the at least one microphone array comprises a microphone array of the audio device or a microphone array of the apparatus.
- The apparatus according to any preceding claim, wherein the at least one memory and the computer program code are configured to with the at least one processor, cause the apparatus to receive position information relating to a position of the audio device.
- The apparatus according to claim 8, wherein the at least one memory and the computer program code are configured to with the at least one processor, cause the apparatus to determine, based on the position information, a position of the audio device with respect to the apparatus.
- The apparatus according to claim 9, wherein the at least one memory and the computer program code are configured to with the at least one processor, cause the apparatus to determine the microphone array closest to the capturing target based on the position of the audio device with respect to the apparatus, and wherein controlling the at least one functionality of the apparatus comprises controlling the microphone array closest to the capturing target.
- The apparatus according to any preceding claim, wherein the at least one memory and the computer program code are configured to with the at least one processor, cause the apparatus to provide a user interface on the apparatus for controlling the capturing of content.
- The apparatus according to any preceding claim, wherein capturing content comprises capturing video content comprising audio and visual information.
- The apparatus according to any preceding claim, wherein the apparatus comprises a mobile computing device.
- A method comprising:
receiving orientation information relating to an orientation of an audio device operatively connected to an apparatus;
determining, based on the orientation information, an orientation of the audio device with respect to the apparatus;
determining, based on the orientation of the audio device with respect to the apparatus, a direction of capturing content by the apparatus; and
controlling at least one functionality of the apparatus for capturing content in the determined direction.
- A computer readable medium comprising instructions for causing an apparatus to perform at least the following:
receiving orientation information relating to an orientation of an audio device operatively connected to the apparatus;
determining, based on the orientation information, an orientation of the audio device with respect to the apparatus;
determining, based on the orientation of the audio device with respect to the apparatus, a direction of capturing content by the apparatus; and
controlling at least one functionality of the apparatus for capturing content in the determined direction.
Applications Claiming Priority (1)
Application Number | Priority Date
---|---
FI20205538 | 2020-05-27
Publications (1)
Publication Number | Publication Date
---|---
EP3917160A1 (en) | 2021-12-01
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP21173836.4A Pending EP3917160A1 (en) | 2020-05-27 | 2021-05-14 | Capturing content |
Country Status (1)
Country | Link |
---|---|
EP (1) | EP3917160A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080199025A1 (en) * | 2007-02-21 | 2008-08-21 | Kabushiki Kaisha Toshiba | Sound receiving apparatus and method |
US20100128892A1 (en) * | 2008-11-25 | 2010-05-27 | Apple Inc. | Stabilizing Directional Audio Input from a Moving Microphone Array |
WO2012083989A1 (en) * | 2010-12-22 | 2012-06-28 | Sony Ericsson Mobile Communications Ab | Method of controlling audio recording and electronic device |
US9432768B1 (en) * | 2014-03-28 | 2016-08-30 | Amazon Technologies, Inc. | Beam forming for a wearable computer |
EP3252775A1 (en) * | 2015-08-26 | 2017-12-06 | Huawei Technologies Co., Ltd. | Directivity recording method, apparatus and recording device |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
2021-10-27 | B565 | Issuance of search results under rule 164(2) epc | Effective date: 20211027
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE
2022-06-01 | 17P | Request for examination filed | Effective date: 20220601
| RBV | Designated contracting states (corrected) | Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS
2023-01-04 | 17Q | First examination report despatched | Effective date: 20230104
Effective date: 20230104 |