EP3917160A1 - Capturing content - Google Patents

Capturing content

Info

Publication number
EP3917160A1
Authority
EP
European Patent Office
Prior art keywords
audio device
audio
orientation
capturing
respect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP21173836.4A
Other languages
English (en)
French (fr)
Inventor
Arto Lehtiniemi
Mikko Heikkinen
Antti Eronen
Miikka Vilermo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Technologies Oy
Original Assignee
Nokia Technologies Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Technologies Oy filed Critical Nokia Technologies Oy
Publication of EP3917160A1
Legal status: Pending (current)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00: Circuits for transducers, loudspeakers or microphones
    • H04R3/005: Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • H04R1/00: Details of transducers, loudspeakers or microphones
    • H04R1/20: Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32: Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/40: Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
    • H04R1/406: Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers, the transducers being microphones
    • H04R2499/00: Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10: General applications
    • H04R2499/11: Transducers incorporated or for use in hand-held devices, e.g. mobile phones, PDA's, camera's

Definitions

  • the present application relates generally to capturing content. More specifically, the present application relates to controlling at least one functionality of an apparatus for capturing content.
  • the amount of multimedia content increases continuously. Users create and consume multimedia content, and it plays a significant role in modern society.
  • an apparatus comprising means for performing: receiving orientation information relating to an orientation of an audio device operatively connected to an apparatus, determining, based on the orientation information, an orientation of the audio device with respect to the apparatus, determining, based on the orientation of the audio device with respect to the apparatus, a direction of capturing content by the apparatus, and controlling at least one functionality of the apparatus for capturing content in the determined direction.
  • a method comprising receiving orientation information relating to an orientation of an audio device operatively connected to an apparatus, determining, based on the orientation information, an orientation of the audio device with respect to the apparatus, determining, based on the orientation of the audio device with respect to the apparatus, a direction of capturing content by the apparatus, and controlling at least one functionality of the apparatus for capturing content in the determined direction.
  • a computer program comprising instructions for causing an apparatus to perform at least the following: receiving orientation information relating to an orientation of an audio device operatively connected to an apparatus, determining, based on the orientation information, an orientation of the audio device with respect to the apparatus, determining, based on the orientation of the audio device with respect to the apparatus, a direction of capturing content by the apparatus, and controlling at least one functionality of the apparatus for capturing content in the determined direction.
  • an apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to: receive orientation information relating to an orientation of an audio device operatively connected to the apparatus, determine, based on the orientation information, an orientation of the audio device with respect to the apparatus, determine, based on the orientation of the audio device with respect to the apparatus, a direction for capturing content by the apparatus, and control at least one functionality of the apparatus for capturing content in the determined direction.
  • a non-transitory computer readable medium comprising program instructions for causing an apparatus to perform at least the following: receiving orientation information relating to an orientation of an audio device operatively connected to an apparatus, determining, based on the orientation information, an orientation of the audio device with respect to the apparatus, determining, based on the orientation of the audio device with respect to the apparatus, a direction of capturing content by the apparatus, and controlling at least one functionality of the apparatus for capturing content in the determined direction.
  • a computer readable medium comprising program instructions for causing an apparatus to perform at least the following: receiving orientation information relating to an orientation of an audio device operatively connected to an apparatus, determining, based on the orientation information, an orientation of the audio device with respect to the apparatus, determining, based on the orientation of the audio device with respect to the apparatus, a direction of capturing content by the apparatus, and controlling at least one functionality of the apparatus for capturing content in the determined direction.
  • Example embodiments relate to an apparatus configured to receive orientation information relating to an orientation of an audio device operatively connected to the apparatus, determine, based on the orientation information, an orientation of the audio device with respect to the apparatus, determine, based on the orientation of the audio device with respect to the apparatus, a direction for capturing content by the apparatus, and control at least one functionality of the apparatus for capturing content in the determined direction.
  • Some example embodiments relate to controlling content capture options with an audio device such as an ear bud. Some example embodiments relate to capturing spatial audio using an audio device such as an ear bud.
  • Spatial audio may comprise a full sphere surround-sound to mimic the way people perceive audio in real life.
  • Spatial audio may comprise audio that appears from a user's position to be assigned to a certain direction and/or distance. Therefore, the perceived audio may change with the movement of the user or with the user turning.
  • Spatial audio may comprise audio created by sound sources, ambient audio or a combination thereof.
  • Ambient audio may comprise audio that might not be identifiable in terms of a sound source such as traffic humming, wind or waves, for example.
  • the full sphere surround-sound may comprise a spatial audio field and the position of the user or the position of the capturing device may be considered as a reference point in the spatial audio field. According to an example embodiment, a reference point comprises the centre of the audio field.
  • Spatial audio may be captured with, for example, a capturing device comprising a plurality of microphones configured to capture audio signals around the capturing device.
  • the capturing device may also be configured to capture different types of information such as one or more parameters relating to the captured audio signals and/or visual information.
  • the captured parameters may be stored with the captured audio or in a separate file.
  • a capturing device may be, for example, a camera, a video recorder or a smartphone.
  • Spatial audio may comprise one or more parameters such as an audio focus parameter and/or an audio zoom parameter.
  • An audio parameter may comprise a parameter value with respect to a reference point such as the position of the user or the position of the capturing device. Modifying a spatial audio parameter value may cause a change in spatial audio perceived by a listener.
  • An audio focus feature allows a user to focus on audio in a desired direction with respect to other directions when capturing content and/or playing back content. Therefore, an audio focus feature also allows a user to at least partially eliminate background noises.
  • a direction of sound may be defined with respect to a reference point.
  • a direction of sound may comprise an angle with respect to a reference point or a discrete direction such as front, back, left, right, up and/or down with respect to a reference point, or a combination thereof.
  • the reference point may correspond to, for example, a value of 0 degrees or no audio focus direction in which case, at the reference point, the audio comprises surround sound with no audio focus.
  • An audio focus parameter may also comprise one or more further levels of detail such as horizontal focus direction and/or vertical focus direction.
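  • As an illustration of the discrete directions mentioned above, the following sketch maps a horizontal focus angle to one of the coarse directions front, left, back and right; the 90-degree sectors and the function name are assumptions made for this example, not part of the described embodiments.

```python
def discrete_direction(angle_deg):
    """Map a horizontal audio focus angle (0 deg = front, counter-clockwise) to a coarse direction."""
    sectors = ["front", "left", "back", "right"]      # illustrative 90-degree sectors
    index = int(((angle_deg + 45.0) % 360.0) // 90.0)
    return sectors[index]

print(discrete_direction(0))     # front
print(discrete_direction(100))   # left
```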
  • An audio zoom feature allows a user to zoom in on a sound. Zooming in on a sound comprises adjusting an amount of audio gain associated with a particular direction. Therefore, an audio zoom parameter corresponds to sensitivity to a direction of sound. Audio zoom may be performed using audio beamforming with which a user may be able to control, for example, the size, shape and/or direction of the audio beam. Performing audio zooming may comprise controlling audio signals coming from a particular direction while attenuating audio signals coming from other directions. For example, an audio zoom feature may allow controlling audio gain. Audio gain may comprise an amount of gain set to audio input signals coming from a certain direction. An audio zoom parameter value may be defined with respect to a reference point.
  • an audio zoom parameter may be a percentage value and the reference point may correspond to, for example, a value of 0 % in which case, at the reference point, the audio comprises surround sound with no audio zooming.
  • an audio zoom feature may allow delaying different microphone signals differently and then summing the signals up, thereby enabling spatial filtering of audio.
  • Audio zooming may be associated with zooming visual information. For example, if a user records a video and zooms in on an object, the audio may also be zoomed in on the object such that, for example, sound generated by the object is emphasized and other sounds are attenuated. In other words, spatial audio parameters may be controlled by controlling the video zoom.
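  • The relationship between an audio zoom percentage, the surround capture and a focused (beamformed) capture could look roughly like the following sketch; the linear blend, the coupling to a video zoom factor and all function names are assumptions for illustration only.

```python
import numpy as np

def audio_zoom_mix(surround, focused, zoom_percent):
    """Blend unfocused surround audio with a focused (beamformed) signal.

    zoom_percent = 0   -> surround sound only, no audio zooming (the reference point)
    zoom_percent = 100 -> fully focused audio in the chosen direction
    """
    w = np.clip(zoom_percent, 0.0, 100.0) / 100.0
    return (1.0 - w) * surround + w * focused

def zoom_percent_from_video(video_zoom, max_video_zoom=8.0):
    """Illustrative coupling of a video zoom factor (1x..max) to an audio zoom percentage."""
    return 100.0 * (video_zoom - 1.0) / (max_video_zoom - 1.0)

# Example: a 4x video zoom moves the audio part of the way towards full focus.
surround = np.random.randn(48000)    # placeholder surround capture
focused = np.random.randn(48000)     # placeholder beamformed capture
mixed = audio_zoom_mix(surround, focused, zoom_percent_from_video(4.0))
```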
  • Mobile phones are more and more used in professional audio/video capture due to increased capabilities. In some situations, it would be useful to have additional microphones such as a close-up microphone that is close to a sound source. However, using a plurality of microphones might be challenging in terms of organizing the capture such that it would be easy to choose a microphone to be used or in which ratio microphones are used.
  • FIG. 1 is a block diagram depicting an apparatus 100 operating in accordance with an example embodiment of the invention.
  • the apparatus 100 may be, for example, an electronic device such as a chip or a chipset.
  • the apparatus 100 comprises control circuitry, such as at least one processor 110 and at least one memory 160, including one or more algorithms such as computer program code 120, wherein the at least one memory 160 and the computer program code 120 are configured, with the at least one processor 110, to cause the apparatus 100 to carry out any of the example functionalities described below.
  • the processor 110 is a control unit operatively connected to read from and write to the memory 160.
  • the processor 110 may also be configured to receive control signals via an input interface and/or the processor 110 may be configured to output control signals via an output interface.
  • the processor 110 may be configured to convert the received control signals into appropriate commands for controlling functionalities of the apparatus 100.
  • the at least one memory 160 stores computer program code 120 which, when loaded into the processor 110, controls the operation of the apparatus 100 as explained below.
  • the apparatus 100 may comprise more than one memory 160 or different kinds of storage devices.
  • Computer program code 120 for enabling implementations of example embodiments of the invention or a part of such computer program code may be loaded onto the apparatus 100 by the manufacturer of the apparatus 100, by a user of the apparatus 100, or by the apparatus 100 itself based on a download program, or the code can be pushed to the apparatus 100 by an external device.
  • the computer program code 120 may arrive at the apparatus 100 via an electromagnetic carrier signal or be copied from a physical entity such as a computer program product, a memory device or a record medium such as a Compact Disc (CD), a Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disk (DVD) or a Blu-ray disk.
  • FIG. 2 is a block diagram depicting an apparatus 200 in accordance with an example embodiment of the invention.
  • the apparatus 200 may be an electronic device such as a hand-portable device, a mobile phone or a Personal Digital Assistant (PDA), a Personal Computer (PC), a laptop, a desktop, a tablet computer, a wireless terminal, a communication terminal, a game console, a music player, an electronic book reader (e-book reader), a positioning device, a digital camera, a household appliance, a CD-, DVD or Blu-ray player, or a media player.
  • the apparatus 200 comprises a mobile computing device.
  • the apparatus 200 comprises a part of a mobile communication device.
  • the mobile computing device may comprise, for example, a mobile phone, a tablet computer, or the like. In the examples below it is assumed that the apparatus 200 is a mobile computing device or a part of it.
  • the apparatus 200 is illustrated as comprising the apparatus 100, a microphone array 210, one or more loudspeakers 230 and a user interface 220 for interacting with the apparatus 200 (e.g. a mobile computing device).
  • the apparatus 200 may also comprise a display configured to act as a user interface 220.
  • the display may be a touch screen display.
  • the display and/or the user interface 220 may be external to the apparatus 200, but in communication with it.
  • the microphone array 210 comprises a plurality of microphones for capturing audio.
  • the plurality of microphones may be configured to work together to capture audio, for example, at different sides of a device.
  • a microphone array comprising two microphones may be configured to capture audio from a right side and a left side of a device.
  • the apparatus 200 is configured to apply one or more audio focus operations to emphasize audio signals arriving from a particular direction and/or attenuate sounds coming from other directions and/or one or more audio zoom operations to switch between focused and non-focused audio, for example, in conjunction with a camera.
  • the apparatus 200 is further configured to perform one or more spatial filtering methods for achieving audio focus and/or audio zoom.
  • the one or more spatial filtering methods may comprise, for example, beamforming and/or parametric spatial audio.
  • Beamforming may comprise forming an audio beam by selecting a particular microphone arrangement for capturing spatial audio information from a first direction and/or attenuating sounds coming from a second direction and processing the received audio information.
  • a microphone array may be used to form a spatial filter which is configured to extract a signal from a specific direction and/or reduce contamination of signals from other directions.
  • Parametric spatial audio processing comprises analysing a spatial audio field into a directional component with a direction-of-arrival parameter and an ambient component without a direction-of-arrival parameter, and changing the direction-of-arrival parameter at which directional signal components are enhanced.
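  • A much simplified sketch of the parametric approach is shown below: each time-frequency tile of the directional component carries a direction-of-arrival estimate, tiles whose estimate lies near the wanted focus direction are boosted, and the ambient component (which has no direction-of-arrival parameter) is passed through unchanged. The gain rule, the sector width and the function name are assumptions, not the specific processing of the embodiments.

```python
import numpy as np

def parametric_focus(directional, ambient, doa_deg, focus_deg, width_deg=30.0, boost=2.0):
    """Boost directional STFT tiles whose estimated DoA lies near the focus direction.

    directional, ambient : complex STFT tiles of the directional and ambient parts
    doa_deg              : per-tile direction-of-arrival estimates, same shape as the tiles
    """
    # Smallest angular distance between each per-tile DoA and the focus direction.
    diff = np.abs((doa_deg - focus_deg + 180.0) % 360.0 - 180.0)
    gain = np.where(diff <= width_deg, boost, 1.0)   # emphasize tiles near the focus
    return gain * directional + ambient
```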
  • the apparatus 200 is configured to control a direction of audio focus.
  • Controlling a direction of audio focus may comprise, for example, changing a direction of an audio beam with respect to a reference point in a spatial audio field.
  • changing a direction of an audio beam may comprise changing the direction of the audio beam from a first direction to a second direction.
  • when the audio beam is directed to a first direction, audio signals from the first direction are emphasized, and when the audio beam is directed to a second direction, audio signals from the second direction are emphasized.
  • the apparatus 200 may be configured to control a direction of an audio beam by switching from a first microphone arrangement to a second microphone arrangement, by processing the captured audio information using an algorithm with different parameters and/or using a different algorithm for processing the captured audio information.
  • the beam direction steering can be accomplished by adjusting the values of steering delays so that signals arriving from a particular direction are aligned before they are summed.
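  • The following sketch shows one way such steering delays could be computed and applied for a simple linear microphone array: each channel is delayed so that a plane wave arriving from the chosen direction is aligned across the channels before the sum. The far-field assumption, the integer-sample delays and the function name are simplifications made for this example.

```python
import numpy as np

def delay_and_sum(signals, mic_positions_m, angle_deg, fs, c=343.0):
    """Steer a beam towards angle_deg by delaying and summing microphone signals.

    signals         : array of shape (n_mics, n_samples)
    mic_positions_m : microphone positions along the array axis, in metres
    angle_deg       : steering direction relative to the array axis
    """
    direction = np.cos(np.deg2rad(angle_deg))
    # Steering delays that align wavefronts arriving from the chosen direction.
    delays_s = np.asarray(mic_positions_m) * direction / c
    delays_smp = np.round((delays_s - delays_s.min()) * fs).astype(int)

    n_mics, n_samples = signals.shape
    out = np.zeros(n_samples)
    for channel, d in zip(signals, delays_smp):
        out[: n_samples - d] += channel[d:]      # integer-sample shift per channel
    return out / n_mics
```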
  • the user interface 220 may also comprise a manually operable control such as a button, a key, a touch pad, a joystick, a stylus, a pen, a roller, a rocker, a keypad, a keyboard or any suitable input mechanism for inputting and/or accessing information.
  • Further examples include a camera, a speech recognition system, an eye movement recognition system, and acceleration-, tilt- and/or movement-based input systems. Therefore, the apparatus 200 may also comprise different kinds of sensors such as one or more gyro sensors, accelerometers, magnetometers, position sensors and/or tilt sensors.
  • the apparatus 200 is configured to establish radio communication with another device using, for example, a Bluetooth, WiFi, radio frequency identification (RFID), or a near field communication (NFC) connection.
  • the apparatus 200 may be configured to establish radio communication with an audio device 250 such as a wireless headphone, augmented/virtual reality device or the like.
  • the apparatus 200 is operatively connected to an audio device 250.
  • the apparatus 200 is wirelessly connected to the audio device 250.
  • the apparatus 200 may be connected to the audio device 250 over a Bluetooth connection, or the like.
  • the audio device 250 may comprise at least one microphone array for capturing audio signals and at least one loudspeaker for playing back received audio signals.
  • the audio device 250 may further be configured to filter out background noise and/or detect in-ear placement.
  • the audio device 250 may comprise a headphone such as a wireless headphone.
  • a headphone may comprise a single headphone such as an ear bud or a pair of headphones such as a pair of ear buds configured to function as a pair.
  • the audio device 250 may comprise a first wireless headphone and a second wireless headphone such that the first wireless headphone and the second wireless headphone are configured to function as a pair. Functioning as a pair may comprise, for example, providing stereo output for a user using the first wireless headphone and the second wireless headphone.
  • the first wireless headphone and the second wireless headphone may also be configured such that the first wireless headphone and the second wireless headphone may be used separately and/or independently of each other. For example, same or different audio information may be provided to the first wireless headphone and the second wireless headphone, or audio information may be directed to one wireless headphone and the second wireless headphone may act as a microphone.
  • the apparatus 200 is configured to communicate with the audio device 250.
  • Communicating with the audio device 250 may comprise providing to and/or receiving information from the audio device 250.
  • communicating with the audio device 250 comprises providing audio signals and/or receiving audio signals.
  • the apparatus 200 may be configured to provide audio signals to the audio device 250 and receive audio signals from the audio device 250.
  • the apparatus 200 is configured to determine an orientation and/or a position of the audio device 250 with respect to the apparatus 200.
  • the apparatus 200 is configured to determine the orientation and/or position of the audio device 250 using Bluetooth technology such as Bluetooth Low Energy (BLE).
  • the audio device 250 comprises at least one Bluetooth antenna for transmitting data and the apparatus 200 comprises an array of phased antennas for receiving and/or transmitting data.
  • the array of phased antennas may be configured to measure the angle-of-departure (AoD) or angle-of-arrival (AoA), which may comprise measuring both azimuth and elevation angles.
  • the apparatus 200 may be configured to execute antenna switching when receiving an AoA packet from the audio device 250.
  • the apparatus 200 may then utilize the amplitude and phase samples together with its own antenna array information to estimate the AoA of a packet received from the audio device 250.
  • Performing an AoD measurement may be based on the apparatus 200 broadcasting the AoD signals together with the location and properties of a Bluetooth beacon, which enables the audio device 250 to calculate its own position.
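  • For a rough intuition of the angle-of-arrival estimation mentioned above, a narrowband far-field sketch is given below: the angle follows from the phase difference measured between two antennas of the array. Real Bluetooth direction finding works on IQ samples of a constant tone extension with antenna switching, so the formula, the assumed 2.4 GHz wavelength and the function name are illustrative only.

```python
import math

def aoa_from_phase(delta_phase_rad, antenna_spacing_m, wavelength_m=0.125):
    """Angle of arrival from the phase difference between two antennas.

    Narrowband far-field model: delta_phase = 2*pi*spacing*sin(theta)/wavelength.
    0.125 m is roughly the wavelength of the 2.4 GHz band used by Bluetooth.
    """
    s = delta_phase_rad * wavelength_m / (2.0 * math.pi * antenna_spacing_m)
    s = max(-1.0, min(1.0, s))              # clamp against measurement noise
    return math.degrees(math.asin(s))

# Example: quarter-wavelength antenna spacing and a 45 degree phase shift -> about 30 degrees.
print(aoa_from_phase(math.radians(45.0), antenna_spacing_m=0.03125))
```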
  • the apparatus 200 is configured to determine the orientation and/or position of the audio device 250 using acoustic localization.
  • Acoustic localization may comprise receiving microphone signals comprising recorded signals from the surroundings and determining a time difference of arrival (TDoA) between microphones.
  • TDoA may be determined based on inter-microphone delays that may be determined through correlations and the geometry of a microphone array.
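  • A minimal sketch of such a correlation-based TDoA estimate between two microphone signals is given below; plain cross-correlation is used here for brevity, whereas practical systems often use more robust variants, and the function name is an assumption.

```python
import numpy as np

def tdoa_seconds(sig_a, sig_b, fs):
    """Estimate how much sig_b lags sig_a, in seconds, from the cross-correlation peak."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)    # lag in samples, positive if sig_b lags sig_a
    return lag / fs

# Example: an artificial 3-sample delay at 48 kHz is recovered as 62.5 microseconds.
fs = 48000
x = np.random.randn(1024)
y = np.concatenate([np.zeros(3), x[:-3]])       # delayed copy of x
print(tdoa_seconds(x, y, fs))
```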
  • the apparatus 200 is configured to determine a position of the audio device 250 using Global Positioning System (GPS) coordinates and Bluetooth technology to determine a relative location of the audio device 250.
  • the apparatus 200 is configured to receive orientation information relating to an orientation of the audio device 250 operatively connected to the apparatus 200.
  • the orientation information may comprise measurement data indicating the orientation of the audio device 250 or a signal based on which the orientation of the audio device 250 may be determined.
  • the orientation information may comprise information received from one or more orientation sensors comprised by the audio device 250.
  • the orientation information may comprise raw data or pre-processed data.
  • the orientation information may comprise a Bluetooth signal.
  • the apparatus 200 is configured to determine, based on the orientation information, an orientation of the audio device 250 with respect to the apparatus 200.
  • Determining the orientation of the audio device 250 with respect to the apparatus 200 may comprise comparing the orientation of the audio device 250 with an orientation of the apparatus 200.
  • the apparatus 200 may be configured to compare measurement data indicating an orientation of the audio device 250 with measurement data indicating an orientation of the apparatus 200.
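  • Reduced to a single heading (yaw) angle for clarity, such a comparison could look like the sketch below; real devices typically report full 3-D orientation, and the sensor fusion producing the headings is outside the sketch. The function name and the angle convention are assumptions.

```python
def relative_yaw_deg(device_yaw_deg, apparatus_yaw_deg):
    """Orientation of the audio device with respect to the apparatus, as a yaw offset.

    Both inputs are headings in degrees in the same world frame; the result is
    wrapped to the range (-180, 180].
    """
    diff = (device_yaw_deg - apparatus_yaw_deg) % 360.0
    return diff - 360.0 if diff > 180.0 else diff

# Example: ear bud heading 350 degrees, apparatus heading 20 degrees -> 30 degrees to the left.
print(relative_yaw_deg(350.0, 20.0))   # -30.0
```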
  • the apparatus 200 may be configured to determine the orientation of the audio device 250 with respect to the apparatus 200 based on characteristics of a wireless connection between the apparatus 200 and the audio device 250. For example, assuming the apparatus 200 and the audio device are wirelessly connected using a Bluetooth connection, the apparatus 200 may be configured to determine the orientation of the audio device 250 with respect to the apparatus 200 based on a Bluetooth signal using a Bluetooth Angle of Arrival (AoA) or a Bluetooth Angle of Departure (AoD) algorithm.
  • the apparatus 200 is configured to determine a direction of the audio device 250 with respect to the apparatus 200.
  • the apparatus 200 may be configured to determine the direction of the audio device with respect to the apparatus 200 based on the orientation information relating to the audio device 250 and the apparatus 200 or based on characteristics of a wireless connection between the apparatus 200 and the audio device 250.
  • determining the orientation of the audio device 250 with respect to the apparatus 200 comprises determining a pointing direction of the audio device 250.
  • a pointing direction comprises a direction to which the audio device 250 points.
  • the audio device 250 may be associated with a reference point that is used for determining the pointing direction. For example, assuming the audio device 250 comprises a wireless ear bud comprising a pointy end at one end of the ear bud determined as a reference point, the pointing direction may be determined based on a direction to which the ear bud points.
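  • As a purely geometric illustration, a pointing direction can be expressed as a unit vector from a reference point on the ear bud towards its pointy end, as sketched below; the choice of reference points is an assumption for this example.

```python
import numpy as np

def pointing_direction(ref_point, tip_point):
    """Unit vector from a reference point on the ear bud towards its pointy end."""
    v = np.asarray(tip_point, dtype=float) - np.asarray(ref_point, dtype=float)
    return v / np.linalg.norm(v)

# Example: an ear bud whose tip is offset along x points in the +x direction.
print(pointing_direction((0.0, 0.0, 0.0), (0.02, 0.0, 0.0)))   # [1. 0. 0.]
```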
  • the apparatus 200 is configured to determine, based on the orientation of the audio device 250 with respect to the apparatus 200, a direction for capturing content by the apparatus 200.
  • the direction for capturing content by the apparatus 200 may comprise an absolute direction, a direction relative to the apparatus 200, an approximate direction or a direction within predefined threshold values.
  • a direction for capturing content by the apparatus 200 comprises a direction corresponding to the orientation of the audio device 250.
  • a direction corresponding to the orientation of the audio device 250 comprises a pointing direction of the audio device 250.
  • the apparatus 200 may be configured to determine a direction for capturing content by the apparatus 200 in response to receiving information on an activation of the audio device 250 or in response to receiving information on a change of orientation of the audio device.
  • An activation of the audio device 250 may comprise a change of mode of the audio device 250.
  • activation of the audio device based on a change of mode of the ear bud may comprise removing the ear bud from an in-ear position.
  • a change of orientation may comprise a change of orientation that is above a predefined threshold value.
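  • A trivial sketch of such a trigger condition is shown below; the 15-degree threshold, the removal flag and the function name are illustrative assumptions.

```python
def should_update_direction(orientation_change_deg, removed_from_ear, threshold_deg=15.0):
    """Re-determine the capture direction on activation or on a sufficiently large turn."""
    return removed_from_ear or abs(orientation_change_deg) > threshold_deg

print(should_update_direction(5.0, removed_from_ear=False))    # False: below the threshold
print(should_update_direction(5.0, removed_from_ear=True))     # True: activation event
```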
  • the apparatus 200 is configured to control at least one functionality of the apparatus 200 for capturing content in the determined direction.
  • capturing content comprises capturing content using the apparatus 200.
  • capturing content comprises capturing video content comprising audio and visual information.
  • capturing content comprises capturing audio content.
  • capturing content comprises capturing visual content.
  • Controlling a functionality relating to capturing content may comprise controlling capturing audio and/or controlling capturing visual information.
  • controlling a functionality relating to capturing content may comprise controlling one or more microphones and/or one or more cameras.
  • the apparatus 200 may be configured to control at least one functionality of the apparatus 200 by controlling a component of the apparatus 200.
  • controlling the at least one functionality of the mobile computing device comprises activating a camera.
  • the apparatus 200 may comprise a plurality of cameras.
  • the camera comprises a first camera located on a first side of the apparatus 200.
  • the camera comprises a second camera located at a second side of the apparatus 200.
  • the first camera and the second camera may be configured to record images/video at opposite sides of the apparatus 200.
  • the first camera may comprise, for example, a front camera and the second camera may comprise, for example, a back camera.
  • activating the camera comprises activating a camera comprising a field of view in a direction corresponding to the direction for capturing content.
  • a field of view of a camera may comprise a scene that is visible through the camera at a particular position and orientation when taking a picture or recording video. Objects outside the field of view when the picture is taken may not be recorded.
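  • A minimal sketch of choosing between a back camera and a front camera based on whether the determined capture direction falls within a camera's horizontal field of view is given below; the field-of-view figure, the two-camera layout and the names are assumptions for illustration.

```python
def select_camera(capture_direction_deg, fov_deg=120.0):
    """Pick the camera whose field of view covers the capture direction.

    Directions are relative to the apparatus: 0 degrees points straight out of the
    back camera, 180 degrees straight out of the front camera.
    """
    half = fov_deg / 2.0
    angle = abs((capture_direction_deg + 180.0) % 360.0 - 180.0)   # fold to [0, 180]
    if angle <= half:
        return "back_camera"
    if angle >= 180.0 - half:
        return "front_camera"
    return None   # direction outside both fields of view

print(select_camera(20.0))    # back_camera
print(select_camera(170.0))   # front_camera
```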
  • controlling the at least one functionality of the mobile computing device comprises controlling at least one microphone array.
  • Controlling a microphone array may comprise controlling the microphone array to capture audio in a particular direction. Capturing audio in a particular direction may comprise performing an audio focus operation by, for example, forming a directional beam pattern towards the particular direction. Beamforming in terms of using a particular beam pattern enables a microphone array to be more sensitive to sound coming from one or more particular directions than sound coming from other directions.
  • controlling the at least one microphone array comprises controlling the at least one microphone array to focus audio in the direction for capturing content.
  • Focusing audio in the direction for capturing content may comprise performing spatial filtering such as performing beamforming or parametric spatial audio processing.
  • the at least one microphone may comprise at least one microphone array comprised by the apparatus 200 or at least one microphone array comprised by the audio device 250.
  • the at least one microphone array comprises a microphone array of the audio device 250 or a microphone array of the apparatus 200.
  • the apparatus 200 may be configured to select a microphone array to be controlled.
  • the apparatus 200 may be configured to select a microphone array closest to a capturing target such as a sound source. Therefore, the apparatus 200 may be configured to select the microphone array to be controlled based on the respective positions of at least a first microphone array and a second microphone array.
  • the apparatus 200 is configured to receive position information relating to a position of the audio device 250.
  • Position information relating to the position of the audio device 250 may comprise, for example, measurement data indicating the position of the audio device 250, coordinates indicating the position of the audio device and/or a signal such as a Bluetooth signal based on which the position of the audio device 250 may be determined.
  • the apparatus 200 is configured to determine, based on the position information, a position of the audio device 250 with respect to the apparatus 200.
  • an advantage of determining a position of the audio device 250 with respect to the apparatus 200 is that the apparatus 200 may determine which of the audio device 250 and the apparatus 200 is closer to a capturing target, thereby enabling better audio quality.
  • the apparatus 200 is configured to determine the microphone array closest to the capturing target based on the position of the audio device 250 with respect to the apparatus 200. According to an example embodiment, the apparatus 200 is configured to control the microphone array closest to the capturing target.
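  • Given estimated positions in a shared coordinate frame, the selection of the closest microphone array can be as simple as the sketch below; how the positions themselves are obtained (Bluetooth, acoustic localization, GPS) is outside the sketch, and the names are illustrative.

```python
import math

def closest_array(target_xy, arrays):
    """Return the name of the microphone array closest to the capturing target.

    arrays: mapping of array name -> (x, y) position in a shared coordinate frame.
    """
    return min(arrays, key=lambda name: math.dist(arrays[name], target_xy))

# Example: the ear bud array is nearer to the interviewee than the phone array, so it is chosen.
print(closest_array((2.0, 0.5), {"phone_array": (0.0, 0.0), "ear_bud_array": (1.5, 0.4)}))
```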
  • the apparatus 200 may be configured to allow for a user to control audio capture using at least one microphone array. According to an example embodiment, the apparatus is configured to provide a user interface on the mobile computing device for controlling the capturing of content.
  • the apparatus 200 comprises means for performing the features of the claimed invention, wherein the means for performing comprises at least one processor 110, at least one memory 160 including computer program code 120, the at least one memory 160 and the computer program code 120 configured to, with the at least one processor 110, cause the performance of the apparatus 200.
  • the means for performing the features of the claimed invention may comprise means for receiving orientation information relating to an orientation of an audio device operatively connected to an apparatus, means for determining, based on the orientation information, an orientation of the audio device with respect to the apparatus, means for determining, based on the orientation of the audio device with respect to the apparatus, a direction of capturing content by the apparatus, and means for controlling at least one functionality of the apparatus for capturing content in the determined direction.
  • the apparatus 200 may further comprise means for receiving position information relating to a position of the audio device, means for determining, based on the position information, a position of the audio device with respect to the apparatus 200 and means for determining the microphone array closest to the capturing target based on the position of the audio device with respect to the apparatus 200.
  • the apparatus 200 may further comprise means for providing a user interface on the apparatus to control capturing content.
  • Figure 3 illustrates an example of capturing content.
  • the apparatus 200 is a mobile computing device configured to communicate with the audio device 250 such as an ear bud and both the apparatus 200 and the audio device 250 comprise at least one microphone array for capturing audio.
  • the apparatus 200 is configured to determine a position and/or orientation of the audio device 250 using Bluetooth technology, acoustic localization and/or GPS coordinates together with Bluetooth technology.
  • a first user 301 interviews a second user 302 that is a capturing target.
  • the first user 301 records the interview by capturing video of the interview using the apparatus 200 and the audio device 250.
  • the audio device 250 is configured to function as a pair with another audio device 304.
  • the apparatus 200 receives position information from the audio device 250 and determines a position of the audio device 250 with respect to the apparatus 200.
  • the apparatus 200 further determines, based on the position of the audio device 250 with respect to the apparatus 200, which of the microphone array of the apparatus 200 and the microphone array of the audio device is the closest to the capturing target (second user 302).
  • the audio device 250 is closer to the capturing target (second user 302) than the apparatus 200.
  • the apparatus 200 determines that the microphone array of the audio device 250 is to be used for capturing audio such that the microphone array of the audio device 250 is controlled to focus audio capturing towards the second user 302.
  • the audio focus is illustrated with dashed line 305.
  • the apparatus further receives orientation information relating to the audio device 250 and determines, based on the orientation information, the orientation of the audio device 250 with respect to the apparatus 200.
  • the apparatus 200 further determines, based on the orientation of the audio device 250 with respect to the apparatus 200, a direction for capturing content by the apparatus 200.
  • the orientation of the audio device 250 is such that the audio device 250 points towards the second user 302.
  • the apparatus 200 determines that the orientation of the audio device 250 corresponds to the field of view of the back camera 303 and activates the back camera 303.
  • the apparatus 200 further provides a user interface 306 for the first user 301 for controlling the capturing of content.
  • the user interface 306 comprises an illustration of the audio focus 307 provided by the microphone array of the audio device 250 for enabling the first user 301 to control the audio focus parameters.
  • Figure 4 illustrates an example of capturing content.
  • the apparatus 200 is a mobile computing device configured to communicate with audio device 250.
  • both the apparatus 200 and the audio device 250 comprise a microphone array for capturing audio.
  • the apparatus 200 is configured to determine a position and/or orientation of the audio device 250 using Bluetooth technology, acoustic localization and/or GPS coordinates together with Bluetooth technology.
  • a first user 301 interviews a second user 302 that is a capturing target.
  • the first user 301 records the interview by capturing video of the interview using the apparatus 200 and the audio device 250.
  • the audio device 250 is configured to function as a pair with another audio device 304.
  • the apparatus 200 receives position information from the audio device 250 and determines a position of the audio device 250 with respect to the apparatus 200. The apparatus 200 further determines, based on the position of the audio device 250 with respect to the apparatus 200, which of the microphone array of the apparatus 200 and the microphone array of the audio device 250 is the closest to the second user 302. In the example of Figure 4 , the apparatus 200 is closer to the second user 302 than the audio device 250.
  • the apparatus 200 determines that the microphone array of the apparatus 200 is to be used for capturing audio such that the microphone array of the apparatus 200 is controlled to focus audio capturing towards the second user 302.
  • the audio focus is illustrated with dashed line 405.
  • the apparatus further receives orientation information relating to the audio device 250 and determines, based on the orientation information, the orientation of the audio device 250 with respect to the apparatus 200.
  • the apparatus 200 further determines, based on the orientation of the audio device 250 with respect to the apparatus 200, a direction for capturing content by the apparatus 200.
  • the orientation of the audio device 250 is such that the audio device 250 points towards the second user 302.
  • the apparatus 200 determines that the orientation of the audio device 250 corresponds to the field of view of the back camera 303 and activates the back camera 303.
  • the apparatus 200 further provides a user interface 306 for the first user 301 for controlling the capturing of content.
  • the user interface 306 comprises an illustration of the audio focus 407 provided by the microphone array of the apparatus 200 for enabling the first user 301 to control the audio focus parameters.
  • Figure 5 illustrates a further example of capturing content.
  • the apparatus 200 is a mobile computing device configured to communicate with audio device 250.
  • both the apparatus 200 and the audio device 250 comprise a microphone array for capturing audio.
  • the apparatus 200 is configured to determine a position and/or orientation of the audio device 250 using Bluetooth technology, acoustic localization and/or GPS coordinates together with Bluetooth technology.
  • a first user 301 interviews a second user 302 such that the first user 301 is a capturing target.
  • the first user 301 records the interview by capturing video of the interview using the apparatus 200 and the audio device 250.
  • the audio device 250 is configured to function as a pair with another audio device 304.
  • the apparatus 200 receives position information from the audio device 250 and determines a position of the audio device 250 with respect to the apparatus 200. The apparatus 200 further determines, based on the position of the audio device 250 with respect to the apparatus 200, which of the microphone array of the apparatus 200 and the microphone array of the audio device is the closest to the first user 301. In the example of Figure 5 , the audio device 250 is closer to the first user 301 than the apparatus 200.
  • the apparatus 200 determines that the microphone array of the audio device 250 is to be used for capturing audio such that the microphone array is controlled to focus audio capturing towards the capturing target (first user 301).
  • the apparatus further receives orientation information relating to the audio device 250 and determines, based on the orientation information, the orientation of the audio device 250 with respect to the apparatus 200.
  • the apparatus 200 further determines, based on the orientation of the audio device 250 with respect to the apparatus 200, a direction for capturing content by the apparatus 200.
  • the orientation of the audio device 250 is such that the audio device 250 points towards the first user 301.
  • the apparatus 200 determines that the orientation of the audio device 250 corresponds to the field of view of a front camera and activates the front camera.
  • the apparatus 200 further provides a user interface 306 for the first user 301 for controlling the capturing of content.
  • the user interface 306 comprises an illustration of the audio focus 507 provided by the microphone array of the audio device 250 for enabling the first user 301 to control the audio focus parameters.
  • Figure 6 illustrates a yet further example of capturing content.
  • the apparatus 200 is a mobile computing device configured to communicate with audio device 250.
  • both the apparatus 200 and the audio device 250 comprise a microphone array for capturing audio.
  • the apparatus 200 is configured to determine a position and/or orientation of the audio device 250 using Bluetooth technology, acoustic localization and/or GPS coordinates together with Bluetooth technology.
  • a first user 301 interviews a second user 302 such that the first user 301 is a capturing target.
  • the first user 301 records the interview by capturing video of the interview using the apparatus 200 and the audio device 250.
  • the audio device 250 is configured to function as a pair with another audio device 304.
  • the apparatus 200 receives position information from the audio device 250 and determines a position of the audio device 250 with respect to the apparatus 200. The apparatus 200 further determines, based on the position of the audio device 250 with respect to the apparatus 200, which of the microphone array of the apparatus 200 and the microphone array of the audio device is the closest to the capturing target (first user 301). In the example of Figure 6 , the apparatus 200 is closer to the capturing target (first user 301) than the audio device 250.
  • the apparatus 200 determines that the microphone array of the apparatus 200 is to be used for capturing audio such that the microphone array is controlled to focus audio capturing towards the capturing target (first user 301).
  • the apparatus further receives orientation information relating to the audio device 250 and determines, based on the orientation information, the orientation of the audio device 250 with respect to the apparatus 200.
  • the apparatus 200 further determines, based on the orientation of the audio device 250 with respect to the apparatus 200, a direction for capturing content by the apparatus 200.
  • the orientation of the audio device 250 is such that the audio device 250 points towards the first user 301.
  • the apparatus 200 determines that the orientation of the audio device 250 corresponds to the field of view of a front camera and activates the front camera.
  • the apparatus 200 further provides a user interface 306 for the first user 301 for controlling the capturing of content.
  • the user interface 306 comprises an illustration of the audio focus 607 provided by the microphone array of the apparatus 200 for enabling the first user 301 to control the audio focus parameters.
  • Figure 7 illustrates an example method 700 incorporating aspects of the previously disclosed embodiments. More specifically the example method 700 illustrates controlling at least one functionality of the apparatus 200 for capturing content in a determined direction. The method may be performed by the apparatus 200 such as a mobile computing device.
  • the apparatus 200 is configured to determine a position and/or orientation of the audio device 250 using Bluetooth technology, acoustic localization and/or GPS coordinates together with Bluetooth technology.
  • the method starts with receiving 705 orientation information relating to an orientation of an audio device 250 operatively connected to the apparatus 200.
  • the orientation information may comprise measurement data indicating the orientation of the audio device 250 or a signal based on which the orientation of the audio device 250 may be determined.
  • the method continues with determining 710, based on the orientation information, an orientation of the audio device 250 with respect to the apparatus 200.
  • Determining the orientation of the audio device 250 with respect to the apparatus 200 may comprise comparing the orientation of the audio device 250 with an orientation of the apparatus 200.
  • determining the orientation of the audio device 250 with respect to the apparatus 200 may comprise determining a direction of the audio device 250.
  • the method further continues with determining 715, based on the orientation of the audio device 250 with respect to the apparatus 200, a direction for capturing content by the apparatus 200.
  • a direction for capturing content by the apparatus 200 comprises a direction corresponding to the orientation of the audio device 250.
  • Capturing content may comprise, for example, capturing video content comprising audio and visual information.
  • the method may then continue with controlling at least one functionality of the apparatus 200 for capturing content in the determined direction.
  • Figure 8 illustrates another example method 800 incorporating aspects of the previously disclosed embodiments. More specifically the example method 800 illustrates selecting a microphone array for capturing audio and controlling at least one functionality of the apparatus 200 for capturing content in a determined direction. The method may be performed by the apparatus 200 such as a mobile computing device.
  • the apparatus 200 is configured to determine a position and/or orientation of the audio device 250 using Bluetooth technology, acoustic localization and/or GPS coordinates together with Bluetooth technology.
  • the method starts with receiving 805 position information relating to the audio device 250 and determining 810 a position of the audio device 250 with respect to the apparatus 200.
  • the method continues with determining 815 a microphone array closest to a capturing target.
  • the microphone array closest to the capturing target is selected as a microphone array for capturing audio.
  • the method further continues with receiving 820 orientation information relating to an orientation of an audio device 250 operatively connected to the apparatus 200.
  • the orientation information may comprise measurement data indicating the orientation of the audio device 250 or a signal based on which the orientation of the audio device 250 may be determined.
  • the method further continues with determining 825, based on the orientation information, an orientation of the audio device 250 with respect to the apparatus 200.
  • Determining the orientation of the audio device 250 with respect to the apparatus 200 may comprise comparing the orientation of the audio device 250 with an orientation of the apparatus 200.
  • determining the orientation of the audio device 250 with respect to the apparatus 200 may comprise determining a direction of the audio device 250.
  • the method further continues with determining 830, based on the orientation of the audio device 250 with respect to the apparatus 200, a direction of capturing content by the apparatus 200.
  • a direction for capturing content by the apparatus 200 comprises a direction corresponding to the orientation of the audio device 250.
  • Capturing content may comprise, for example, capturing video content comprising audio and visual information.
  • the method may then continue with controlling at least one functionality of the apparatus 200, such as the selected microphone array, for capturing content in the determined direction.
  • an advantage of controlling at least one functionality of an apparatus based on an orientation of an audio device is that a direction of a capturing target may be indicated using the audio device.
  • Another advantage is that audio may be captured using different microphone arrays in a controlled manner.
  • a further advantage is that visual information may be captured using different cameras in a controlled manner.
  • a technical effect of one or more of the example embodiments disclosed herein is that high-quality audio/video capture may be provided using distributed content capturing.
  • Another technical effect may be a dynamic control of content capturing.
  • circuitry may refer to one or more or all of the following: (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry), (b) combinations of hardware circuits and software, such as (as applicable): (i) a combination of analog and/or digital hardware circuit(s) with software/firmware and (ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions, and (c) hardware circuit(s) and/or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g., firmware) for operation, but the software may not be present when it is not needed for operation.
  • circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware.
  • circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
  • Embodiments of the present invention may be implemented in software, hardware, application logic or a combination of software, hardware and application logic.
  • the software, application logic and/or hardware may reside on the apparatus, a separate device or a plurality of devices. If desired, part of the software, application logic and/or hardware may reside on the apparatus, part of the software, application logic and/or hardware may reside on a separate device, and part of the software, application logic and/or hardware may reside on a plurality of devices.
  • the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media.
  • a 'computer-readable medium' may be any media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer, with one example of a computer described and depicted in FIGURE 2 .
  • a computer-readable medium may comprise a computer-readable storage medium that may be any media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
  • the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined.

Landscapes

  • Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • General Health & Medical Sciences (AREA)
  • Circuit For Audible Band Transducer (AREA)
EP21173836.4A 2020-05-27 2021-05-14 Capturing content Pending EP3917160A1 (de)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
FI20205538 2020-05-27

Publications (1)

Publication Number Publication Date
EP3917160A1 (de) 2021-12-01

Family

ID=75936719

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21173836.4A EP3917160A1 (de) 2020-05-27 2021-05-14 Capturing content

Country Status (1)

Country Link
EP (1) EP3917160A1 (de)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080199025A1 (en) * 2007-02-21 2008-08-21 Kabushiki Kaisha Toshiba Sound receiving apparatus and method
US20100128892A1 (en) * 2008-11-25 2010-05-27 Apple Inc. Stabilizing Directional Audio Input from a Moving Microphone Array
WO2012083989A1 (en) * 2010-12-22 2012-06-28 Sony Ericsson Mobile Communications Ab Method of controlling audio recording and electronic device
US9432768B1 (en) * 2014-03-28 2016-08-30 Amazon Technologies, Inc. Beam forming for a wearable computer
EP3252775A1 (de) * 2015-08-26 2017-12-06 Huawei Technologies Co., Ltd. Directional recording method, device and recording apparatus

Similar Documents

Publication Publication Date Title
US11706577B2 (en) Systems and methods for equalizing audio for playback on an electronic device
US8908880B2 (en) Electronic apparatus having microphones with controllable front-side gain and rear-side gain
US9516241B2 (en) Beamforming method and apparatus for sound signal
US9185509B2 (en) Apparatus for processing of audio signals
US10257611B2 (en) Stereo separation and directional suppression with omni-directional microphones
CN109565629B (zh) Method and apparatus for controlling the processing of audio signals
US11631422B2 (en) Methods, apparatuses and computer programs relating to spatial audio
CN113014983A (zh) Video playback method and apparatus, storage medium and electronic device
US20220225049A1 (en) An apparatus and associated methods for capture of spatial audio
WO2022062531A1 (zh) Multi-channel audio signal acquisition method, apparatus and system
EP3917160A1 (de) Capturing content
US11696085B2 (en) Apparatus, method and computer program for providing notifications
US11882401B2 (en) Setting a parameter value
US20220086593A1 (en) Alignment control information
US20230224664A1 (en) Supplementing Content
US20240073571A1 (en) Generating microphone arrays from user devices
JP2023513318A (ja) Multimedia content

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

B565 Issuance of search results under rule 164(2) epc

Effective date: 20211027

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220601

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20230104