WO2017130210A1 - Method and system for rendering audio streams - Google Patents

Method and system for rendering audio streams

Info

Publication number
WO2017130210A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio streams
speakers
computing device
speaker
information
Prior art date
Application number
PCT/IN2016/050462
Other languages
French (fr)
Inventor
Ashray MALHOTRA
Chhatoi PRITAM BARAL
Original Assignee
Indian Institute Of Technology Bombay
Priority date
Filing date
Publication date
Application filed by Indian Institute Of Technology Bombay filed Critical Indian Institute Of Technology Bombay
Publication of WO2017130210A1 publication Critical patent/WO2017130210A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • H04N21/4398Processing of audio elementary streams involving reformatting operations of audio signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43076Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of the same content streams on multiple devices, e.g. when family members are watching the same movie on different devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N21/43615Interfacing a Home Network, e.g. for connecting the client to a plurality of peripherals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R27/00Public address systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2227/00Details of public address [PA] systems covered by H04R27/00 but not provided for in any of its subgroups
    • H04R2227/003Digital PA systems using, e.g. LAN or internet
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2420/00Details of connection covered by H04R, not provided for in its groups
    • H04R2420/07Applications of wireless loudspeakers or wireless microphones

Definitions

  • the embodiments herein generally relate to sound systems, and more particularly to a mechanism for rendering audio streams across a plurality of speakers spatially distributed in an environment.
  • the present application is based on, and claims priority from, an Indian Application Number 201621002956 filed on 27th January, 2016, the disclosure of which is hereby incorporated by reference herein.
  • the principal object of the embodiments herein is to provide a method and system for rendering audio streams across a plurality of speakers spatially distributed in an environment.
  • Another object of the embodiments herein is to provide a method for obtaining information associated with a plurality of speakers.
  • the plurality of speakers is spatially distributed and is synchronized with a computing device.
  • Another object of the embodiments herein is to provide a method for regulating parameters associated with the audio streams based on the information.
  • Another object of the embodiments herein is to provide a method for rendering the audio streams with the regulated parameters to each of the plurality of speakers.
  • Another object of the embodiments herein is to provide immersive sound experience to the listeners by networking the speakers.
  • the embodiments herein provide a method of rendering audio streams.
  • the method includes obtaining, by a computing device, information associated with a plurality of speakers.
  • the plurality of speakers is spatially distributed and is synchronized with the computing device, in an environment.
  • the method includes regulating, by the computing device, parameters associated with audio streams based on the information.
  • the method includes rendering, by the computing device, the audio streams with the regulated parameters to each of the plurality of speakers.
  • the embodiments herein provide a system for rendering audio streams.
  • the system includes a computing device and a plurality of speakers.
  • the computing device is configured to obtain information associated with a plurality of speakers.
  • the plurality of speakers is spatially distributed and is synchronized with the computing device in an environment.
  • the computing device is configured to regulate parameters associated with audio streams based on the information.
  • the computing device is configured to render the audio streams with the regulated parameters to each of the plurality of speakers.
  • Each of the speakers is configured to transmit information to the computing device. Further, each speaker is configured to receive the audio streams from the computing device. Furthermore, each speaker is configured to output the audio streams based on the information.
  • FIG. 1 illustrates an example system for rendering audio streams, according to an embodiment as described herein;
  • FIG. 2a illustrates various units of a computing device present in the system described in the FIG. 1, according to an embodiment as described herein;
  • FIG. 2b illustrates various units of a speaker in the system described in the FIG. 1, according to an embodiment as described herein;
  • FIG. 3 is a flow diagram illustrating a method of rendering the audio streams, according to an embodiment as described herein;
  • FIG. 4a shows an example illustration in which the computing device renders same audio streams to a plurality of speakers, according to an embodiment as described herein;
  • FIG. 4b shows an example illustration in which the computing device renders different audio streams to the plurality of speakers, according to an embodiment as described herein;
  • FIG. 4c shows an example illustration in which the audio streams are rendered by the computing device to the plurality of speakers in a stadium environment, according to an embodiment as described herein;
  • FIG. 5 is a flow diagram illustrating a method for providing output audio streams, according to an embodiment as described herein.
  • the embodiments herein provide a method of rendering audio streams.
  • the method includes obtaining, by a computing device, information associated with a plurality of speakers.
  • the plurality of speakers is spatially distributed and is synchronized with the computing device, in an environment.
  • the plurality of speakers is distributed at pre-determined/random distances with respect to each other.
  • the plurality of speakers is distributed at equal distances with respect to each other, and each speaker synchronizes with the computing device based on inputs from the computing device to render the audio stream at the same point of time.
  • each speaker among the plurality of speakers is synchronized with the computing device using any short range communication (SRC) technique such as Bluetooth, Wireless-Fidelity (Wi-Fi), or the like.
  • each speaker among the plurality of speakers is synchronized with the computing device through a wired network.
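The synchronization described above requires each speaker to agree with the computing device on a common clock before rendering the audio stream at the same point of time. A minimal sketch, assuming a single NTP-style timestamp exchange with symmetric network delay; the function names and exchange format are illustrative, not part of the disclosure:

```python
def estimate_clock_offset(request_sent, device_time, response_received):
    # One NTP-style exchange: the speaker records when it sent the request
    # and when the reply arrived (both on its own clock); the computing
    # device stamps the reply with its clock. Assume symmetric delay, so
    # the device timestamp corresponds to the midpoint of the round trip.
    round_trip = response_received - request_sent
    device_time_at_arrival = device_time + round_trip / 2.0
    return device_time_at_arrival - response_received


def local_play_time(scheduled_device_time, offset):
    # Translate a play time scheduled on the device clock into the
    # speaker's local clock so that every speaker starts at the same instant.
    return scheduled_device_time - offset
```

With this sketch, the computing device only needs to broadcast one scheduled start time; each speaker converts it locally using its own estimated offset.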
  • the information includes location coordinates, power level, capability information and an identifier.
  • the identifier can be a unique number assigned to each speaker for identifying the speaker.
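The per-speaker information described above (location coordinates, power level, capability information, and an identifier) can be modeled as a simple record. A sketch, assuming illustrative field types, since the disclosure names the fields but not their representation:

```python
from dataclasses import dataclass
from typing import FrozenSet, Tuple


@dataclass(frozen=True)
class SpeakerInfo:
    identifier: int                # unique number identifying the speaker
    location: Tuple[float, float]  # spatial coordinates in the environment
    power_level: int               # remaining power, as a percentage
    capabilities: FrozenSet[str]   # sound characteristics the speaker supports


# A hypothetical registry, keyed by identifier, as the computing
# device might hold it after obtaining information from each speaker:
registry = {
    1: SpeakerInfo(1, (0.0, 5.0), 80, frozenset({"stereo"})),
    2: SpeakerInfo(2, (12.0, 5.0), 55, frozenset({"stereo", "subwoofer"})),
}
```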
  • the method includes regulating, by the computing device, parameters associated with audio streams based on the information.
  • the parameters can be volume, equalizer, genre or the like.
  • the computing device regulates the parameters associated with the audio streams.
  • the method includes rendering, by the computing device, the audio streams with the regulated parameters to each of the plurality of speakers.
  • the proposed method and system can be used in concerts, movies, public gatherings, any areas where it is feasible to deploy huge number of speakers, or the like.
  • in the proposed system, because of the huge number of speakers, a more immersive experience can be achieved, since sound comes from all directions as it does in real life.
  • the proposed system is designed for a large number of users; most speakers are surrounded by other speakers and, unlike phasor addition systems, there is no need to rely on a specific area within the speaker arrangement for producing the sound experience.
  • the proposed system can be used for bidirectional data transmission.
  • the computing device 104 renders audio streams to the plurality of speakers and any data from the speakers can be rendered to the computing device to achieve the bidirectional data transmission between the computing device and the plurality of speakers.
  • FIGS. 1 through 5 where similar reference characters denote corresponding features consistently throughout the figures, preferred embodiments are shown.
  • FIG. 1 illustrates an example system 100 for rendering audio streams, according to an embodiment as described herein.
  • the system 100 includes an audio source, a computing device 104 and a plurality of speakers 106a, 106b, 106c, and so on to 106N.
  • the audio source 102 can be a host such as a musician, an artist or the like.
  • the musician or the artist may use a headphone, which can be worn on or around the head of the musician or the artist.
  • a video camera may be used to record the data, and this recorded data is stored in its memory and can be played again.
  • a microphone is primarily used by the artist, mainly during a concert, so that the voice of the artist will be audible to all the people listening to the artist.
  • the audio source 102 can be an electronic device which includes, but is not limited to, a computer, a mobile phone, a tablet, or the like that can be used to generate the audio streams.
  • the audio streams generated by the audio source 102 are fed as an input to the computing device 104.
  • the voice of the artist (i.e., the audio streams) in a concert is fed as the input to the computing device 104.
  • the audio streams are present in the computing device 104.
  • the plurality of speakers 106a-106N may be spaced at any random distance from each other. The distance between the plurality of speakers 106a-106N is automatically detected by the computing device 104, which then changes the audio streams output from the plurality of speakers 106a-106N accordingly. The plurality of speakers 106a-106N receives the audio streams from the computing device 104. The plurality of speakers 106a-106N are synchronized at the same time. It should be noted that the plurality of speakers 106a-106N are the main source for massive sound generation.
  • the plurality of speakers 106a-106N are connected to the computing device 104 either through the wired connection or wirelessly, depending on the demands of the event, venue, or artist.
  • the speakers identify themselves on the network and then notify the computing device 104 of their location.
  • the plurality of speakers 106a-106N may also report instrumentation data and metrics for later analysis.
  • the computing device 104 can identify the location of the plurality of speakers 106a-106N.
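The discovery step above can be sketched as a registry that the computing device fills from each speaker's self-identification message; the message format used here is an assumption, not part of the disclosure:

```python
def register_announcement(registry, announcement):
    # Record a speaker's self-identification message. The assumed format:
    # {"id": <identifier>, "location": (x, y), "metrics": {...}}, where
    # "metrics" carries optional instrumentation data for later analysis.
    registry[announcement["id"]] = {
        "location": announcement["location"],
        "metrics": announcement.get("metrics", {}),
    }
    return registry


registry = {}
register_announcement(registry, {"id": 7, "location": (3.0, 4.0)})
register_announcement(registry, {"id": 8, "location": (6.0, 4.0),
                                 "metrics": {"temp_c": 41}})
```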
  • the computing device 104 obtains information associated with the plurality of speakers 106a-106N in the environment.
  • the information includes the location co-ordinates, the power level of the speaker, capability information, and the identifier.
  • the power level indicates the amount (in terms of percentage) of power possessed by the speaker.
  • the capability information includes sound characteristics supported by the speaker.
  • the identifier can be a unique number assigned to each speaker for identifying the speaker. The above mentioned information helps the computing device 104 to render the audio streams to each speaker or to a set of speakers for creating immersive experience.
  • the computing device 104 regulates the parameters associated with audio streams based on the obtained information.
  • the parameters can be volume, equalizer, genre or the like.
  • the computing device regulates the parameters associated with the audio streams. Further, the computing device 104 renders the audio streams to the plurality of speakers 106a-106N with the regulated parameters.
  • the computing device 104 broadcasts the audio streams (i.e., either the same audio streams or different audio streams) to the plurality of speakers 106a-106N.
  • the computing device 104 renders the audio streams output from the plurality of speakers 106a-106N accordingly to create the immersive sound experience. Due to the plurality of speakers 106a-106N, a greater immersive experience is achieved, as the sound comes from multiple or all directions.
  • FIG. 1 shows a limited overview of the system 100.
  • the labels in the FIG. 1 are used for illustrative purpose only and are not intended to limit the embodiments. From the FIG. 1, it should be noted that the system 100 can include components other than those shown to create the immersive sound experience in the environment.
  • FIG. 2a illustrates various units of a computing device 104 in the system 100 described in the FIG. 1, according to an embodiment as described herein.
  • the computing device 104 includes a communication unit 202a, a controller unit 204a and a storage unit 206a.
  • the communication unit 202a is configured to obtain information associated with the plurality of speakers 106a-106N.
  • the communication unit 202a is configured to communicate with the plurality of speakers 106a-106N using any of the communication means such as Bluetooth, Wi-Fi or the like through a wireless network.
  • the communication unit 202a is configured to communicate with the plurality of speakers 106a-106N using the wired network.
  • the controller unit 204a is configured to regulate the parameters associated with the audio streams. Further, the controller unit 204a is configured to render the audio streams with the regulated parameters.
  • the controller unit 204a can include a database, an audio control module, a video control module, a voice control module, and a speaker control module.
  • a bi-directional connection is established between the database and the audio control module, the video control module, the voice control module, and speaker control module.
  • the audio control module controls and allows audio to be outputted through the speaker control module.
  • the voice control module controls and allows voice or sound to be outputted through the speaker control module.
  • the audio streams are rendered by the speaker control module to the desired locations (with the help of the audio control module and the voice control module).
  • the storage unit 206a stores a plurality of audio streams received from the audio source 102.
  • the storage unit 206a stores the plurality of audio streams present in the computing device 104.
  • the storage unit 206a may include one or more computer-readable storage media.
  • the storage unit 206a may include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories.
  • the storage unit 206a may, in some examples, be considered a non- transitory storage medium.
  • the term "non-transitory" may indicate that the storage medium is not embodied in a carrier wave or a propagated signal.
  • the term "non-transitory" should not be interpreted to mean that the storage unit 206a is non-movable.
  • the storage unit 206a can be configured to store larger amounts of information than the memory.
  • a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache).
  • FIG. 2b illustrates various units of a speaker 106 in the system described in the FIG. 1, according to an embodiment as described herein.
  • the speaker 106 includes a communication unit 202b, a channel identification unit 204b, an audio output unit 206b and a storage unit 208b.
  • the communication unit 202b is configured to transmit the information associated with the plurality of speakers 106a-106N.
  • the communication unit 202b is configured to communicate the audio streams to the computing device 104 using any of the communication means such as Bluetooth, Wi-Fi or the like.
  • the communication unit 202b is configured to communicate with the computing device 104 using the wired connection.
  • the channel identification unit 204b is configured to identify the channel corresponding to audio streams based on the channel identifier.
  • the channel identifier is a pre-defined number assigned to a channel on which the audio streams are broadcasted to the speaker. Further, the channel identification unit 204b is configured to tune to the frequency of the identified channel to receive the audio streams from the computing device 104.
  • the audio output unit 206b is configured to output the audio streams received from the computing device 104.
  • the storage unit 208b stores a plurality of audio streams (which are different from the audio streams rendered by the computing device 104).
  • the audio streams rendered by the computing device 104 are mixed with the audio streams stored in the storage unit 208b before outputting the audio streams.
  • the storage unit 208b may include one or more computer-readable storage media.
  • the storage unit 208b may include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories.
  • the storage unit 208b may, in some examples, be considered a non-transitory storage medium.
  • the term "non-transitory" may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term "non-transitory" should not be interpreted to mean that the storage unit 208b is non-movable. In some examples, the storage unit 208b can be configured to store larger amounts of information than the memory. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache).
  • Digital content may also be stored in the storage unit 208b for future processing or consumption.
  • the storage unit 208b also stores program specific information and/or service information (PSI/SI), including information about digital content (e.g., the detected information bits) available in the future or stored from the past.
  • a user of the speaker 106a may view this stored information on a display and select an item for viewing, listening, or the like, using a keypad, scroll, or other input device(s) or combinations thereof.
  • the content and PSI/SI may be passed among functions within the speaker using the bus.
  • FIG. 3 is a flow diagram illustrating a method 300 of rendering the audio streams, according to an embodiment as described herein.
  • the method 300 includes obtaining information associated with the plurality of speakers.
  • the method 300 allows the communication unit 202a to obtain the information associated with the plurality of speakers.
  • the information includes location co-ordinates, power level, capability information and an identifier.
  • the identifier can be a unique number assigned to each speaker for identifying the speaker.
  • the information is used by the computing device 104 to render the audio streams for creating immersive experience.
  • the method 300 includes regulating the parameters associated with audio streams based on the information.
  • the method 300 allows the controller unit 204a to regulate the parameters associated with the audio streams based on the information.
  • the parameters can be volume, equalizer, genre or the like.
  • the computing device regulates the parameters associated with the audio streams.
  • the computing device 104 regulates the volume of the audio streams based on the location of the speakers (i.e., the computing device 104 may render the audio streams with lesser volume to the speakers located closer to the computing device 104, and the computing device 104 may render the audio streams with higher volume to the plurality of speakers located away from the computing device 104).
  • the computing device 104 regulates the equalizer to be rendered to the plurality of speakers (i.e., the computing device 104 adjusts the amplitude of the audio streams at particular frequencies). In a similar manner, the computing device 104 regulates the parameters associated with the audio streams.
  • the computing device 104 selects the genre based on the pre-defined information.
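The distance-based volume regulation described above can be sketched as a ramp between a minimum and a maximum volume; the linear ramp shape, the range limit, and the volume bounds here are illustrative assumptions, not values from the disclosure:

```python
import math


def regulate_volume(speaker_xy, device_xy,
                    min_vol=0.2, max_vol=1.0, max_range=50.0):
    # Speakers closer to the computing device get lower volume and
    # speakers farther away get higher volume, scaling linearly with
    # distance up to an assumed maximum range.
    distance = math.hypot(speaker_xy[0] - device_xy[0],
                          speaker_xy[1] - device_xy[1])
    fraction = min(distance / max_range, 1.0)
    return min_vol + (max_vol - min_vol) * fraction
```

The same shape could be applied per frequency band to sketch the equalizer regulation as well.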
  • the method 300 includes rendering the audio streams with the regulated parameters.
  • the method 300 allows the controller unit 204a to render the audio streams with the regulated parameters.
  • the computing device 104 renders the audio streams with regulated volume (i.e., either the volume is increased or decreased) based on the information associated with the plurality of speakers.
  • the computing device 104 renders the audio streams by regulating the equalizer in the audio streams.
  • the various actions, acts, blocks, steps, or the like in the method 300 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some of the actions, acts, blocks, steps, or the like may be omitted, added, modified, skipped, or the like without departing from the scope of the invention.
  • FIG. 4a shows an example illustration in which the computing device 104 renders same audio streams to a plurality of speakers, according to an embodiment as described herein.
  • the computing device 104 renders the same audio streams to the plurality of speakers 106a-106u in the environment.
  • the audio streams are rendered on the same channel to the plurality of speakers 106a-106u.
  • Each speaker among the plurality of speakers 106a-106u tunes to the same frequency corresponding to the channel in which the audio streams are broadcasted.
  • each of the plurality of speakers 106a-106u tunes to the same channel to output the same audio streams.
  • each of the plurality of speakers 106a-106u tunes to a different channel to output different audio streams as described in the FIG. 4b.
  • FIG. 4b shows an example illustration in which the computing device renders different audio streams to the plurality of speakers, according to an embodiment as described herein.
  • the computing device 104 renders different audio streams (i.e., audio stream 1, audio stream 2 and audio stream 3) on different channels to the plurality of speakers 106a-106u in the environment.
  • Each speaker among the plurality of speakers 106a-106u tunes to a different frequency or the same frequency corresponding to the channel in which the audio streams are broadcasted.
  • each speaker among the plurality of speakers 106a-106u can listen to all the audio streams rendered by the computing device 104.
  • the channel identification unit 204b decides the audio streams to be outputted by the speaker.
  • the channel identification unit 204b tunes to a particular channel to output the audio stream.
  • the plurality of speakers 106a-106u receive the audio streams 1, 2 and 3; however, the channel identification unit 204b decides the audio stream (i.e., either the audio stream 1 or the audio stream 2 or the audio stream 3), or a combination of audio streams, or a combination of incoming audio streams and stored data to be outputted.
  • the computing device 104 broadcasts the audio stream 1 on a first frequency, the audio stream 2 on a second frequency and the audio stream 3 on a third frequency.
  • the speakers 106a-106d tune to the first channel to output the audio stream 1.
  • the speakers 106f-106h tune to the second frequency to output the audio stream 2.
  • the speakers 106n-106u tune to the third frequency to output the audio stream 3.
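The channel-tuning scheme above amounts to a mapping from speakers to channels and from channels to streams. A minimal sketch, with illustrative channel numbers and speaker labels borrowed from the example:

```python
def streams_for_speakers(tuned_channel, broadcasts):
    # `tuned_channel` maps each speaker to the channel it has tuned to,
    # and `broadcasts` maps each channel to the stream carried on it;
    # the result gives the stream each speaker outputs.
    return {speaker: broadcasts[channel]
            for speaker, channel in tuned_channel.items()}


broadcasts = {1: "audio stream 1", 2: "audio stream 2", 3: "audio stream 3"}
tuned = {"106a": 1, "106b": 1, "106f": 2, "106n": 3}
```

Rendering the same stream everywhere (as in FIG. 4a) is then just the special case where every speaker tunes to the same channel.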
  • FIG. 4c shows an example illustration in which the audio streams are rendered by the computing device 104 to the plurality of speakers in a stadium environment, according to an embodiment as described herein.
  • the computing device renders either the same audio streams or different audio streams to the plurality of speakers 106a-106v in the stadium.
  • each speaker outputs the same audio stream by tuning to a corresponding channel in which the audio stream is broadcasted.
  • each speaker outputs a different audio stream, where each speaker tunes to a different channel to output a different audio stream.
  • the computing device 104 broadcasts the same audio streams to a first set of speakers (for example, the speakers 106a-106h) and different audio streams to a second set of speakers (for example, the speakers 106i-106v).
  • each of the speakers among the second set of speakers tunes to a different frequency (as decided by the channel identification unit 204b) to output different audio streams.
  • the computing device 104 broadcasts all audio streams on a single channel or a single frequency to the second set of speakers.
  • the channel identification unit 204b in the speaker 106a is configured to tune to the single channel to output the audio stream.
  • the audio streams are mixed on the single channel, for example using a formula, based on the location of each speaker. Each speaker determines an intended audio stream by feeding its location into the formula.
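The disclosure mentions "a formula" without specifying it; one hypothetical location-to-stream formula partitions the venue into zones along one axis and assigns a stream per zone, so each speaker can recover its intended stream from its own location:

```python
def select_stream(location, num_streams, zone_width=10.0):
    # Hypothetical formula: partition the venue into zones of
    # `zone_width` along the x axis, one stream per zone, wrapping
    # around when there are more zones than streams. Any formula agreed
    # between the computing device and the speakers would work equally.
    x, _y = location
    zone = int(x // zone_width)
    return zone % num_streams
```

A speaker feeds its own coordinates into the formula and keeps only the stream index it returns, so no per-speaker signaling is needed on the shared channel.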
  • FIG. 5 is a flow diagram illustrating a method 500 for outputting audio streams, according to an embodiment as described herein.
  • the method includes transmitting information to the computing device 104.
  • the method 500 allows the communication unit 202b to transmit the information to the computing device 104.
  • the method 500 includes receiving the audio streams from the computing device.
  • the method 500 allows the channel identification unit 204b to receive the audio streams from the computing device 104. Further, the channel identification unit 204b is configured to tune to the frequency of the identified channel to receive the audio streams from the computing device 104. In an example, consider that the computing device 104 broadcasts the audio stream 1 on a first frequency, the audio stream 2 on a second frequency and the audio stream 3 on a third frequency.
  • the channel identification unit 204b in the speaker 106a is configured to tune to the first channel to output the audio stream 1.
  • the channel identification unit 204b in the speaker 106f is configured to tune to the second frequency to output the audio stream 2.
  • the channel identification unit 204b in the speaker 106n is configured to tune to the third frequency to output the audio stream 3.
  • the computing device 104 broadcasts all audio streams on a single channel or a single frequency to the second set of speakers.
  • the channel identification unit 204b in the speaker 106a is configured to tune to the single channel to output the audio stream.
  • the audio streams are mixed on the single channel, for example using a formula, based on the location of each speaker. Each speaker determines an intended audio stream by feeding its location into the formula.
  • the method 500 includes outputting the audio streams.
  • the method 500 allows the audio output unit 206b to output the audio streams.
  • when the computing device 104 broadcasts the same audio streams to the first set of speakers, the audio output unit in each of the speakers among the first set of speakers outputs the same audio stream.
  • the audio output unit 206b in each of the speakers among the second set of speakers tunes to a different frequency (as decided by the channel identification unit 204b) to output different audio streams.


Abstract

Embodiments herein provide a method and system for rendering audio streams across a plurality of speakers spatially distributed in an environment. The method includes obtaining information associated with a plurality of speakers. The plurality of speakers is spatially distributed and is synchronized with a computing device, in an environment. Further, the method includes regulating, by the computing device, parameters associated with audio streams based on the information. Furthermore, the method includes rendering, by the computing device, the audio streams with the regulated parameters to each of the plurality of speakers.

Description

METHOD AND SYSTEM FOR RENDERING AUDIO STREAMS
FIELD OF INVENTION
[0001] The embodiments herein generally relate to sound systems. More particularly, the embodiments relate to a mechanism for rendering audio streams across a plurality of speakers spatially distributed in an environment. The present application is based on, and claims priority from, an Indian Application Number 201621002956 filed on 27th January, 2016, the disclosure of which is hereby incorporated by reference herein.
BACKGROUND OF INVENTION
[0002] The sound experience has remained more or less the same over the years. Artificial sound reproduction started with techniques that used a single source of sound. An advancement was made when people realized they could use two sources and vary the sound output slightly between the two, thus creating a stereophonic effect. This began a trend of multi-channel "surround" sound output. The audio experience available with electronically reproduced sound uses, for example, one or two speakers at the top of the hall to give some control over another spatial dimension (height) of perceived sound. The aim of "entertainment" is to make the sound seem as close to realistic, and as little perceivably limited in its sources, as possible; but with so few speakers, the experience inevitably feels artificial.
[0003] The above information is presented as background information only to help the reader to understand the present invention. Applicants have made no determination and make no assertion as to whether any of the above might be applicable as Prior Art with regard to the present application.
OBJECT OF INVENTION
[0004] The principal object of the embodiments herein is to provide a method and system for rendering audio streams across a plurality of speakers spatially distributed in an environment.
[0005] Another object of the embodiments herein is to provide a method for obtaining information associated with a plurality of speakers. The plurality of speakers is spatially distributed and is synchronized with a computing device.
[0006] Another object of the embodiments herein is to provide a method for regulating parameters associated with the audio streams based on the information.
[0007] Another object of the embodiments herein is to provide a method for rendering the audio streams with the regulated parameters to each of the plurality of speakers.
[0008] Another object of the embodiments herein is to provide immersive sound experience to the listeners by networking the speakers.
SUMMARY
[0009] Accordingly, the embodiments herein provide a method of rendering audio streams. The method includes obtaining, by a computing device, information associated with a plurality of speakers. The plurality of speakers is spatially distributed and is synchronized with the computing device, in an environment. Further, the method includes regulating, by the computing device, parameters associated with audio streams based on the information. Furthermore, the method includes rendering, by the computing device, the audio streams with the regulated parameters to each of the plurality of speakers.
[0010] Accordingly, the embodiments herein provide a system for rendering audio streams. The system includes a computing device and a plurality of speakers. The computing device is configured to obtain information associated with the plurality of speakers. The plurality of speakers is spatially distributed and is synchronized with the computing device in an environment. The computing device is configured to regulate parameters associated with audio streams based on the information. The computing device is configured to render the audio streams with the regulated parameters to each of the plurality of speakers. Each of the speakers is configured to transmit information to the computing device. Further, each speaker is configured to receive the audio streams from the computing device. Furthermore, each speaker is configured to output the audio streams based on the information.
BRIEF DESCRIPTION OF FIGURES
[0011] The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:
[0012] FIG. 1 illustrates an example system for rendering audio streams, according to an embodiment as described herein;
[0013] FIG. 2a illustrates various units of a computing device present in the system described in the FIG. 1, according to an embodiment as described herein;
[0014] FIG. 2b illustrates various units of a speaker in the system described in the FIG. 1, according to an embodiment as described herein;
[0015] FIG. 3 is a flow diagram illustrating a method of rendering the audio streams, according to an embodiment as described herein;
[0016] FIG. 4a shows an example illustration in which the computing device renders same audio streams to a plurality of speakers, according to an embodiment as described herein;
[0017] FIG. 4b shows an example illustration in which the computing device renders different audio streams to the plurality of speakers, according to an embodiment as described herein;
[0018] FIG. 4c shows an example illustration in which the audio streams are rendered by the computing device to the plurality of speakers in a stadium environment, according to an embodiment as described herein; and
[0019] FIG. 5 is a flow diagram illustrating a method for outputting audio streams, according to an embodiment as described herein.
DETAILED DESCRIPTION OF INVENTION
[0020] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. Also, the various embodiments described herein are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments. The term "or" as used herein, refers to a non-exclusive or, unless otherwise indicated. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein can be practiced and to further enable those skilled in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0021] The embodiments herein provide a method of rendering audio streams. The method includes obtaining, by a computing device, information associated with a plurality of speakers. The plurality of speakers is spatially distributed and is synchronized with the computing device, in an environment. In an embodiment, the plurality of speakers is distributed at pre-determined/random distances with respect to each other.
[0022] In an embodiment, the plurality of speakers is distributed at equal distances with respect to each other and each speaker synchronizes with the computing device based on inputs from the computing device to render the audio stream at the same point of time.
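The synchronized rendering described above can be sketched as follows. This is an illustrative sketch only, not part of the disclosed embodiments: it assumes the speakers' clocks are already aligned (for example over NTP), so a shared playback deadline issued by the computing device is enough for every speaker to start the same audio frame at the same point of time. The function names and the fixed buffer value are assumptions.

```python
def playback_deadline(server_send_time: float, buffer_s: float = 0.5) -> float:
    """Absolute wall-clock time at which every speaker should start
    outputting a frame; the fixed buffer absorbs differing network delays."""
    return server_send_time + buffer_s

def wait_until(deadline: float, now: float) -> float:
    """Seconds a speaker still has to wait before rendering the frame
    (zero if the deadline has already passed)."""
    return max(0.0, deadline - now)
```

A speaker that receives a frame late simply waits less; one that receives it after the deadline plays immediately, so all speakers converge on the same start instant.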
[0023] In an embodiment, each speaker among the plurality of speakers is synchronized with the computing device using any short range communication (SRC) technique such as Bluetooth, Wireless-Fidelity (Wi-Fi) or the like.
[0024] In an embodiment, each speaker among the plurality of speakers is synchronized with the computing device through a wired network.
[0025] In an embodiment, the information includes location coordinates, power level, capability information and an identifier. In an example, the identifier can be a unique number assigned to each speaker for identifying the speaker.
[0026] Further, the method includes regulating, by the computing device, parameters associated with audio streams based on the information. In an example, the parameters can be volume, equalizer, genre or the like. The computing device regulates the parameters associated with the audio streams.
[0027] Furthermore, the method includes rendering, by the computing device, the audio streams with the regulated parameters to each of the plurality of speakers.
[0028] The proposed method and system can be used in concerts, movies, public gatherings, any areas where it is feasible to deploy a huge number of speakers, or the like.
[0029] In the proposed system, because of the huge number of speakers, a more immersive experience can be achieved, since sound comes from all directions as it does in real life. The proposed system is designed for a large number of users; most speakers have a surrounding system of speakers and, unlike phasor-addition systems, there is no need to rely on a specific area within the speaker arrangement for producing the sound experience.
[0030] In an embodiment, the proposed system can be used for bidirectional data transmission. In an example, the computing device 104 renders audio streams to the plurality of speakers and any data from the speakers can be rendered to the computing device to achieve the bidirectional data transmission between the computing device and the plurality of speakers.
[0031] Referring now to the drawings, and more particularly to FIGS. 1 through 5, where similar reference characters denote corresponding features consistently throughout the figures, preferred embodiments are shown.
[0032] FIG. 1 illustrates an example system 100 for rendering audio streams, according to an embodiment as described herein. As depicted in the FIG. 1, the system 100 includes an audio source 102, a computing device 104 and a plurality of speakers 106a, 106b, 106c, and so on to 106N.
[0033] In an embodiment, the audio source 102 can be a host such as a musician, an artist or the like. The musician or the artist may use a headphone, which can be worn on or around the head of the musician or the artist. A video camera may be used to record the data, and this recorded data is stored in its memory and can be played again. A microphone is used by the artist, mainly during a concert, so that the voice of the artist will be audible to all the people listening to the artist.
[0034] In an embodiment, the audio source 102 can be an electronic device, which includes, but is not limited to, a computer, a mobile phone, a tablet, or the like, that can be used to generate the audio streams.
[0035] In an embodiment, the audio streams generated by the audio source 102 are fed as an input to the computing device 104. In an example, the voice of the artist (i.e., the audio streams) in a concert is fed as the input to the computing device 104. In an embodiment, the audio streams are present in the computing device 104.
[0036] The plurality of speakers 106a-106N may be spaced at any random distance from each other. The distance between the plurality of speakers 106a-106N is automatically detected by the computing device 104, which then changes the audio streams output from the plurality of speakers 106a-106N accordingly. The plurality of speakers 106a-106N receives the audio streams from the computing device 104. The plurality of speakers 106a-106N are synchronized at the same time. It should be noted that the plurality of speakers 106a-106N are the main source for massive sound generation.
[0037] In an embodiment, the plurality of speakers 106a-106N are connected to the computing device 104 either through the wired connection or wirelessly, depending on the demands of the event, venue, or artist. The speakers identify themselves on the network and then notify the computing device 104 of their location. The plurality of speakers 106a-106N may also report instrumentation data and metrics for later analysis. In an embodiment, the computing device 104 can identify the location of the plurality of speakers 106a-106N.
[0038] In an embodiment, the computing device 104 obtains information associated with the plurality of speakers 106a-106N in the environment. The information includes the location co-ordinates, the power level of the speaker, capability information, and the identifier. The power level indicates the amount (in terms of percentage) of power possessed by the speaker. The capability information includes sound characteristics supported by the speaker. The identifier can be a unique number assigned to each speaker for identifying the speaker. The above mentioned information helps the computing device 104 to render the audio streams to each speaker or to a set of speakers for creating the immersive experience.
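For illustration only, the per-speaker information the computing device obtains (location co-ordinates, power level, capability information and an identifier) could be modelled as a small record keyed by identifier. The field names and the registration helper below are assumptions, not part of the application.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpeakerInfo:
    identifier: int          # unique number assigned to the speaker
    location: tuple          # (x, y) co-ordinates in the environment
    power_level: float       # remaining power, as a percentage
    capabilities: frozenset  # sound characteristics the speaker supports

def register(registry: dict, info: SpeakerInfo) -> None:
    """Keep the latest report per identifier, as the computing device
    might when speakers identify themselves on the network."""
    registry[info.identifier] = info
```

Keying the registry on the identifier means a speaker that re-reports (for example after moving) simply overwrites its earlier entry.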
[0039] After obtaining the information associated with each speaker, the computing device 104 regulates the parameters associated with audio streams based on the obtained information. In an example, the parameters can be volume, equalizer, genre or the like. The computing device regulates the parameters associated with the audio streams. Further, the computing device 104 renders the audio streams to the plurality of speakers 106a-106N with the regulated parameters.
[0040] In an embodiment, the computing device 104 broadcasts the audio streams (i.e., either the same audio streams or different audio streams) to the plurality of speakers 106a-106N.
[0041] In an embodiment, as the distance between the plurality of speakers 106a-106N is known to the computing device 104, the computing device 104 renders the audio streams output from the plurality of speakers 106a-106N accordingly to create the immersive sound experience. Due to the plurality of speakers 106a-106N, a greater immersive experience is achieved, as the sound comes from multiple or all directions.
[0042] The FIG. 1 shows a limited overview of the system 100. The labels in the FIG. 1 are used for illustrative purpose only and are not intended to limit the embodiments. From the FIG. 1, it should be noted that the system 100 can include the components (other than the components shown in the system 100) to create immersive sound experience in the environment.
[0043] FIG. 2a illustrates various units of the computing device 104 in the system 100 described in the FIG. 1, according to an embodiment as described herein. As depicted in the FIG. 2a, the computing device 104 includes a communication unit 202a, a controller unit 204a and a storage unit 206a.
[0044] In an embodiment, the communication unit 202a is configured to obtain information associated with the plurality of speakers 106a-106N.
[0045] In an embodiment, the communication unit 202a is configured to communicate with the plurality of speakers 106a-106N using any of the communication means such as Bluetooth, Wi-Fi or the like through a wireless network.
[0046] In an embodiment, the communication unit 202a is configured to communicate with the plurality of speakers 106a-106N using the wired network.
[0047] In an embodiment, the controller unit 204a is configured to regulate the parameters associated with the audio streams. Further, the controller unit 204a is configured to render the audio streams with the regulated parameters.
[0048] In an embodiment, the controller unit 204a can include a database, an audio control module, a video control module, a voice control module, and a speaker control module. A bi-directional connection is established between the database and the audio control module, the video control module, the voice control module, and the speaker control module. The audio control module controls and allows audio to be outputted through the speaker control module. The voice control module controls and allows voice or sound to be outputted through the speaker control module. The audio streams are rendered by the speaker control module to the desired locations (with the help of the audio control module and the voice control module).
[0049] In an embodiment, the storage unit 206a stores a plurality of audio streams received from the audio source 102.
[0050] In an embodiment, the storage unit 206a stores the plurality of audio streams present in the computing device 104. The storage unit 206a may include one or more computer-readable storage media. The storage unit 206a may include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. In addition, the storage unit 206a may, in some examples, be considered a non-transitory storage medium. The term "non-transitory" may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term "non-transitory" should not be interpreted to mean that the storage unit 206a is non-movable. In some examples, the storage unit 206a can be configured to store larger amounts of information than the memory. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache).
[0051] FIG. 2b illustrates various units of a speaker 106 in the system described in the FIG. 1, according to an embodiment as described herein. As depicted in the FIG. 2b, the speaker 106 includes a communication unit 202b, a channel identification unit 204b, an audio output unit 206b and a storage unit 208b.
[0052] In an embodiment, the communication unit 202b is configured to transmit the information associated with the speaker 106 to the computing device 104.
[0053] In an embodiment, the communication unit 202b is configured to communicate the audio streams to the computing device 104 using any of the communication means such as Bluetooth, Wi-Fi or the like.
[0054] In an embodiment, the communication unit 202b is configured to communicate with the computing device 104 using the wired connection.
[0055] In an embodiment, the channel identification unit 204b is configured to identify the channel corresponding to audio streams based on the channel identifier. In an example, channel identifier is a pre-defined number assigned to a channel on which the audio streams are broadcasted to the speaker. Further, the channel identification unit 204b is configured to tune to the frequency of the identified channel to receive the audio streams from the computing device 104.
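The channel identification described in the paragraph above reduces, in the simplest reading, to a lookup from a pre-defined channel identifier to a broadcast frequency. The sketch below is illustrative only; the channel plan and the frequency values are assumptions, not disclosed in the application.

```python
# Hypothetical channel plan: pre-defined channel identifier -> the
# broadcast frequency (in MHz) the speaker should tune to.
CHANNEL_PLAN = {1: 433.1, 2: 433.5, 3: 433.9}

def tune(channel_id: int) -> float:
    """Return the frequency of the identified channel; an unknown
    channel identifier raises KeyError."""
    return CHANNEL_PLAN[channel_id]
```

With such a plan, broadcasting audio stream 2 on channel 2 means every speaker assigned identifier 2 tunes to the same frequency and receives the same stream.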
[0056] In an embodiment, the audio output unit 206b is configured to output the audio streams received from the computing device 104.
[0057] In an embodiment, the storage unit 208b stores a plurality of audio streams (which are different from the audio streams rendered by the computing device 104). In an embodiment, the audio streams rendered by the computing device 104 are mixed with the audio streams stored in the storage unit 208b before outputting the audio streams. The storage unit 208b may include one or more computer-readable storage media. The storage unit 208b may include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. In addition, the storage unit 208b may, in some examples, be considered a non-transitory storage medium. The term "non-transitory" may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term "non-transitory" should not be interpreted to mean that the storage unit 208b is non-movable. In some examples, the storage unit 208b can be configured to store larger amounts of information than the memory. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache).
[0058] Digital content may also be stored in the storage unit 208b for future processing or consumption. The storage unit 208b also stores program specific information and/or service information (PSI/SI), including information about digital content (e.g., the detected information bits) available in the future or stored from the past. A user of the speaker 106a may view this stored information on a display and select an item for viewing, listening, or the like, using a keypad, scroll, or other input device(s) or combinations thereof. The content and PSI/SI may be passed among functions within the speaker using the bus.
[0059] FIG. 3 is a flow diagram illustrating a method 300 of rendering the audio streams, according to an embodiment as described herein. At step 302, the method 300 includes obtaining information associated with the plurality of speakers. The method 300 allows the communication unit 202a to obtain the information associated with the plurality of speakers. In an embodiment, the information includes location co-ordinates, power level, capability information and an identifier. In an example, the identifier can be a unique number assigned to each speaker for identifying the speaker.
[0060] In an embodiment, the information is used by the computing device 104 to render the audio streams for creating the immersive experience.
[0061] At step 304, the method 300 includes regulating the parameters associated with the audio streams based on the information. The method 300 allows the controller unit 204a to regulate the parameters associated with the audio streams based on the information. In an example, the parameters can be volume, equalizer, genre or the like. In an example, the computing device 104 regulates the volume of the audio streams based on the location of the speakers (i.e., the computing device 104 may render the audio streams with lesser volume to the speakers located closer to the computing device 104 and with higher volume to the speakers located away from the computing device 104).
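The distance-based volume regulation in the example above could be sketched as follows. The linear gain model and its constants are assumptions chosen only to illustrate "lesser volume nearby, higher volume far away"; the application does not specify the actual regulation rule.

```python
import math

def regulate_volume(speaker_xy, device_xy=(0.0, 0.0),
                    base_volume=0.2, gain_per_metre=0.02, max_volume=1.0):
    """Grow the rendered volume linearly with the speaker's distance
    from the computing device, capped at max_volume."""
    distance = math.dist(speaker_xy, device_xy)  # Euclidean distance
    return min(max_volume, base_volume + gain_per_metre * distance)
```

A speaker at the device's location gets the base volume; one 5 m away gets a slightly higher volume; very distant speakers saturate at the cap.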
[0062] In another example, the computing device 104 regulates the equalizer applied to the audio streams rendered to the plurality of speakers (i.e., the computing device 104 adjusts the amplitude of the audio streams at particular frequencies). In a similar manner, the computing device 104 regulates the other parameters associated with the audio streams.
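In its simplest form, the equalizer regulation just described (adjusting the amplitude of the audio streams at particular frequencies) amounts to a per-band gain. The sketch below is illustrative only; the band layout and gain values are assumptions.

```python
def apply_equalizer(band_amplitudes, band_gains):
    """Multiply each frequency band's amplitude by its configured gain,
    e.g. to boost low bands and attenuate high ones."""
    return [a * g for a, g in zip(band_amplitudes, band_gains)]
```

For instance, gains of `[2.0, 1.0, 0.5]` over three bands would double the lowest band while halving the highest.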
[0063] In an example, the computing device 104 selects the genre based on the pre-defined information.
[0064] At step 306, the method 300 includes rendering the audio streams with the regulated parameters. The method 300 allows the controller unit 204a to render the audio streams with the regulated parameters. In an example, the computing device 104 renders the audio streams with regulated volume (i.e., either the volume is increased or decreased) based on the information associated with the plurality of speakers. In another example, the computing device 104 renders the audio streams by regulating the equalizer in the audio streams.
[0065] The various actions, acts, blocks, steps, or the like in the method 300 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some of the actions, acts, blocks, steps, or the like may be omitted, added, modified, skipped, or the like without departing from the scope of the invention.
[0066] FIG. 4a shows an example illustration in which the computing device 104 renders same audio streams to a plurality of speakers, according to an embodiment as described herein. As depicted in the FIG. 4a, the computing device 104 renders same audio streams to the plurality of speakers 106a-106u in the environment. The audio streams are rendered on a same channel to the plurality of speakers 106a-106u. Each speaker among the plurality of speakers 106a-106u tunes to the same frequency corresponding to the channel in which the audio streams are broadcasted. Thus, with the proposed method, each of the plurality of speakers 106a-106u tunes to the same channel to output the same audio streams. In another example, with the proposed method, each of the plurality of speakers 106a-106u tunes to a different channel to output different audio streams as described in the FIG. 4b.
[0067] FIG. 4b shows an example illustration in which the computing device renders different audio streams to the plurality of speakers, according to an embodiment as described herein. As depicted in the FIG. 4b, the computing device 104 renders different audio streams (i.e., audio stream 1, audio stream 2 and audio stream 3) on different channels to the plurality of speakers 106a-106u in the environment. Each speaker among the plurality of speakers 106a-106u tunes to a different frequency or the same frequency corresponding to the channel in which the audio streams are broadcasted. However, each speaker among the plurality of speakers 106a-106u can receive all the audio streams rendered by the computing device 104. The channel identification unit 204b decides the audio streams to be outputted by the speaker. The channel identification unit 204b tunes to a particular channel to output the audio stream. The plurality of speakers 106a-106u receive the audio streams 1, 2 and 3; however, the channel identification unit 204b decides the audio stream (i.e., either the audio stream 1 or the audio stream 2 or the audio stream 3), or a combination of audio streams, or a combination of incoming audio streams and stored data, to be outputted.
[0068] In an example, consider that the computing device 104 broadcasts the audio stream 1 on a first frequency, the audio stream 2 on a second frequency and the audio stream 3 on a third frequency.
[0069] In an embodiment, the speakers 106a-106d tune to the first channel to output the audio stream 1. In an embodiment, the speakers 106f-106h tune to the second frequency to output the audio stream 2. In an embodiment, the speakers 106n-106u tune to the third frequency to output the audio stream 3.
[0070] FIG. 4c shows an example illustration in which the audio streams are rendered by the computing device 104 to the plurality of speakers in a stadium environment, according to an embodiment as described herein. As depicted in the FIG. 4c, the computing device renders either same audio streams or different audio streams to the plurality of speakers 106a-106v in the stadium. In an embodiment, each speaker outputs the same audio stream by tuning to a corresponding channel in which the audio stream is broadcasted.
[0071] In an embodiment, each speaker outputs a different audio stream, where each speaker tunes to a different channel to output the different audio stream.
[0072] In an embodiment, the computing device 104 broadcasts same audio streams to a first set of speakers (for example, the speakers 106a-106h) and different audio streams to a second set of speakers (for example, the speakers 106i-106v). When the computing device 104 broadcasts same audio streams to the first set of speakers, each of the speakers among the first set of speakers outputs the same audio stream.
[0073] In an embodiment, when the computing device 104 broadcasts different audio streams to the second set of speakers then, each of the speakers among the second set of speakers tune to a different frequency (as decided by the channel identification unit 204b to output different audio streams).
[0074] In an embodiment, the computing device 104 broadcasts all audio streams on a single channel or a single frequency to the second set of speakers. The channel identification unit 204b in the speaker 106a is configured to tune to the single channel to output the audio stream. The audio streams are mixed on the single channel, for example using a formula, based on the location of each speaker. Each speaker determines an intended audio stream by feeding its location into the formula.
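The application does not disclose the mixing formula itself, so the sketch below is purely hypothetical: it assumes the venue is partitioned into vertical zones of equal width and that a speaker picks the index of its intended stream by feeding its location into the formula, as the paragraph above describes.

```python
def intended_stream(location, n_streams, zone_width=10.0):
    """Hypothetical location formula: derive the index of the intended
    audio stream from the speaker's (x, y) location by zoning along x."""
    x, _y = location
    return int(x // zone_width) % n_streams
```

Two speakers in the same 10 m zone would thus select the same stream from the shared channel, while adjacent zones cycle through the available streams.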
[0075] FIG. 5 is a flow diagram illustrating a method 500 for outputting audio streams, according to an embodiment as described herein. At step 502, the method includes transmitting information to the computing device 104. The method 500 allows the communication unit 202b to transmit the information to the computing device 104.
[0076] At step 504, the method 500 includes receiving the audio streams from the computing device. The method 500 allows the channel identification unit 204b to receive the audio streams from the computing device 104. Further, the channel identification unit 204b is configured to tune to the frequency of the identified channel to receive the audio streams from the computing device 104. In an example, consider that the computing device 104 broadcasts the audio stream 1 on a first frequency, the audio stream 2 on a second frequency and the audio stream 3 on a third frequency.
[0077] In an embodiment, the channel identification unit 204b in the speaker 106a is configured to tune to the first channel to output the audio stream 1. In an embodiment, the channel identification unit 204b in the speaker 106f is configured to tune to the second frequency to output the audio stream 2. In an embodiment, the channel identification unit 204b in the speaker 106n is configured to tune to the third frequency to output the audio stream 3.
[0078] In another example, the computing device 104 broadcasts all audio streams on a single channel or a single frequency to the second set of speakers. The channel identification unit 204b in the speaker 106a is configured to tune to the single channel to output the audio stream. The audio streams are mixed on the single channel, for example using a formula, based on the location of each speaker. Each speaker determines an intended audio stream by feeding its location into the formula.
[0079] At step 506, the method 500 includes outputting the audio streams. The method 500 allows the audio output unit 206b to output the audio streams. When the computing device 104 broadcasts the same audio streams to the first set of speakers, the audio output unit 206b in each of the speakers among the first set of speakers outputs the same audio stream.
[0080] In an embodiment, when the computing device 104 broadcasts different audio streams to the second set of speakers, the audio output unit 206b in each of the speakers among the second set of speakers tunes to a different frequency (as decided by the channel identification unit 204b) to output different audio streams.
[0081] The various actions, acts, blocks, steps, or the like in the method 500 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some of the actions, acts, blocks, steps, or the like may be omitted, added, modified, skipped, or the like without departing from the scope of the invention.
[0082] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope.

Claims

STATEMENT OF CLAIMS

We claim:
1. A method of rendering audio streams, the method comprising: obtaining, by a computing device, information associated with a plurality of speakers, wherein said plurality of speakers are spatially distributed in an environment and are synchronized with said computing device; regulating, by said computing device, parameters associated with audio streams based on said information; and rendering, by said computing device, said audio streams with said regulated parameters to each of said plurality of speakers.
2. The method of claim 1, wherein said rendering includes broadcasting said audio streams to said plurality of speakers.
3. The method of claim 1, wherein said computing device broadcasts the same audio streams to said plurality of speakers.
4. The method of claim 1, wherein said computing device broadcasts different audio streams to each of said plurality of speakers on different channels.
5. The method of claim 1, wherein said computing device broadcasts all audio streams to said plurality of speakers on a single channel.
6. The method of claim 3, wherein said computing device broadcasts said same audio streams to a first set of speakers among said plurality of speakers and different audio streams to a second set of speakers among said plurality of speakers.
7. The method of claim 1, wherein said audio streams are present in said computing device.
8. The method of claim 1, wherein said audio streams are fed as an input to said computing device.
9. The method of claim 1, wherein said information is at least one of: location co-ordinates, power level of said speaker, capability information, and an identifier.
10. A method of outputting audio streams by a speaker, the method comprising: transmitting information to a computing device; receiving audio streams from said computing device; and outputting said audio streams based on said information.
11. The method of claim 10, wherein said audio streams are received on a broadcast channel, wherein said audio streams are intended for a plurality of speakers in an environment.
12. The method of claim 11, wherein said speaker identifies corresponding audio streams based on a channel identifier and outputs said corresponding audio streams.
13. The method of claim 10, wherein said speaker mixes said audio streams from said computing device and audio streams present in storage of said speaker.
14. A system for rendering audio streams, the system comprising: a computing device configured to: obtain information associated with a plurality of speakers, wherein said plurality of speakers are spatially distributed in an environment and are synchronized with said computing device; regulate parameters associated with audio streams based on said information; and render said audio streams with said regulated parameters to each of said plurality of speakers; wherein each said speaker is configured to: transmit information to said computing device; receive audio streams from said computing device; and output said audio streams based on said information.
15. The system of claim 14, wherein said computing device is configured to render by broadcasting said audio streams to said plurality of speakers.
16. The system of claim 14, wherein said computing device is configured to broadcast the same audio streams to said plurality of speakers.
17. The system of claim 14, wherein said computing device is configured to broadcast different audio streams to said plurality of speakers on different channels.
18. The system of claim 14, wherein said computing device is configured to broadcast all audio streams to said plurality of speakers on a single channel.
19. The system of claim 16, wherein said computing device is configured to broadcast said same audio streams to a first set of speakers among said plurality of speakers and different audio streams to a second set of speakers among said plurality of speakers.
20. The system of claim 14, wherein said audio streams are present in said computing device.
21. The system of claim 14, wherein said audio streams are fed as an input to said computing device.
22. The system of claim 14, wherein said information is at least one of: location co-ordinates, power level of said speaker, capability information, and an identifier.
23. The system of claim 14, wherein said audio streams are received on a broadcast channel, wherein said audio streams are intended for a plurality of speakers in an environment.
24. The system of claim 23, wherein said speaker identifies corresponding audio streams based on a channel identifier and outputs said corresponding audio streams.
25. The system of claim 14, wherein said speaker mixes said audio streams from said computing device and audio streams present in storage of said speaker.
PCT/IN2016/050462 2016-01-27 2016-12-30 Method and system for rendering audio streams WO2017130210A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201621002956 2016-01-27
IN201621002956 2016-01-27

Publications (1)

Publication Number Publication Date
WO2017130210A1 true WO2017130210A1 (en) 2017-08-03

Family

ID=59397656

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2016/050462 WO2017130210A1 (en) 2016-01-27 2016-12-30 Method and system for rendering audio streams

Country Status (1)

Country Link
WO (1) WO2017130210A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114072792A (en) * 2019-07-03 2022-02-18 高通股份有限公司 Cryptographic-based authorization for audio rendering

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5642171A (en) * 1994-06-08 1997-06-24 Dell Usa, L.P. Method and apparatus for synchronizing audio and video data streams in a multimedia system
US20020040295A1 (en) * 2000-03-02 2002-04-04 Saunders William R. Method and apparatus for accommodating primary content audio and secondary content remaining audio capability in the digital audio production process
US6466832B1 (en) * 1998-08-24 2002-10-15 Altec Lansing R & D Center Israel High quality wireless audio speakers
US6748356B1 (en) * 2000-06-07 2004-06-08 International Business Machines Corporation Methods and apparatus for identifying unknown speakers using a hierarchical tree structure
US20060067536A1 (en) * 2004-09-27 2006-03-30 Michael Culbert Method and system for time synchronizing multiple loudspeakers
US20130202129A1 (en) * 2009-08-14 2013-08-08 Dts Llc Object-oriented audio streaming system
US20140133683A1 (en) * 2011-07-01 2014-05-15 Dolby Laboratories Licensing Corporation System and Method for Adaptive Audio Signal Generation, Coding and Rendering


Similar Documents

Publication Publication Date Title
EP3122067B1 (en) Systems and methods for delivery of personalized audio
EP3108672B1 (en) Content-aware audio modes
CN103650539B (en) The system and method for produce for adaptive audio signal, encoding and presenting
CN110140170B (en) Distributed audio recording adapted for end user free viewpoint monitoring
US20140328485A1 (en) Systems and methods for stereoisation and enhancement of live event audio
US20130324031A1 (en) Dynamic allocation of audio channel for surround sound systems
CA2992510C (en) Synchronising an audio signal
US9864573B2 (en) Personal audio mixer
US20090220104A1 (en) Venue private network
US9841942B2 (en) Method of augmenting an audio content
KR102580502B1 (en) Electronic apparatus and the control method thereof
GB2550877A (en) Object-based audio rendering
WO2013022483A1 (en) Methods and apparatus for automatic audio adjustment
WO2017130210A1 (en) Method and system for rendering audio streams
CN110493702B (en) Six-face sound cinema sound returning system
CN108650592B (en) Method for realizing neck strap type surround sound and stereo control system
KR20140090469A (en) Method for operating an apparatus for displaying image
US20190182557A1 (en) Method of presenting media
CN116017312A (en) Data processing method and electronic equipment
KR20170095477A (en) The smart multiple sounds control system and method
Jackson et al. Object-Based Audio Rendering
KR20160077284A (en) Audio and Set-Top-Box All-in-One System, and Video Signal and Audio Signal Processing Method therefor
KR20130075966A (en) Instrument sound delivery apparatus and that method
KR20220146165A (en) An electronic apparatus and a method for processing audio signal
KR20160079339A (en) Method and system for providing sound service and device for transmitting sound

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16887830

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16887830

Country of ref document: EP

Kind code of ref document: A1