US11853641B2 - System and method for audio combination and playback - Google Patents
- Publication number
- US11853641B2 (Application No. US17/446,134)
- Authority
- US
- United States
- Prior art keywords
- audio
- audio data
- user
- data
- combined
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/162—Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
Definitions
- The present invention relates to a system and method for audio combination and playback, and in particular to such a system and method which replicates the sounds made by a crowd gathered together in one place.
- Inviting fans of a particular event to join remotely has become increasingly important. Sporting events, for example, sell their television and other remote viewing rights at a high price. Indeed, certain remote viewing events are so lucrative that they require viewers to purchase an individual viewing ticket (pay per view) in order to see them. However, remote viewing may lack the feeling of intimacy and gathering together with a large crowd that is available to those who view the event “live” in the same physical space where it is occurring. Clearly remote viewing of live events would be even more lucrative if remote viewers felt that their viewing experience had these qualities.
- The background art does not teach or suggest a method for providing a similar feeling to remote viewers as for those viewing a live event in the same physical space where it is occurring.
- Nor does the background art teach or suggest a method for audio combination and playback which replicates the sounds made by a crowd gathered together in one place.
- the present invention overcomes the background art by providing a system and method for audio combination and playback which replicates the sounds made by a crowd gathered together in one place.
- Implementation of the method and system of the present invention involves performing or completing certain selected tasks or steps manually, automatically, or a combination thereof.
- Several selected steps could be implemented by hardware, or by software on any operating system or firmware, or a combination thereof.
- selected steps of the invention could be implemented as a chip or a circuit.
- selected steps of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system.
- selected steps of the method and system of the invention could be described as being performed by a data processor, such as a computing platform for executing a plurality of instructions.
- An algorithm as described herein may refer to any series of functions, steps, one or more methods or one or more processes, for example for performing data analysis.
- Implementation of the apparatuses, devices, methods and systems of the present disclosure involve performing or completing certain selected tasks or steps manually, automatically, or a combination thereof. Specifically, several selected steps can be implemented by hardware or by software on an operating system, of a firmware, and/or a combination thereof. For example, as hardware, selected steps of at least some embodiments of the disclosure can be implemented as a chip or circuit (e.g., ASIC). As software, selected steps of at least some embodiments of the disclosure can be implemented as a number of software instructions being executed by a computer (e.g., a processor of the computer) using an operating system. In any case, selected steps of methods of at least some embodiments of the disclosure can be described as being performed by a processor, such as a computing platform for executing a plurality of instructions.
- A processor may be a hardware component, or, according to some embodiments, a software component.
- a processor may also be referred to as a module; in some embodiments, a processor may comprise one or more modules; in some embodiments, a module may comprise computer instructions—which can be a set of instructions, an application, software—which are operable on a computational device (e.g., a processor) to cause the computational device to conduct and/or achieve one or more specific functionality.
- Any device featuring a processor (which may also be referred to as a “data processor” or “pre-processor”) and the ability to execute one or more instructions may be described as a computer or computational device, including but not limited to a personal computer (PC), a server, a cellular telephone, an IP telephone, a smart phone, a PDA (personal digital assistant), a thin client, a mobile communication device, a smart watch, a head mounted display or other wearable that is able to communicate externally, a virtual or cloud based processor, a pager, and/or a similar device. Two or more of such devices in communication with each other may be a “computer network.”
- FIG. 1 shows a non-limiting exemplary system for analyzing user input sound, and combining it to form a cheer or other combined sound output;
- FIG. 2 shows a non-limiting exemplary system diagram concentrating on the previously mentioned server for combining the audio sounds;
- FIG. 3 shows a non-limiting exemplary diagram of the operations of the client, which is operated by the user device;
- FIG. 4 shows a non-limiting exemplary method for creating a combined user input sound from a plurality of separate user input sounds and then outputting that combined sound to a user device for display;
- FIG. 5 shows a non-limiting exemplary user interface description;
- FIG. 6 shows a non-limiting exemplary flow for crowd based sound aggregation;
- FIG. 7 shows a non-limiting exemplary flow for performing a sound quality upgrade through user input rejection;
- FIG. 8 shows a non-limiting exemplary server cluster architecture implementation;
- FIG. 9 shows a non-limiting exemplary personal audio feed method;
- FIG. 10 shows a non-limiting exemplary client sequence diagram;
- FIG. 11 shows an exemplary, non-limiting system for handling echo; and
- FIG. 12 shows an exemplary, non-limiting flow for grouping sound from a plurality of users into boxes.
- FIG. 1 shows a non-limiting exemplary system for analyzing user input sound, and combining it to form a cheer or other combined sound output.
- The system features a plurality of devices 101 that interact with a plurality of users 100.
- Devices 101 may include but are not limited to a smartphone, tablet, an Alexa or other smart speaker, a smart home appliance, user computer, laptop, a phablet (smartphone having a larger display screen) or indeed any other computational device.
- Devices 101 connect to a computer network such as the internet 102 by providing audio data through the computer network 102 and also receiving audio data for audio display from the computer network 102 .
- Pre-recorded audio database 107 may include sounds of other crowd noises, the playing of an anthem, the performing of “the wave” (in which users all around a stadium, if they were present in the stadium, would make a sound sequentially so that it sounds like a wave), and so forth. All of this is operated by backend server 104, so that the sounds are received by backend server 104, combined, and then output back to website 103 for display on one or more of the plurality of user devices 101.
- FIG. 2 shows the non-limiting exemplary system diagram concentrating on the previously mentioned server for combining the audio sounds.
- A user 200 interacts with a device, in this case, as a non-limiting example, a smartphone 201.
- Data is sent from smartphone 201 and is received by smartphone 201 as combined data for audio display to the user.
- By “audio display” it is meant audio that is to be played back to the user.
- Such combined audio data preferably sounds like a stadium full of cheering fans optionally with other sounds as previously described. Audio data is conveyed to and from a website 203 which may act as a gateway to a server 211 through a computer network such as the internet 202 .
- the sounds are then combined at 208 to provide a combined sound, optionally with data that is buffered through an audio based buffer database 209 which may upgrade the sound quality and/or adjust the sound as previously described, and also optionally by combining sounds from a pre-recorded audio database 210 , such as noise of fans in a crowded stadium as previously described.
- the combined sounds are then output from sound combination 208 back to a user output data stream 212 and then back through website 203 and ultimately to the device of user 200 , which in this case is smartphone 201 .
- Smartphone 201 then preferably displays (plays back) the audio data, which preferably sounds like a large number of users in a stadium (whether open or closed), in an open or closed air building, or in a field or other large enclosure.
- the result then passes to sound combination at 208 , where these sounds are combined and then stored in the audio buffer database at 209 , and then may be stored at the pre-recorded audio database at 210 .
- pre-recorded clips can be played back and layered at different times.
- The live, the buffered, and the pre-recorded datasets can all be put together at sound combination 208, and the result can then be passed back through the digital filters and the noise rejection again to reach the user data stream at 204.
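The layering of live, buffered, and pre-recorded datasets at sound combination 208 can be sketched in code. This is a purely illustrative sketch, not part of the patent's disclosure: the function name, the per-source gain values, and the use of float samples in [-1.0, 1.0] are all assumptions.

```python
# Illustrative sketch of sound combination 208: live, buffered, and
# pre-recorded sample streams are mixed with per-source gains, and the
# result is clamped to the valid sample range. Gain values are assumptions.

def combine_sources(live, buffered, prerecorded, gains=(1.0, 0.7, 0.4)):
    """Mix three equal-length lists of float samples in [-1.0, 1.0]."""
    g_live, g_buf, g_pre = gains
    mixed = []
    for l, b, p in zip(live, buffered, prerecorded):
        s = g_live * l + g_buf * b + g_pre * p
        mixed.append(max(-1.0, min(1.0, s)))  # clamp to avoid overflow
    return mixed
```

A weighted mix with clamping is one simple way to layer the datasets; the patent itself does not specify the mixing arithmetic.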
- FIG. 3 shows a non-limiting exemplary diagram with regard to the operations of the client, which is operated by the user device.
- user 300 is shown operating a device 301 , which may optionally be any computational device as previously described.
- Device 301 features a display 305 , which preferably includes a microphone 306 for inputting user audio data, and a speaker 307 for outputting audio data that is combined for display to the user.
- Data analysis and initial processing may optionally be performed by processor 308 through a user interface 302 .
- User interface 302 preferably enables the audio data to be output by the user as controlled by the user and also allows the user to determine which user audio combined feed should be input.
- A memory 309 stores a plurality of instructions that, upon execution by processor 308, enable operation of user interface 302, so that the input audio data from microphone 306 is sent for combination with sounds for the correct game or other event, and the output audio data, as output through speaker 307, is again obtained from the correct game or other event.
- Data is sent from user device 301 to backend server 304, and combined audio data is sent from backend server 304 to user device 301; in both cases, audio transmission occurs through a computer network such as internet 303.
- Backend server 304 preferably features a processor 310, which may operate as previously described, for example for sound combination, noise reduction, and also application of digital filters.
- backend server 304 also features an audio buffer database 311 for operations previously described, for example, to avoid latency and for better combination of user input sound, and may also feature a pre-recorded audio database 312 again for combining pre-recorded sounds to the input user audio data, which may then be output from backend server 304 .
- FIG. 4 shows a non-limiting exemplary method for creating a combined user input sound from a plurality of separate user input sounds and then outputting that combined sound to a user device for display.
- the process preferably begins with a live event being displayed at 400 .
- the live event may be displayed on TV through streaming or through any other means.
- The live event may include a game, a musical performance, a theatrical performance, an opera, and the like: any type of live event in which cheering or noise making by an audience member is considered to be acceptable, or even encouraged.
- the user decides that they want to cheer and experience the stadium atmosphere and therefore they wish to have the audio output from this game displayed to them and they also want to participate in the audio input for this game.
- the user logs on to the app through some device, including but not limited to a smartphone, tablet, an Alexa or other smart speaker, a smart home appliance, user computer, laptop, a phablet (smartphone having a larger display screen) or indeed any other computational device.
- the user selects the live event to connect to the game.
- The live event may be pre-selected, for example through an invitation link. Optionally, the user may have set up a reminder that they wish to view this event, or they may even have bought tickets for a pay per view. If that is the case, then optionally a link is invoked, or some other type of system or function is invoked, so that the event is started.
- the user may be asked to press a play button to start or play may occur automatically.
- The user is connected to a server, so that their audio data is output to the server and they can begin to cheer. As multiple users cheer, the combined audio from all other valid connected fans is output, through a broadcast, to the user device at 405.
- the user may optionally stay on for the game as long as is desired or as long as the event is occurring.
- a typical sporting event or other game may be between two to three hours in length.
- FIG. 5 shows a non-limiting exemplary user interface description.
- The user interface may feature a number of graphical elements or a number of functional elements.
- the user display may, for example, include a side panel 500 , which may display a list of currently active games. If the user clicks on a game, they may be brought to the page shown below. In this case, the game is called Dallas Mavericks, but the event or game may have any name.
- the user can search for their desired game for cheering through a search menu at 501 .
- The game name and team name, or other event name (for example, that of a band or of a festival), is preferably displayed at 502, so that the user knows what they would be cheering for and what they would be involved with.
- User interface 505 also features a play button 504, so that the user can decide when they wish to join the live cheering and when they wish to have it stop. This may also be used to decide when they wish to hear the cheer sounds and when they would like the sounds to no longer be displayed.
- FIG. 6 shows a non-limiting exemplary flow for crowd based sound aggregation.
- a plurality of users 600 provide a plurality of output audio data through a plurality of microphones so that it forms microphone data 601 .
- The collected data is then preferably passed through a sound quality algorithm to determine user input rejection, for example for sound quality, at 602.
- Each user's output audio, which forms the user input data, is then analyzed, and preferably an audio sum block is performed at 606 to sum the data.
- One or more audio algorithms 607 are then applied, including but not limited to a low latency real time algorithm 603 and a personalized audio stream 604. This combination is then output to the users as output audio stream 605, which is displayed by the user device (not shown).
- FIG. 7 shows a non-limiting exemplary flow for performing a sound quality upgrade through user input rejection.
- Microphone data is obtained at 700 .
- the microphone data undergoes spectral analysis at 701 for example, to enable speech detection and also optionally to enable sounds which are either too high or too low frequency to be removed.
- Packet to packet speech detection is then performed at 702 in order to be certain that the user is actually cheering and that the sounds are not random noise or otherwise not related to cheering or having the user make a sound.
- An exponential moving average of speech detection is performed at 703. This enables the voice detection to occur in such a way that there is no abrupt drop-off.
- a hysteresis process is then optionally performed at 704 , to smooth the sound produced over time and to incorporate historical data in the analysis. Such a process may be applied for example to avoid repeatedly adding and then ceasing to add audio input from a particular user computational device.
- If the input passes these checks, the microphone data is passed along to be added to the output stream at 707; otherwise the user may be muted at 706.
- This data is removed, and the user may then be muted for at least a period of time.
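The detection-and-muting steps of FIG. 7 (packet-to-packet speech detection at 702, exponential moving average at 703, and hysteresis at 704) can be sketched as follows. This is a hedged illustration, not the patented implementation: the smoothing factor, the two thresholds, and the `SpeechGate` name are assumptions.

```python
# Illustrative sketch: per-packet speech scores are smoothed with an
# exponential moving average, and a hysteresis band decides when to
# mute/unmute, so a user is not repeatedly added and then dropped.

class SpeechGate:
    def __init__(self, alpha=0.3, unmute_above=0.6, mute_below=0.3):
        self.alpha = alpha                # EMA smoothing factor (assumed)
        self.unmute_above = unmute_above  # hysteresis upper threshold
        self.mute_below = mute_below      # hysteresis lower threshold
        self.ema = 0.0
        self.open = False                 # True -> audio passes to output

    def process_packet(self, speech_score):
        """speech_score in [0, 1] from packet-to-packet speech detection."""
        self.ema = self.alpha * speech_score + (1 - self.alpha) * self.ema
        if not self.open and self.ema > self.unmute_above:
            self.open = True              # sustained speech: add the user
        elif self.open and self.ema < self.mute_below:
            self.open = False             # sustained silence/noise: mute
        return self.open
```

The gap between the two thresholds is what avoids repeatedly adding and then ceasing to add audio input from a particular user computational device.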
- a server cluster architecture is shown in a non-limiting exemplary implementation in FIG. 8 .
- A plurality of users 800 operate a plurality of devices 801, which may be as previously described, and may be any type of suitable computational device.
- the output audio data from these users is then input to a server cluster 802 .
- Server cluster 802 optionally operates through a plurality of servers, shown as S1 803, S2 804, and S3 805, which are preferably controlled by a master coordinating server, MS1, at 806.
- The server cluster's analyses are then used to combine the audio data into output audio stream 807, which may then be output back to devices 801, or may be output for audio broadcast, for example through a television, to a large gathering, or through speakers in another area.
- Preferably output audio stream 807 is suitable for one-way output as opposed to two-way interactions.
- FIG. 9 shows a non-limiting exemplary personal audio feed method.
- microphone data from user 1 is obtained at 900 .
- Audio data from N users is obtained at 901 .
- A subtraction block is applied, such that the microphone data from user 1 is de-duplicated from the audio data combined from the N users at 901.
- The subtraction block is applied to prevent an echo, that is, to prevent the user from hearing their own voice with a time lag.
- Next audio normalization is performed based on the number of users considered to be speaking at 903 . This may optionally also include adding in further audio sounds if in fact insufficient numbers of users are outputting audio data.
- The output audio stream 904 is then displayed back to the user 905 through their device (not shown).
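The subtraction and normalization steps of FIG. 9 can be sketched as below. The sketch assumes the combined sum already contains the user's own signal exactly once; the function name and the normalization rule are illustrative assumptions, not taken from the patent.

```python
# Illustrative sketch of the personal feed: the user's own microphone
# signal is subtracted from the combined N-user sum (so they do not hear
# their own voice with a lag), and the result is normalized by the number
# of users considered to be speaking.

def personal_feed(own_mic, combined_sum, speaking_count):
    """own_mic and combined_sum are equal-length float sample lists;
    combined_sum is assumed to already include own_mic once."""
    others = max(1, speaking_count - 1)  # speakers remaining after subtraction
    return [(c - m) / others for c, m in zip(combined_sum, own_mic)]
```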
- FIG. 10 shows a non-limiting exemplary client sequence diagram again showing the interactions between a user device and the servers for both input and output audio data.
- a client computational device preferably features a microphone 1009 , a processor 1010 , a memory 1011 and speakers 1012 .
- User output audio data forms the input to server 1008 which then outputs an audio stream for display by speakers 1012 .
- The user input data is recorded and passed off to the processor at stage 1000; this is then handed off to processor 1010 for processing the speech data and placing it into a short term buffer at 1001.
- The buffered speech is sent off in chunks to the server at stage 1002, preferably being stored in memory 1011 before being sent.
- the buffered speech chunks are preferably 960 samples, corresponding in total to 20 ms of audio data per buffered chunk.
- the buffered speech chunks are received by server 1008 , and then audio algorithms are run and combined with N other users at 1003 .
- the server sends out the audio data to clients at stage 1004 .
- This audio data is then received by the client computational device and specifically by processor 1010 .
- Audio data received from the server is placed into a short term audio buffer at 1005; the buffered audio is then read from memory 1011 for the speakers at 1006.
- the audio data is then played through the speakers at stage 1007 .
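The chunk size given above implies a 48 kHz sample rate, since 960 samples / 0.020 s = 48,000 samples per second. A minimal sketch of the chunking step follows; the zero-padding of a final partial chunk is an assumption, as the patent does not state how a short tail is handled.

```python
# Illustrative sketch: split a recorded sample buffer into fixed
# 960-sample chunks, i.e. 20 ms of audio at the implied 48 kHz rate.

SAMPLE_RATE = 48_000
CHUNK_SAMPLES = 960  # 20 ms at 48 kHz

def chunk_audio(samples):
    chunks = []
    for start in range(0, len(samples), CHUNK_SAMPLES):
        chunk = samples[start:start + CHUNK_SAMPLES]
        if len(chunk) < CHUNK_SAMPLES:
            # zero-pad a final partial chunk (assumed behavior)
            chunk = chunk + [0.0] * (CHUNK_SAMPLES - len(chunk))
        chunks.append(chunk)
    return chunks
```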
- FIG. 11 shows an exemplary, non-limiting system for handling echo.
- An audio based echo system may be used to reduce echo or alternatively to add echo to provide the sound of a plurality of users cheering in a large stadium or other open area, or alternatively large closed area.
- the system features an input audio stream 1100 , a plurality of game settings 1101 , a speaker count 1102 and a gain 1103 .
- game settings 1101 include a maximum number of users (clients or apps on the respective user devices), baseline volume, and an echo threshold which may be set as a multiple of a particular number, to determine when to apply echo.
- Input audio stream 1100 and gain 1103 are applied within an echo engine 1104 , to add or reduce echo in order to create the desired sound experience.
- Speaker count 1102 may relate to the number of input user audio streams.
- speaker count 1102 and game settings 1101 may be applied to create a sound that is typical of a particular stadium, arena, or other open or closed area.
- Echo may be applied for example by repeating existing sounds and noises from fans, at varying volume levels, number of echoes and so forth, to preferably create a more realistic output sound.
- By “realistic” it is meant a sound that more closely reproduces the sound that would normally have been expected to be output, had the event been held with the expected number of attendees at the physical location for the event.
- an output audio stream 1105 is provided by echo engine 1104 .
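Echo engine 1104 can be illustrated with a sketch: when the speaker count falls below the echo threshold from game settings 1101, decaying delayed copies of the input stream are layered in to suggest a fuller venue. The threshold rule, delay, decay, and repeat count here are all assumptions, since the patent leaves these as configurable settings.

```python
# Illustrative sketch of echo engine 1104: apply gain 1103, and when the
# speaker count 1102 is below the echo threshold from game settings 1101,
# add decaying delayed repeats of the input stream.

def apply_echo(stream, speaker_count, echo_threshold, gain=1.0,
               delay=4, decay=0.5, repeats=2):
    out = [gain * s for s in stream]
    if speaker_count >= echo_threshold:
        return out  # enough live voices: no added echo
    for r in range(1, repeats + 1):
        offset, level = r * delay, decay ** r
        for i in range(offset, len(out)):
            out[i] += level * gain * stream[i - offset]  # decaying repeat
    return out
```

Varying the delay, decay, and number of repeats per venue is one way the speaker count and game settings could be applied to create a sound typical of a particular stadium or arena.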
- FIG. 12 shows an exemplary, non-limiting flow for grouping sound from a plurality of users into boxes, and applying particular filters and/or settings to the sound according to each grouped box.
- sound from a plurality of user computational devices 1200 is grouped into a plurality of box designations 1201 .
- Some boxes may have sound from only one user computational device 1200 , while others may have sound from a plurality of such user computational devices 1200 .
- Sorting user computational devices 1200 into box designations 1201 may be performed for example according to sound quality, frequency of sound, consistency of sound production and/or quality, and so forth.
- box designations 1201 may relate to a shared sound experience with a group of users, who could then hear each other more clearly (as though they were at the same or similar location at a space where an event is occurring) while still also hearing the background sounds.
- Box designations 1201 may represent multiple computational algorithms operated by a plurality of different server processes and/or computational devices, for example.
- a plurality of output streams 1202 are output according to the settings, parameters, filters and so forth for box designations 1201 .
- Preferably, user input rejection is performed according to the previously described box designations 1201, for example according to the settings or parameters, such that each output stream 1202 is processed according to the settings, parameters and so forth for its respective box designation 1201.
- audio settings may be applied for combining sounds within each output stream 1202 and/or between output streams 1202 . Then preferably all sounds are combined through an audio sound block 1204 , after which an output sound stream 1205 is preferably output.
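The grouping-and-mixing flow of FIG. 12 can be sketched as below. The two-box split by a sound-quality score and the per-box gains are illustrative assumptions; the patent allows any sorting criterion (sound quality, frequency, consistency of sound production) and any per-box settings.

```python
# Illustrative sketch: user streams are sorted into box designations 1201
# (here, simply by a quality score), a per-box gain is applied, and the
# per-box results are summed, as by audio sound block 1204.

def group_into_boxes(streams, quality, threshold=0.5):
    """streams: {user_id: [samples]}; quality: {user_id: score in [0, 1]}."""
    boxes = {"high": [], "low": []}
    for uid, samples in streams.items():
        boxes["high" if quality[uid] >= threshold else "low"].append(samples)
    return boxes

def mix_boxes(boxes, box_gain=None):
    if box_gain is None:
        box_gain = {"high": 1.0, "low": 0.3}  # assumed per-box settings
    length = max(len(s) for b in boxes.values() for s in b)
    out = [0.0] * length
    for name, members in boxes.items():
        for samples in members:
            for i, s in enumerate(samples):
                out[i] += box_gain[name] * s  # apply box setting, then sum
    return out
```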
Abstract
Description
Claims (12)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/446,134 US11853641B2 (en) | 2020-08-26 | 2021-08-26 | System and method for audio combination and playback |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202063070435P | 2020-08-26 | 2020-08-26 | |
US17/446,134 US11853641B2 (en) | 2020-08-26 | 2021-08-26 | System and method for audio combination and playback |
Publications (2)
Publication Number | Publication Date |
---|---|
US20220113930A1 US20220113930A1 (en) | 2022-04-14 |
US11853641B2 true US11853641B2 (en) | 2023-12-26 |
Family
ID=81077715
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/446,134 Active 2041-10-20 US11853641B2 (en) | 2020-08-26 | 2021-08-26 | System and method for audio combination and playback |
Country Status (1)
Country | Link |
---|---|
US (1) | US11853641B2 (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020188943A1 (en) * | 1991-11-25 | 2002-12-12 | Freeman Michael J. | Digital interactive system for providing full interactivity with live programming events |
US20060104223A1 (en) * | 2004-11-12 | 2006-05-18 | Arnaud Glatron | System and method to create synchronized environment for audio streams |
US20120017242A1 (en) * | 2010-07-16 | 2012-01-19 | Echostar Technologies L.L.C. | Long Distance Audio Attendance |
US8379874B1 (en) * | 2007-02-02 | 2013-02-19 | Jeffrey Franklin Simon | Apparatus and method for time aligning program and video data with natural sound at locations distant from the program source and/or ticketing and authorizing receiving, reproduction and controlling of program transmissions |
US20140317673A1 (en) * | 2011-11-16 | 2014-10-23 | Chandrasagaran Murugan | Remote engagement system |
US20160261917A1 (en) * | 2015-03-03 | 2016-09-08 | Google Inc. | Systems and methods for broadcast audience interaction and participation |
US20160378427A1 (en) * | 2013-12-24 | 2016-12-29 | Digimarc Corporation | Methods and system for cue detection from audio input, low-power data processing and related arrangements |
US20170170918A1 (en) * | 2015-12-11 | 2017-06-15 | Adaptive Sound Technologies, Inc. | Receiver device with adjustable delay and event notification |
US20170330579A1 (en) * | 2015-05-12 | 2017-11-16 | Tencent Technology (Shenzhen) Company Limited | Method and device for improving audio processing performance |
US20200404219A1 (en) * | 2019-06-18 | 2020-12-24 | Tmrw Foundation Ip & Holding Sarl | Immersive interactive remote participation in live entertainment |
US11179635B2 (en) * | 2017-10-11 | 2021-11-23 | Sony Interactive Entertainment LLC | Sound localization in an augmented reality view of a live event held in a real-world venue |
- 2021-08-26: US application US17/446,134 filed (granted as US11853641B2; status: Active)
Non-Patent Citations (1)
Title |
---|
Huggins, Mark, et al. "Adaptive High Accuracy Approaches to Speech Activity Detection in Noisy and Hostile Audio Environments." Conference of the International Speech Communication Association, Sep. 2010, https://doi.org/10.21437/interspeech.2010-770. (Year: 2010). * |
Also Published As
Publication number | Publication date |
---|---|
US20220113930A1 (en) | 2022-04-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104966522B (en) | | Effect adjusting method, cloud server, stereo set and system |
US8112166B2 (en) | | Personalized sound system hearing profile selection process |
US20180176705A1 (en) | | Wireless exchange of data between devices in live events |
CN110910860B (en) | | Online KTV implementation method and device, electronic equipment and storage medium |
US11024331B2 (en) | | Voice detection optimization using sound metadata |
US11915687B1 (en) | | Systems and methods for generating labeled data to facilitate configuration of network microphone devices |
US11785280B1 (en) | | System and method for recognizing live event audiovisual content to recommend time-sensitive targeted interactive contextual transactions offers and enhancements |
US20180123713A1 (en) | | System and method for participants to perceivably modify a performance based on vital signs |
CN113286161A (en) | | Live broadcast method, device, equipment and storage medium |
US11622197B2 (en) | | Audio enhancement for hearing impaired in a shared listening environment |
US20170148438A1 (en) | | Input/output mode control for audio processing |
US20240057234A1 (en) | | Adjusting light effects based on adjustments made by users of other systems |
US11853641B2 (en) | | System and method for audio combination and playback |
CN114125480A (en) | | Live broadcasting chorus interaction method, system and device and computer equipment |
JP2021021870A (en) | | Content collection/distribution system |
CN112333531A (en) | | Audio data playing method and device and readable storage medium |
JP2005333279A (en) | | Broadcast system |
US20160164936A1 (en) | | Personal audio delivery system |
WO2023120244A1 (en) | | Transmission device, transmission method, and program |
US10341762B2 (en) | | Dynamic generation and distribution of multi-channel audio from the perspective of a specific subject of interest |
US12052551B2 (en) | | Networked audio auralization and feedback cancellation system and method |
WO2022190446A1 (en) | | Control device, control method, and program |
US20240276143A1 (en) | | Signal normalization using loudness metadata for audio processing |
CN118588101A (en) | | Audio processing method and device, electronic equipment and storage medium |
WO2023058330A1 (en) | | Information processing device, information processing method, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| AS | Assignment | Owner name: CHAMPTRAX TECHNOLOGIES INC., CANADA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ANDERSEN, NICHOLAS;ANDERSEN, THOMAS;ANDERSEN, ELIAS;AND OTHERS;REEL/FRAME:057339/0094; Effective date: 20210826 |
| FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| AS | Assignment | Owner name: HEARMECHEER, INC., CANADA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CHAMPTRAX INC.;REEL/FRAME:064793/0032; Effective date: 20230901. Owner name: HEARMECHEER, INC., CANADA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CHAMPTRAX INC.;REEL/FRAME:066241/0051; Effective date: 20230901 |
| STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |