US20140098177A1 - Mobile application for accessing television audio - Google Patents
- Publication number
- US20140098177A1 (application US 13/839,002)
- Authority
- US
- United States
- Prior art keywords
- audio
- voip
- server
- user
- content server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
- H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
- H04N21/43076—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of the same content streams on multiple devices, e.g. when family members are watching the same movie on different devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/56—Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/40—Support for services or applications
- H04L65/403—Arrangements for multi-party communication, e.g. for conferences
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/40—Support for services or applications
- H04L65/403—Arrangements for multi-party communication, e.g. for conferences
- H04L65/4038—Arrangements for multi-party communication, e.g. for conferences with floor control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/61—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/02—Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/4104—Peripherals receiving signals from specially adapted client devices
- H04N21/4126—The peripheral being portable, e.g. PDAs or mobile phones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
- H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
- H04N21/43079—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of additional data with content streams on multiple devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/61—Network physical structure; Signal processing
- H04N21/6106—Network physical structure; Signal processing specially adapted to the downstream path of the transmission network
- H04N21/6131—Network physical structure; Signal processing specially adapted to the downstream path of the transmission network involving transmission via a mobile phone network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/61—Network physical structure; Signal processing
- H04N21/6106—Network physical structure; Signal processing specially adapted to the downstream path of the transmission network
- H04N21/6137—Network physical structure; Signal processing specially adapted to the downstream path of the transmission network involving transmission via a telephone network, e.g. POTS
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Telephonic Communication Services (AREA)
- Information Transfer Between Computers (AREA)
Abstract
This disclosure describes embodiments of systems and methods that use protocols and techniques that can stream audio from a video device to a separate device while reducing or eliminating audio/video synchronization errors. In some embodiments, these systems and methods use Voice over IP (VoIP) technology to stream audio to mobile devices with low latency, resulting in little or no user-perceivable delay between the audio stream and corresponding video presentation. As a result, users can enjoy both the audio and video of any video display in an establishment. In addition, the systems and methods described herein may be implemented in the home or other locations to allow viewers who may be hard of hearing to listen to audio clearly via headphones.
Description
- This application claims priority under 35 U.S.C. §119(e) to U.S. Provisional Application No. 61/711,670, filed Oct. 9, 2012, titled “System and Method for Providing Access to Real-Time Audio Sources Using a Computer Network,” the disclosure of which is hereby incorporated by reference in its entirety.
- Television distribution systems today broadcast numerous programs, as well as other audio-visual content, via cable, satellite, and Internet streaming channels. Many public establishments include multiple televisions, monitors, or projection systems that concurrently provide many different such programs for the enjoyment of their clientele. Often, these video devices are placed in relatively close proximity to each other, or are placed in the same room, so that any patron of the establishment may elect to view any of multiple video devices from a single vantage point.
- To avoid the confusion arising from each video device outputting different audio simultaneously, many establishments mute or drastically lower the volume of video devices. Some establishments instead increase the audio volume of a single video device perceived to have the most popular programming while muting or lowering the volume of other devices. To assist users in understanding the missing or difficult-to-discern audio content, establishments typically enable captions or subtitles on video devices to display text as a partial substitute for the missing audio.
- For purposes of summarizing the disclosure, certain aspects, advantages and novel features of several embodiments have been described herein. It is to be understood that not necessarily all such advantages can be achieved in accordance with any particular embodiment of the features disclosed herein. Thus, the embodiments disclosed herein can be embodied or carried out in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other advantages as can be taught or suggested herein.
- In certain embodiments, a system for streaming an audio feed associated with a corresponding video includes a content server including computer hardware. The computer hardware can include a sound card driver that can receive audio sources from a plurality of video devices, each audio source including audio associated with a corresponding video; a feed data repository that can store data mapping each audio source to an audio feed accessible by a conference call identifier, thereby providing a plurality of audio feeds; a web server that can receive a request from a user device to access a list of the audio feeds and to provide the list of the audio feeds to the user device to enable a user to select one of the audio feeds for streaming; a Voice over IP (VoIP) server that can receive a VoIP request from the user device, the VoIP request including a selected conference call identifier identifying a selected audio feed of the list of audio feeds; and a conference call bridge that can connect the user device to a conference call associated with the selected conference call identifier to make the selected audio feed available for streaming to the user device.
- In certain embodiments, the system of the preceding paragraph can include any subcombination of the following features, among others. For example, the content server can further include a wireless access point that can provide wireless access to the user device. The system can also include one or more signal processing modules that can provide digitized forms of the audio sources to the content server. The one or more signal processing modules can include a high-definition multimedia interface (HDMI) audio extractor that can extract audio from a digital HDMI signal. The one or more signal processing modules can also receive one or more of the audio sources wirelessly. The one or more signal processing modules can also receive the audio source over a very high frequency (VHF) wireless connection. The system may also include a universal serial bus (USB) hub that can receive inputs from the one or more signal processing modules and to provide an output to the content server. The content server can be implemented in an audio-visual receiver. In addition, the content server can be implemented in a television. The system may also include a domain name server (DNS) that can provide instructions to the user device for downloading a mobile application to the user device, and the mobile application can access the content server to obtain the selected audio feed.
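The feed-to-conference-call mapping at the core of the system above can be sketched in a few lines. This is an illustrative sketch only, not the patent's implementation; all names (`FeedRepository`, `register_source`, the `feed-…` identifiers) are hypothetical:

```python
# Hypothetical sketch of the feed data repository: each captured audio
# source is mapped to a conference call identifier that a VoIP client
# can dial to join the corresponding audio feed.
from dataclasses import dataclass

@dataclass(frozen=True)
class AudioFeed:
    source_name: str          # e.g. the television or A/V receiver label
    conference_call_id: str   # identifier dialed by the VoIP client

class FeedRepository:
    def __init__(self):
        self._feeds = {}

    def register_source(self, source_name, conference_call_id):
        # Map one audio source to one conference-call-accessible feed.
        self._feeds[conference_call_id] = AudioFeed(source_name, conference_call_id)

    def list_feeds(self):
        # What the web server would return when a user device asks for
        # the list of available audio feeds.
        return sorted(self._feeds.values(), key=lambda f: f.conference_call_id)

    def resolve(self, conference_call_id):
        # What the VoIP server would consult when a call comes in.
        return self._feeds.get(conference_call_id)

repo = FeedRepository()
repo.register_source("TV 1 (ESPN)", "feed-1001")
repo.register_source("TV 2 (CNN)", "feed-1002")
assert repo.resolve("feed-1002").source_name == "TV 2 (CNN)"
```

The one-to-one keying of feeds by conference call identifier is what lets an ordinary conference bridge, rather than a custom streaming server, do the fan-out to listeners.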
- In certain embodiments, a method of streaming an audio feed associated with a corresponding video can include: by a content server including physical computer hardware: receiving audio sources from a plurality of video devices, each audio source including audio associated with a corresponding video, each audio source assigned to an audio feed accessible by a conference call identifier, thereby providing a plurality of audio feeds; receiving a request from a user device to access a list of the audio feeds; providing the list of the audio feeds to the user device to enable a user to select one of the audio feeds for streaming; receiving a Voice over IP (VoIP) request from the user device, the VoIP request including a selected conference call identifier identifying a selected audio feed of the list of audio feeds; connecting the user device to a conference call associated with the selected conference call identifier to make the selected audio feed available for streaming to the user device; and streaming the selected audio feed to the user device in response to said connecting.
- In certain embodiments, the method of the preceding paragraph can include any subcombination of the following features, among others. For example, connecting the user device to the conference call can include connecting the user device as a muted participant to the conference call. Receiving the VoIP request can include receiving a session initiation protocol (SIP) request. The VoIP request can implement any subset of the following protocols: a session initiation protocol (SIP), a real-time transport protocol (RTP), and a user datagram protocol (UDP). The VoIP request can implement the H.323 protocol. The method can also include connecting second user devices to the conference call in response to requests from the second user devices to access the selected audio feed.
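The connection step of the method above can be sketched as follows. `ConferenceBridge` and the device and feed identifiers are hypothetical; joining as a muted participant mirrors the optional muted-participant feature described in the preceding paragraph:

```python
# Illustrative sketch of the conference-bridge step: a VoIP request
# carrying a conference call identifier connects the user device to
# the call for that audio feed.
class ConferenceBridge:
    def __init__(self):
        self._calls = {}  # conference_call_id -> set of (device, muted)

    def connect(self, device_id, conference_call_id, muted=True):
        # Listeners join muted so they only receive the streamed feed.
        participants = self._calls.setdefault(conference_call_id, set())
        participants.add((device_id, muted))
        return conference_call_id

    def participants(self, conference_call_id):
        return {d for d, _ in self._calls.get(conference_call_id, set())}

bridge = ConferenceBridge()
bridge.connect("phone-A", "feed-1001")
bridge.connect("phone-B", "feed-1001")  # second device joins the same call
assert bridge.participants("feed-1001") == {"phone-A", "phone-B"}
```

Because every listener of a given television joins the same call, adding a patron costs the bridge one more leg rather than one more stream.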
- In certain embodiments, a system for streaming an audio feed associated with corresponding visual content can include: a data repository that can store data mapping an audio feed with a network telephony session identifier, the audio feed corresponding to an audio source associated with visual content; and a network telephony server that can: receive a network telephony call, the network telephony call referring to a conference call identifier, and provide access to a network telephony session for a user device, the conference call associated with the network telephony session identifier, wherein the network telephony server makes the audio feed available for streaming to the user device via the network telephony session.
- In certain embodiments, the system of the preceding paragraph can include any subcombination of the following features, among others. For example, the network telephony server can include a VoIP server. The network telephony session identifier can include a reference to the audio feed. The network telephony session identifier can include a reference to a video device associated with the audio feed. The network telephony system can also route the network telephony call to the audio source to enable the network telephony server to stream the audio source to the user device. The network telephony system can also connect additional user devices to the network telephony session. The system can also include a cellular radio that can communicate with a remote server to perform one or more of the following: receive maintenance, receive software updates, store user data, and obtain advertisements for users.
- In certain embodiments, non-transitory physical computer storage can include instructions stored thereon that, when executed by one or more processors, can implement operations for streaming an audio feed associated with corresponding visual content. The operations can include: receiving audio from an audio-visual device, the audio being associated with corresponding visual content; associating the audio with a network telephony identifier; hosting a network telephony session that can provide access to the audio for one or more user devices; receiving a network telephony call including the network telephony identifier from a selected user device; providing access to the network telephony session for the selected user device in response to receipt of the network telephony call from the selected user device; and providing access to the audio for the selected user device through the network telephony session.
- In certain embodiments, the physical computer storage of the preceding paragraph can include any subcombination of the following features, among others. For example, receiving the audio can include receiving the audio as digital audio from a signal processing module. Providing access to the audio can include streaming the audio to the user device using one or both of the following protocols: a real-time transport protocol (RTP) and a user datagram protocol (UDP). Further, the physical computer storage may be in combination with a computer system including computer hardware.
- In certain embodiments, a method of streaming an audio feed and secondary content to a user device can include: by a content server including physical computer hardware: receiving a request from a user device to access an audio feed for streaming, the audio feed associated with a corresponding video; wirelessly streaming the audio feed to the user device via a Voice over IP (VoIP) conference call; identifying a feed characteristic related to the audio feed; supplying data related to the feed characteristic to an ad server along with a request for an advertisement; receiving the advertisement in response to the request; and transmitting the advertisement to the user device in response to receiving the advertisement, thereby providing a targeted advertisement related to the audio feed to the user device.
- In certain embodiments, the method of the preceding paragraph can include any subcombination of the following features, among others. For example, identifying the feed characteristic can include identifying a keyword from caption text associated with the video. Identifying the feed characteristic can include identifying a keyword by converting speech in the audio feed to text. The method can also include identifying a second feed characteristic related to a second audio feed streamed to the user device prior to said streaming audio feed to the user. The method can also include supplying the second feed characteristic with the feed characteristic along with the request for the advertisement. The method can also include requesting a second advertisement related to the second feed characteristic. The method can also include identifying a user characteristic of a user of the user device. The method can also include supplying the user characteristic to the ad server along with the request for the advertisement. The user characteristic can include a location of the user. The user characteristic can include demographic information regarding the user.
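The caption-keyword targeting described above can be sketched roughly as follows. The helper names and the naive stopword filter are assumptions for illustration; a production system would use a real keyword extractor:

```python
# Minimal sketch: pull keywords out of caption text and bundle them
# with an optional user characteristic into an ad request.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "are"}

def keywords_from_captions(caption_text, limit=3):
    # Naive keyword extraction; a real system might rank by TF-IDF.
    words = [w.strip(".,!?").lower() for w in caption_text.split()]
    seen, result = set(), []
    for w in words:
        if w and w not in STOPWORDS and w not in seen:
            seen.add(w)
            result.append(w)
    return result[:limit]

def build_ad_request(caption_text, user_location=None):
    params = {"keywords": keywords_from_captions(caption_text)}
    if user_location:
        params["location"] = user_location  # optional user characteristic
    return params

req = build_ad_request("The quarterback throws to the end zone", user_location="Denver")
assert req["keywords"] == ["quarterback", "throws", "end"]
assert req["location"] == "Denver"
```

The same request shape would work whether the keywords come from caption text or from speech-to-text conversion of the audio feed.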
- In certain embodiments, a system for streaming an audio feed and secondary content to a user device can include: a server that can provide an audio feed to a user device using a Voice over IP (VoIP) protocol, the audio associated with corresponding visual content; and a secondary content server including computer hardware. The secondary content server can: identify a feed characteristic related to the audio feed, supply data related to the feed characteristic to an ad server along with a request for an advertisement, receive the advertisement in response to the request, and transmit the advertisement to the user device in response to receiving the advertisement, thereby providing a targeted advertisement related to the audio feed to the user device.
- In certain embodiments, the system of the preceding paragraph can include any subcombination of the following features, among others. For example, the secondary content server can include a caption extractor that can extract captions from the visual content. The system can further include a signal processing module that can capture the visual content and submit at least a portion of the visual content to the secondary content server, the visual content including the captions. The secondary content server can also include a caption analyzer that can analyze the captions to identify a keyword associated with the captions. The secondary content server can also include a local ad server that can supply the keyword as the feed characteristic to the remote ad server. The secondary content server can include a speech-to-text converter that can extract text from the audio feed. The secondary content server can also include a text analyzer that can analyze the extracted text to identify a keyword associated with the extracted text. The secondary content server can also include a local ad server that can supply the keyword as the feed characteristic to the remote ad server. The secondary content server can provide access to a game related to the audio feed for the user device. The secondary content server can provide access to a local service for the user device. The local service can include one of the following: a taxi service, a restaurant ordering service, and a concierge service.
- In certain embodiments, non-transitory physical computer storage can include instructions stored thereon that, when executed by one or more processors, implement components for streaming an audio feed and secondary content to a user device. The components can include: a first server that can provide an audio feed to a user device using a network telephony protocol, the audio associated with corresponding visual content; and a secondary content server that can: identify a feed characteristic related to the audio feed, supply data related to the feed characteristic to a third server along with a request for secondary content related to the feed characteristic, receive the secondary content from the third server in response to the request, and transmit the secondary content to the user device in response to receiving the secondary content.
- In certain embodiments, the physical computer storage of the preceding paragraph can include any subcombination of the following features, among others. For example, the first server can receive an additional audio source. The first server can broadcast the additional audio source to the user device and other user devices, overriding the audio feed. The additional audio source can include one of the following: a local advertisement and a public service announcement.
- In certain embodiments, a method of accessing an audio feed associated with a corresponding video can include: by a mobile device including a processor: establishing a wireless connection to a content server; obtaining a list of audio feeds available for streaming from the content server; outputting a graphical user interface for presentation to a user, the graphical user interface including user interface controls that can represent the list of audio feeds; receiving a user selection of one of the audio feeds through the graphical user interface; in response to receiving the user selection of the selected audio feed, establishing a Voice over IP (VoIP) conference call with the content server using a conference call identifier that can identify the selected audio feed; and receiving streaming access to the selected audio feed through the VoIP conference call.
- In certain embodiments, the method of the preceding paragraph can include any subcombination of the following features, among others. For example, establishing the VoIP conference call with the content server can include connecting to the VoIP conference call as a muted participant. The method may also include receiving a web page including instructions for downloading a mobile application that can implement said obtaining the list of audio feeds, outputting said graphical user interface, said establishing the VoIP conference call, and said receiving the streaming access to the selected audio feed. Establishing the VoIP call can include initiating a session initiation protocol (SIP) request to the content server. The VoIP call can implement any subset of the following protocols: a session initiation protocol (SIP), a real-time transport protocol (RTP), and a user datagram protocol (UDP). The VoIP call can implement any subset of the following protocols: a real-time transport protocol (RTP) and a user datagram protocol (UDP). The VoIP call can implement the H.323 protocol.
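The SIP addressing implied by this client-side method might look like the following sketch, where the conference call identifier forms the user part of a SIP URI that the VoIP stack then dials. The request-line format follows RFC 3261, but the `feed-…` naming and helper functions are hypothetical:

```python
# Sketch of how a mobile client could address a selected audio feed
# via SIP: the conference call identifier becomes the user part of
# the SIP URI.
def sip_uri_for_feed(conference_call_id, server_host):
    return f"sip:{conference_call_id}@{server_host}"

def sip_invite_request_line(conference_call_id, server_host):
    # First line of a SIP INVITE per RFC 3261; a full message would
    # also carry Via, From, To, Call-ID, CSeq headers and an SDP body.
    return f"INVITE {sip_uri_for_feed(conference_call_id, server_host)} SIP/2.0"

line = sip_invite_request_line("feed-1001", "192.168.1.10")
assert line == "INVITE sip:feed-1001@192.168.1.10 SIP/2.0"
```

Encoding the feed in the URI keeps the client stateless: selecting a different television is just dialing a different address on the same server.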
- In certain embodiments, a system for accessing an audio feed associated with a corresponding visual content can include: a content processor that can obtain a list of audio feeds available for streaming from a server; a user interface module that can output a graphical user interface including user interface controls that can represent the list of audio feeds and to receive a user selection of one of the audio feeds; and a Voice over IP (VoIP) client including computer hardware, the VoIP client operable to initiate a VoIP session with the server in response to receipt of the user selection of one of the audio feeds and to receive streaming access to the selected audio feed through the VoIP session.
- In certain embodiments, the system of the preceding paragraph can include any subcombination of the following features, among others. For example, the VoIP session can include a VoIP session identifier. The VoIP session identifier can be formatted according to a session initiation protocol (SIP). The VoIP session identifier can include a reference to the audio feed. The VoIP session identifier can include a reference to a television associated with the audio feed. The VoIP client can initiate the VoIP session with the server as a muted participant. The system can also include a wireless module that can establish a wireless connection to the server.
- In certain embodiments, non-transitory physical computer storage can include instructions stored thereon that, when executed by one or more processors, implement components for accessing an audio feed associated with a corresponding visual content. The components can include: a content processor that can obtain information about an audio feed available for streaming from a server in wireless communication with the content processor; a network telephony client that can initiate a network telephony session with the server to receive streaming access to the audio feed; and a user interface that can provide a user interface control that can adjust a characteristic of the audio feed responsive to an input of a user.
- In certain embodiments, the physical computer storage of the preceding paragraph can include any subcombination of the following features, among others. For example, the user interface control can include a volume control. The user interface control can include a stop playback control. The user interface can include an advertisement. The user interface can identify a television channel associated with the audio feed. The network telephony client can also initiate the network telephony session using a VoIP protocol. The VoIP protocol can include one or more of the following: a session initiation protocol (SIP), an H.323 protocol, a real-time transport protocol (RTP), and a user datagram protocol (UDP). The audio feed can include television audio. The audio feed can include live audio. The physical computer storage can also be in combination with a computer system having computer hardware.
- Throughout the drawings, reference numbers are re-used to indicate correspondence between referenced elements. The drawings are provided to illustrate embodiments of the features described herein and not to limit the scope thereof.
- FIGS. 1A and 1B depict example embodiments of television audio delivery systems.
- FIGS. 2A and 2B depict additional example embodiments of television audio delivery systems.
- FIGS. 3A through 3D depict embodiments of signal processing modules associated with a television audio delivery system.
- FIGS. 4 and 5 depict embodiments of television audio delivery processes.
- FIG. 6 depicts an embodiment of a state flow diagram for delivering television audio.
- FIGS. 7A, 7B, and 8 depict example mobile application user interfaces.
- FIG. 9 depicts an embodiment of a computing environment including multiple television audio delivery systems.
- FIG. 10 depicts another embodiment of signal processing modules associated with a television audio delivery system.
- FIGS. 11A and 11B depict example embodiments of a secondary content server associated with a television audio delivery system.
- FIG. 12 depicts an embodiment of a feed-based ad serving process.
- FIG. 13 depicts an embodiment of a caption-based ad serving process.
- FIG. 14 depicts an embodiment of a speech-based ad serving process.
- Muting or lowering television audio can be very frustrating for patrons of establishments such as restaurants, bars, gyms, airports, hotel lobbies, conference rooms, and the like. However, due to the ubiquitous spread of mobile handheld devices, it is possible to stream television audio to individual listeners' mobile devices, allowing listeners to watch the video on any display and simultaneously listen to the audio with headphones (or mobile speakers). Such an arrangement can allow an establishment to continue to mute or lower television volume to avoid audio interference while allowing patrons to enjoy the full audio of any program in the establishment.
- One major drawback of existing audio streaming systems is inadequate synchronization between the television video and audio stream, which can be very irritating for viewers. For example, in some systems, the audio may be delayed or out of sync with a speaker in a video, making it difficult to follow the speaker's speech and lip movements together. These synchronization problems may arise from the use of streaming protocols such as TCP-based or HTTP-based protocols, which inherently have delays. Even existing UDP-based streaming protocols, which may have less delay than TCP-based protocols, may still have an unacceptable synchronization delay of about 1-3 seconds. Such delay is typically not a problem when streaming just audio because listeners are usually willing to wait a few seconds for the stream to buffer, but a delay of 1-3 seconds between audio and television video can be jarring. Some systems attempt to address this synchronization problem by delaying the video to match the delay of the audio. However, because the underlying streaming protocols involved can have variable delay, delaying the video is an imperfect solution that can still result in synchronization errors.
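The latency gap described above can be made concrete with a back-of-the-envelope budget. The figures below are illustrative assumptions, not measurements from this disclosure:

```python
# Rough end-to-end latency budget for a buffered stream vs. a
# VoIP-style low-latency stream. All numbers are assumed examples.
def stream_latency_ms(frame_ms, buffered_frames, network_ms):
    # End-to-end delay ~= packetization + receiver buffering + network transit.
    return frame_ms + buffered_frames * frame_ms + network_ms

# Buffered HTTP-style streaming: seconds of audio buffered before playback.
http_like = stream_latency_ms(frame_ms=20, buffered_frames=100, network_ms=50)

# VoIP-style RTP over UDP: a jitter buffer of only a few frames.
voip_like = stream_latency_ms(frame_ms=20, buffered_frames=3, network_ms=50)

assert http_like == 2070  # ~2 s: clearly visible against on-screen video
assert voip_like == 130   # ~130 ms: near typical lip-sync tolerance
```

The dominant term is the receiver-side buffer, which is why a conversational protocol designed for tiny jitter buffers can track on-screen video where a download-style protocol cannot.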
- This disclosure describes embodiments of systems and methods that use protocols and techniques that can stream audio from a video device to a separate device while reducing or eliminating audio/video synchronization errors. In some embodiments, these systems and methods use Voice over IP (VoIP) technology to stream audio to mobile devices with low latency, resulting in little or no user-perceivable delay between the audio stream and corresponding video presentation. As a result, users can enjoy both the audio and video of any video display in an establishment. In addition, the systems and methods described herein may be implemented in the home or other locations to allow viewers who may be hard of hearing to listen to audio clearly via headphones.
-
FIGS. 1A and 1B depict example embodiments of television audio delivery systems 100a and 100b. -
FIG. 1A, in particular, shows an embodiment of the television audio delivery system 100a where multiple televisions 150 are used, while the television audio delivery system 100b of FIG. 1B includes a single television 150 that may be in a user's home or hospital waiting room, for example. - Referring specifically to
FIG. 1A, in the television audio delivery system 100a, user devices 102 have installed thereon mobile applications 110 that can access audio associated with one or more televisions 150. The user devices 102 can be any type of mobile computing device including, for example, phones, smartphones, tablet computers, MP3 players, watches, laptops, personal digital assistants (PDAs), computerized glasses or goggles, or more generally, any mobile device with a processor or a computing capability. The mobile application 110 can be implemented in a browser or as a standalone application, such as a mobile application that may be downloaded from an application store like the Apple™ App Store™ for iOS™ devices or the Google™ Google Play Store™ for Android™ devices. - The
mobile application 110 on a given user device 102 can connect wirelessly, as indicated by dashed lines, to a content server 120. The content server 120 can include hardware and/or software for providing content such as television audio to the user devices 102, for example, in real time. In certain embodiments, the content server 120 receives the television audio through signal processing modules 130 that receive the television audio from audio/visual receivers 140. The audio/visual (A/V) receivers can be, for example, set-top boxes, digital video recorders (DVRs), satellite cable receivers, Blu-Ray™ or other optical players, video game platforms (such as the Microsoft Xbox™ or Sony Playstation™), or the like. The A/V receivers 140 can provide audio and video to the televisions 150 and also audio to the signal processing modules 130. The signal processing modules 130 may receive, for example, analog audio from certain A/V receivers 140, convert this analog audio to digital audio, and provide this digital audio to the content server 120. In addition, in some embodiments, the A/V receivers 140 receive digital audio and provide the digital audio to the content server 120. - The
content server 120 can include hardware and/or software that delivers television audio to the mobile applications 110. In one embodiment, the content server 120 includes an access point for providing wireless (e.g., Bluetooth® or Wi-Fi) access to the user devices 102. The content server 120 can also include a network telephony system that facilitates delivering television audio to the mobile applications 110. For instance, this network telephony system can enable the content server 120 to connect to the mobile applications 110 via a voice-over IP connection. The content server 120 can host a conference call for each audio feed received from the A/V receivers 140, where each audio feed can correspond to the audio for a given TV. A conference call established by the content server 120 can provide access to one of the feeds associated with one of the televisions 150 to any number of the mobile applications 110 that connect to that conference call. Thus, the user devices 102 or mobile applications 110 can use voice-over IP protocols or other network telephony protocols to connect to conference calls hosted by the content server 120 to obtain access to the television audio. - One example benefit of using conference calls and network telephony technology on the
content server 120 can be a reduction in latency. As a result, the audio can be played on the user devices 102 with little user-perceived delay from the corresponding video output on the televisions 150. In contrast, existing technologies for audio streaming, such as HTTP- or TCP-based streaming, can introduce a much longer delay that results in a frustrating out-of-sync presentation of audio and video to the users. Additional details about the conference call and VoIP embodiments that may be implemented by the content server 120 are described in greater detail below. - Network telephony technologies other than VoIP may be employed by the
content server 120 in other embodiments. However, for convenience, this specification generally refers to VoIP as one example type of network telephony that may be implemented by the content server 120 to deliver television audio. Other terms commonly associated with VoIP, whose underlying technologies may likewise be implemented by the content server 120, include IP telephony, Internet telephony, voice over broadband (VoBB), broadband telephony, IP communications, and broadband phone. - Further, for convenience, this application refers primarily to the delivery of television audio from a content server to mobile devices. However, it should be understood that this audio can come from any video device, including any television, projector, computer monitor, mobile or fixed computing device, or the like. Thus, the term "television audio," as used herein, in addition to having its ordinary meaning, can include any audio associated with a corresponding video, whether delivered by a television or other device. Further, any type of visual content may be output by the
content server 120, including video. - With continued reference to
FIG. 1A, a remote server (or servers) 160 is also shown in communication with the television audio delivery system 100a via a network 108, which may be a local area network (LAN), a wide area network (WAN, e.g., the Internet), a leased line, or some combination of the same. The remote server 160 can provide secondary content to the content server 120, which can in turn provide this content to the user devices 102 via the mobile application 110. The secondary content can include, for example, advertisements, games, web content, other applications, chat functions, social networking or social media content, or the like, more detailed examples of which are described below with respect to FIGS. 9 through 14. - As described above, the television
audio delivery system 100b of FIG. 1B can be implemented in locations that have a single television 150 (e.g., in a single room). The television audio delivery system 100b may be used in an individual home or in other areas that have a single television, including some doctor's offices, hospitals, dialysis treatment areas, and the like, where people may be waiting for a period of time while watching television. The television audio delivery system 100b may also be used in areas with multiple televisions where television audio delivery service is available for a single television, such as some doctor's waiting rooms that have a high-volume television for children and a second television for adults. In this example scenario, the television with programming for adults may be configured with the television audio delivery system 100b. - In applications in the home, a
user device 102 can connect to the content server 120 as in other locations. The content server 120 may be implemented as a set-top box that sits on top of or close to a television 150. One example purpose of using the system in the home can be to assist hearing for hearing-impaired listeners. Typically, hearing-impaired listeners turn television volume up very loudly, to the point of annoying non-hearing-impaired persons. It can therefore be beneficial to provide such hearing-impaired persons with access to the user device 102 with the mobile application 110 and headphones to listen in comfort while not disturbing others around him or her. However, it is becoming increasingly common to find multiple televisions in the home, even in the same room. Therefore, the television audio delivery system 100a of FIG. 1A could also be implemented in the home. - The
content server 120 and other modules shown in FIG. 1B can have all of the same functionality described above with respect to FIG. 1A. In fact, multiple user devices 102 can be used by different users with different headphones to listen to the television 150. Likewise, the television 150 may have the functionality to provide split-screen viewing and may show two different television shows or videos on a single screen, or more than two on a single screen. Such a split-screen arrangement is common, for example, in video gaming, where users may have up to four or more different segmented portions of a screen in a multi-player game setting. - Thus, in one embodiment, the television 150 (or the A/
V receiver 140, which may be a video game platform) may provide two or more audio feeds to the content server 120 via the signal processing modules 130, each feed of audio corresponding to one split screen of the television display. Different listeners of the user devices 102 can access these different feeds via the content server 120. In this manner, users can watch different portions of a video game or even different television shows on the same television and receive different audio individually via headphones, without disturbing each other. Listening to different audio may be particularly valuable in video games, such as first-person shooters, where a user may glean information about opponents via audio that the user would not wish other users to hear. For example, in a football video game, a user might call a certain play and not wish to have other users hear that play being called, and can do so more discreetly using this system 100b. - The television
audio delivery systems 100a and 100b of FIGS. 1A and 1B can be modified in many different ways while still achieving the same or similar benefits described herein. For instance, in one embodiment, the content server 120 may be implemented directly in the A/V receiver 140 (see, e.g., FIG. 2A, with a content server 220 in an A/V receiver 240). In another embodiment, the televisions 150 can be Internet-enabled televisions or may have integrated cable or satellite television receivers within the televisions 150, and can therefore provide digital or analog audio directly to the content server 120. If digital audio is output by a television 150, the A/V receivers 140 may be omitted and the signal processing modules 130 may optionally be omitted. Thus, the televisions 150 can connect directly to the content server 120 (see, e.g., FIG. 2B, where a television 250 includes a content server 220 that connects to the user devices 102). - In still other embodiments, the A/
V receivers 140 may receive digital signals instead of analog signals and can therefore send digital signals directly to the content server 120 instead of through the signal processing modules 130. The signal processing modules 130 may therefore be omitted. - Each of the different television audio delivery system configurations described above may be combined into a single television audio delivery system, where some
televisions 150 provide digital audio directly to a content server 120, and where other televisions 150 connect to A/V receivers 140, which connect to the content server 120. Some A/V receivers 140 can be analog, while others may be digital. Similarly, some televisions 150 may provide analog audio out while others provide digital audio out. Thus, any combination of the various television audio systems described above may be implemented in a given location or venue. - In addition to streaming television audio, the
content server 120 may also stream any type of audio content, including live audio, recorded performances, audio associated with live events such as live plays or sporting events (including indoor or outdoor events), movie audio, home theater audio, sports betting audio, music (including at concerts), and the like. For convenience, the remainder of this specification refers generally to television audio, although it should be understood that any type of audio, including the examples given above, can be streamed by the systems and methods described herein. - Turning to
FIGS. 3A through 3D, embodiments of signal processing modules 330 associated with a television audio delivery system are shown. In particular, FIGS. 3A through 3D include more detailed example embodiments of the signal processing module 130 of FIGS. 1A and 1B, namely the signal processing modules 330a-d. These signal processing modules 330 include various features that can enable analog and/or digital audio to be processed and provided to a content server 320. The content server 320 can have all of the functionality of the content server 120 described above. - Turning specifically to
FIG. 3A, the signal processing module 330a receives analog and digital audio from A/V receivers 340. The A/V receivers 340 can have all the functionality of the A/V receivers 140 described above. Although not shown, the signal processing modules 330a can receive analog or digital audio from the televisions 150 described above. In the depicted embodiment, the signal processing modules 330a include universal serial bus (USB) digital signal processing (DSP) modules 332. Each USB/DSP module 332 can connect to an A/V receiver 340 via a cable or the like to receive audio and can convert the audio to a format suitable for processing by the content server 320. The USB/DSP modules 332 can plug into USB ports in the content server 320. - Some examples of inputs that the USB/
DSP modules 332 can receive include 3.5 mm jack audio inputs, RCA inputs, HDMI inputs, optical inputs, coaxial inputs, and the like. In one embodiment, the A/V receivers 340 output in one jack format, such as RCA or HDMI, to a cable that has a corresponding connector, and the other end of the cable may include a 3.5 mm jack that connects to the DSP module 332. Although shown as a USB/DSP module 332, the modules 332 may connect to the content server 320 using an interface other than USB, such as another serial interface, FireWire, a Lightning connector, or any other suitable connection. - Referring to
FIG. 3B, more detailed versions of the DSP modules 332 are shown in the signal processing module 330b. Each DSP module 332 may include an analog-to-digital converter 334, although as will be described below, some DSP modules 332 need not include an analog-to-digital converter 334. - The analog-to-
digital converter 334 can receive an analog audio signal and convert it to a digital audio signal that can be processed by the content server 320. Although not shown, each DSP module 332 may also include an audio enhancement module that enhances the digital output of the analog-to-digital converter 334 to make dialog or other vocals easier to understand for the listener, or that otherwise provides audio enhancements to the audio. - Another USB/
DSP module 332 can include components that can interface with digital audio, for example, obtained from HDMI. Thus, for example, the DSP module 332 may include an HDMI audio extractor 336 and an analog-to-digital converter 338. HDMI, although in digital format already, interleaves both audio and video. In order to obtain the audio from an HDMI signal, an HDMI extractor or de-embedder 336 can therefore be employed. The output of this extractor or de-embedder can be an analog signal, which may be converted to digital format by the analog-to-digital converter 338 and provided to the content server 320. In another embodiment, the output of the HDMI audio extractor 336 is a digital audio signal that can be provided directly to the content server 320, allowing the analog-to-digital converter 338 to be omitted. - Although described herein as "DSP"
modules 332, the modules 332 may in fact include just an A/D converter 334 and not a digital signal processor chip. However, a digital signal processor chip may be included in any of the DSP modules 332 in various embodiments. - Referring to
FIG. 3C, another embodiment of a portion of the television audio delivery system is shown having signal processing modules 330c that include the DSP modules 332 described above. However, one of the DSP modules 332 connects to an A/V receiver 340 with a cable 333, and the other DSP module 332 connects to a wireless receiver 354 that wirelessly receives audio and/or video data from a wireless transmitter 352 in communication with another A/V receiver 340. The A/V receivers 340 can therefore be wirelessly coupled with the signal processing modules 330c and/or content server 320. - The purpose, in one embodiment, of having wireless communication from the A/
V receivers 340 or, indeed, a television that may be directly providing audio, is that in a location with many televisions, or in a large building, the televisions may be located far from the content server 320. To avoid the clutter of numerous cables running from the different televisions to the content server, it can be beneficial to wirelessly transmit the audio and/or video to the content server 320. - In one embodiment, the
wireless transmitter 352 operates on a VHF or UHF frequency band to avoid interference with the 2.4 GHz Wi-Fi band that may be employed by the content server 320 acting as an 802.11x wireless hotspot. While only one of the A/V receivers 340 is shown communicating wirelessly with the content server 320 via the signal processing modules 330c, more or even all of the televisions or A/V receivers can communicate wirelessly with the content server and/or signal processing modules in some embodiments. Likewise, wireless communication between A/V receivers, televisions, content servers, signal processing modules, and the like, may be omitted in other embodiments. - Turning to
FIG. 3D, there are two sets of signal processing modules 330d shown, each set of signal processing modules 330d including USB/DSP modules 332 that provide signals to a USB hub 362. Two USB hubs are shown that can receive the signals and transmit them to the content server 320. Each USB hub 362 includes a single connection to the content server 320. Thus, each USB hub 362 can aggregate signals from multiple DSP modules 332, allowing an even greater number of televisions to connect to a single content server 320. - Any number of
DSP modules 332 and, therefore, A/V receivers and/or televisions can connect to a USB hub 362, depending on the configuration of the USB hub 362. For example, 2, 3, 4, 8, or more DSP modules 332 can connect to any given USB hub 362, and any number of USB hubs 362 can connect to a given content server 320, depending on the number of USB ports available on the content server 320. - In another embodiment (not shown), each
USB hub 362 can communicate wirelessly with the content server 320 instead, or any subset of the USB hubs 362 may communicate with the content server 320 wirelessly, using Wi-Fi, Bluetooth™, VHF, UHF, or some other wireless protocol or set of protocols. Further, there may be multiple content servers 320 in any given location. For instance, several content servers 320 may be dispersed throughout a large building. An airport, for example, may have multiple content servers that are dispersed throughout the airport terminals. - In another embodiment, the
content server 320 acts as a server only and not as an access point or wireless hotspot, but is instead connected to a wireless hotspot. There may therefore be multiple wireless hotspots that are connected to the content server 320. - Turning to
FIG. 4, an embodiment of a television audio delivery process 400 is shown. The television audio delivery process 400 can be implemented by any of the television audio delivery systems described herein. The process 400 illustrates an overview of a technique for delivering television audio to a mobile device using network telephony technologies such as VoIP. More detailed processes for delivering television audio to mobile devices are described in greater detail below with respect to FIGS. 5 and 6. The process 400 is described from the perspective of the mobile application 110, which has already been downloaded to a user's device 102 by the start of the process 400. - At
block 402, the mobile application 110 obtains a list of television audio feeds from the content server 120. The mobile application 110 may display this list in a user interface of the mobile application 110. At block 404, the mobile application receives the user selection of a feed. The user may tap on a touch screen display of the user device 102, for instance, to select one of the displayed feeds. At block 406, the mobile application 110 establishes a VoIP conference call with the content server 120 to request audio associated with the selected feed. At block 408, the mobile application 110 receives the TV audio from the content server 120 and plays back the audio for presentation to a user. - As described above, establishing a VoIP conference call using VoIP protocols can greatly reduce transmission latency as compared with existing audio streaming protocols. For example, in one embodiment, using VoIP to stream audio can achieve a latency of less than 100 milliseconds or even less than 70 milliseconds, a delay that may be imperceptible or barely perceptible to a user. In contrast, other streaming techniques using HTTP and/or TCP can have latencies on the order of 1 to 3 seconds, causing a major lack of synchronization between the received audio and the video that would be bothersome to many listeners.
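Blocks 402 through 408 can be sketched from the mobile application's side as follows. The content server here is stubbed out as a dictionary, the helper names are illustrative rather than from this disclosure, and the address format follows the SIP:TV<ID>@host:port example described below in connection with FIG. 6.

```python
# Illustrative sketch of process 400 (blocks 402-408); the server is a stub.

def obtain_feed_list(server):
    """Block 402: request the list of television audio feeds."""
    return server["feeds"]

def feed_address(feed_id, host="192.168.173.1", port=7770):
    """Block 406: build the VoIP conference address for a selected feed."""
    return f"SIP:TV{feed_id}@{host}:{port}"

content_server = {"feeds": [1, 2, 3]}
feeds = obtain_feed_list(content_server)   # block 402: fetch and display feeds
selection = feeds[2]                       # block 404: user taps "TV 3"
call_target = feed_address(selection)      # block 406: dial the conference call
# Block 408: audio received on this call is handed to the playback module.
```

Placing the feed identifier in the dialed address is what lets a single VoIP server route each incoming call to the correct television's audio without any further negotiation.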
- It should be noted that in some embodiments, the television audio delivery systems and associated processes described herein can implement certain of the features described herein without using network telephony to deliver the audio. Instead, these embodiments can use other streaming techniques to stream the audio while achieving other advantages described herein.
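The feed list that the mobile application obtains can, as described below in connection with FIG. 6, be a JSON document mapping feeds to conference call identifiers. One plausible shape is sketched here; the field names are assumptions for illustration, since the disclosure only specifies that feeds map to conference identifiers.

```python
# Hypothetical shape of the content server's feed list; field names are
# illustrative assumptions, not specified by the disclosure.
import json

feed_list_json = """
{
  "feeds": [
    {"name": "TV 1 - Sports",  "conference_id": "TV1"},
    {"name": "TV 2 - News",    "conference_id": "TV2"},
    {"name": "TV 3 - Movies",  "conference_id": "TV3"}
  ]
}
"""

feeds = json.loads(feed_list_json)["feeds"]
# The mobile application can display each name and dial the matching
# conference_id when the user selects a feed.
by_name = {feed["name"]: feed["conference_id"] for feed in feeds}
```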
- Turning to
FIG. 5, a more detailed television audio delivery process 500 is shown. The process 500 is shown from the perspective of both the user device and the content server in a swim-lane diagram. Blocks on the left of the diagram can be implemented by the user device 102, and blocks on the right of the diagram can be implemented by the content server 120 (or 220, 320). - At
block 502, the user device 102 connects to a wireless access point at the content server 120. Initially, for example, when a user discovers that an establishment includes a wireless hotspot, the user may connect to that hotspot attempting to obtain Internet access. The content server 120 can provide a splash page or the like to the user device 102 that informs the user of the purpose of the content server and that provides instructions for using the content server 120. Another way that the user may initiate connection with the content server 120 is to be informed at the location or venue that the location provides access to the services of a television audio delivery system. The user may be presented with information on how to access a wireless hotspot to download the mobile application 110. - At
block 504, with the user connected to the wireless access point at the content server 120, the content server 120 can assign the user device 102 an internal IP address, for example, using a dynamic host configuration protocol (DHCP) server. The content server 120 optionally provides instructions to the user device on how to download the mobile application at block 506. For example, the content server 120 can serve a web page with instructions on how to download the mobile application from an application store or directly from the content server 120. - In an embodiment, advertising material that advertises the availability of a television audio delivery system at the location can include a machine-readable code, such as a QR code or other barcode that a user can scan with his or her
user device 102. The QR code or other barcode may have a website link or link to an application store or other download location from which the user can download the mobile application 110 to the user device 102. - In another embodiment, the user has already downloaded the
mobile application 110 to the user device 102, and block 506 is skipped. For instance, the user may have used the mobile application 110 at this location or another location before and still have the mobile application 110 installed on his or her user device 102. - If the app is downloaded in
block 508, then the application can be invoked and can request a list of audio feeds at block 510. Otherwise, functionality cannot continue without access to the mobile application 110, and the process 500 remains at block 508 until the mobile application 110 is downloaded. - At
block 512, the content server 120 can provide a list of available audio feeds to the user device. These audio feeds can be output on a display of a user interface of the mobile application at block 514. User selection of one of the audio feeds can be received at block 516. The mobile application 110 can place a VoIP conference call to gain access to the audio feed at block 518. In an embodiment, the mobile application gains access to the VoIP conference call as a muted participant. As the sole purpose of obtaining the audio feed may be to listen, it could be disturbing for viewers if participants could audibly join the phone conference conversation. However, optionally in some embodiments, the mobile device is not a muted participant, but instead users can freely talk into their phones with their friends or with others. - At
block 520, the content server 120 routes the incoming VoIP call to the selected audio feed using conference bridging software or the like, as will be described in greater detail below with respect to FIG. 6. The audio is received and output at block 522 at the user device 102. It is then determined at block 524 whether the user disconnects and, if not, the process loops back to block 522. Otherwise, at block 526, the content server disconnects the user device from the conference call. -
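The server-side routing at block 520 can be modeled minimally as a dialplan-style lookup: the dialed conference identifier resolves to a television audio source, and the caller joins that feed's conference. The identifiers and device names below are invented for illustration; they are not from this disclosure or any particular bridging software.

```python
# Illustrative model of block 520: dialplan lookup plus conference join.
# DIALPLAN keys and capture-device names are invented for this sketch.

DIALPLAN = {
    "TV1": "capture-card-0",   # audio source behind each conference ID
    "TV2": "capture-card-1",
    "TV3": "capture-card-2",
}

conferences = {feed_id: set() for feed_id in DIALPLAN}

def route_call(dialed_id, caller):
    """Join a caller to the conference for the dialed feed (block 520)."""
    if dialed_id not in DIALPLAN:
        raise LookupError(f"no dialplan entry for {dialed_id}")
    conferences[dialed_id].add(caller)
    return DIALPLAN[dialed_id]

source = route_call("TV2", "device-a")    # caller now hears capture-card-1
route_call("TV2", "device-b")             # any number of listeners may join
```

Disconnecting a user (block 526) would simply remove the caller from the corresponding conference set.
-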
FIG. 6 depicts an embodiment of a state flow diagram 600 for delivering television audio in the context of example components of a user device 602 and a content server 620. The user device 602 and content server 620 are more detailed examples of the user device 102 and content server 120 described above. The user device 602, in particular, includes an audio playback module 611, a mobile application 610, and a wireless module 615. The mobile application 610 is an example of the mobile application 110 and includes a VoIP client 612, a user interface 614, and a content processor 616. Each of these components can be implemented in hardware and/or software. For instance, the mobile application 610 can run in one or more processors and may be stored in a memory or the like. The wireless module 615 may include a wireless antenna and a wireless circuit, including RF circuits, in addition to a processor. Likewise, the audio playback module 611 may include hardware and software, including software to play back the audio, such as codecs for decoding coded or compressed audio. - The
content server 620 includes several components that can be implemented in hardware and software. These depicted example components include a VoIP server 622, a conference call bridge 624, a sound card driver 626, a web server 628, a wireless access point 630, a feed data store 632, and a domain name server (DNS) 634. By way of overview, the VoIP server 622 can provide access to VoIP protocols for the user device 602. The conference call bridge 624 can manage access to specific television audio sources that are provided through sound cards to the sound card driver 626. The web server 628 can provide access to feed data to determine which feed corresponds to which television or which audio, stored, for example, in the feed data store 632 (which may include a database or flat file system), and the wireless access point 630 can include software as well as RF circuitry and an antenna to communicate with the user device 602. The DNS server 634 can provide information on how to download the mobile application 610 to the user device 602. - With continued reference to
FIG. 6, the various states in the state flow diagram 600 will now be described. At state 1, the wireless module 615 connects to the wireless access point 630 to obtain wireless access to the content server 620. At state 2a, the wireless access point 630 can inform the DNS server 634 of the access by the wireless module 615. The wireless access point 630 can also assign an IP address to the wireless module at state 2b so that the wireless module 615 can continue communicating with the content server 620. At state 2c, the DNS server 634 can optionally provide mobile application 610 download instructions to the user device 602, as described above. - At
state 3, the content processor 616 of the mobile application 610 can request a feed list from the web server 628. The content processor 616 can send the request to an IP address that is stored or hard-coded in the content processor 616, such as (for example) the private address 192.168.173.1:7770, which has a port designation of port "7770" on the content server 620. Upon receipt of this request for a feed list, the web server 628 can obtain the list from the feed data store 632 and provide the list to the content processor 616 at state 4. The feed list may be formatted, for example, as a JSON or XML file that maps feeds to conference call identifiers or addresses (described below). - The
content processor 616 can pass the feed list to the user interface 614 at state 5, which can allow the user interface 614 to output the list for user selection. Upon receipt of the user selection of a feed, the user interface 614 can pass this user selection at state 6 to the VoIP client 612. The VoIP client 612 can then place a VoIP call to the VoIP server 622 at state 7 using the conference call identifier corresponding to the selected feed in the feed list. The VoIP call may be placed to a VoIP address that is stored in the VoIP client 612 or that is obtained from the web server 628. The VoIP client 612 can use any VoIP protocol, including the session initiation protocol (SIP), H.323, or the like. For example, in one embodiment, the VoIP client 612 uses the SIP protocol over the real-time transport protocol (RTP), which can be operated over the user datagram protocol (UDP) in the transport layer of the OSI model. SIP and H.323 are merely examples of signaling protocols that may be implemented by the VoIP client 612, while RTP and UDP are merely examples of transport protocols that may be implemented by the VoIP client 612. - In an embodiment, the
VoIP client 612 modifies the VoIP address of the corresponding VoIP server 622 to refer to the selected feed or selected TV. For instance, a general format of a VoIP address using a certain protocol might be similar to the following: SIP:TV<ID>@192.168.173.1:7770. The <ID> field in this address may be replaced with the ID of a feed or television that has been selected by the user. Thus, the address can be modified as follows (for a selection of TV number "3"): SIP:TV3@192.168.173.1:7770. - The
VoIP server 622 receives the incoming call and connects to the conference call bridge 624 at state 8. The conference call bridge 624 can identify the corresponding audio source that matches the requested feed in the address dialed by the VoIP client 612. For example, the conference call bridge 624 can access the feed data store 632 at state 9 to identify a dialplan that may include, for example, a list of mappings of conference call identifiers to audio feeds. Once the feed is identified, the conference call bridge 624 can provide access to the audio feed at state 10, for example, by instructing the VoIP server 622 which audio source to access through the sound card driver 626. The VoIP server 622 can route access to this selected sound source and provide the audio data to the VoIP client 612 at state 11. The VoIP client 612 can hand off the audio to the audio playback module 611 at state 12 for playback and listening by the user. - In certain embodiments, the
wireless access point 630 can be an unsecured hotspot so that, for convenience, users of the user device 602 do not need to log in to the wireless access point 630. Security may therefore not be necessary, or minimal security may be used, because in certain embodiments the wireless access point 630 does not provide Internet access to the user device 602. In other embodiments, certain Internet access may be provided, and a log-in or security mechanism may optionally be used by the wireless access point 630. For example, the wireless access point 630 may provide access to a limited number of websites, including a website that instructs the user how to download the mobile application 610. The wireless access point 630 may also have access to the Internet for other purposes, including providing secondary content to the mobile application 610, which will be described in greater detail below with respect to FIG. 9. - Any VoIP software can be used to implement the
VoIP client 612 or VoIP server 622. One example of VoIP software that may be used is available from Linphone™. Likewise, any conference call bridge software can be used to implement the conference call bridge 624, one example of which is available from Freeswitch. The content server 620 can be implemented using any operating system, one example of which is Linux. For example, the Linux Mint distribution can be used as a lightweight distribution to implement the content server 620, although many other distributions or other types of operating systems may be used. In the Linux operating system, the sound card driver 626 can be the ALSA driver, and the web server 628 may be the Apache web server. However, many other types of components and software modules may be used in place of those described. - Furthermore, in certain embodiments, the audio feed provided from the
VoIP server 622 to the VoIP client 612 can be persistent. If a time out or other issue occurs with the connection, the VoIP server 622 or the VoIP client 612 can reinitialize the connection and reconnect to the stream. For example, the VoIP server 622, if it detects a problem with the audio stream, can reinitialize the connection for other listeners on the stream to reconnect these listeners or their user devices 602 to the VoIP server 622. - Further, as an additional embodiment or alternative to VoIP, in one embodiment the
mobile application 610 can communicate with the content server 620 over RTP or UDP, or a combination of UDP and RTP, without using SIP, H.323, or another VoIP protocol. - In other embodiments, the
conference call bridge 624 may be omitted. Instead, the VoIP server 622 can directly access the feed audio from the feed data repository 632 and provide the feed audio to the VoIP client 612. For example, the VoIP server 622 can establish a separate VoIP call with each user device 602 that accesses the VoIP server 622, instead of a conference call that joins multiple user devices 602. In such embodiments, the audio feeds may be stored in the feed data store 632 together with corresponding VoIP session identifiers. The VoIP client 612 can therefore access the VoIP server 622 using a desired VoIP session identifier corresponding to the user's selected audio feed, resulting in the VoIP server 622 establishing a VoIP session with the VoIP client 612 to deliver the audio. In another embodiment, the VoIP server 622 can broadcast, unicast, multicast, or otherwise provide the audio to the VoIP client 612. In yet another embodiment, the VoIP client 612 accesses channels in the VoIP server 622, each channel corresponding to a feed of audio. For instance, the channels can be audio chat channels, although they may be muted on the mobile application 610 side. The VoIP server 622 can also use an intercom-like format to deliver audio to the mobile application 610. More generally, the VoIP server 622 can establish any type of VoIP session with the VoIP client 612, including UDP-based, RTP-based, real-time streaming protocol (RTSP) based, web-browser based, or other types of VoIP sessions. - The
VoIP server 622 is one example of a network telephony server. The user device 602 can communicate with the content server 620 using any form of network telephony, including network telephony other than VoIP. For example, the mobile application 610 can establish a network telephony session with the content server 620 using any of a variety of network telephony protocols. In addition, the user device 602 can implement some or all of the mobile application 610 features using a web browser instead of or in addition to a standalone mobile application. - In some embodiments, the
content server 620 does not record or buffer the audio feeds for playback to the mobile application 610. Instead, the content server 620 delivers the audio in real time to the mobile application 610. The content server 620 may therefore be considered to deliver live audio to the mobile application 610 in some embodiments. Buffering may not be needed because of the low-latency delivery of the audio facilitated by embodiments of the VoIP or other network telephony solutions. However, in other embodiments, the content server 620 and/or the mobile application 610 can perform at least some buffering. Buffering can be used to fine-tune synchronization between the audio feed and the video to avoid substantially any dubbing errors. To perform buffering, in one embodiment the content server 620 saves or buffers at least a portion of the audio (and/or video) and synchronizes the audio delivery in time with the video. The mobile application 610 may also buffer at least a portion of the audio. -
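The fine-tuning of audio/video synchronization via buffering could be sketched as a fixed-delay buffer: incoming audio frames are held for a configurable number of frames before release, shifting the audio in time relative to the video. This is a minimal illustrative sketch (a frame-based delay is an assumption; a real implementation would delay by wall-clock time and operate on actual audio data):

```python
from collections import deque

class DelayBuffer:
    """Hold audio frames for a fixed number of frames before releasing them."""

    def __init__(self, delay_frames: int):
        self.queue = deque()
        self.delay_frames = delay_frames

    def push(self, frame):
        """Buffer one incoming frame; return a delayed frame once warmed up."""
        self.queue.append(frame)
        if len(self.queue) > self.delay_frames:
            return self.queue.popleft()
        return None  # still filling the buffer

buf = DelayBuffer(delay_frames=2)
out = [buf.push(f) for f in ["a", "b", "c", "d"]]
print(out)  # [None, None, 'a', 'b']
```

Increasing delay_frames shifts the audio later relative to the video; the content server 620 or mobile application 610 could tune this value until the streams align.
 -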
FIGS. 7A through 8 depict example user interfaces of a mobile application, such as any of the mobile applications described above. These user interfaces are just examples and may be varied in several embodiments. Each of the example user interfaces shown is depicted as being output by a mobile phone 701, which is an example of the user devices described above. The mobile phone 701 may have a touch screen or the like that allows a user to select user interface controls via touch or a stylus, or a combination of the same. However, it should be understood that the mobile application need not be implemented in a mobile phone in some embodiments. Instead, in some embodiments, the mobile application can be implemented in a web browser or in any device such as a tablet, laptop, or the like. Further, the mobile application can be implemented in a web browser on a mobile phone as well. - In
FIG. 7A, a user interface 700 is shown on the mobile phone 701. In the user interface 700, users are presented with several audio feeds 710 to choose from. In the depicted embodiment, these feeds 710 (or feed user interface controls) are listed as televisions, including televisions 1 through 5, which may correspond to televisions that are numbered in an establishment to enable users to easily access the corresponding audio. FIG. 7B shows another embodiment of a user interface 720, where in addition to showing the television numbers of the feeds 722, the particular channel on each television is also shown (including ESPN, CNN, etc.). -
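When a user selects one of these feed controls, the selection can drive the VoIP address substitution and dialplan lookup described above with respect to the VoIP client 612 and conference call bridge 624. The following is a minimal sketch under stated assumptions: the host, port, and sound-card device names are hypothetical, and a real client would hand the resulting URI to its SIP stack while the bridge would route the resolved source through the sound card driver 626:

```python
# Illustrative sketch of the <ID> substitution and the bridge's dialplan
# lookup; host, port, and device names below are assumptions.
SIP_HOST, SIP_PORT = "192.168.173.1", 7770

# Dialplan: conference call identifiers mapped to audio sources.
DIALPLAN = {
    "TV1": "hw:0,0",  # sound card input carrying television 1's audio
    "TV2": "hw:0,1",
    "TV3": "hw:0,2",
}

def build_feed_address(feed_id: int) -> str:
    """Build the VoIP address for the feed or TV selected by the user."""
    return f"SIP:TV{feed_id}@{SIP_HOST}:{SIP_PORT}"

def resolve_audio_source(dialed_feed: str) -> str:
    """Return the audio source matching the feed dialed by the VoIP client."""
    if dialed_feed not in DIALPLAN:
        raise ValueError(f"no audio feed mapped for {dialed_feed!r}")
    return DIALPLAN[dialed_feed]

print(build_feed_address(3))        # SIP:TV3@192.168.173.1:7770
print(resolve_audio_source("TV3"))  # hw:0,2
```

The dialplan dictionary stands in for the mappings described as stored in the feed data store 632.
 -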
FIG. 8 shows another example mobile application user interface 800 on the mobile device 701 that can be displayed in response to a user selecting one of the feeds from FIG. 7A or 7B. In this embodiment, the user has selected the feed corresponding to television 1 with the channel ESPN, as indicated in the first portion 802 of the display. Volume control and stop buttons are also provided. In certain embodiments, the stop button 804 is not equivalent to a pause function because when the stop button 804 is released and playback resumes, the stream may commence at the point that the television is currently playing rather than the point in time when audio playback stopped. A back button 805 allows the user to return to the feed list shown in either FIG. 7A or 7B. - Also shown are
buttons 812 and 814. The chat button 812 provides access to a chat service that allows, in certain embodiments, the user to have a text chat or a voice chat with other users that, for example, may be friends with the user in a social networking sense. Alternatively, the user may select the chat button 812 to chat with anyone listening to the same feed. The local services button 814 can provide access to various services, such as a taxi service to call a cab, ordering services to order food from the menu of a local establishment's restaurant or from other restaurants in the area, flagging or requesting a waiter, making reservations, offering feedback (such as suggestions/complaints/positive feedback), viewing a menu, splitting a tab, paying for a meal or other services, combinations of the same, or the like. In addition, an example advertisement 820 is shown that may be selected by the user. The generation and display of the ad 820 will be described in greater detail below. Other interactive content not shown may also be displayed on the display 800 including, for example, video game content that may or may not be relevant to the feed being listened to by the user, interactive voting content for voting along with the television show being watched by the user, and the like. - Turning to
FIG. 9, another embodiment is shown of a computing environment 900 that includes television audio delivery systems 901, 903. The television audio delivery system 901 includes many of the features of the television audio delivery systems described above as well as additional features. In the depicted embodiment, the television audio delivery system 901 includes a content server 920 having any of the features of the content servers described above, as well as a single example user device 902 connecting to the content server 920. The single user device 902 is shown for illustration purposes only; it should be understood that multiple user devices 902 may connect with the content server 920. - The
computing environment 900 also includes other television audio delivery systems 903 that include content servers 920 and user devices 902. Each audio delivery system 901, 903 can communicate over a network 908. The network 908 may be the Internet, a WAN, LAN, leased line, combinations of the same, or the like. In addition, additional servers are shown, including a remote ad server 950 and a management server 960, which are examples of the remote servers 160 described above and which will be described in further detail below. - The
content server 920 and the user device 902 of the audio delivery system 901 include many of the modules described above including, for example, in the content server 920, the VoIP server 622, the web server 628, the wireless access point 630, and the conference call bridge 624. Other features from the content servers described above may also be included, like the DNS server 634. Likewise, the user device 902 includes the mobile application 610, the wireless module 615, and the audio playback module 611. In addition, the content server 920 includes a cellular radio 932, which can include functionality for communicating with the management server 960 and/or remote ad server 950 and/or other local networks 903 via the network 908. In other embodiments, the content server 920 includes a wired modem or the like that communicates with the network 908 instead of (or in addition to) a cellular radio 932. - In certain embodiments, it can be useful to have a
cellular radio 932 in the content server 920 (or in communication with the content server 920) because it can be useful to have access to the network 908 for a variety of functions. For instance, it could be useful for a central office or organization that operates the management server 960 to be able to update or maintain software features on the content server 920. Accordingly, the management server 960 includes an updater module 962 that can enable maintenance to be performed remotely on the content server 920. Likewise, it can be useful to obtain ad content for users of the user devices 902 via a remote ad server 950 over the network 908. While it is possible to connect the content server 920 with the local Internet network of the establishment or place in which the content server 920 is located, doing so can be technically cumbersome due to the coordination typically required with the local IT department of the establishment that hosts the content server 920. Thus, having a cellular connection or other wireless connection to the management server 960 and/or remote ad server 950 (and in general the network 908) can be beneficial. The cellular connection through the cellular radio 932 may, for example, be a 3G or 4G wireless connection or the like. - The
content server 920 also includes a secondary content server 935 that can include hardware and/or software for providing secondary content to the user device 902. For example, the secondary content server 935 can provide ads, interactive games, interactive voting functionality for voting along with television shows, local services as described briefly above with respect to FIG. 8, and social media functionality such as the ability to chat with friends as described above or to make Facebook™ or Twitter™ postings or the like. The secondary content server may store information about users of the user devices 902 in a local user data store 942 for the purpose of obtaining targeted ads for users as well as for other purposes. - The
secondary content server 935 can communicate with the remote ad server 950 over the network 908 and through the cellular radio in an embodiment to obtain ads for users of the mobile devices. In certain embodiments, these ads can be targeted based on the particular audio feed or channel that a user is listening to and observing on a television (not shown). Detailed embodiments for generating such advertisements are described in subsequent figures. In other embodiments, the secondary content server 935 does not necessarily perform the processing used to generate requests for ads from the remote ad server 950. Instead, the management server 960 performs collection of user data from one or more television audio delivery systems 901, 903 using a data collector 964 and uses a data analyzer 966 to mine the user data for the purpose of generating or requesting ads from the remote ad server 950. - The
management server 960 can store user data in a multi-site user data repository 970, which can advantageously track data for the same user of a user device 902 in multiple networks or audio delivery systems 901, 903. For example, data collected as the user visits different establishments hosting an audio delivery system 901, 903 can be stored by the data collector 964 in the multi-site user data store 970. The listening and viewing habits of that user and other users may be analyzed over multiple sites by the data analyzer 966 to obtain more fine-grained and particular information about those users and to obtain more relevant ads for those users from the remote ad server 950. - In some alternative embodiments, the remote ad server functionality of the
remote ad server 950 is subsumed or contained within the management server 960, which may generate its own ads without the aid of a remote ad server 950. Further, the secondary content server 935 can generate ads together with, in addition to, or in place of the functionality of the remote ad server 950. - Also shown within the
local network 901 is an additional audio source 944. The additional audio source 944 can come from within (or even outside of) an establishment hosting the local network 901 and may include, for example, an audio input by a person (e.g., employee or patron) at the establishment. For example, a microphone may be provided that can plug into or wirelessly communicate with the content server 920, which can enable a person to make an announcement that is transmitted to some or all listeners and users of the mobile application 610. The additional audio source 944 can communicate directly with the conference call bridge 624 which, upon receipt of audio from the additional audio source 944, can broadcast the audio to some or all users of the mobile application 610 on different user devices 602, 902. The additional audio source 944 can also include music, such as from a jukebox or a jukebox application that is implemented on the content server 920 or in another computing system. The additional audio source 944 may also be used for public safety announcements in a particular area. For instance, in an airport, hotel, or hospital, a safety announcement may be broadcast to all listeners. It should also be noted that the management server 960 and/or the remote ad server 950 can be implemented in a Software-as-a-Service platform or cloud-based platform, such as the Amazon AWS™ or Microsoft Azure™ platforms. - In one embodiment, the additional
audio source 944 can communicate with an interactive voice response (IVR) system in the content server 920. For instance, a user can interact with a voice prompt menu in the IVR system to provide audio data to the conference call bridge 624. The IVR system can perform text-to-speech conversion that receives input text from a keyboard, mobile device, or the like, and that converts this text to speech. The IVR system may be implemented by the conference call bridge 624 in an embodiment as a phone number that a user can dial to reach the content server 920. Thus, the additional audio source 944 may be omitted in certain embodiments. In another embodiment, the audio source 944 is a prerecorded message, or the content server 920 can output a user interface that enables a user to select from prerecorded messages to output via the conference call bridge 624. The user can initially record these messages for storage at the content server 920 and subsequent broadcasting to listeners. - In yet another embodiment, the
conference call bridge 624 or another aspect of the content server 920 can provide a module or user interface that enables a user to type or dictate text that can be broadcast to the listeners or users of the mobile devices 902. In an embodiment, the user can select, e.g., via the user interface, which conference call or calls (or all conference calls) in which to broadcast the additional audio. - Turning to
FIG. 10, a portion of the television audio delivery system 900 is shown with the content server 1020 representing the content server 920. A portion of the content server 1020 is shown, including the secondary content server 1035. In addition, the content server 1020 is in communication with signal processing modules 1030, which can include all the functionality of the signal processing modules described above. These signal processing modules 1030 are further in communication with AV receivers 1040, which also can have the same functionality as the AV receivers described above. FIG. 10 illustrates how the secondary content server 1035 may obtain information useful for discerning what type of feed or channel a user is currently listening to and for obtaining relevant ad targeting information for the users listening to that feed or channel. - In addition to outputting audio, whether analog or digital, the
AV receivers 1040 can also output video to the signal processing modules 1030 in one embodiment. For example, the signal processing modules can include analog-to-digital (A/D) converters 1034, one of which might receive audio and another of which might receive video. It should be understood that the same A/D converter 1034 might include multiple ports for receiving multiple audio inputs or audio and/or video inputs. The audio is provided to the content server 1020, and the video may be provided directly to the secondary content server 1035. Video may also be extracted from a digital signal provided to an HDMI audio extractor 1036, which may provide analog audio and video to an A/D converter 1038 that provides the audio to the content server 1020 and the video to the secondary content server 1035. Video may be extracted directly from a digital signal provided from the AV receiver 1040 in one embodiment. - In certain embodiments, the
secondary content server 1035 may extract captions that are included in the video, whether they be live captions or subtitles. The secondary content server 1035 may extract the captions from a separate file that is included in the video stream, or may use signal processing techniques to obtain the captions from the video using digital image processing techniques, for example, to detect the lettering that is in the video. These algorithms or techniques may, for example, process the video to detect text in an expected area of the images of the video. These captions can be analyzed by the secondary content server 1035 to determine a type of content that is being listened to by a listener or being watched by a viewer for the purpose of finding targeted ads to present to a user. Likewise, audio may be provided directly to the secondary content server 1035 for performing a speech-to-text conversion and subsequent analysis for providing targeted ads to users, as will be described in greater detail below. - Turning to
FIG. 11A, a more detailed embodiment of the secondary content server 1035 is shown, in particular, as the secondary content server 1135. The secondary content server 1135 includes a caption extractor 1136, a caption analyzer 1138, and a local ad server 1139. The caption extractor 1136 can receive video including captions as described above with respect to FIG. 10. The caption extractor 1136 can extract the captions from the video or from a separate caption file or subtitle file included with the video. The output of the caption extractor 1136 can include text provided to the caption analyzer 1138. - The
caption analyzer 1138 can mine the text to identify keywords in the text. For instance, the caption analyzer 1138 might initially remove stop words from the text, such as "a," "and," "the," and other minor words that may have little or no content associated with them. The caption analyzer 1138 can then count the keywords and sort the keywords based on their frequency of occurrence to identify keywords that may correspond to topics of interest in the text. In this manner, the caption analyzer 1138 may be able to identify topics or categories based on these keywords that may be relevant for providing ads to a user. For instance, if the user is listening to and watching a basketball game, basketball-related terms may arise frequently in the text extracted by the caption extractor 1136. The caption analyzer 1138 can identify these terms and optionally identify them as being associated with basketball or the topic of basketball. - The
caption analyzer 1138 can pass mined data to the local ad server 1139. This mined data may include any subset of keywords or topics identified by the caption analyzer 1138. For instance, the caption analyzer 1138 may select a most highly-ranked subset of the keywords based on their frequency of occurrence, all of the keywords, one or two of the keywords, or some other small number of keywords. The local ad server 1139 can request ads from a remote ad server 1150 over a network 1108. The remote ad server 1150 can have all the functionality of the remote ad server 950 described above. Likewise, the network 1108 can have any of the functionalities of the networks described herein. The remote ad server 1150 can return an ad to the local ad server 1139, which may provide the ad to the mobile application 610, 910, for example, to the content processor 616 of the mobile application 610 (see FIG. 6). This content processor 616 can then output the ad to the user interface 614 of the mobile application 610 for presentation to a user as shown, for example, in FIG. 8. - Over time, the keywords and/or topics obtained by the
caption analyzer 1138 may change as the program watched and/or listened to by the user changes, and the ads may be updated accordingly to obtain different relevant ads. For instance, at one point in time, the local ad server 1139 may send basketball-related keywords to the remote ad server 1150, which may return ads relevant to basketball or which may be relevant to a person that is interested in basketball. Subsequently, a different program may come on the television being watched by the user, the video captions obtained by the caption extractor 1136 may refer to this different program, and the captions may be mined for text and keywords that the local ad server 1139 can then send to the remote ad server 1150. - As described above, the functionality of the
local ad server 1139 may also be replicated, enhanced, or replaced by similar functionality on the management server 960. For instance, the management server 960 or the local ad server 1139 can track data about the user over time, including over multiple visits to the same location and/or to multiple locations that include television audio delivery systems as described herein. The management server 960 (or local ad server 1139) may use keywords mined from multiple shows watched by the user in order to request ads for that particular user that are relevant, even for shows that have transpired previously and which the user is not currently watching. Thus, for instance, if a user in the past was known to frequently tune in to feeds that include text related to sports, and the user is currently watching a news program as indicated by the caption text extracted from the current video being watched, the management server 960 can request ads from the remote ad server 950 that are related to sports instead of or in addition to ads related to the current news program. - Turning to
FIG. 11B, another embodiment of a secondary content server 1235 is shown. The secondary content server includes a speech-to-text converter 1236 that receives audio from an audio feed and converts it to text using speech-to-text software, such as may be available from Nuance™ or the like. The converter 1236 outputs the text from the speech to the text analyzer 1138, which can perform the same functionality described above with respect to FIG. 11A, for example, by providing mined data to the local ad server 1139, which can request ads from the remote ad server 1150. -
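The mining shared by the caption and speech paths (stop-word removal, counting, and frequency sorting, as described for the caption analyzer 1138) can be sketched as follows. This is an illustrative sketch: the stop-word list is a tiny subset, and the sample caption text is invented for demonstration:

```python
from collections import Counter
import re

# Tiny illustrative stop-word list; a real analyzer would use a fuller one.
STOP_WORDS = {"a", "an", "and", "the", "of", "to", "in", "is", "it"}

def mine_keywords(caption_text: str, top_n: int = 3):
    """Return the most frequent non-stop-word terms in the caption text."""
    words = re.findall(r"[a-z']+", caption_text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(top_n)]

captions = ("The rebound goes to the guard. Basketball at its best: "
            "the guard shoots a three, and basketball fans erupt.")
print(mine_keywords(captions, 2))
```

Here basketball-related terms dominate the counts, matching the example above in which frequent terms identify the topic of the program being watched.
 -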
FIG. 12 depicts an embodiment of a channel-based ad serving process 1200 that can be implemented by any of the secondary content servers described above. The channel-based ad serving process 1200 can advantageously serve ads to users of mobile devices that implement the mobile application described above based on information about the feed the user is listening to and/or information about the user, thereby providing relevant, targeted, personalized ads to users. Further, the channel-based ad serving process 1200 can also be used to recommend or suggest games or other interactive content to users, not just ads. - At
block 1202, the secondary content server identifies a characteristic related to a TV feed selected by a user. The characteristic may be a mined keyword, a topic, or a category related to the feed. In addition, in one embodiment the characteristic may be the type of show or channel being watched or listened to by the user. The secondary content server may be able to obtain the channel info, for instance, based on TV guide scheduling accessible over the Internet or a network to determine what content is being displayed on a particular channel at a given time, for instance, whether a baseball game is being displayed or whether a movie is being displayed, what the genre of the movie is, what the genre of a television show is, the name of the television show, etc. An establishment may also indicate or be able to input to the content server what type of channels are being displayed on given televisions; the content server may therefore know what type of channel is being displayed and can use this characteristic to provide ads to users. For example, users that watch ESPN or a sports channel may be targeted with different ads than users that watch a news channel or a cooking channel. - At
block 1204, the secondary content server optionally identifies a user characteristic. The user characteristic may be information about the user, such as user demographics. When initially installing the mobile application 610, the mobile application 610 may request information from the user about demographics such as age, sex, location, occupation, interests, and so forth that may be used as a characteristic to identify targeted ads together with or separate from the characteristic of the television feed being watched or listened to by the user. The characteristic identified for the user may also relate to feeds that the user has listened to in the past and any information about those feeds, such as the type of channel, keywords, topics, types of shows, and so forth, as ads may be generated based on a user's past behavior and not just the current listening behavior. The secondary content server may be able to obtain this information from a local data store, such as the local user data store 942, based on previous interactions with the content server in a single network by a user, or from a multi-site data store such as the multi-site user data store 970, which the secondary content server may access by accessing the management server 960 to obtain data about the user from multiple sites. - At
block 1206, the secondary content server supplies data related to the feed characteristic and/or the user characteristic to a remote ad server along with a request for one or more ads. For instance, this data may be any subset of the data that the secondary content server identifies in blocks 1202 and 1204. - At
block 1208, one or more ads are received at the secondary content server, and the secondary content server transmits the one or more ads to the mobile application for presentation to the user at block 1210. -
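The identification and request steps of blocks 1202 through 1206 can be sketched as follows. This is a sketch under stated assumptions: the schedule table stands in for TV guide data obtained over a network, and the request field names are hypothetical rather than any remote ad server's actual format; a real system would POST the payload over the network:

```python
import json

# Illustrative stand-in for TV guide scheduling data, keyed by (channel, hour).
SCHEDULE = {
    ("ESPN", 20): {"show": "NBA Basketball", "genre": "sports"},
    ("CNN", 20): {"show": "Evening News", "genre": "news"},
}

def feed_characteristic(channel: str, hour: int) -> str:
    """Block 1202: identify a characteristic (here, the genre) of the feed."""
    entry = SCHEDULE.get((channel, hour))
    return entry["genre"] if entry else "unknown"

def build_ad_request(feed_char, user_chars, max_ads=1):
    """Block 1206: assemble the data supplied to the remote ad server."""
    return json.dumps({
        "feed_characteristic": feed_char,
        "user_characteristics": user_chars,  # e.g., demographics (block 1204)
        "max_ads": max_ads,
    })

genre = feed_characteristic("ESPN", 20)
request_body = build_ad_request(genre, {"age_range": "25-34"})
print(genre)  # sports
```

Users watching the sports feed would thus generate a different request payload, and receive different ads, than users watching the news feed.
 -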
FIG. 13 illustrates an embodiment of a caption-based ad serving process 1300. The process 1300 may be implemented by any of the secondary content servers described above. At block 1302, the secondary content server extracts captions from a TV feed or video, mines data from the caption text at block 1304, optionally identifies a user characteristic (such as any of those characteristics described above) at block 1306, and sends a request to the ad server for an ad related to the mined data and/or user characteristic at block 1308. The secondary content server receives one or more ads at block 1310 and transmits the ads to a mobile application for presentation to a user at block 1312. - Similarly, a speech-based
ad serving process 1400 is shown in FIG. 14, where the secondary content server can convert a TV feed's speech audio to text at block 1402, mine data from the speech text at block 1404, optionally identify a user characteristic at block 1406, and send a request to the ad server for an ad related to the mined data and/or user characteristic at block 1408. The secondary content server receives one or more ads at block 1410 and transmits the ads to a mobile application for presentation to a user at block 1412. - Many other variations than those described herein will be apparent from this disclosure. For example, depending on the embodiment, certain acts, events, or functions of any of the algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the algorithms). Moreover, in certain embodiments, acts or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially. In addition, different tasks or processes can be performed by different machines and/or computing systems that can function together.
- The various illustrative logical blocks, modules, and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure.
- The various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor can be a microprocessor, but in the alternative, the processor can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor may also include primarily analog components. For example, any of the signal processing algorithms described herein may be implemented in analog circuitry. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a personal organizer, a device controller, and a computational engine within an appliance, to name a few.
- The steps of a method, process, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of non-transitory computer-readable storage medium, media, or physical computer storage known in the art. An example storage medium can be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor. The processor and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the processor and the storage medium can reside as discrete components in a user terminal.
- Conditional language used herein, such as, among others, “can,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or states. Thus, such conditional language is not generally intended to imply that features, elements and/or states are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or states are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Further, the term “each,” as used herein, in addition to having its ordinary meaning, can mean any subset of a set of elements to which the term “each” is applied.
- While the above detailed description has shown, described, and pointed out novel features as applied to various embodiments, it will be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the spirit of the disclosure. As will be recognized, certain embodiments of the inventions described herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others.
Claims (24)
1. A method of accessing an audio feed associated with a corresponding video, the method comprising:
by a mobile device comprising a processor:
establishing a wireless connection to a content server;
obtaining a list of audio feeds available for streaming from the content server;
outputting a graphical user interface for presentation to a user, the graphical user interface comprising user interface controls configured to represent the list of audio feeds;
receiving a user selection of one of the audio feeds through the graphical user interface;
in response to receiving the user selection of the selected audio feed, establishing a Voice over IP (VoIP) conference call with the content server using a conference call identifier configured to identify the selected audio feed; and
receiving streaming access to the selected audio feed through the VoIP conference call.
2. The method of claim 1 , wherein establishing the VoIP conference call with the content server comprises connecting to the VoIP conference call as a muted participant.
3. The method of claim 1 , further comprising receiving a web page comprising instructions for downloading a mobile application configured to implement said obtaining the list of audio feeds, said outputting the graphical user interface, said establishing the VoIP conference call, and said receiving the streaming access to the selected audio feed.
4. The method of claim 1 , wherein said establishing the VoIP call comprises initiating a session initiation protocol (SIP) request to the content server.
5. The method of claim 1 , wherein the VoIP call implements the following protocols: a session initiation protocol (SIP), a real-time transport protocol (RTP), and a user datagram protocol (UDP).
6. The method of claim 1 , wherein the VoIP call implements the following protocols: a real-time transport protocol (RTP) and a user datagram protocol (UDP).
7. The method of claim 1 , wherein the VoIP call implements an H.323 protocol.
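Claims 1-7 describe joining a VoIP conference call whose identifier names the selected audio feed, connecting as a muted participant via SIP with RTP-over-UDP media. Below is a minimal sketch of what such a SIP INVITE might look like, assuming a hypothetical `sip:feed-<id>@<server>` naming scheme; the URI format, port, and payload type are illustrative inventions, not details taken from the claims:

```python
def build_sip_invite(feed_id: str, server: str, client_ip: str) -> str:
    """Build a minimal SIP INVITE whose conference URI encodes the
    selected audio feed (hypothetical naming scheme)."""
    conference_uri = f"sip:feed-{feed_id}@{server}"
    # "a=recvonly" marks the caller as a listen-only participant,
    # one standard way to realize claim 2's muted-participant join.
    sdp = "\r\n".join([
        "v=0",
        f"o=client 0 0 IN IP4 {client_ip}",
        "s=tv-audio",
        f"c=IN IP4 {client_ip}",
        "t=0 0",
        "m=audio 49170 RTP/AVP 0",  # RTP over UDP, PCMU payload (illustrative)
        "a=recvonly",
    ]) + "\r\n"
    headers = "\r\n".join([
        f"INVITE {conference_uri} SIP/2.0",
        f"To: <{conference_uri}>",
        f"From: <sip:viewer@{client_ip}>;tag=1",
        "Call-ID: example-call-id",
        "CSeq: 1 INVITE",
        "Content-Type: application/sdp",
        f"Content-Length: {len(sdp)}",
    ])
    return headers + "\r\n\r\n" + sdp
```

Encoding the feed in the request URI is one plausible reading of claim 1's "conference call identifier configured to identify the selected audio feed"; the claims do not mandate this particular encoding.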
8. A system for accessing an audio feed associated with a corresponding visual content, the system comprising:
a content processor configured to obtain a list of audio feeds available for streaming from a server;
a user interface module configured to output a graphical user interface comprising user interface controls configured to represent the list of audio feeds and to receive a user selection of one of the audio feeds; and
a Voice over IP (VoIP) client comprising computer hardware, the VoIP client configured to initiate a VoIP session with the server in response to receipt of the user selection of one of the audio feeds and to receive streaming access to the selected audio feed through the VoIP session.
9. The system of claim 8 , wherein the VoIP session comprises a VoIP session identifier.
10. The system of claim 9 , wherein the VoIP session identifier is formatted according to a session initiation protocol (SIP).
11. The system of claim 9 , wherein the VoIP session identifier comprises a reference to the audio feed.
12. The system of claim 9 , wherein the VoIP session identifier comprises a reference to a television associated with the audio feed.
13. The system of claim 8 , wherein the VoIP client is configured to initiate the VoIP session with the server as a muted participant.
14. The system of claim 8 , further comprising a wireless module configured to establish a wireless connection to the server.
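Claims 9-12 describe a VoIP session identifier that is SIP-formatted and references the audio feed or an associated television. A sketch of parsing such an identifier, assuming a hypothetical `sip:tv-<tv>-feed-<feed>@<server>` format (this scheme is invented for illustration; the claims do not specify one):

```python
import re


def parse_session_identifier(uri: str) -> dict:
    """Parse a hypothetical SIP-style session identifier of the form
    sip:tv-<tv_id>-feed-<feed_id>@<server>, recovering the television
    and audio-feed references described in claims 11-12."""
    m = re.fullmatch(r"sip:tv-(\w+)-feed-(\w+)@([\w.\-]+)", uri)
    if m is None:
        raise ValueError(f"unrecognized session identifier: {uri}")
    tv_id, feed_id, server = m.groups()
    return {"tv": tv_id, "feed": feed_id, "server": server}
```

With such a scheme, a single identifier both routes the SIP dialog to the server and tells the server which television's audio to bridge into the session.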
15. Non-transitory physical computer storage comprising instructions stored thereon that, when executed by one or more processors, are configured to implement components for accessing an audio feed associated with a corresponding visual content, the components comprising:
a content processor configured to obtain information about an audio feed available for streaming from a server in wireless communication with the content processor;
a network telephony client configured to initiate a network telephony session with the server to receive streaming access to the audio feed; and
a user interface configured to provide a user interface control that can adjust a characteristic of the audio feed responsive to an input of a user.
16. The non-transitory physical computer storage of claim 15 , wherein the user interface control comprises a volume control.
17. The non-transitory physical computer storage of claim 15 , wherein the user interface control comprises a stop playback control.
18. The non-transitory physical computer storage of claim 15 , wherein the user interface further comprises an advertisement.
19. The non-transitory physical computer storage of claim 15 , wherein the user interface identifies a television channel associated with the audio feed.
20. The non-transitory physical computer storage of claim 15 , wherein the network telephony client is further configured to initiate the network telephony session using a VoIP protocol.
21. The non-transitory physical computer storage of claim 20 , wherein the VoIP protocol comprises one or more of the following: a session initiation protocol (SIP), an H.323 protocol, a real-time transport protocol (RTP), and a user datagram protocol (UDP).
22. The non-transitory physical computer storage of claim 15 , wherein the audio feed comprises television audio.
23. The non-transitory physical computer storage of claim 15 , wherein the audio feed comprises live audio.
24. The non-transitory physical computer storage of claim 15 , in combination with a computer system comprising computer hardware.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/839,002 US20140098177A1 (en) | 2012-10-09 | 2013-03-15 | Mobile application for accessing television audio |
PCT/US2013/063498 WO2014058739A1 (en) | 2012-10-09 | 2013-10-04 | System for streaming audio to a mobile device using voice over internet protocol |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261711670P | 2012-10-09 | 2012-10-09 | |
US13/839,002 US20140098177A1 (en) | 2012-10-09 | 2013-03-15 | Mobile application for accessing television audio |
Publications (1)
Publication Number | Publication Date |
---|---|
US20140098177A1 true US20140098177A1 (en) | 2014-04-10 |
Family
ID=49518084
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/839,751 Abandoned US20140098715A1 (en) | 2012-10-09 | 2013-03-15 | System for streaming audio to a mobile device using voice over internet protocol |
US13/839,002 Abandoned US20140098177A1 (en) | 2012-10-09 | 2013-03-15 | Mobile application for accessing television audio |
US13/837,593 Active US8774172B2 (en) | 2012-10-09 | 2013-03-15 | System for providing secondary content relating to a VoIP audio session |
US13/853,949 Active US8582565B1 (en) | 2012-10-09 | 2013-03-29 | System for streaming audio to a mobile device using voice over internet protocol |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/839,751 Abandoned US20140098715A1 (en) | 2012-10-09 | 2013-03-15 | System for streaming audio to a mobile device using voice over internet protocol |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/837,593 Active US8774172B2 (en) | 2012-10-09 | 2013-03-15 | System for providing secondary content relating to a VoIP audio session |
US13/853,949 Active US8582565B1 (en) | 2012-10-09 | 2013-03-29 | System for streaming audio to a mobile device using voice over internet protocol |
Country Status (2)
Country | Link |
---|---|
US (4) | US20140098715A1 (en) |
WO (1) | WO2014058739A1 (en) |
Families Citing this family (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9367546B2 (en) * | 2007-01-05 | 2016-06-14 | Thomson Licensing | Method and apparatus for customizing syndicated data feeds |
US10034135B1 (en) | 2011-06-08 | 2018-07-24 | Dstillery Inc. | Privacy-sensitive methods, systems, and media for geo-social targeting |
US8495236B1 (en) | 2012-02-29 | 2013-07-23 | ExXothermic, Inc. | Interaction of user devices and servers in an environment |
US20150296247A1 (en) * | 2012-02-29 | 2015-10-15 | ExXothermic, Inc. | Interaction of user devices and video devices |
US9014717B1 (en) | 2012-04-16 | 2015-04-21 | Foster J. Provost | Methods, systems, and media for determining location information from real-time bid requests |
US9552590B2 (en) | 2012-10-01 | 2017-01-24 | Dstillery, Inc. | Systems, methods, and media for mobile advertising conversion attribution |
US20140172140A1 (en) * | 2012-12-17 | 2014-06-19 | Lookout Inc. | Method and apparatus for cross device audio sharing |
GB201223218D0 (en) * | 2012-12-21 | 2013-02-06 | Telefonica Uk Ltd | Audio broadcasting method |
TWI477108B (en) * | 2013-02-22 | 2015-03-11 | Quanta Comp Inc | Method for building video conference |
US20140324422A1 (en) * | 2013-04-18 | 2014-10-30 | WTF Technology Partners, Inc. | Synchronous audio distribution to portable computing devices |
US9529917B2 (en) * | 2013-05-21 | 2016-12-27 | Salesforce.com, inc. | System and method for generating information feed based on contextual data |
US9762954B2 (en) * | 2013-06-19 | 2017-09-12 | Verizon Patent And Licensing Inc. | System and method for streaming audio of a visual feed |
US9055134B2 (en) | 2013-08-29 | 2015-06-09 | ExXothermic, Inc. | Asynchronous audio and video in an environment |
KR20150069355A (en) * | 2013-12-13 | 2015-06-23 | 엘지전자 주식회사 | Display device and method for controlling the same |
US20150253974A1 (en) | 2014-03-07 | 2015-09-10 | Sony Corporation | Control of large screen display using wireless portable computer interfacing with display controller |
US9971319B2 (en) | 2014-04-22 | 2018-05-15 | At&T Intellectual Property I, Lp | Providing audio and alternate audio simultaneously during a shared multimedia presentation |
EP3216232A2 (en) * | 2014-11-03 | 2017-09-13 | Sonova AG | Hearing assistance method utilizing a broadcast audio stream |
US10374815B2 (en) | 2014-12-17 | 2019-08-06 | Hewlett-Packard Development Company, L.P. | Host a conference call |
KR102264992B1 (en) | 2014-12-31 | 2021-06-15 | 삼성전자 주식회사 | Method and Device for allocating a server in wireless communication system |
US9736204B2 (en) * | 2015-06-24 | 2017-08-15 | Pandora Media, Inc. | Media content delivery over telephone networks |
US10235129B1 (en) | 2015-06-29 | 2019-03-19 | Amazon Technologies, Inc. | Joining users to communications via voice commands |
US10021438B2 (en) | 2015-12-09 | 2018-07-10 | Comcast Cable Communications, Llc | Synchronizing playback of segmented video content across multiple video playback devices |
US10454982B1 (en) * | 2016-03-18 | 2019-10-22 | Audio Fusion Systems, Inc. | Monitor mixing system that distributes real-time multichannel audio over a wireless digital network |
US10582271B2 (en) * | 2017-07-18 | 2020-03-03 | VZP Digital | On-demand captioning and translation |
US10489496B1 (en) * | 2018-09-04 | 2019-11-26 | Rovi Guides, Inc. | Systems and methods for advertising within a subtitle of a media asset |
US20220157303A1 (en) | 2019-03-26 | 2022-05-19 | Sony Group Corporation | Information processing device and information processing method |
US11210058B2 (en) | 2019-09-30 | 2021-12-28 | Tv Ears, Inc. | Systems and methods for providing independently variable audio outputs |
US11601691B2 (en) | 2020-05-04 | 2023-03-07 | Kilburn Live, Llc | Method and apparatus for providing audio and video within an acceptable delay tolerance |
CN111951366B (en) * | 2020-07-29 | 2021-06-15 | 北京蔚领时代科技有限公司 | Cloud native 3D scene game method and system |
US11889028B2 (en) | 2021-04-26 | 2024-01-30 | Zoom Video Communications, Inc. | System and method for one-touch split-mode conference access |
US11581007B2 (en) * | 2021-04-27 | 2023-02-14 | Kyndryl, Inc. | Preventing audio delay-induced miscommunication in audio/video conferences |
US11916979B2 (en) | 2021-10-25 | 2024-02-27 | Zoom Video Communications, Inc. | Shared control of a remote client |
Family Cites Families (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6389463B2 (en) | 1999-06-16 | 2002-05-14 | Im Networks, Inc. | Internet radio receiver having a rotary knob for selecting audio content provider designations and negotiating internet access to URLS associated with the designations |
US6876734B1 (en) | 2000-02-29 | 2005-04-05 | Emeeting.Net, Inc. | Internet-enabled conferencing system and method accommodating PSTN and IP traffic |
US7149469B2 (en) | 2000-12-21 | 2006-12-12 | Larry Russell | Method and system for receiving audio broadcasts via a phone |
US20030081744A1 (en) * | 2001-08-28 | 2003-05-01 | Gedaliah Gurfein | Interactive voice communications network entertainment |
US7415005B1 (en) | 2001-10-29 | 2008-08-19 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Ad hoc selection of voice over internet streams |
US7634531B2 (en) * | 2002-01-23 | 2009-12-15 | Ali Abdolsalehi | Interactive internet browser based media broadcast |
US20030208755A1 (en) | 2002-05-01 | 2003-11-06 | Koninklijke Philips Electronics N.V. | Conversational content recommender |
US7664056B2 (en) * | 2003-03-10 | 2010-02-16 | Meetrix Corporation | Media based collaboration using mixed-mode PSTN and internet networks |
US20050102703A1 (en) * | 2003-11-12 | 2005-05-12 | Mr. Masoud Qurashi | On demand broadcast information distribution system and method |
EP1650971A1 (en) | 2004-10-19 | 2006-04-26 | APS Astra Platform Services GmbH | Methods and devices for transmitting data to a mobile data processing unit |
US20060212897A1 (en) * | 2005-03-18 | 2006-09-21 | Microsoft Corporation | System and method for utilizing the content of audio/video files to select advertising content for display |
CN101253504A (en) | 2005-08-31 | 2008-08-27 | 莫德斯塔股份有限公司 | Ubiquitous music and multimedia service system and the method thereof |
US8929870B2 (en) * | 2006-02-27 | 2015-01-06 | Qualcomm Incorporated | Methods, apparatus, and system for venue-cast |
US8412773B1 (en) * | 2006-06-28 | 2013-04-02 | Insors Integrated Communications | Methods, systems and program products for initiating a process on data network |
US20080134276A1 (en) | 2006-06-30 | 2008-06-05 | Martin Orrell | Receiver and aspects thereof |
US7996040B1 (en) | 2007-04-16 | 2011-08-09 | Adam Timm | System, method and computer program product for providing a cellular telephone system with an integrated television signal communication interface |
US20080276266A1 (en) * | 2007-04-18 | 2008-11-06 | Google Inc. | Characterizing content for identification of advertising |
TW200844747A (en) | 2007-05-07 | 2008-11-16 | Xgi Technology Inc | Expansion device with AV input/output function of television |
US8433611B2 (en) * | 2007-06-27 | 2013-04-30 | Google Inc. | Selection of advertisements for placement with content |
US8275764B2 (en) | 2007-08-24 | 2012-09-25 | Google Inc. | Recommending media programs based on media program popularity |
EP2223228A4 (en) * | 2007-10-23 | 2011-06-22 | Viaclix Inc | Multimedia administration, advertising, content&services system |
US20090160735A1 (en) * | 2007-12-19 | 2009-06-25 | Kevin James Mack | System and method for distributing content to a display device |
US9584564B2 (en) * | 2007-12-21 | 2017-02-28 | Brighttalk Ltd. | Systems and methods for integrating live audio communication in a live web event |
US8867571B2 (en) | 2008-03-31 | 2014-10-21 | Echostar Technologies L.L.C. | Systems, methods and apparatus for transmitting data over a voice channel of a wireless telephone network |
US8229748B2 (en) * | 2008-04-14 | 2012-07-24 | At&T Intellectual Property I, L.P. | Methods and apparatus to present a video program to a visually impaired person |
US20090318077A1 (en) * | 2008-06-18 | 2009-12-24 | Microsoft Corporation | Television Audio via Phone |
CA2754173C (en) * | 2009-03-03 | 2016-12-06 | Centre De Recherche Informatique De Montreal (Crim) | Adaptive videodescription player |
US8386255B2 (en) * | 2009-03-17 | 2013-02-26 | Avaya Inc. | Providing descriptions of visually presented information to video teleconference participants who are not video-enabled |
US8767687B2 (en) * | 2009-05-01 | 2014-07-01 | Broadcom Corporation | Method and system for endpoint based architecture for VoIP access points |
US20120329420A1 (en) * | 2009-11-12 | 2012-12-27 | Soteria Systems, Llc | Personal safety application for mobile device and method |
US8763067B2 (en) * | 2009-12-18 | 2014-06-24 | Samir ABED | Systems and methods for automated extraction of closed captions in real time or near real-time and tagging of streaming data for advertisements |
US8505054B1 (en) | 2009-12-18 | 2013-08-06 | Joseph F. Kirley | System, device, and method for distributing audio signals for an audio/video presentation |
US8457118B2 (en) * | 2010-05-17 | 2013-06-04 | Google Inc. | Decentralized system and method for voice and video sessions |
US8719910B2 (en) | 2010-09-29 | 2014-05-06 | Verizon Patent And Licensing Inc. | Video broadcasting to mobile communication devices |
US8730294B2 (en) | 2010-10-05 | 2014-05-20 | At&T Intellectual Property I, Lp | Internet protocol television audio and video calling |
US20120117490A1 (en) * | 2010-11-10 | 2012-05-10 | Harwood William T | Methods and systems for providing access, from within a virtual world, to an external resource |
US9269072B2 (en) * | 2010-12-23 | 2016-02-23 | Citrix Systems, Inc. | Systems, methods, and devices for facilitating navigation of previously presented screen data in an ongoing online meeting |
US9749673B2 (en) | 2011-06-03 | 2017-08-29 | Amg Ip, Llc | Systems and methods for providing multiple audio streams in a venue |
US20120321112A1 (en) * | 2011-06-16 | 2012-12-20 | Apple Inc. | Selecting a digital stream based on an audio sample |
US20130107029A1 (en) * | 2011-10-26 | 2013-05-02 | Mysnapcam, Llc | Systems, methods, and apparatus for monitoring infants |
US20130142332A1 (en) * | 2011-12-06 | 2013-06-06 | Andrés Ramos | Voice and screen capture archive and review process using phones for quality assurance purposes |
US20130254812A1 (en) * | 2012-03-23 | 2013-09-26 | Sony Network Entertainment International Llc | Iptv radio device using low-bandwidth connection |
US9357215B2 (en) | 2013-02-12 | 2016-05-31 | Michael Boden | Audio output distribution |
- 2013
- 2013-03-15 US US13/839,751 patent/US20140098715A1/en not_active Abandoned
- 2013-03-15 US US13/839,002 patent/US20140098177A1/en not_active Abandoned
- 2013-03-15 US US13/837,593 patent/US8774172B2/en active Active
- 2013-03-29 US US13/853,949 patent/US8582565B1/en active Active
- 2013-10-04 WO PCT/US2013/063498 patent/WO2014058739A1/en active Application Filing
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10080061B1 (en) | 2009-12-18 | 2018-09-18 | Joseph F. Kirley | Distributing audio signals for an audio/video presentation |
US9357271B2 (en) | 2011-05-25 | 2016-05-31 | Google Inc. | Systems and method for using closed captions to initiate display of related content on a second display device |
US10567834B2 (en) | 2011-05-25 | 2020-02-18 | Google Llc | Using an audio stream to identify metadata associated with a currently playing television program |
US20140320742A1 (en) * | 2011-05-25 | 2014-10-30 | Google Inc. | Using an Audio Stream to Identify Metadata Associated with a Currently Playing Television Program |
US9661381B2 (en) | 2011-05-25 | 2017-05-23 | Google Inc. | Using an audio stream to identify metadata associated with a currently playing television program |
US9942617B2 (en) | 2011-05-25 | 2018-04-10 | Google Llc | Systems and method for using closed captions to initiate display of related content on a second display device |
US9043444B2 (en) * | 2011-05-25 | 2015-05-26 | Google Inc. | Using an audio stream to identify metadata associated with a currently playing television program |
US10154305B2 (en) | 2011-05-25 | 2018-12-11 | Google Llc | Using an audio stream to identify metadata associated with a currently playing television program |
US10631063B2 (en) | 2011-05-25 | 2020-04-21 | Google Llc | Systems and method for using closed captions to initiate display of related content on a second display device |
US20150325210A1 (en) * | 2014-04-10 | 2015-11-12 | Screenovate Technologies Ltd. | Method for real-time multimedia interface management |
US9363562B1 (en) | 2014-12-01 | 2016-06-07 | Stingray Digital Group Inc. | Method and system for authorizing a user device |
CN106973253A (en) * | 2016-01-13 | 2017-07-21 | 华为技术有限公司 | A kind of method and device for adjusting media flow transmission |
CN108111564A (en) * | 2016-11-25 | 2018-06-01 | 深圳联友科技有限公司 | A kind of realization method and system of iOS networking telephones backstage ring |
CN110392273A (en) * | 2019-07-16 | 2019-10-29 | 北京达佳互联信息技术有限公司 | Method, apparatus, electronic equipment and the storage medium of audio-video processing |
US11451855B1 (en) | 2020-09-10 | 2022-09-20 | Joseph F. Kirley | Voice interaction with digital signage using mobile device |
Also Published As
Publication number | Publication date |
---|---|
US8582565B1 (en) | 2013-11-12 |
WO2014058739A1 (en) | 2014-04-17 |
US20140098715A1 (en) | 2014-04-10 |
US20140098714A1 (en) | 2014-04-10 |
US8774172B2 (en) | 2014-07-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8582565B1 (en) | System for streaming audio to a mobile device using voice over internet protocol | |
US10754313B2 (en) | Providing audio and alternate audio simultaneously during a shared multimedia presentation | |
US11285386B2 (en) | Cloud gaming device handover | |
US20150296247A1 (en) | Interaction of user devices and video devices | |
US9590837B2 (en) | Interaction of user devices and servers in an environment | |
US10080061B1 (en) | Distributing audio signals for an audio/video presentation | |
US9131256B2 (en) | Method and apparatus for synchronizing content playback | |
US9979690B2 (en) | Method and apparatus for social network communication over a media network | |
US11290687B1 (en) | Systems and methods of multiple user video live streaming session control | |
US20150067726A1 (en) | Interaction of user devices and servers in an environment | |
CN110910860B (en) | Online KTV implementation method and device, electronic equipment and storage medium | |
US20140344854A1 (en) | Method and System for Displaying Speech to Text Converted Audio with Streaming Video Content Data | |
AU2014293711A1 (en) | System and method for networked communication of information content by way of a display screen and a remote controller | |
US9516358B2 (en) | Method and apparatus for providing media content | |
JP5811426B1 (en) | Audio data transmission / reception system | |
US20160164936A1 (en) | Personal audio delivery system | |
US20210344989A1 (en) | Crowdsourced Video Description via Secondary Alternative Audio Program | |
GB2510979A (en) | Audio broadcasting method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: TV EARS, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MORSY, AMRE;KERN, DAVID;DENNIS, GEORGE;SIGNING DATES FROM 20130411 TO 20130429;REEL/FRAME:030366/0782 |
AS | Assignment |
Owner name: HEARTV LLC, ARIZONA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TV EARS, INC.;REEL/FRAME:032308/0413 Effective date: 20140131 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |