US20180152739A1 - Device-Based Audio-Format Selection - Google Patents
- Publication number
- US20180152739A1 (U.S. application Ser. No. 15/697,571)
- Authority
- US
- United States
- Prior art keywords
- user
- digitally encoded
- audio
- video content
- user system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/233—Processing of audio elementary streams
- H04N21/2335—Processing of audio elementary streams involving reformatting operations of audio signals, e.g. by converting from one coding standard to another
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
-
- G06F17/30—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04H—BROADCAST COMMUNICATION
- H04H60/00—Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
- H04H60/56—Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54
- H04H60/58—Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54 of audio
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/258—Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
- H04N21/25808—Management of client data
- H04N21/25833—Management of client data involving client hardware characteristics, e.g. manufacturer, processing or storage capabilities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/42203—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
- H04N21/4394—Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/4508—Management of client data or end-user data
- H04N21/4516—Management of client data or end-user data involving client characteristics, e.g. Set-Top-Box type, software version or amount of memory available
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/65—Transmission of management data between client and server
- H04N21/658—Transmission by the client directed to the server
- H04N21/6582—Data stored in the client, e.g. viewing habits, hardware capabilities, credit card number
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S3/00—Systems employing more than two channels, e.g. quadraphonic
- H04S3/002—Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S3/00—Systems employing more than two channels, e.g. quadraphonic
- H04S3/008—Systems employing more than two channels, e.g. quadraphonic in which the audio signals are in digital form, i.e. employing more than two discrete digital channels
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R5/00—Stereophonic arrangements
- H04R5/02—Spatial or constructional arrangements of loudspeakers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/15—Aspects of sound capture and related signal processing for recording or reproduction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used in stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/03—Application of parametric coding in stereophonic audio systems
Definitions
- The user system 302 may include one or more speakers 316, 318, 320, 322, and 324, which may be associated with one or more of the user devices 308, 310, and 312 and may produce aspects or components of audio content received, processed, and/or stored by one or more of the user devices 308, 310, and 312.
- A user of the user system 302 may have selected media content available from the backend system(s) 304, which may have initiated a configuration routine configured to cause the backend system(s) 304 to generate the data comprising the configuration information and communicate the data comprising the configuration information to the user device 308.
- The configuration information may be configured to cause the user system 302 to produce one or more sounds encoded within digital audio content and to record one or more samples of the sound(s) as produced by the user system 302.
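The sounds encoded within the digital audio content might, for instance, be probe tones synthesized at several candidate sampling rates. A minimal, standard-library-only sketch of such probe generation follows; the rates, tone frequency, and duration are illustrative assumptions, not values taken from the disclosure:

```python
import io
import math
import struct
import wave

def make_test_tone(sample_rate, freq=1000.0, duration=0.5, amplitude=0.5):
    """Synthesize a mono 16-bit PCM sine tone and return it as WAV bytes.

    A sketch of the kind of probe signal a configuration routine might
    encode; all parameters here are hypothetical.
    """
    n_samples = int(sample_rate * duration)
    frames = b"".join(
        struct.pack(
            "<h",
            int(amplitude * 32767 * math.sin(2 * math.pi * freq * i / sample_rate)),
        )
        for i in range(n_samples)
    )
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)       # mono probe tone
        w.setsampwidth(2)       # 16-bit samples
        w.setframerate(sample_rate)
        w.writeframes(frames)
    return buf.getvalue()

# Probe tones at several candidate sampling rates (illustrative choices).
probes = {rate: make_test_tone(rate) for rate in (22050, 44100, 48000)}
```

Each probe could then be sent to the user system for playback while a nearby microphone records the result.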
- The backend system(s) 304 may generate data comprising digital media content (e.g., digitally encoded audio and/or video, or the like) and may communicate (e.g., via the network(s) 306) the data comprising the digital media content to the user device 308.
- The digital media content may comprise the sound(s) (e.g., the sound(s) the configuration information configured the user system 302 to record) and/or graphical or video content for the user system 302 to display or produce (e.g., content indicating that the sound(s) are being recorded as part of the configuration routine).
- The backend system(s) 304 may analyze the sample(s). For example, the sound(s) may have been encoded at a plurality of different sampling rates, and the backend system(s) 304 may analyze the sample(s) to determine a maximum sampling rate that the user system 302 distinguishably produced relative to each other sampling rate of the plurality of different sampling rates. That is, the plurality of different sampling rates may include sampling rates that the user system 302 is capable of producing but that are not produced by the user system 302 in a manner that is distinguishable (e.g., from the perspective of a listener) from the maximum sampling rate determined by analyzing the sample(s).
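One way such an analysis could be sketched, under assumptions the disclosure leaves open: the recorded samples are assumed to be aligned, normalized, and resampled to a common rate upstream, and "distinguishable" is approximated by a root-mean-square difference exceeding a threshold. This is an illustrative stand-in for whatever comparison the backend actually performs:

```python
def rms_difference(a, b):
    """Root-mean-square difference between two equal-length recordings."""
    n = min(len(a), len(b))
    return (sum((a[i] - b[i]) ** 2 for i in range(n)) / n) ** 0.5

def max_distinguishable_rate(samples_by_rate, threshold=0.01):
    """Return the highest sampling rate whose recorded sample differs
    measurably from the next-lower rate's sample.

    Rates above the returned value produced recordings indistinguishable
    from the one below them, so selecting them would waste bandwidth.
    """
    rates = sorted(samples_by_rate)
    best = rates[0]
    for lower, higher in zip(rates, rates[1:]):
        if rms_difference(samples_by_rate[lower], samples_by_rate[higher]) <= threshold:
            break  # higher rate is indistinguishable from the one below it
        best = higher
    return best
```

For example, if the recordings of the 44.1 kHz and 48 kHz probes come out identical, the analysis would settle on 44.1 kHz.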
- The configuration information communicated to the user device 308 may be configured to cause the user device 308 to cause the user system 302 to produce one or more sounds encoded within digital audio content and to record (e.g., via a microphone associated with the user device 308) one or more samples of the sound(s) as produced by the user system 302.
- The configuration information communicated to the user device 310 may be configured to cause the user device 310 to record (e.g., via a microphone associated with the user device 310) one or more samples of the sound(s) as produced by the user system 302.
- The configuration information communicated to the user device 312 may be configured to cause the user device 312 to record (e.g., via a microphone associated with the user device 312) one or more samples of the sound(s) as produced by the user system 302.
- The backend system(s) 304 may select, from amongst the plurality of formats of digitally encoded audio associated with the video asset, a format comprising the number of channels the user system 302 distinguishably produced and may generate data comprising a subsequent portion of the video asset and corresponding audio (e.g., audio encoded in accordance with the format comprising the number of channels the user system 302 distinguishably produced) and may communicate (e.g., via the network(s) 306) the data comprising the subsequent portion of the video asset and the corresponding audio to the user device 308 (e.g., for display and production by the user system 302).
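Selecting "a format comprising the number of channels the user system 302 distinguishably produced" could be sketched as follows; the format labels and channel counts below are invented for illustration and do not come from the disclosure:

```python
# Hypothetical formats available for a video asset: (label, channel count).
AVAILABLE_FORMATS = [("mono", 1), ("stereo", 2), ("5.1 surround", 6), ("7.1 surround", 8)]

def select_audio_format(channels_distinguished, formats=AVAILABLE_FORMATS):
    """Pick the richest format whose channel count does not exceed the
    number of channels the user system distinguishably produced.

    Falls back to the leanest available format if the analysis could not
    distinguish even the smallest multi-channel layout.
    """
    usable = [f for f in formats if f[1] <= channels_distinguished]
    if not usable:
        return min(formats, key=lambda f: f[1])
    return max(usable, key=lambda f: f[1])
```

A user system that distinguishably produced six channels would thus be sent the 5.1 stream rather than the 7.1 stream, avoiding channels it cannot render distinguishably.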
- The user device 308 may cause the user system 302 to display the graphical or video content, produce the sound(s), and record the sample(s) of the sound(s) as produced by the user system 302.
- The user device 310 may record the sample(s) of the sound(s) as produced by the user system 302.
- The user device 312 may record the sample(s) of the sound(s) as produced by the user system 302.
- The user device 308 may analyze the sample(s) of the sound(s) it records.
- The user device 310 may analyze the sample(s) of the sound(s) it records.
- The user device 312 may analyze the sample(s) of the sound(s) it records.
- The user device 308 may generate data comprising results of its analysis and may communicate (e.g., via the network(s) 306) the data comprising the results of its analysis to the backend system(s) 304.
- The user device 310 may generate data comprising results of its analysis and may communicate (e.g., via the network(s) 306) the data comprising the results of its analysis to the backend system(s) 304.
- The user device 312 may generate data comprising results of its analysis and may communicate (e.g., via the network(s) 306) the data comprising the results of its analysis to the backend system(s) 304.
- The backend system(s) 304 may analyze the results of the analyses.
- The backend system(s) 304 may select the different format and may communicate data comprising a subsequent portion of the video asset and corresponding audio digitally encoded in accordance with the selected format to the user system 302.
- The method may return to the step 804, in which digital media content comprising audio digitally encoded in accordance with the previously utilized format may be communicated to the user system.
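The decision at this point in the method can be sketched as a small state update; the step number follows the figure, and the boolean input is a stand-in for the analysis results described above:

```python
def next_format(previous_format, candidate_format, candidate_distinguishable):
    """Step-804 sketch: if analysis did not show the candidate format being
    distinguishably produced by the user system, keep communicating audio in
    the previously utilized format; otherwise switch to the candidate.
    """
    return candidate_format if candidate_distinguishable else previous_format
```

For example, `next_format("stereo", "5.1 surround", False)` keeps the stream in stereo, while a `True` result switches the subsequent portions of the asset to the richer format.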
- The methods and features recited herein may be implemented through any number of computer readable media that are able to store computer readable instructions.
- Examples of computer readable media that may be used include RAM, ROM, Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, CD-ROM, DVD, or other optical disk storage, magnetic cassettes, magnetic tape, magnetic storage, and the like.
Description
- This application is a continuation of U.S. patent application Ser. No. 14/853,680, filed Sep. 14, 2015, entitled “Device-Based Audio Format Selection”, the disclosures of which are hereby incorporated by reference.
- Digital audio content can be communicated over networks in various different formats. When content is available in multiple formats, each of the available formats is often communicated over the network. In a broadcast or multicast context, communicating each of the available formats may be preferred, so that various receiving devices can select an appropriate format based on their respective configurations or capabilities. In a unicast or individualized context, however, communicating multiple audio formats may unnecessarily consume network resources. Accordingly, a need exists for device-based audio-format selection.
- This disclosure relates to device-based audio-format selection. In accordance with one or more embodiments, digitally encoded audio may be sent by a computing system to a user system over a network. The user system may receive the digitally encoded audio, may produce a sound encoded within the digitally encoded audio, and may record a sample of the sound as produced by the user system. The sample may be analyzed, a format of digitally encoded audio may be selected based on the analysis, and audio digitally encoded in accordance with the selected format may be sent by the computing system to the user system over the network.
- This summary is not intended to identify critical or essential features of the disclosure, but merely to summarize certain features and variations thereof. Other details and features will be described in the sections that follow.
- Some features herein are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements, and in which:
- FIG. 1 depicts an illustrative network environment in which one or more aspects of the disclosure may be implemented;
- FIG. 2 depicts an illustrative software and hardware device on which various aspects of the disclosure may be implemented;
- FIG. 3 depicts an illustrative environment for employing systems and methods in accordance with one or more aspects of the disclosure;
- FIGS. 4, 5, 6, and 7 depict various illustrative event sequences in accordance with one or more aspects of the disclosure; and
- FIG. 8 depicts an illustrative method in accordance with one or more aspects of the disclosure.
FIG. 1 illustrates an example information distribution network in which one or more of the various features described herein may be implemented. The illustrated information distribution network is only one example of a network and is not intended to suggest any limitation as to the scope of use or functionality of the disclosure. The illustrated network should not be interpreted as having any dependency or requirement relating to any component or combination of components in an information distribution network.

A network 100 may be a telecommunications network, a Multi-Service Operator (MSO) network, a cable television (CATV) network, a cellular network, a wireless network, an optical fiber network, a coaxial cable network, a Hybrid Fiber-Coaxial (HFC) network, or any other type of information distribution network or combination of networks. For example, the network 100 may be a cellular broadband network communicating with multiple communications access points, such as a wireless communications tower 130. In another example, the network 100 may be a coaxial system comprising a Cable Modem Termination System (CMTS) communicating with numerous gateway interface devices (e.g., a gateway 111 in an example home 102a). In another example, the network 100 may be a fiber-optic system comprising optical fibers extending from an Optical Line Terminal (OLT) to numerous Optical Network Terminals (ONTs) communicatively coupled with various gateway interface devices. In another example, the network 100 may be a Digital Subscriber Line (DSL) system that includes a local office 103 communicating with numerous gateway interface devices. In another example, the network 100 may be an HFC network in which Internet traffic is routed over both optical and coaxial communication paths to a gateway interface device in or near a user's home. Various aspects of the disclosure may operate on one or more of the networks described herein or any other network architectures now known or later developed.

The network 100 may use a series of interconnected communication links 101 (e.g., coaxial cables, optical fibers, wireless links, etc.) to connect a premises 102 (e.g., a home or other user environment) to the local office 103. The communication links 101 may include any wired communication links, wireless communication links, communications networks, or combinations thereof. For example, portions of the communication links 101 may be implemented with fiber-optic cable, while other portions of the communication links 101 may be implemented with coaxial cable. The communication links 101 may also include various communications components such as splitters, filters, amplifiers, wireless components, and other components for communicating data. Data may include, for example, Internet data, voice data, weather data, media content, and any other information. Media content may include, for example, video content, audio content, media on demand, video on demand, streaming video, television programs, text listings, graphics, advertisements, and other content. A media content item may represent an individual piece of media content, such as a particular movie, television episode, online video clip, song, audio recording, image, or any other data. In some instances, a media content item may be fragmented into segments, such as a plurality of two-second video fragments that may be separately addressed and retrieved.

The local office 103 may transmit downstream information signals onto the communication links 101, and one or more of the premises 102 may receive and process those signals. In certain implementations, the communication links 101 may originate from the local office 103 as a single communications path, and may be split into any number of communication links to distribute data to the premises 102 and various other destinations. Although the term premises is used by way of example, the premises 102 may include any type of user environment, such as single family homes, apartment complexes, businesses, schools, hospitals, parks, and other environments and combinations of environments.

The local office 103 may include an interface 104, which may be a computing device configured to manage communications between devices on the network of the communication links 101 and backend devices, such as a server. For example, the interface 104 may be a CMTS. The termination system may be as specified in a standard, such as, in an example of an HFC-type network, the Data Over Cable Service Interface Specification (DOCSIS) standard, published by Cable Television Laboratories, Inc. The termination system may be configured to transmit data over one or more downstream channels or frequencies to be received by various devices, such as modems in the premises 102, and to receive upstream communications from those modems on one or more upstream frequencies.

The local office 103 may include one or more network interfaces 108 for communicating with one or more external networks 109. The one or more external networks 109 may include, for example, one or more telecommunications networks, Internet Protocol (IP) networks, cellular communications networks (e.g., Global System for Mobile Communications (GSM), Code Division Multiple Access (CDMA), and any other 2nd, 3rd, 4th, or higher generation cellular communications networks), cellular broadband networks, radio access networks, fiber-optic networks, local wireless networks (e.g., Wi-Fi, WiMAX), satellite networks, and any other networks or combinations of networks.

The local office 103 may include a variety of servers that may be configured to perform various functions. The local office 103 may include a push server 105 for generating push notifications to deliver data, instructions, or both to devices that are configured to detect such notifications. The local office 103 may include a content server 106 configured to provide content (e.g., media content) to devices. The local office 103 may also include an application server 107.

The premises 102, such as the example home 102a, may include an interface 120, which may include a modem 110 (or any device), for communicating on the communication links 101 with the local office 103, the one or more external networks 109, or both. For example, the modem 110 may be a coaxial cable modem (for coaxial cable links), a broadband modem (for DSL links), a fiber interface node (for fiber-optic links), or any other device or combination of devices. In certain implementations, the modem 110 may be a part of, or communicatively coupled to, the gateway 111. The gateway 111 may be, for example, a wireless router, a set-top box, a computer server, or any other computing device or combination.

The gateway 111 may be any computing device for communicating with the modem 110 to allow one or more other devices in the example home 102a to communicate with the local office 103, the one or more external networks 109, or other devices communicatively coupled thereto. The gateway 111 may include local network interfaces to provide communication signals to client devices in or near the example home 102a, such as a television 112, a set-top box 113, a personal computer 114, a laptop computer 115, a wireless device 116 (e.g., a wireless laptop, a tablet computer, a mobile phone, a portable gaming device, a vehicular computing system, a mobile computing system, a navigation system, an entertainment system in an automobile, marine vessel, aircraft, or the like), or any other device.
FIG. 2 illustrates general hardware elements and software elements that can be used to implement any of the various computing devices, servers, encoders, caches, and/or software discussed herein. A device 200 may include aprocessor 201, which may execute instructions of a computer program to perform any of the functions and steps described herein. The instructions may be stored in any type of computer-readable medium or memory to configure the operation of theprocessor 201. For example, instructions may be stored in a Read-Only Memory (ROM) 202, a Random Access Memory (RAM) 203, aremovable media 204, such as a Universal Serial Bus (USB) drive, Compact Disk (CD) or Digital Versatile Disk (DVD), hard drive, floppy disk, or any other desired electronic storage medium. Instructions may also be stored in ahard drive 205, which may be an internal or external hard drive. - The device 200 may include one or more output devices, such as a display 206 (e.g., an integrated or external display, monitor, or television), and may include a
device controller 207, such as a video processor. In some embodiments, the device 200 may include an input device 208, such as a remote control, keyboard, mouse, touch screen, microphone, motion sensing input device, and/or any other input device. - The device 200 may also include one or more network interfaces, such as a network Input/Output (I/O)
interface 210 to communicate with a network 209. The network interface may be a wired interface, wireless interface, or a combination of the two. In some embodiments, the network I/O interface 210 may include a cable modem, and the network 209 may include the communication links 101 shown in FIG. 1, the one or more external networks 109, an in-home network, a provider's wireless, coaxial, fiber, or hybrid fiber/coaxial distribution system (e.g., a DOCSIS network), and/or any other desired network. -
FIG. 3 depicts an illustrative environment for employing systems and methods in accordance with one or more aspects of the disclosure. Referring to FIG. 3, an environment 300 may include a user system 302 and one or more backend system(s) 304. The user system 302 and the backend system(s) 304 may be interfaced via one or more network(s) 306, which may include one or more LAN(s) and/or WAN(s) (e.g., one or more networks associated with the user system 302, the backend system(s) 304, one or more distribution or service-provider networks that interface the user system 302 and/or the backend system(s) 304 to the Internet, and/or the Internet, or portions thereof). The backend system(s) 304 may include one or more computing systems and/or devices (e.g., servers or the like) configured to perform one or more of the functions described herein (e.g., systems and/or devices for storing, selecting, and/or communicating digital media content). The user system 302 may include one or more user devices 308, 310, and 312 (e.g., devices associated with the user system 302 based on a physical location within an environment associated with the user system 302, a connection to a network associated with the user system 302, an affiliation with a service provider or an account thereof, or the like). - In some embodiments, one or more of the
user devices 308, 310, and 312 and/or the user system 302 may include one or more peripheral devices, which may be associated with one or more of the user devices 308, 310, and 312. For example, the user system 302 may include a display 314, which may be associated with one or more of the user devices 308, 310, and 312. Similarly, the user system 302 may include one or more speakers 316, 318, 320, 322, and 324, which may be associated with one or more of the user devices 308, 310, and 312. As illustrated in FIG. 3, one or more of the speakers 316, 318, 320, 322, and 324 may be located at various positions relative to the display 314, one or more of the user devices 308, 310, and 312, and/or the other speakers. For example, the speaker 316 may be located alongside and to the left of the display 314 (e.g., a front, left channel), the speaker 318 may be located near the display 314 (e.g., a center channel), the speaker 320 may be located alongside and to the right of the display 314 (e.g., a front, right channel), the speaker 322 may be located to the left of the display 314 and further in front of the display 314 than the speaker 316 (e.g., a rear, left channel), and the speaker 324 may be located to the right of the display 314 and further in front of the display 314 than the speaker 320 (e.g., a rear, right channel). -
FIGS. 4, 5, 6, and 7 depict various illustrative event sequences in accordance with one or more aspects of the disclosure. The events and steps illustrated in FIGS. 4, 5, 6, and 7 are merely illustrative, and one of ordinary skill in the art will recognize that some steps or events may be omitted, may be performed or occur in an order other than that illustrated, and/or may be performed by or occur at a device other than that illustrated. Referring to FIG. 4, the backend system(s) 304 may generate data comprising configuration information and may communicate (e.g., via the network(s) 306) the data comprising the configuration information to the user device 308. For example, a user of the user system 302 may have selected media content available from the backend system(s) 304, which may have initiated a configuration routine configured to cause the backend system(s) 304 to generate the data comprising the configuration information and communicate the data comprising the configuration information to the user device 308. In some embodiments, the configuration information may be configured to cause the user system 302 to produce one or more sounds encoded within digital audio content and to record one or more samples of the sound(s) as produced by the user system 302. Additionally or alternatively, the configuration information may be configured to cause the user system 302 to instruct a user (e.g., via audio instructions produced by the user system 302, visual instructions displayed by the user system 302, or the like) to physically locate one or more of the user devices 308, 310, and 312. - The backend system(s) 304 may generate data comprising digital media content (e.g., digitally encoded audio and/or video, or the like) and may communicate (e.g., via the network(s) 306) the data comprising the digital media content to the
user device 308. For example, the digital media content may comprise the sound(s) (e.g., the sound(s) the configuration information configured the user system 302 to record) and/or graphical or video content for the user system 302 to display or produce (e.g., content indicating that the sound(s) are being recorded as part of the configuration routine). The user device 308 may cause the user system 302 to display the graphical or video content, produce the sound(s), and record the sample(s) of the sound(s) as produced by the user system 302. The user device 308 may communicate (e.g., via the network(s) 306) the sample(s) to the backend system(s) 304. - The backend system(s) 304 may analyze the sample(s). For example, the sound(s) may have been encoded at a plurality of different sampling rates, and the backend system(s) 304 may analyze the sample(s) to determine a maximum sampling rate that the
user system 302 distinguishably produced relative to each other sampling rate of the plurality of different sampling rates. That is, the plurality of different sampling rates may include sampling rates that the user system 302 is capable of producing but that are not produced by the user system 302 in a manner that is distinguishable (e.g., from the perspective of a listener) from the maximum sampling rate determined by analyzing the sample(s). The selected media content may comprise a digitally encoded video asset (e.g., a movie, television program, or the like) and a plurality of formats of digitally encoded audio associated with the video asset, and the backend system(s) 304 may select, from amongst the plurality of formats, a format encoded at the maximum sampling rate determined by analyzing the sample(s). The backend system(s) 304 may generate data comprising at least a portion of the video asset and corresponding audio (e.g., audio encoded at the maximum sampling rate) and may communicate (e.g., via the network(s) 306) the data comprising the at least a portion of the video asset and the corresponding audio to the user device 308. The data comprising the at least a portion of the video asset and the corresponding audio may comprise one or more sound(s) that the user system 302 is configured to record (e.g., the corresponding audio may include such sound(s)). - The
user device 308 may cause the user system 302 to display the at least a portion of the video asset, produce the corresponding audio, and record sample(s) of the sound(s) as produced by the user system 302. The user device 308 may communicate (e.g., via the network(s) 306) the sample(s) to the backend system(s) 304. The backend system(s) 304 may analyze the sample(s). For example, the sound(s) (e.g., a portion of the audio corresponding to the at least a portion of the video asset) may have been encoded at a plurality of different sampling rates (e.g., sampling rates other than the maximum sampling rate previously determined), and the backend system(s) 304 may analyze the sample(s) to determine a new maximum sampling rate that the user system 302 distinguishably produced relative to each other sampling rate of the plurality of different sampling rates. That is, one or more conditions, such as a configuration of the user system 302 (e.g., a change in speaker configuration) or an associated environmental variable (e.g., a level of background noise), may have changed since the sample(s) previously analyzed were recorded, and the backend system(s) 304 may determine that the user system 302 is capable of producing audio encoded at a different sampling rate (e.g., a higher or lower sampling rate) in a manner distinguishable (e.g., from the perspective of a listener) from audio encoded at the maximum sampling rate previously determined. The backend system(s) 304 may select, from amongst the plurality of formats of digitally encoded audio associated with the video asset, a format encoded at the new maximum sampling rate.
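The sampling-rate analysis described above can be pictured with a short sketch. This is a minimal illustration under an assumption not stated in the disclosure: that each candidate rate was probed with a test tone placed just below that rate's Nyquist frequency, so a playback chain that cannot reproduce a rate leaves little energy at its probe frequency. The function name, probe placement, and threshold are all hypothetical.

```python
import numpy as np

def max_distinguishable_rate(recording, mic_rate, candidate_rates, threshold_db=-30.0):
    """Return the highest candidate sampling rate whose near-Nyquist probe
    tone is measurably present in the recorded sample, or None."""
    windowed = recording * np.hanning(len(recording))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(recording), d=1.0 / mic_rate)
    ref = spectrum.max() + 1e-12
    best = None
    for rate in sorted(candidate_rates):
        tone_freq = 0.45 * rate            # probe tone assumed just below Nyquist
        if tone_freq >= mic_rate / 2:      # beyond what the microphone can capture
            break
        band = (freqs > tone_freq - 200) & (freqs < tone_freq + 200)
        level_db = 20 * np.log10(spectrum[band].max() / ref + 1e-12)
        if level_db > threshold_db:        # tone survived playback: rate distinguishable
            best = rate
    return best
```

A recording containing the 22.05 kHz and 44.1 kHz probes but not the 96 kHz probe would yield 44100, mirroring the "capable but not distinguishable" case in the text.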
The backend system(s) 304 may generate data comprising a subsequent portion of the video asset and corresponding audio (e.g., audio encoded at the new maximum sampling rate) and may communicate (e.g., via the network(s) 306) the data comprising the subsequent portion of the video asset and the corresponding audio to the user device 308 (e.g., for display and production by the user system 302). - Referring to
FIG. 5, as described above with respect to FIG. 4, the backend system(s) 304 may generate data comprising configuration information and may communicate (e.g., via the network(s) 306) the data comprising the configuration information to the user device 308. For example, a user of the user system 302 may have selected media content available from the backend system(s) 304, which may have initiated a configuration routine configured to cause the backend system(s) 304 to generate the data comprising the configuration information and communicate the data comprising the configuration information to the user device 308. As indicated above, such configuration information may be configured to cause the user system 302 to produce one or more sounds encoded within digital audio content, to record one or more samples of the sound(s) as produced by the user system 302, and/or to cause the user system 302 to instruct a user to physically locate one or more of the user devices 308, 310, and 312. In some embodiments, the configuration information may comprise instructions configured to cause the user system 302 to analyze the sample(s) of the sound(s) and to communicate results of the analysis to the backend system(s) 304. - The backend system(s) 304 may generate data comprising digital media content (e.g., digitally encoded audio and/or video, or the like) and may communicate (e.g., via the network(s) 306) the data comprising the digital media content to the
user device 308. For example, the selected media content may comprise a digitally encoded video asset (e.g., a movie, television program, or the like) and a plurality of formats of digitally encoded audio associated with the video asset, and the backend system(s) 304 may generate data comprising at least a portion of the video asset and corresponding audio and may communicate (e.g., via the network(s) 306) the data comprising the at least a portion of the video asset and the corresponding audio to the user device 308. The data comprising the at least a portion of the video asset and the corresponding audio may comprise one or more sound(s) that the user system 302 is configured to record (e.g., the corresponding audio may include such sound(s)). The user device 308 may cause the user system 302 to display the graphical or video content, produce the sound(s), and record the sample(s) of the sound(s) as produced by the user system 302. The user device 308 may cause the user system 302 to analyze the sample(s). For example, each of the sound(s) may have been encoded using a different format of the plurality of formats of digitally encoded audio associated with the video asset, and the user device 308 may cause the user system 302 to analyze the sample(s) to determine a format for the user system 302. The user device 308 may generate data comprising results of the analysis and may communicate (e.g., via the network(s) 306) the data comprising the results of the analysis to the backend system(s) 304.
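On-device analysis of this kind might, for example, score the recording made for each candidate format and keep the format whose playback survived best. The sketch below is one illustrative scoring heuristic (preserved bandwidth); the function name, the heuristic itself, and the noise floor are assumptions, not taken from the disclosure.

```python
import numpy as np

def pick_format(recordings_by_format, mic_rate, floor_db=-40.0):
    """Pick the format whose recorded playback preserved the widest
    bandwidth above a relative noise floor -- a simple stand-in for the
    on-device sample analysis."""
    def bandwidth_hz(rec):
        spectrum = np.abs(np.fft.rfft(rec * np.hanning(len(rec))))
        freqs = np.fft.rfftfreq(len(rec), d=1.0 / mic_rate)
        floor = spectrum.max() * 10 ** (floor_db / 20)  # relative noise floor
        above = freqs[spectrum > floor]
        return above.max() if above.size else 0.0
    # Highest-bandwidth recording wins; its key is the chosen format.
    return max(recordings_by_format, key=lambda f: bandwidth_hz(recordings_by_format[f]))
```

The result would then be reported back as the "results of the analysis" that the backend uses to select amongst the stored formats.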
The backend system(s) 304 may select, from amongst the plurality of formats of digitally encoded audio associated with the video asset, the format determined by analyzing the sample(s) and may generate data comprising a subsequent portion of the video asset and corresponding audio (e.g., audio encoded in accordance with the format determined by analyzing the sample(s)) and may communicate (e.g., via the network(s) 306) the data comprising the subsequent portion of the video asset and the corresponding audio to the user device 308 (e.g., for display and production by the user system 302). - Referring to
FIG. 6, the backend system(s) 304 may identify one or more user devices associated with the user system 302. For example, a user of the user system 302 may have selected (e.g., via the user device 308) media content available from the backend system(s) 304, and the backend computing system(s) 304 may determine that the user devices 308, 310, and 312 are associated with the user system 302 (e.g., based on a physical location within an environment associated with the user system 302, connection to a network associated with the user system 302, affiliation with a service provider or an account thereof, or the like). The backend system(s) 304 may generate data comprising configuration information for the user device 308 and may communicate (e.g., via the network(s) 306) the data comprising the configuration information to the user device 308. Similarly, the backend system(s) 304 may generate data comprising configuration information for the user device 310 and may communicate (e.g., via the network(s) 306) the data comprising the configuration information to the user device 310, and the backend system(s) 304 may generate data comprising configuration information for the user device 312 and may communicate (e.g., via the network(s) 306) the data comprising the configuration information to the user device 312. For example, selection of the media content may have initiated a configuration routine configured to cause the backend system(s) 304 to generate the data comprising the configuration information and communicate the data comprising the configuration information to the user devices 308, 310, and 312. - As indicated above, such configuration information may be configured to cause the
user system 302 to produce one or more sounds encoded within digital audio content and to record one or more samples of the sound(s) as produced by the user system 302. For example, the configuration information communicated to the user device 308 may be configured to cause the user device 308 to cause the user system 302 to produce one or more sounds encoded within digital audio content and to record (e.g., via a microphone associated with the user device 308) one or more samples of the sound(s) as produced by the user system 302. Similarly, the configuration information communicated to the user device 310 may be configured to cause the user device 310 to record (e.g., via a microphone associated with the user device 310) one or more samples of the sound(s) as produced by the user system 302, and the configuration information communicated to the user device 312 may be configured to cause the user device 312 to record (e.g., via a microphone associated with the user device 312) one or more samples of the sound(s) as produced by the user system 302. The configuration information (e.g., the configuration information communicated to the user device 308, the configuration information communicated to the user device 310, and/or the configuration information communicated to the user device 312) may be further configured to cause the user system 302 to instruct a user to physically locate one or more of the user devices 308, 310, and 312 (e.g., to locate the user device 308 near the display 314, to locate the user device 310 in front of and to the left of the display 314, and to locate the user device 312 in front of and to the right of the display 314). - The backend system(s) 304 may generate data comprising digital media content (e.g., digitally encoded audio and/or video, or the like) and may communicate (e.g., via the network(s) 306) the data comprising the digital media content to the
user device 308. For example, the selected media content may comprise a digitally encoded video asset (e.g., a movie, television program, or the like) and a plurality of formats of digitally encoded audio associated with the video asset, and the backend system(s) 304 may generate data comprising at least a portion of the video asset and corresponding audio and may communicate (e.g., via the network(s) 306) the data comprising the at least a portion of the video asset and the corresponding audio to the user device 308. The data comprising the at least a portion of the video asset and the corresponding audio may comprise one or more sound(s) that the user system 302 is configured to record (e.g., the corresponding audio may include such sound(s)). In some embodiments, the sound(s) may comprise one or more tones configured for subsequent analysis. For example, the tone(s) may be configured to be played through one or more specific audio channels of the user system 302 (e.g., the tone(s) may include a tone configured to be played through a front, left channel of the user system 302 (e.g., the speaker 316), a tone configured to be played through a center channel of the user system 302 (e.g., the speaker 318), a tone configured to be played through a front, right channel of the user system 302 (e.g., the speaker 320), a tone configured to be played through a rear, left channel of the user system 302 (e.g., the speaker 322), and/or a tone configured to be played through a rear, right channel of the user system 302 (e.g., the speaker 324)). - The
user device 308 may cause the user system 302 to display the graphical or video content, produce the sound(s), and record the sample(s) of the sound(s) as produced by the user system 302. The user device 310 may record the sample(s) of the sound(s) as produced by the user system 302. The user device 312 may record the sample(s) of the sound(s) as produced by the user system 302. The user device 308 may communicate (e.g., via the network(s) 306) the sample(s) it recorded to the backend system(s) 304. Similarly, the user device 310 may communicate (e.g., via the network(s) 306) the sample(s) it recorded to the backend system(s) 304, and the user device 312 may communicate (e.g., via the network(s) 306) the sample(s) it recorded to the backend system(s) 304. The backend system(s) 304 may analyze the sample(s). For example, the sample(s) may comprise a plurality of channels, each of which may correspond to a microphone that recorded a portion of the sample(s) (e.g., a channel corresponding to a portion of the sample(s) recorded by the microphone associated with the user device 308, a channel corresponding to a portion of the sample(s) recorded by the microphone associated with the user device 310, and/or a channel corresponding to a portion of the sample(s) recorded by the microphone associated with the user device 312), and the backend system(s) 304 may analyze the sample(s) (e.g., by comparing recording(s) (e.g., of the tone(s)) produced by each of the user devices 308, 310, and 312) to determine a number of channels that the user system 302 distinguishably produced relative to each other channel within the digitally encoded audio corresponding to the at least a portion of the video asset. For example, the user system 302 may include hardware configured to produce five separate channels of audio (e.g., the speakers 316, 318, 320, 322, and 324), but the analysis may indicate that the user system 302 does not distinguishably produce the five separate channels relative to one another but instead produces three.
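One hedged way to picture this tone-based channel analysis: give each output channel its own probe frequency, then count how many probe frequencies actually show up across the microphones' recordings. The channel-to-tone assignment, names, and threshold below are illustrative assumptions, not taken from the disclosure.

```python
import numpy as np

# Hypothetical probe-tone assignment: one identifying frequency per channel.
CHANNEL_TONES_HZ = {"front_left": 440.0, "center": 554.0, "front_right": 659.0,
                    "rear_left": 880.0, "rear_right": 1109.0}

def count_distinct_channels(recordings, mic_rate, tone_freqs, threshold_db=-30.0):
    """Count probe tones measurably present in at least one microphone
    recording; tones that never appear mark channels the user system did
    not distinguishably produce."""
    present = set()
    for rec in recordings:
        spectrum = np.abs(np.fft.rfft(rec * np.hanning(len(rec))))
        freqs = np.fft.rfftfreq(len(rec), d=1.0 / mic_rate)
        ref = spectrum.max() + 1e-12
        for f in tone_freqs:
            band = (freqs > f - 5) & (freqs < f + 5)
            if 20 * np.log10(spectrum[band].max() / ref + 1e-12) > threshold_db:
                present.add(f)
    return len(present)
```

In the five-speaker example above, recordings in which only three of the five probe tones appear would yield a count of three, prompting selection of a three-channel format.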
The backend system(s) 304 may select, from amongst the plurality of formats of digitally encoded audio associated with the video asset, a format comprising the number of channels the user system 302 distinguishably produced and may generate data comprising a subsequent portion of the video asset and corresponding audio (e.g., audio encoded in accordance with the format comprising the number of channels the user system 302 distinguishably produced) and may communicate (e.g., via the network(s) 306) the data comprising the subsequent portion of the video asset and the corresponding audio to the user device 308 (e.g., for display and production by the user system 302). - Referring to
FIG. 7, the backend system(s) 304 may identify one or more user devices associated with the user system 302. For example, a user of the user system 302 may have selected (e.g., via the user device 308) media content available from the backend system(s) 304, and the backend computing system(s) 304 may determine that the user devices 308, 310, and 312 are associated with the user system 302 (e.g., based on a physical location within an environment associated with the user system 302, connection to a network associated with the user system 302, affiliation with a service provider or an account thereof, or the like). The backend system(s) 304 may generate data comprising configuration information for the user device 308 and may communicate (e.g., via the network(s) 306) the data comprising the configuration information to the user device 308. Similarly, the backend system(s) 304 may generate data comprising configuration information for the user device 310 and may communicate (e.g., via the network(s) 306) the data comprising the configuration information to the user device 310, and the backend system(s) 304 may generate data comprising configuration information for the user device 312 and may communicate (e.g., via the network(s) 306) the data comprising the configuration information to the user device 312. For example, selection of the media content may have initiated a configuration routine configured to cause the backend system(s) 304 to generate the data comprising the configuration information and communicate the data comprising the configuration information to the user devices 308, 310, and 312. - As indicated above, such configuration information may be configured to cause the
user system 302 to produce one or more sounds encoded within digital audio content and to record one or more samples of the sound(s) as produced by the user system 302. For example, the configuration information communicated to the user device 308 may be configured to cause the user device 308 to cause the user system 302 to produce one or more sounds encoded within digital audio content and to record (e.g., via a microphone associated with the user device 308) one or more samples of the sound(s) as produced by the user system 302. Similarly, the configuration information communicated to the user device 310 may be configured to cause the user device 310 to record (e.g., via a microphone associated with the user device 310) one or more samples of the sound(s) as produced by the user system 302, and the configuration information communicated to the user device 312 may be configured to cause the user device 312 to record (e.g., via a microphone associated with the user device 312) one or more samples of the sound(s) as produced by the user system 302. In some embodiments, the configuration information may comprise instructions configured to cause the user system 302 to analyze the sample(s) of the sound(s) and to communicate results of the analysis to the backend system(s) 304. For example, the configuration information communicated to the user device 308 may be configured to cause the user device 308 to analyze the sample(s) of the sound(s) it records and to communicate results of the analysis to the backend system(s) 304.
Similarly, the configuration information communicated to the user device 310 may be configured to cause the user device 310 to analyze the sample(s) of the sound(s) it records and to communicate results of the analysis to the backend system(s) 304, and the configuration information communicated to the user device 312 may be configured to cause the user device 312 to analyze the sample(s) of the sound(s) it records and to communicate results of the analysis to the backend system(s) 304. - The backend system(s) 304 may generate data comprising digital media content (e.g., digitally encoded audio and/or video, or the like) and may communicate (e.g., via the network(s) 306) the data comprising the digital media content to the
user device 308. For example, the selected media content may comprise a digitally encoded video asset (e.g., a movie, television program, or the like) and a plurality of formats of digitally encoded audio associated with the video asset, and the backend system(s) 304 may generate data comprising at least a portion of the video asset and corresponding audio and may communicate (e.g., via the network(s) 306) the data comprising the at least a portion of the video asset and the corresponding audio to the user device 308. The data comprising the at least a portion of the video asset and the corresponding audio may comprise one or more sound(s) that the user system 302 is configured to record (e.g., the corresponding audio may include such sound(s)). - The
user device 308 may cause the user system 302 to display the graphical or video content, produce the sound(s), and record the sample(s) of the sound(s) as produced by the user system 302. The user device 310 may record the sample(s) of the sound(s) as produced by the user system 302. The user device 312 may record the sample(s) of the sound(s) as produced by the user system 302. The user device 308 may analyze the sample(s) of the sound(s) it records. The user device 310 may analyze the sample(s) of the sound(s) it records. The user device 312 may analyze the sample(s) of the sound(s) it records. The user device 308 may generate data comprising results of its analysis and may communicate (e.g., via the network(s) 306) the data comprising the results of its analysis to the backend system(s) 304. The user device 310 may generate data comprising results of its analysis and may communicate (e.g., via the network(s) 306) the data comprising the results of its analysis to the backend system(s) 304. The user device 312 may generate data comprising results of its analysis and may communicate (e.g., via the network(s) 306) the data comprising the results of its analysis to the backend system(s) 304. The backend system(s) 304 may analyze the results of the analyses. For example, each of the sound(s) may have been encoded using a different format of the plurality of formats of digitally encoded audio associated with the video asset, and the backend system(s) 304 may analyze the results of the analyses to determine a format for the user system 302.
The backend system(s) 304 may select, from amongst the plurality of formats of digitally encoded audio associated with the video asset, the format determined by analyzing the results of the analyses and may generate data comprising a subsequent portion of the video asset and corresponding audio (e.g., audio encoded in accordance with the format determined by analyzing the results of the analyses) and may communicate (e.g., via the network(s) 306) the data comprising the subsequent portion of the video asset and the corresponding audio to the user device 308 (e.g., for display and production by the user system 302). -
FIG. 8 depicts an illustrative method in accordance with one or more aspects of the disclosure. Referring to FIG. 8, at a step 802, one or more user devices may be configured to record one or more samples of one or more sounds encoded within digitally encoded audio. For example, the backend system(s) 304 may configure the user device 308 to record one or more samples of one or more sounds encoded within digitally encoded audio associated with a video asset. At a step 804, digital media content (e.g., audio/visual content) comprising the sound(s) may be communicated to a user system associated with the user device. For example, the backend system(s) 304 may communicate data comprising a portion of the video asset and corresponding audio to the user system 302. At a step 806, the user device may record the sample(s) of the sound(s) as produced by the user system. For example, the user device 308 may record the sample(s) of the sound(s) encoded within the digitally encoded audio associated with the video asset as produced by the user system 302. At a step 808, the sample(s) may be analyzed. For example, the backend system(s) 304 may analyze the sample(s) recorded by the user device 308. - At a
step 810, a determination may be made, based on results of the analysis, whether to change to a different format of the digitally encoded audio. For example, the backend system(s) 304 may determine, based on the results of analyzing the sample(s) recorded by the user device 308, whether to change to a different format of the digitally encoded audio associated with the video asset. At a step 812, responsive to determining to change to a different format of the digitally encoded audio, the different format may be selected, and the method may return to the step 804, in which digital media content comprising audio digitally encoded in accordance with the selected format may be communicated to the user system. For example, responsive to determining to change to a different format of the digitally encoded audio associated with the video asset, the backend system(s) 304 may select the different format and may communicate data comprising a subsequent portion of the video asset and corresponding audio digitally encoded in accordance with the selected format to the user system 302. Returning to step 810, responsive to determining not to change to a different format of the digitally encoded audio, the method may return to the step 804, in which digital media content comprising audio digitally encoded in accordance with the previously utilized format may be communicated to the user system. For example, responsive to determining not to change to a different format of the digitally encoded audio associated with the video asset, the backend system(s) 304 may communicate data comprising a subsequent portion of the video asset and corresponding audio digitally encoded in accordance with the previously utilized format to the user system 302. - The methods and features recited herein may be implemented through any number of computer readable media that are able to store computer readable instructions.
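The step 804 → 806 → 808 → 810/812 cycle of FIG. 8 can be sketched as a simple control loop. The helper names here (stream_segment, record_sample, analyze_sample) are placeholders for the communication, recording, and analysis steps described above, not APIs from the disclosure:

```python
def serve_with_format_adaptation(segments, formats, stream_segment,
                                 record_sample, analyze_sample):
    """Stream each media segment, re-evaluating the audio format between
    segments (FIG. 8: communicate, record, analyze, maybe switch)."""
    current = formats[0]                     # initial format choice
    for segment in segments:
        stream_segment(segment, current)     # step 804: communicate media
        sample = record_sample()             # step 806: user device records output
        suggested = analyze_sample(sample)   # step 808: analyze the sample(s)
        if suggested in formats and suggested != current:
            current = suggested              # step 812: select the different format
        # otherwise: keep the previously utilized format and loop to step 804
    return current
```

Because the check runs between every segment, a mid-playback change in conditions (e.g., new background noise) is picked up at the next iteration, matching the re-analysis behavior described for FIGS. 4-7.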
Examples of computer readable media that may be used include RAM, ROM, Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, CD-ROM, DVD, or other optical disk storage, magnetic cassettes, magnetic tape, magnetic storage, and the like.
- Additionally or alternatively, in at least some embodiments, the methods and features recited herein may be implemented through one or more Integrated Circuits (ICs). An IC may, for example, be a microprocessor that accesses programming instructions or other data stored in a ROM. In some embodiments, a ROM may store program instructions that cause an IC to perform operations according to one or more of the methods described herein. In some embodiments, one or more of the methods described herein may be hardwired into an IC. For example, an IC may comprise an Application Specific Integrated Circuit (ASIC) having gates and/or other logic dedicated to the calculations and other operations described herein. In still other embodiments, an IC may perform some operations based on execution of programming instructions read from ROM or RAM, with other operations hardwired into gates or other logic. Further, an IC may be configured to output image data to a display buffer.
- Although specific examples of carrying out the disclosure have been described, those skilled in the art will appreciate that there are numerous variations and permutations of the above-described apparatuses and methods that are contained within the spirit and scope of the disclosure as set forth in the appended claims. Additionally, numerous other embodiments, modifications, and variations within the scope and spirit of the appended claims may occur to persons of ordinary skill in the art from a review of this disclosure. Specifically, one or more of the features described herein may be combined with any or all of the other features described herein.
- The various features described above are merely non-limiting examples, and may be rearranged, combined, subdivided, omitted, and/or altered in any desired manner. For example, features of the servers may be subdivided among multiple processors and/or computing devices. The true scope of this patent should only be defined by the claims that follow.
Claims (21)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/697,571 US20180152739A1 (en) | 2015-09-14 | 2017-09-07 | Device-Based Audio-Format Selection |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/853,680 US9800905B2 (en) | 2015-09-14 | 2015-09-14 | Device based audio-format selection |
US15/697,571 US20180152739A1 (en) | 2015-09-14 | 2017-09-07 | Device-Based Audio-Format Selection |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/853,680 Continuation US9800905B2 (en) | 2015-09-14 | 2015-09-14 | Device based audio-format selection |
Publications (1)
Publication Number | Publication Date |
---|---|
US20180152739A1 true US20180152739A1 (en) | 2018-05-31 |
Family
ID=58237553
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/853,680 Active US9800905B2 (en) | 2015-09-14 | 2015-09-14 | Device based audio-format selection |
US15/697,571 Pending US20180152739A1 (en) | 2015-09-14 | 2017-09-07 | Device-Based Audio-Format Selection |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/853,680 Active US9800905B2 (en) | 2015-09-14 | 2015-09-14 | Device based audio-format selection |
Country Status (1)
Country | Link |
---|---|
US (2) | US9800905B2 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10762911B2 (en) * | 2015-12-01 | 2020-09-01 | Ati Technologies Ulc | Audio encoding using video information |
US10162853B2 (en) * | 2015-12-08 | 2018-12-25 | Rovi Guides, Inc. | Systems and methods for generating smart responses for natural language queries |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030021589A1 (en) * | 2001-07-27 | 2003-01-30 | Shu Lin | Recording and playing back multiple programs |
US20050036519A1 (en) * | 2003-08-13 | 2005-02-17 | Jeyendran Balakrishnan | Method and system for re-multiplexing of content-modified MPEG-2 transport streams using interpolation of packet arrival times |
US9686625B2 (en) * | 2015-07-21 | 2017-06-20 | Disney Enterprises, Inc. | Systems and methods for delivery of personalized audio |
US9924252B2 (en) * | 2013-03-13 | 2018-03-20 | Polycom, Inc. | Loudspeaker arrangement with on-screen voice positioning for telepresence system |
US11062368B1 (en) * | 2014-03-19 | 2021-07-13 | Google Llc | Selecting online content using offline data |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
NL8403200A (en) * | 1984-10-22 | 1986-05-16 | Philips Nv | NOISE-DEPENDENT AND VOICE-INDEPENDENT VOLUME CONTROL. |
US6678505B1 (en) * | 2001-04-18 | 2004-01-13 | David Leason | Extravehicular communication system and method |
WO2003085968A1 (en) * | 2002-04-11 | 2003-10-16 | Konica Minolta Holdings, Inc. | Information recording medium and manufacturing method thereof |
JP2004336734A (en) * | 2003-04-17 | 2004-11-25 | Sharp Corp | Wireless terminal, base apparatus, wireless system, control method of wireless terminal, control program of wireless terminal, and computer-readable recording medium for recording the same |
US7359409B2 (en) * | 2005-02-02 | 2008-04-15 | Texas Instruments Incorporated | Packet loss concealment for voice over packet networks |
US8443403B2 (en) * | 2009-09-04 | 2013-05-14 | Time Warner Cable Inc. | Methods and apparatus for providing voice mail services |
US8825020B2 (en) * | 2012-01-12 | 2014-09-02 | Sensory, Incorporated | Information access and device control using mobile phones and audio in the home environment |
US9207857B2 (en) * | 2014-02-14 | 2015-12-08 | EyeGroove, Inc. | Methods and devices for presenting interactive media items |
US20150287403A1 (en) * | 2014-04-07 | 2015-10-08 | Neta Holzer Zaslansky | Device, system, and method of automatically generating an animated content-item |
- 2015-09-14: US application US14/853,680, patent US9800905B2 (Active)
- 2017-09-07: US application US15/697,571, publication US20180152739A1 (Pending)
Also Published As
Publication number | Publication date |
---|---|
US20170078710A1 (en) | 2017-03-16 |
US9800905B2 (en) | 2017-10-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11616818B2 (en) | Distributed control of media content item during webcast | |
US9537737B2 (en) | Consolidated performance metric analysis | |
US9112623B2 (en) | Asynchronous interaction at specific points in content | |
US20200389756A1 (en) | Dynamic Positional Audio | |
US20210286586A1 (en) | Sound effect adjustment method, device, electronic device and storage medium | |
US20090217324A1 (en) | System, method and program product for customizing presentation of television content to a specific viewer and location | |
US20100138761A1 (en) | Techniques to push content to a connected device | |
US20100305729A1 (en) | Audio-based synchronization to media | |
US20130326575A1 (en) | Social Media Driven Generation of a Highlight Clip from a Media Content Stream | |
US20070106941A1 (en) | System and method of providing audio content | |
US20100049719A1 (en) | Techniques for the association, customization and automation of content from multiple sources on a single display | |
CN108124172B (en) | Cloud projection method, device and system | |
US20160316267A1 (en) | Aggregation Of Multiple Media Types Of User Consumption Habits And Device Preferences | |
CN108573393A (en) | Comment information processing method, device, server and storage medium | |
US9584761B2 (en) | Videoconference terminal, secondary-stream data accessing method, and computer storage medium | |
US20160294903A1 (en) | Method and device for pushing resources to mobile communication terminal by smart television | |
US20180152739A1 (en) | Device-Based Audio-Format Selection | |
CN101562550A (en) | Digital content service integrated system | |
CN108337556B (en) | Method and device for playing audio-video file | |
CN104038772B (en) | Generate the method and device of ring signal file | |
CN111541905B (en) | Live broadcast method and device, computer equipment and storage medium | |
US20090006581A1 (en) | Method and System For Downloading Streaming Content | |
US11050499B1 (en) | Audience response collection and analysis | |
CN111164982A (en) | Method and apparatus for determining a source of a media presentation | |
KR100991264B1 (en) | Method and system for playing and sharing music sources on an electric device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| AS | Assignment | Owner name: COMCAST CABLE COMMUNICATIONS, LLC, PENNSYLVANIA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ROSE, DAVE;WEBUYE, WHITE;OHARE, DAVE;SIGNING DATES FROM 20150702 TO 20150804;REEL/FRAME:050133/0680 |
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |