US20180152739A1 - Device-Based Audio-Format Selection - Google Patents

Device-Based Audio-Format Selection

Info

Publication number
US20180152739A1
US20180152739A1
Authority
US
United States
Prior art keywords
user
digitally encoded
audio
video content
user system
Prior art date
Legal status
Pending
Application number
US15/697,571
Inventor
Dave Rose
White Webuye
Dave Ohare
Current Assignee
Comcast Cable Communications LLC
Original Assignee
Comcast Cable Communications LLC
Priority date
Filing date
Publication date
Application filed by Comcast Cable Communications LLC filed Critical Comcast Cable Communications LLC
Priority to US15/697,571
Publication of US20180152739A1
Assigned to COMCAST CABLE COMMUNICATIONS, LLC. Assignors: WEBUYE, WHITE; ROSE, DAVE; OHARE, DAVE
Status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/233Processing of audio elementary streams
    • H04N21/2335Processing of audio elementary streams involving reformatting operations of audio signals, e.g. by converting from one coding standard to another
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F17/30
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H60/00Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/56Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54
    • H04H60/58Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54 of audio
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25808Management of client data
    • H04N21/25833Management of client data involving client hardware characteristics, e.g. manufacturer, processing or storage capabilities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42203Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • H04N21/4394Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4516Management of client data or end-user data involving client characteristics, e.g. Set-Top-Box type, software version or amount of memory available
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/65Transmission of management data between client and server
    • H04N21/658Transmission by the client directed to the server
    • H04N21/6582Data stored in the client, e.g. viewing habits, hardware capabilities, credit card number
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S3/00Systems employing more than two channels, e.g. quadraphonic
    • H04S3/002Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S3/00Systems employing more than two channels, e.g. quadraphonic
    • H04S3/008Systems employing more than two channels, e.g. quadraphonic in which the audio signals are in digital form, i.e. employing more than two discrete digital channels
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00Stereophonic arrangements
    • H04R5/02Spatial or constructional arrangements of loudspeakers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S2400/00Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/15Aspects of sound capture and related signal processing for recording or reproduction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S2420/00Techniques used stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/03Application of parametric coding in stereophonic audio systems

Definitions

  • the user system 302 may include one or more speakers 316, 318, 320, 322, and 324, which may be associated with one or more of the user devices 308, 310, and 312 and may produce aspects or components of audio content received, processed, and/or stored by one or more of the user devices 308, 310, and 312.
  • a user of the user system 302 may have selected media content available from the backend system(s) 304 , which may have initiated a configuration routine configured to cause the backend system(s) 304 to generate the data comprising the configuration information and communicate the data comprising the configuration information to the user device 308 .
  • the configuration information may be configured to cause the user system 302 to produce one or more sounds encoded within digital audio content and to record one or more samples of the sound(s) as produced by the user system 302 .
  • the backend system(s) 304 may generate data comprising digital media content (e.g., digitally encoded audio and/or video, or the like) and may communicate (e.g., via the network(s) 306 ) the data comprising the digital media content to the user device 308 .
  • the digital media content may comprise the sound(s) (e.g., the sound(s) the configuration information configured the user system 302 to record) and/or graphical or video content for the user system 302 to display or produce (e.g., content indicating that the sound(s) are being recorded as part of the configuration routine).
  • the backend system(s) 304 may analyze the sample(s). For example, the sound(s) may have been encoded at a plurality of different sampling rates, and the backend system(s) 304 may analyze the sample(s) to determine a maximum sampling rate that the user system 302 distinguishably produced relative to each other sampling rate of the plurality of different sampling rates. That is, the plurality of different sampling rates may include sampling rates that the user system 302 is capable of producing but that are not produced by the user system 302 in a manner that is distinguishable (e.g., from the perspective of a listener) from the maximum sampling rate determined by analyzing the sample(s).
  • the configuration information communicated to the user device 308 may be configured to cause the user device 308 to cause the user system 302 to produce one or more sounds encoded within digital audio content and to record (e.g., via a microphone associated with the user device 308 ) one or more samples of the sound(s) as produced by the user system 302 .
  • the configuration information communicated to the user device 310 may be configured to cause the user device 310 to record (e.g., via a microphone associated with the user device 310) one or more samples of the sound(s) as produced by the user system 302.
  • the configuration information communicated to the user device 312 may be configured to cause the user device 312 to record (e.g., via a microphone associated with the user device 312 ) one or more samples of the sound(s) as produced by the user system 302 .
  • the backend system(s) 304 may select, from amongst the plurality of formats of digitally encoded audio associated with the video asset, a format comprising the number of channels the user system 302 distinguishably produced. The backend system(s) 304 may then generate data comprising a subsequent portion of the video asset and corresponding audio (e.g., audio encoded in accordance with the selected format) and may communicate (e.g., via the network(s) 306) that data to the user device 308 (e.g., for display and production by the user system 302).
  • the user device 308 may cause the user system 302 to display the graphical or video content, produce the sound(s), and record the sample(s) of the sound(s) as produced by the user system 302 .
  • the user device 310 may record the sample(s) of the sound(s) as produced by the user system 302 .
  • the user device 312 may record the sample(s) of the sound(s) as produced by the user system 302 .
  • the user device 308 may analyze the sample(s) of the sound(s) it records.
  • the user device 310 may analyze the sample(s) of the sound(s) it records.
  • the user device 312 may analyze the sample(s) of the sound(s) it records.
  • the user device 308 may generate data comprising results of its analysis and may communicate (e.g., via the network(s) 306 ) the data comprising the results of its analysis to the backend system(s) 304 .
  • the user device 310 may generate data comprising results of its analysis and may communicate (e.g., via the network(s) 306 ) the data comprising the results of its analysis to the backend system(s) 304 .
  • the user device 312 may generate data comprising results of its analysis and may communicate (e.g., via the network(s) 306 ) the data comprising the results of its analysis to the backend system(s) 304 .
  • the backend system(s) 304 may analyze the results of the analyses.
  • the backend system(s) 304 may select the different format and may communicate data comprising a subsequent portion of the video asset and corresponding audio digitally encoded in accordance with the selected format to the user system 302.
  • the method may return to the step 804 , in which digital media content comprising audio digitally encoded in accordance with the previously utilized format may be communicated to the user system.
  • the methods and features recited herein may be implemented through any number of computer readable media that are able to store computer readable instructions.
  • Examples of computer readable media that may be used include RAM, ROM, Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, CD-ROM, DVD, or other optical disk storage, magnetic cassettes, magnetic tape, magnetic storage, and the like.
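The sampling-rate analysis described in the bullets above (determining the maximum sampling rate the user system distinguishably produced relative to the other candidate rates) could, under one hypothetical approach, compare the recorded sample's spectral energy between the Nyquist limits of adjacent candidate rates. The sketch below is illustrative only, not the disclosed method; it assumes a wideband test sound, a microphone rate above every candidate rate, and an invented name and threshold.

```python
import numpy as np

def max_distinguishable_rate(sample, mic_rate, candidate_rates,
                             threshold_db=-40.0):
    """Estimate the highest candidate sampling rate that the user system
    distinguishably produced, from one microphone recording of a wideband
    test sound.  Illustrative spectral analysis; the disclosure does not
    prescribe a particular method."""
    windowed = sample * np.hanning(len(sample))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(sample), d=1.0 / mic_rate)
    total = float(np.sum(spectrum ** 2)) + 1e-12
    rates = sorted(candidate_rates)
    best = rates[0]
    for lower, higher in zip(rates, rates[1:]):
        # Spectral content between the two Nyquist limits can only have
        # been reproduced if the higher rate was distinguishably produced.
        band = spectrum[(freqs >= lower / 2.0) & (freqs < higher / 2.0)]
        band_db = 10.0 * np.log10(np.sum(band ** 2) / total + 1e-12)
        if band_db > threshold_db:
            best = higher
        else:
            break
    return best
```

For example, a recording whose spectrum is empty above 16 kHz would yield 32,000 Hz as the maximum distinguishable rate, since higher candidate rates add only content above that limit.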

Abstract

In accordance with one or more embodiments, digitally encoded audio may be sent by a computing system to a user system over a network. The user system may receive the digitally encoded audio, may produce a sound encoded within the digitally encoded audio, and may record a sample of the sound as produced by the user system. The sample may be analyzed, a format of digitally encoded audio may be selected based on the analysis, and audio digitally encoded in accordance with the selected format may be sent by the computing system to the user system over the network.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 14/853,680, filed Sep. 14, 2015, entitled “Device-Based Audio Format Selection”, the disclosure of which is hereby incorporated by reference.
  • BACKGROUND
  • Digital audio content can be communicated over networks in various different formats. When content is available in multiple formats, each of the available formats is often communicated over the network. In a broadcast or multicast context, communicating each of the available formats may be preferred, so that various receiving devices can select an appropriate format based on their respective configurations or capabilities. In a unicast or individualized context, however, communicating multiple audio formats may unnecessarily consume network resources. Accordingly, a need exists for device-based audio-format selection.
  • SUMMARY
  • This disclosure relates to device-based audio-format selection. In accordance with one or more embodiments, digitally encoded audio may be sent by a computing system to a user system over a network. The user system may receive the digitally encoded audio, may produce a sound encoded within the digitally encoded audio, and may record a sample of the sound as produced by the user system. The sample may be analyzed, a format of digitally encoded audio may be selected based on the analysis, and audio digitally encoded in accordance with the selected format may be sent by the computing system to the user system over the network.
  • This summary is not intended to identify critical or essential features of the disclosure, but merely to summarize certain features and variations thereof. Other details and features will be described in the sections that follow.
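As a concrete illustration of the selection step in the summary, a backend might keep a catalogue of the formats available for an asset and pick the richest one the user system was shown to distinguishably produce. The schema, format names, and tie-breaking rule below are hypothetical; the disclosure does not define them.

```python
# Hypothetical per-asset catalogue of available audio formats.  The
# schema and names are illustrative; the disclosure does not define one.
FORMATS = [
    {"name": "stereo-44k", "channels": 2, "rate": 44100},
    {"name": "stereo-48k", "channels": 2, "rate": 48000},
    {"name": "5.1-48k", "channels": 6, "rate": 48000},
    {"name": "7.1-96k", "channels": 8, "rate": 96000},
]

def select_format(formats, max_rate, max_channels):
    """Pick the richest format within what the user system distinguishably
    produced, so richer variants are not sent needlessly over the network."""
    usable = [f for f in formats
              if f["rate"] <= max_rate and f["channels"] <= max_channels]
    if not usable:
        # Nothing fits the measured capabilities; fall back to the most
        # modest format available.
        return min(formats, key=lambda f: (f["channels"], f["rate"]))
    return max(usable, key=lambda f: (f["channels"], f["rate"]))
```

For example, if the analysis showed the user system distinguishably produced six channels at 48 kHz, `select_format(FORMATS, 48000, 6)` would choose the 5.1/48 kHz variant rather than sending the larger 7.1/96 kHz variant.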
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Some features herein are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements, and in which:
  • FIG. 1 depicts an illustrative network environment in which one or more aspects of the disclosure may be implemented;
  • FIG. 2 depicts an illustrative software and hardware device on which various aspects of the disclosure may be implemented;
  • FIG. 3 depicts an illustrative environment for employing systems and methods in accordance with one or more aspects of the disclosure;
  • FIGS. 4, 5, 6, and 7 depict various illustrative event sequences in accordance with one or more aspects of the disclosure; and
  • FIG. 8 depicts an illustrative method in accordance with one or more aspects of the disclosure.
  • DETAILED DESCRIPTION
  • FIG. 1 illustrates an example information distribution network in which one or more of the various features described herein may be implemented. The illustrated information distribution network is only one example of a network and is not intended to suggest any limitation as to the scope of use or functionality of the disclosure. The illustrated network should not be interpreted as having any dependency or requirement relating to any component or combination of components in an information distribution network.
  • A network 100 may be a telecommunications network, a Multi-Service Operator (MSO) network, a cable television (CATV) network, a cellular network, a wireless network, an optical fiber network, a coaxial cable network, a Hybrid Fiber-Coaxial (HFC) network, or any other type of information distribution network or combination of networks. For example, the network 100 may be a cellular broadband network communicating with multiple communications access points, such as a wireless communications tower 130. In another example, the network 100 may be a coaxial system comprising a Cable Modem Termination System (CMTS) communicating with numerous gateway interface devices (e.g., a gateway 111 in an example home 102 a). In another example, the network 100 may be a fiber-optic system comprising optical fibers extending from an Optical Line Terminal (OLT) to numerous Optical Network Terminals (ONTs) communicatively coupled with various gateway interface devices. In another example, the network 100 may be a Digital Subscriber Line (DSL) system that includes a local office 103 communicating with numerous gateway interface devices. In another example, the network 100 may be an HFC network in which Internet traffic is routed over both optical and coaxial communication paths to a gateway interface device in or near a user's home. Various aspects of the disclosure may operate on one or more of the networks described herein or any other network architectures now known or later developed.
  • The network 100 may use a series of interconnected communication links 101 (e.g., coaxial cables, optical fibers, wireless links, etc.) to connect a premises 102 (e.g., a home or other user environment) to the local office 103. The communication links 101 may include any wired communication links, wireless communication links, communications networks, or combinations thereof. For example, portions of the communication links 101 may be implemented with fiber-optic cable, while other portions of the communication links 101 may be implemented with coaxial cable. The communication links 101 may also include various communications components such as splitters, filters, amplifiers, wireless components, and other components for communicating data. Data may include, for example, Internet data, voice data, weather data, media content, and any other information. Media content may include, for example, video content, audio content, media on demand, video on demand, streaming video, television programs, text listings, graphics, advertisements, and other content. A media content item may represent an individual piece of media content, such as a particular movie, television episode, online video clip, song, audio recording, image, or any other data. In some instances, a media content item may be fragmented into segments, such as a plurality of two-second video fragments that may be separately addressed and retrieved.
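The two-second fragmentation mentioned above implies a simple mapping from a playback position to the separately addressable fragment that contains it. The sketch below assumes a hypothetical URL naming pattern; fragmented-delivery schemes such as HLS or MPEG-DASH each define their own addressing.

```python
FRAGMENT_SECONDS = 2  # fragment duration mentioned in the text

def fragment_for_offset(base_url, offset_seconds):
    """Map a playback position to the separately addressable fragment that
    contains it.  The URL pattern here is hypothetical."""
    index = int(offset_seconds // FRAGMENT_SECONDS)
    return f"{base_url}/fragment_{index:05d}.ts"
```

For instance, an offset of 5.3 seconds falls in fragment index 2, which covers seconds 4 through 6.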
  • The local office 103 may transmit downstream information signals onto the communication links 101, and one or more of the premises 102 may receive and process those signals. In certain implementations, the communication links 101 may originate from the local office 103 as a single communications path, and may be split into any number of communication links to distribute data to the premises 102 and various other destinations. Although the term premises is used by way of example, the premises 102 may include any type of user environment, such as single family homes, apartment complexes, businesses, schools, hospitals, parks, and other environments and combinations of environments.
  • The local office 103 may include an interface 104, which may be a computing device configured to manage communications between devices on the network of the communication links 101 and backend devices, such as a server. For example, the interface 104 may be a CMTS. The termination system may be as specified in a standard, such as, in an example of an HFC-type network, the Data Over Cable Service Interface Specification (DOCSIS) standard, published by Cable Television Laboratories, Inc. The termination system may be configured to transmit data over one or more downstream channels or frequencies to be received by various devices, such as modems in the premises 102, and to receive upstream communications from those modems on one or more upstream frequencies.
  • The local office 103 may include one or more network interfaces 108 for communicating with one or more external networks 109. The one or more external networks 109 may include, for example, one or more telecommunications networks, Internet Protocol (IP) networks, cellular communications networks (e.g., Global System for Mobile Communications (GSM), Code Division Multiple Access (CDMA), and any other 2nd, 3rd, 4th, or higher generation cellular communications networks), cellular broadband networks, radio access networks, fiber-optic networks, local wireless networks (e.g., Wi-Fi, WiMAX), satellite networks, and any other networks or combinations of networks.
  • The local office 103 may include a variety of servers that may be configured to perform various functions. The local office 103 may include a push server 105 for generating push notifications to deliver data, instructions, or both to devices that are configured to detect such notifications. The local office 103 may include a content server 106 configured to provide content (e.g., media content) to devices. The local office 103 may also include an application server 107.
  • The premises 102, such as the example home 102 a, may include an interface 120, which may include a modem 110 (or any device), for communicating on the communication links 101 with the local office 103, the one or more external networks 109, or both. For example, the modem 110 may be a coaxial cable modem (for coaxial cable links), a broadband modem (for DSL links), a fiber interface node (for fiber-optic links), or any other device or combination of devices. In certain implementations, the modem 110 may be a part of, or communicatively coupled to, the gateway 111. The gateway 111 may be, for example, a wireless router, a set-top box, a computer server, or any other computing device or combination.
  • The gateway 111 may be any computing device for communicating with the modem 110 to allow one or more other devices in the example home 102 a to communicate with the local office 103, the one or more external networks 109, or other devices communicatively coupled thereto. The gateway 111 may include local network interfaces to provide communication signals to client devices in or near the example home 102 a, such as a television 112, a set-top box 113, a personal computer 114, a laptop computer 115, a wireless device 116 (e.g., a wireless laptop, a tablet computer, a mobile phone, a portable gaming device, a vehicular computing system, a mobile computing system, a navigation system, an entertainment system in an automobile, marine vessel, aircraft, or the like), or any other device.
  • FIG. 2 illustrates general hardware elements and software elements that can be used to implement any of the various computing devices, servers, encoders, caches, and/or software discussed herein. A device 200 may include a processor 201, which may execute instructions of a computer program to perform any of the functions and steps described herein. The instructions may be stored in any type of computer-readable medium or memory to configure the operation of the processor 201. For example, instructions may be stored in a Read-Only Memory (ROM) 202, a Random Access Memory (RAM) 203, a removable media 204 (such as a Universal Serial Bus (USB) drive, a Compact Disk (CD), or a Digital Versatile Disk (DVD)), a hard drive, a floppy disk, or any other desired electronic storage medium. Instructions may also be stored in a hard drive 205, which may be an internal or external hard drive.
  • The device 200 may include one or more output devices, such as a display 206 (e.g., an integrated or external display, monitor, or television), and may include a device controller 207, such as a video processor. In some embodiments, the device 200 may include an input device 208, such as a remote control, keyboard, mouse, touch screen, microphone, motion sensing input device, and/or any other input device.
  • The device 200 may also include one or more network interfaces, such as a network Input/Output (I/O) interface 210 to communicate with a network 209. The network interface may be a wired interface, wireless interface, or a combination of the two. In some embodiments, the network I/O interface 210 may include a cable modem, and the network 209 may include the communication links 101 shown in FIG. 1, the one or more external networks 109, an in-home network, a provider's wireless, coaxial, fiber, or hybrid fiber/coaxial distribution system (e.g., a DOCSIS network), and/or any other desired network.
  • FIG. 3 depicts an illustrative environment for employing systems and methods in accordance with one or more aspects of the disclosure. Referring to FIG. 3, an environment 300 may include a user system 302 and one or more backend system(s) 304. The user system 302 and the backend system(s) 304 may be interfaced via one or more network(s) 306, which may include one or more LAN(s) and/or WAN(s) (e.g., one or more networks associated with the user system 302, the backend system(s) 304, one or more distribution or service-provider networks that interface the user system 302 and/or the backend system(s) 304 to the Internet, and/or the Internet, or portions thereof). The backend system(s) 304 may include one or more computing systems and/or devices (e.g., servers or the like) configured to perform one or more of the functions described herein (e.g., systems and/or devices for storing, selecting, and/or communicating digital media content). The user system 302 may include one or more user devices 308, 310, and 312, which may be associated with one another (e.g., via their inclusion within the user system 302, a connection to a network associated with the user system 302, an affiliation with a service provider or an account thereof, or the like). The user devices 308, 310, and 312 may include one or more computing devices (e.g., servers, personal computers, desktop computers, laptop computers, tablet computers, smartphones, mobile devices, media players, set-top boxes, or the like) configured to perform one or more of the functions described herein (e.g., interfacing with users, selecting media content items, displaying or producing associated media content, and/or generating or communicating data associated therewith).
  • In some embodiments, one or more of the user devices 308, 310, and 312 may include one or more hardware components described herein (e.g., speakers, microphones, displays, communication interfaces, memories, processors or the like). Additionally or alternatively, the user system 302 may include one or more peripheral devices, which may be associated with one or more of the user devices 308, 310, and 312. For example, the user system 302 may include a display 314, which may be associated with one or more of the user devices 308, 310, and 312 and may display aspects or components of video content received, processed, and/or stored by one or more of the user devices 308, 310, and 312. Similarly, the user system 302 may include one or more speakers 316, 318, 320, 322, and 324, which may be associated with one or more of the user devices 308, 310, and 312 and may produce aspects or components of audio content received, processed, and/or stored by one or more of the user devices 308, 310, and 312. As illustrated in FIG. 3, one or more of the speakers 316, 318, 320, 322, and 324 may be physically located in various physical locations relative to one or more other components of the user system 302 (e.g., the display 314, one or more of the user devices 308, 310, and 312, or one or more other speakers of the speakers 316, 318, 320, 322, and 324). For example, the speaker 316 may be located alongside and to the left of the display 314 (e.g., a front, left channel), the speaker 318 may be located near the display 314 (e.g., a center channel), the speaker 320 may be located alongside and to the right of the display 314 (e.g., a front, right channel), the speaker 322 may be located to the left of the display 314 and further in front of the display 314 than the speaker 316 (e.g., a rear, left channel), and the speaker 324 may be located to the right of the display 314 and further in front of the display 314 than the speaker 320 (e.g., a rear, right channel).
  • FIGS. 4, 5, 6, and 7 depict various illustrative event sequences in accordance with one or more aspects of the disclosure. The events and steps illustrated in FIGS. 4, 5, 6, and 7 are merely illustrative, and one of ordinary skill in the art will recognize that some steps or events may be omitted, may be performed or occur in an order other than that illustrated, and/or may be performed by or occur at a device other than that illustrated. Referring to FIG. 4, the backend system(s) 304 may generate data comprising configuration information and may communicate (e.g., via the network(s) 306) the data comprising the configuration information to the user device 308. For example, a user of the user system 302 may have selected media content available from the backend system(s) 304, which may have initiated a configuration routine configured to cause the backend system(s) 304 to generate the data comprising the configuration information and communicate the data comprising the configuration information to the user device 308. In some embodiments, the configuration information may be configured to cause the user system 302 to produce one or more sounds encoded within digital audio content and to record one or more samples of the sound(s) as produced by the user system 302. Additionally or alternatively, the configuration information may be configured to cause the user system 302 to instruct a user (e.g., via audio instructions produced by the user system 302, visual instructions displayed by the user system 302, or the like) to physically locate one or more of the user devices 308, 310, and 312 (e.g., a device configured to record the sample(s) of the sound(s)) in one or more specified physical locations (e.g., a location where the user intends to consume media content, or the like).
  • The backend system(s) 304 may generate data comprising digital media content (e.g., digitally encoded audio and/or video, or the like) and may communicate (e.g., via the network(s) 306) the data comprising the digital media content to the user device 308. For example, the digital media content may comprise the sound(s) (e.g., the sound(s) the configuration information configured the user system 302 to record) and/or graphical or video content for the user system 302 to display or produce (e.g., content indicating that the sound(s) are being recorded as part of the configuration routine). The user device 308 may cause the user system 302 to display the graphical or video content, produce the sound(s), and record the sample(s) of the sound(s) as produced by the user system 302. The user device 308 may communicate (e.g., via the network(s) 306) the sample(s) to the backend system(s) 304.
  • The backend system(s) 304 may analyze the sample(s). For example, the sound(s) may have been encoded at a plurality of different sampling rates, and the backend system(s) 304 may analyze the sample(s) to determine a maximum sampling rate that the user system 302 distinguishably produced relative to each other sampling rate of the plurality of different sampling rates. That is, the plurality of different sampling rates may include sampling rates that the user system 302 is capable of producing but that are not produced by the user system 302 in a manner that is distinguishable (e.g., from the perspective of a listener) from the maximum sampling rate determined by analyzing the sample(s). The selected media content may comprise a digitally encoded video asset (e.g., a movie, television program, or the like) and a plurality of formats of digitally encoded audio associated with the video asset, and the backend system(s) 304 may select, from amongst the plurality of formats, a format encoded at the maximum sampling rate determined by analyzing the sample(s). The backend system(s) 304 may generate data comprising at least a portion of the video asset and corresponding audio (e.g., audio encoded at the maximum sampling rate) and may communicate (e.g., via the network(s) 306) the data comprising the at least a portion of the video asset and the corresponding audio to the user device 308. The data comprising the at least a portion of the video asset and the corresponding audio may comprise one or more sound(s) that the user system 302 is configured to record (e.g., the corresponding audio may include such sound(s)).
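The sampling-rate analysis described above could be sketched along the following lines (an illustrative sketch only, not the claimed implementation; the function name, the 1% energy threshold, and the spectral-comparison approach are assumptions). The idea is that a recording of a test sound encoded at a higher sampling rate is "distinguishable" only if it actually contains energy in the frequency band that the next-lower rate cannot represent:

```python
import numpy as np

def max_distinguishable_rate(samples, candidate_rates, record_rate=96_000):
    """Pick the highest candidate sampling rate whose recording is
    spectrally distinguishable from the next-lower candidate.

    samples: dict mapping candidate rate -> recorded mono signal (np.ndarray)
    candidate_rates: ascending list of rates the test sounds were encoded at
    record_rate: sampling rate of the recordings themselves
    """
    best = candidate_rates[0]
    for lower, higher in zip(candidate_rates, candidate_rates[1:]):
        rec = samples[higher]
        spectrum = np.abs(np.fft.rfft(rec))
        freqs = np.fft.rfftfreq(len(rec), d=1.0 / record_rate)
        # Energy the higher-rate encoding can carry but the lower one cannot:
        band = (freqs > lower / 2) & (freqs <= higher / 2)
        ratio = spectrum[band].sum() / (spectrum.sum() + 1e-12)
        if ratio > 0.01:   # arbitrary audibility threshold (assumption)
            best = higher
        else:
            break  # higher rates were not reproduced distinguishably
    return best
```

A real system would also need to align the recording with the reference and compensate for room noise; the sketch ignores both.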
  • The user device 308 may cause the user system 302 to display the at least a portion of the video asset, produce the corresponding audio, and record sample(s) of the sound(s) as produced by the user system 302. The user device 308 may communicate (e.g., via the network(s) 306) the sample(s) to the backend system(s) 304. The backend system(s) 304 may analyze the sample(s). For example, the sound(s) (e.g., a portion of the audio corresponding to the at least a portion of the video asset) may have been encoded at a plurality of different sampling rates (e.g., sampling rates other than the maximum sampling rate previously determined), and the backend system(s) 304 may analyze the sample(s) to determine a new maximum sampling rate that the user system 302 distinguishably produced relative to each other sampling rate of the plurality of different sampling rates. That is, one or more conditions, such as a configuration of the user system 302 (e.g., a change in speaker configuration) or an associated environmental variable (e.g., a level of background noise), may have changed since the sample(s) previously analyzed were recorded, and the backend system(s) 304 may determine that the user system 302 is capable of producing audio encoded at a different sampling rate (e.g., a higher or lower sampling rate) in a manner that is distinguishable (e.g., from the perspective of a listener) from audio encoded at the maximum sampling rate previously determined. The backend system(s) 304 may select, from amongst the plurality of formats of digitally encoded audio associated with the video asset, a format encoded at the new maximum sampling rate.
The backend system(s) 304 may generate data comprising a subsequent portion of the video asset and corresponding audio (e.g., audio encoded at the new maximum sampling rate) and may communicate (e.g., via the network(s) 306) the data comprising the subsequent portion of the video asset and the corresponding audio to the user device 308 (e.g., for display and production by the user system 302).
  • Referring to FIG. 5, as described above with respect to FIG. 4, the backend system(s) 304 may generate data comprising configuration information and may communicate (e.g., via the network(s) 306) the data comprising the configuration information to the user device 308. For example, a user of the user system 302 may have selected media content available from the backend system(s) 304, which may have initiated a configuration routine configured to cause the backend system(s) 304 to generate the data comprising the configuration information and communicate the data comprising the configuration information to the user device 308. As indicated above, such configuration information may be configured to cause the user system 302 to produce one or more sounds encoded within digital audio content, to record one or more samples of the sound(s) as produced by the user system 302, and/or to cause the user system 302 to instruct a user to physically locate one or more of the user devices 308, 310, and 312 (e.g., a device configured to record the sample(s) of the sound(s)) in one or more specified physical locations (e.g., a location where the user intends to consume media content, or the like). In some embodiments, the configuration information may comprise instructions configured to cause the user system 302 to analyze the sample(s) of the sound(s) and to communicate results of the analysis to the backend system(s) 304.
  • The backend system(s) 304 may generate data comprising digital media content (e.g., digitally encoded audio and/or video, or the like) and may communicate (e.g., via the network(s) 306) the data comprising the digital media content to the user device 308. For example, the selected media content may comprise a digitally encoded video asset (e.g., a movie, television program, or the like) and a plurality of formats of digitally encoded audio associated with the video asset, and the backend system(s) 304 may generate data comprising at least a portion of the video asset and corresponding audio and may communicate (e.g., via the network(s) 306) the data comprising the at least a portion of the video asset and the corresponding audio to the user device 308. The data comprising the at least a portion of the video asset and the corresponding audio may comprise one or more sound(s) that the user system 302 is configured to record (e.g., the corresponding audio may include such sound(s)). The user device 308 may cause the user system 302 to display the graphical or video content, produce the sound(s), and record the sample(s) of the sound(s) as produced by the user system 302. The user device 308 may cause the user system 302 to analyze the sample(s). For example, each of the sound(s) may have been encoded using a different format of the plurality of formats of digitally encoded audio associated with the video asset, and the user device 308 may cause the user system 302 to analyze the sample(s) to determine a format for the user system 302. The user device 308 may generate data comprising results of the analysis and may communicate (e.g., via the network(s) 306) the data comprising the results of the analysis to the backend system(s) 304. 
The backend system(s) 304 may select, from amongst the plurality of formats of digitally encoded audio associated with the video asset, the format determined by analyzing the sample(s) and may generate data comprising a subsequent portion of the video asset and corresponding audio (e.g., audio encoded in accordance with the format determined by analyzing the sample(s)) and may communicate (e.g., via the network(s) 306) the data comprising the subsequent portion of the video asset and the corresponding audio to the user device 308 (e.g., for display and production by the user system 302).
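One way the user system 302 might "determine a format" from the sample(s), as described in this passage, is to compare each format's recording against a reference and pick the best match. The sketch below is purely illustrative (the function name, the normalized-correlation metric, and the format labels are assumptions, not part of the disclosure):

```python
import numpy as np

def pick_best_format(recordings, references):
    """Choose the audio format whose recorded playback best matches its
    reference signal.

    recordings: dict format_name -> mono signal recorded by the user device
    references: dict format_name -> the signal that was actually encoded
    """
    def score(a, b):
        # Normalized correlation in [-1, 1]; higher means a closer match.
        n = min(len(a), len(b))
        a, b = a[:n], b[:n]
        denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
        return float(a @ b) / denom

    return max(recordings, key=lambda f: score(recordings[f], references[f]))
```

In practice the recordings would first need time alignment with the references; the sketch assumes they are already aligned.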
  • Referring to FIG. 6, the backend system(s) 304 may identify one or more user devices associated with the user system 302. For example, a user of the user system 302 may have selected (e.g., via the user device 308) media content available from the backend system(s) 304, and the backend computing system(s) 304 may determine that the user devices 308, 310, and 312 are associated with the user system 302 (e.g., based on their inclusion within the user system 302, connection to a network associated with the user system 302, affiliation with a service provider or an account thereof, or the like). The backend system(s) 304 may generate data comprising configuration information for the user device 308 and may communicate (e.g., via the network(s) 306) the data comprising the configuration information to the user device 308. Similarly, the backend system(s) 304 may generate data comprising configuration information for the user device 310 and may communicate (e.g., via the network(s) 306) the data comprising the configuration information to the user device 310, and the backend system(s) 304 may generate data comprising configuration information for the user device 312 and may communicate (e.g., via the network(s) 306) the data comprising the configuration information to the user device 312. For example, selection of the media content may have initiated a configuration routine configured to cause the backend system(s) 304 to generate the data comprising the configuration information and communicate the data comprising the configuration information to the user devices 308, 310, and 312.
  • As indicated above, such configuration information may be configured to cause the user system 302 to produce one or more sounds encoded within digital audio content and to record one or more samples of the sound(s) as produced by the user system 302. For example, the configuration information communicated to the user device 308 may be configured to cause the user device 308 to cause the user system 302 to produce one or more sounds encoded within digital audio content and to record (e.g., via a microphone associated with the user device 308) one or more samples of the sound(s) as produced by the user system 302. Similarly, the configuration information communicated to the user device 310 may be configured to cause the user device 310 to record (e.g., via a microphone associated with the user device 310) one or more samples of the sound(s) as produced by the user system 302, and the configuration information communicated to the user device 312 may be configured to cause the user device 312 to record (e.g., via a microphone associated with the user device 312) one or more samples of the sound(s) as produced by the user system 302. The configuration information (e.g., the configuration information communicated to the user device 308, the configuration information communicated to the user device 310, and/or the configuration information communicated to the user device 312) may be further configured to cause the user system 302 to instruct a user to physically locate one or more of the user devices 308, 310, and 312 in one or more specified physical locations (e.g., to locate the user device 308 near the display 314, to locate the user device 310 in front of and to the left of the display 314, and to locate the user device 312 in front of and to the right of the display 314).
  • The backend system(s) 304 may generate data comprising digital media content (e.g., digitally encoded audio and/or video, or the like) and may communicate (e.g., via the network(s) 306) the data comprising the digital media content to the user device 308. For example, the selected media content may comprise a digitally encoded video asset (e.g., a movie, television program, or the like) and a plurality of formats of digitally encoded audio associated with the video asset, and the backend system(s) 304 may generate data comprising at least a portion of the video asset and corresponding audio and may communicate (e.g., via the network(s) 306) the data comprising the at least a portion of the video asset and the corresponding audio to the user device 308. The data comprising the at least a portion of the video asset and the corresponding audio may comprise one or more sound(s) that the user system 302 is configured to record (e.g., the corresponding audio may include such sound(s)). In some embodiments, the sound(s) may comprise one or more tones configured for subsequent analysis. For example, the tone(s) may be configured to be played through one or more specific audio channels of the user system 302 (e.g., the tone(s) may include a tone configured to be played through a front, left channel of the user system 302 (e.g., the speaker 316), a tone configured to be played through a center channel of the user system 302 (e.g., the speaker 318), a tone configured to be played through a front, right channel of the user system 302 (e.g., the speaker 320), a tone configured to be played through a rear, left channel of the user system 302 (e.g., the speaker 322), and/or a tone configured to be played through a rear, right channel of the user system 302 (e.g., the speaker 324)).
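The per-channel tone(s) described above could be generated along these lines (an illustrative sketch; the channel names, tone frequencies, and function signature are assumptions, not part of the disclosure). Assigning each channel a distinct frequency lets later analysis tell the channels apart in the microphone recordings:

```python
import numpy as np

# Hypothetical channel-to-tone assignment: each surround channel gets a
# distinct pure-tone frequency so recordings can be attributed to channels.
CHANNEL_TONES_HZ = {
    "front_left": 440.0,
    "center": 554.0,
    "front_right": 659.0,
    "rear_left": 880.0,
    "rear_right": 1108.0,
}

def make_test_signal(sample_rate=48_000, duration_s=1.0):
    """Return a (num_samples, num_channels) array with one tone per channel."""
    t = np.arange(int(sample_rate * duration_s)) / sample_rate
    channels = [np.sin(2 * np.pi * f * t) for f in CHANNEL_TONES_HZ.values()]
    return np.stack(channels, axis=1)
```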
  • The user device 308 may cause the user system 302 to display the graphical or video content, produce the sound(s), and record the sample(s) of the sound(s) as produced by the user system 302. The user device 310 may record the sample(s) of the sound(s) as produced by the user system 302. The user device 312 may record the sample(s) of the sound(s) as produced by the user system 302. The user device 308 may communicate (e.g., via the network(s) 306) the sample(s) it recorded to the backend system(s) 304. Similarly, the user device 310 may communicate (e.g., via the network(s) 306) the sample(s) it recorded to the backend system(s) 304, and the user device 312 may communicate (e.g., via the network(s) 306) the sample(s) it recorded to the backend system(s) 304. The backend system(s) 304 may analyze the sample(s). For example, the sample(s) may comprise a plurality of channels, each of which may correspond to a microphone that recorded a portion of the sample(s) (e.g., a channel corresponding to a portion of the sample(s) recorded by the microphone associated with the user device 308, a channel corresponding to a portion of the sample(s) recorded by the microphone associated with the user device 310, and/or a channel corresponding to a portion of the sample(s) recorded by the microphone associated with the user device 312), and the backend system(s) 304 may analyze the sample(s) (e.g., by comparing recording(s) (e.g., of the tone(s)) produced by each of the user devices 308, 310, and 312 to one another) to determine a number of channels within the digitally encoded audio corresponding to the at least a portion of the video asset that the user system 302 distinguishably produced relative to each other channel within the digitally encoded audio corresponding to the at least a portion of the video asset. 
For example, the user system 302 may include hardware configured to produce five separate channels of audio (e.g., the speakers 316, 318, 320, 322, and 324); however, due to a configuration of the user system 302 (e.g., the speakers 316, 318, and 320 may be located too close to one another), the backend system(s) 304 may determine, based on the analysis of the sample(s), that the user system 302 does not distinguishably produce the five separate channels relative to one another but instead produces three. The backend system(s) 304 may select, from amongst the plurality of formats of digitally encoded audio associated with the video asset, a format comprising the number of channels the user system 302 distinguishably produced and may generate data comprising a subsequent portion of the video asset and corresponding audio (e.g., audio encoded in accordance with the format comprising the number of channels the user system 302 distinguishably produced) and may communicate (e.g., via the network(s) 306) the data comprising the subsequent portion of the video asset and the corresponding audio to the user device 308 (e.g., for display and production by the user system 302).
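The channel-distinguishability analysis described above might, for illustration, compare the per-channel tone levels measured at the several microphone positions: channels whose spatial level patterns are nearly identical (e.g., because speakers are placed too close together) collapse into a single distinguishable channel. Everything in the sketch below (function name, similarity threshold, input layout) is an assumption:

```python
import numpy as np

def count_distinct_channels(levels, similarity=0.98):
    """Estimate how many audio channels the playback system reproduces
    distinguishably, from per-channel tone levels at several microphones.

    levels: array of shape (num_mics, num_channels); entry [m, c] is the
    magnitude of channel c's test tone in microphone m's recording.
    Channels whose level patterns across microphones are nearly collinear
    are treated as merged.
    """
    # Unit-normalize each channel's level pattern across microphones.
    vecs = levels / (np.linalg.norm(levels, axis=0, keepdims=True) + 1e-12)
    groups = []  # one representative pattern per distinguishable group
    for c in range(levels.shape[1]):
        v = vecs[:, c]
        if not any(float(v @ g) > similarity for g in groups):
            groups.append(v)
    return len(groups)
```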
  • Referring to FIG. 7, the backend system(s) 304 may identify one or more user devices associated with the user system 302. For example, a user of the user system 302 may have selected (e.g., via the user device 308) media content available from the backend system(s) 304, and the backend computing system(s) 304 may determine that the user devices 308, 310, and 312 are associated with the user system 302 (e.g., based on their inclusion within the user system 302, connection to a network associated with the user system 302, affiliation with a service provider or an account thereof, or the like). The backend system(s) 304 may generate data comprising configuration information for the user device 308 and may communicate (e.g., via the network(s) 306) the data comprising the configuration information to the user device 308. Similarly, the backend system(s) 304 may generate data comprising configuration information for the user device 310 and may communicate (e.g., via the network(s) 306) the data comprising the configuration information to the user device 310, and the backend system(s) 304 may generate data comprising configuration information for the user device 312 and may communicate (e.g., via the network(s) 306) the data comprising the configuration information to the user device 312. For example, selection of the media content may have initiated a configuration routine configured to cause the backend system(s) 304 to generate the data comprising the configuration information and communicate the data comprising the configuration information to the user devices 308, 310, and 312.
  • As indicated above, such configuration information may be configured to cause the user system 302 to produce one or more sounds encoded within digital audio content and to record one or more samples of the sound(s) as produced by the user system 302. For example, the configuration information communicated to the user device 308 may be configured to cause the user device 308 to cause the user system 302 to produce one or more sounds encoded within digital audio content and to record (e.g., via a microphone associated with the user device 308) one or more samples of the sound(s) as produced by the user system 302. Similarly, the configuration information communicated to the user device 310 may be configured to cause the user device 310 to record (e.g., via a microphone associated with the user device 310) one or more samples of the sound(s) as produced by the user system 302, and the configuration information communicated to the user device 312 may be configured to cause the user device 312 to record (e.g., via a microphone associated with the user device 312) one or more samples of the sound(s) as produced by the user system 302. In some embodiments, the configuration information may comprise instructions configured to cause the user system 302 to analyze the sample(s) of the sound(s) and to communicate results of the analysis to the backend system(s) 304. For example, the configuration information communicated to the user device 308 may be configured to cause the user device 308 to analyze the sample(s) of the sound(s) it records and to communicate results of the analysis to the backend system(s) 304. 
Similarly, the configuration information communicated to the user device 310 may be configured to cause the user device 310 to analyze the sample(s) of the sound(s) it records and to communicate results of the analysis to the backend system(s) 304, and the configuration information communicated to the user device 312 may be configured to cause the user device 312 to analyze the sample(s) of the sound(s) it records and to communicate results of the analysis to the backend system(s) 304.
  • The backend system(s) 304 may generate data comprising digital media content (e.g., digitally encoded audio and/or video, or the like) and may communicate (e.g., via the network(s) 306) the data comprising the digital media content to the user device 308. For example, the selected media content may comprise a digitally encoded video asset (e.g., a movie, television program, or the like) and a plurality of formats of digitally encoded audio associated with the video asset, and the backend system(s) 304 may generate data comprising at least a portion of the video asset and corresponding audio and may communicate (e.g., via the network(s) 306) the data comprising the at least a portion of the video asset and the corresponding audio to the user device 308. The data comprising the at least a portion of the video asset and the corresponding audio may comprise one or more sound(s) that the user system 302 is configured to record (e.g., the corresponding audio may include such sound(s)).
  • The user device 308 may cause the user system 302 to display the graphical or video content, produce the sound(s), and record the sample(s) of the sound(s) as produced by the user system 302. The user device 310 may record the sample(s) of the sound(s) as produced by the user system 302. The user device 312 may record the sample(s) of the sound(s) as produced by the user system 302. The user device 308 may analyze the sample(s) of the sound(s) it records. The user device 310 may analyze the sample(s) of the sound(s) it records. The user device 312 may analyze the sample(s) of the sound(s) it records. The user device 308 may generate data comprising results of its analysis and may communicate (e.g., via the network(s) 306) the data comprising the results of its analysis to the backend system(s) 304. The user device 310 may generate data comprising results of its analysis and may communicate (e.g., via the network(s) 306) the data comprising the results of its analysis to the backend system(s) 304. The user device 312 may generate data comprising results of its analysis and may communicate (e.g., via the network(s) 306) the data comprising the results of its analysis to the backend system(s) 304. The backend system(s) 304 may analyze the results of the analyses. For example, each of the sound(s) may have been encoded using a different format of the plurality of formats of digitally encoded audio associated with the video asset, and the backend system(s) 304 may analyze the results of the analyses to determine a format for the user system 302. 
The backend system(s) 304 may select, from amongst the plurality of formats of digitally encoded audio associated with the video asset, the format determined by analyzing the results of the analyses and may generate data comprising a subsequent portion of the video asset and corresponding audio (e.g., audio encoded in accordance with the format determined by analyzing the results of the analyses) and may communicate (e.g., via the network(s) 306) the data comprising the subsequent portion of the video asset and the corresponding audio to the user device 308 (e.g., for display and production by the user system 302).
  • FIG. 8 depicts an illustrative method in accordance with one or more aspects of the disclosure. Referring to FIG. 8, at a step 802, one or more user devices may be configured to record one or more samples of one or more sounds encoded within digitally encoded audio. For example, the backend system(s) 304 may configure the user device 308 to record one or more samples of one or more sounds encoded within digitally encoded audio associated with a video asset. At a step 804, digital media content (e.g., audio/visual content) comprising the sound(s) may be communicated to a user system associated with the user device. For example, the backend system(s) 304 may communicate data comprising a portion of the video asset and corresponding audio to the user system 302. At a step 806, the user device may record the sample(s) of the sound(s) as produced by the user system. For example, the user device 308 may record the sample(s) of the sound(s) encoded within the digitally encoded audio associated with the video asset as produced by the user system 302. At a step 808, the sample(s) may be analyzed. For example, the backend system(s) 304 may analyze the sample(s) recorded by the user device 308.
  • At a step 810, a determination may be made, based on results of the analysis, whether to change to a different format of the digitally encoded audio. For example, the backend system(s) 304 may determine, based on the results of analyzing the sample(s) recorded by the user device 308, whether to change to a different format of the digitally encoded audio associated with the video asset. At a step 812, responsive to determining to change to a different format of the digitally encoded audio, the different format may be selected, and the method may return to the step 804, in which digital media content comprising audio digitally encoded in accordance with the selected format may be communicated to the user system. For example, responsive to determining to change to a different format of the digitally encoded audio associated with the video asset, the backend system(s) 304 may select the different format and may communicate data comprising a subsequent portion of the video asset and corresponding audio digitally encoded in accordance with the selected format to the user system 302. Returning to step 810, responsive to determining not to change to a different format of the digitally encoded audio, the method may return to the step 804, in which digital media content comprising audio digitally encoded in accordance with the previously utilized format may be communicated to the user system. For example, responsive to determining not to change to a different format of the digitally encoded audio associated with the video asset, the backend system(s) 304 may communicate data comprising a subsequent portion of the video asset and corresponding audio digitally encoded in accordance with the previously utilized format to the user system 302.
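The FIG. 8 loop (steps 804 through 812) can be sketched as follows; this is an illustrative abstraction only, and the function names, the `analyze_sample` callback, and the score threshold are assumptions rather than elements of the disclosure:

```python
# Hypothetical sketch of the FIG. 8 loop: send each portion with the current
# audio format (step 804), have the recorded sample analyzed (steps 806-808),
# and switch formats when the analysis indicates a better alternative
# (steps 810-812).
def stream_asset(portions, initial_format, analyze_sample, threshold=0.5):
    """Yield (portion, format) pairs, changing format when a recorded sample
    of the produced sound scores below the threshold."""
    fmt = initial_format
    for portion in portions:
        yield portion, fmt                                # step 804
        score, better_fmt = analyze_sample(portion, fmt)  # steps 806-808
        if score < threshold and better_fmt is not None:  # step 810
            fmt = better_fmt                              # step 812
```

For example, if the analysis of the first portion's sample reports poor reproduction of a surround format and suggests stereo, the remaining portions are sent with stereo audio.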
  • The methods and features recited herein may be implemented through any number of computer readable media that are able to store computer readable instructions. Examples of computer readable media that may be used include RAM, ROM, Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, CD-ROM, DVD, or other optical disk storage, magnetic cassettes, magnetic tape, magnetic storage, and the like.
  • Additionally or alternatively, in at least some embodiments, the methods and features recited herein may be implemented through one or more Integrated Circuits (ICs). An IC may, for example, be a microprocessor that accesses programming instructions or other data stored in a ROM. In some embodiments, a ROM may store program instructions that cause an IC to perform operations according to one or more of the methods described herein. In some embodiments, one or more of the methods described herein may be hardwired into an IC. For example, an IC may comprise an Application Specific Integrated Circuit (ASIC) having gates and/or other logic dedicated to the calculations and other operations described herein. In still other embodiments, an IC may perform some operations based on execution of programming instructions read from ROM or RAM, with other operations hardwired into gates or other logic. Further, an IC may be configured to output image data to a display buffer.
  • Although specific examples of carrying out the disclosure have been described, those skilled in the art will appreciate that there are numerous variations and permutations of the above-described apparatuses and methods that are contained within the spirit and scope of the disclosure as set forth in the appended claims. Additionally, numerous other embodiments, modifications, and variations within the scope and spirit of the appended claims may occur to persons of ordinary skill in the art from a review of this disclosure. Specifically, one or more of the features described herein may be combined with any or all of the other features described herein.
  • The various features described above are merely non-limiting examples, and may be rearranged, combined, subdivided, omitted, and/or altered in any desired manner. For example, features of the servers may be subdivided among multiple processors and/or computing devices. The true scope of this patent should only be defined by the claims that follow.

Claims (21)

1. (canceled)
2. A method comprising:
sending, by a computing system and to a user system, digitally encoded audio corresponding to first video content; and
sending, to the user system, digitally encoded audio corresponding to second video content and encoded in an audio data type that is:
different from an audio data type of the digitally encoded audio corresponding to the first video content; and
selected, by the computing system, based on information associated with output, at the user system, of the digitally encoded audio corresponding to the first video content.
3. The method of claim 2, wherein the first video content is a first portion of a video asset and the second video content is a second portion of the video asset.
4. The method of claim 2, wherein the information associated with the output comprises a sound sample that is generated by the user system and that is based on the digitally encoded audio corresponding to the first video content.
5. The method of claim 2, wherein the information associated with the output comprises data indicating an analysis of a sound sample, wherein the sound sample is generated by the user system based on the digitally encoded audio corresponding to the first video content.
6. The method of claim 2, wherein the user system comprises a plurality of speakers, and the information associated with the output comprises data indicating a layout of the plurality of speakers.
7. The method of claim 2, wherein the user system comprises a user device, and the information associated with the output comprises data indicating a physical location of the user device.
8. The method of claim 2, wherein the audio data type of the digitally encoded audio corresponding to the second video content comprises fewer audio channels than the audio data type of the digitally encoded audio corresponding to the first video content.
9. A method comprising:
sending, by a computing system and to a user system, digitally encoded audio corresponding to first video content;
receiving, from the user system, audio information associated with processing, at the user system, of the digitally encoded audio corresponding to the first video content;
selecting, based on the audio information, an audio data type for digitally encoded audio corresponding to second video content; and
sending, to the user system, the digitally encoded audio corresponding to the second video content, wherein the digitally encoded audio corresponding to the second video content is based on the selected audio data type.
10. The method of claim 9, wherein the first video content is a first portion of a video asset and the second video content is a second portion of the video asset.
11. The method of claim 9, wherein the audio information comprises a sound sample that is generated by the user system and that is based on the digitally encoded audio corresponding to the first video content.
12. The method of claim 9, wherein the audio information comprises data indicating an analysis of a sound sample, wherein the sound sample is generated by the user system based on the digitally encoded audio corresponding to the first video content.
13. The method of claim 9, wherein the user system comprises a plurality of speakers, and the audio information comprises data indicating a layout of the plurality of speakers.
14. The method of claim 9, wherein the user system comprises a user device, and the audio information comprises data indicating a physical location of the user device.
15. The method of claim 9, wherein the audio data type of the digitally encoded audio corresponding to the second video content comprises fewer audio channels than an audio data type of the digitally encoded audio corresponding to the first video content.
16. A method comprising:
receiving, by a user system and from a computing system, digitally encoded audio corresponding to first video content;
processing the digitally encoded audio corresponding to the first video content;
sending, to the computing system, audio information associated with the processing the digitally encoded audio corresponding to the first video content; and
receiving, from the computing system, digitally encoded audio corresponding to second video content, wherein an audio data type of the digitally encoded audio corresponding to the second video content is based on the audio information.
17. The method of claim 16, wherein the first video content is a first portion of a video asset and the second video content is a second portion of the video asset.
18. The method of claim 16, wherein the audio information comprises a sound sample that is generated by the user system and that is based on the digitally encoded audio corresponding to the first video content.
19. The method of claim 16, wherein the audio information comprises data indicating an analysis of a sound sample, wherein the sound sample is generated by the user system based on the digitally encoded audio corresponding to the first video content.
20. The method of claim 16, wherein the user system comprises a plurality of speakers, and the audio information comprises data indicating a layout of the plurality of speakers.
21. The method of claim 16, wherein the audio data type of the digitally encoded audio corresponding to the second video content comprises fewer audio channels than an audio data type of the digitally encoded audio corresponding to the first video content.
US15/697,571 2015-09-14 2017-09-07 Device-Based Audio-Format Selection Pending US20180152739A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/697,571 US20180152739A1 (en) 2015-09-14 2017-09-07 Device-Based Audio-Format Selection

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/853,680 US9800905B2 (en) 2015-09-14 2015-09-14 Device based audio-format selection
US15/697,571 US20180152739A1 (en) 2015-09-14 2017-09-07 Device-Based Audio-Format Selection

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/853,680 Continuation US9800905B2 (en) 2015-09-14 2015-09-14 Device based audio-format selection

Publications (1)

Publication Number Publication Date
US20180152739A1 true US20180152739A1 (en) 2018-05-31

Family

ID=58237553

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/853,680 Active US9800905B2 (en) 2015-09-14 2015-09-14 Device based audio-format selection
US15/697,571 Pending US20180152739A1 (en) 2015-09-14 2017-09-07 Device-Based Audio-Format Selection

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/853,680 Active US9800905B2 (en) 2015-09-14 2015-09-14 Device based audio-format selection

Country Status (1)

Country Link
US (2) US9800905B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10762911B2 (en) * 2015-12-01 2020-09-01 Ati Technologies Ulc Audio encoding using video information
US10162853B2 (en) * 2015-12-08 2018-12-25 Rovi Guides, Inc. Systems and methods for generating smart responses for natural language queries

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030021589A1 (en) * 2001-07-27 2003-01-30 Shu Lin Recording and playing back multiple programs
US20050036519A1 (en) * 2003-08-13 2005-02-17 Jeyendran Balakrishnan Method and system for re-multiplexing of content-modified MPEG-2 transport streams using interpolation of packet arrival times
US9686625B2 (en) * 2015-07-21 2017-06-20 Disney Enterprises, Inc. Systems and methods for delivery of personalized audio
US9924252B2 (en) * 2013-03-13 2018-03-20 Polycom, Inc. Loudspeaker arrangement with on-screen voice positioning for telepresence system
US11062368B1 (en) * 2014-03-19 2021-07-13 Google Llc Selecting online content using offline data

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NL8403200A (en) * 1984-10-22 1986-05-16 Philips Nv NOISE-DEPENDENT AND VOICE-INDEPENDENT VOLUME CONTROL.
US6678505B1 (en) * 2001-04-18 2004-01-13 David Leason Extravehicular communication system and method
WO2003085968A1 (en) * 2002-04-11 2003-10-16 Konica Minolta Holdings, Inc. Information recording medium and manufacturing method thereof
JP2004336734A (en) * 2003-04-17 2004-11-25 Sharp Corp Wireless terminal, base apparatus, wireless system, control method of wireless terminal, control program of wireless terminal, and computer-readable recording medium for recording the same
US7359409B2 (en) * 2005-02-02 2008-04-15 Texas Instruments Incorporated Packet loss concealment for voice over packet networks
US8443403B2 (en) * 2009-09-04 2013-05-14 Time Warner Cable Inc. Methods and apparatus for providing voice mail services
US8825020B2 (en) * 2012-01-12 2014-09-02 Sensory, Incorporated Information access and device control using mobile phones and audio in the home environment
US9207857B2 (en) * 2014-02-14 2015-12-08 EyeGroove, Inc. Methods and devices for presenting interactive media items
US20150287403A1 (en) * 2014-04-07 2015-10-08 Neta Holzer Zaslansky Device, system, and method of automatically generating an animated content-item


Also Published As

Publication number Publication date
US20170078710A1 (en) 2017-03-16
US9800905B2 (en) 2017-10-24

Similar Documents

Publication Publication Date Title
US11616818B2 (en) Distributed control of media content item during webcast
US9537737B2 (en) Consolidated performance metric analysis
US9112623B2 (en) Asynchronous interaction at specific points in content
US20200389756A1 (en) Dynamic Positional Audio
US20210286586A1 (en) Sound effect adjustment method, device, electronic device and storage medium
US20090217324A1 (en) System, method and program product for customizing presentation of television content to a specific viewer and location
US20100138761A1 (en) Techniques to push content to a connected device
US20100305729A1 (en) Audio-based synchronization to media
US20130326575A1 (en) Social Media Driven Generation of a Highlight Clip from a Media Content Stream
US20070106941A1 (en) System and method of providing audio content
US20100049719A1 (en) Techniques for the association, customization and automation of content from multiple sources on a single display
CN108124172B (en) Cloud projection method, device and system
US20160316267A1 (en) Aggregation Of Multiple Media Types Of User Consumption Habits And Device Preferences
CN108573393A (en) Comment information processing method, device, server and storage medium
US9584761B2 (en) Videoconference terminal, secondary-stream data accessing method, and computer storage medium
US20160294903A1 (en) Method and device for pushing resources to mobile communication terminal by smart television
US20180152739A1 (en) Device-Based Audio-Format Selection
CN101562550A (en) Digital content service integrated system
CN108337556B (en) Method and device for playing audio-video file
CN104038772B (en) Generate the method and device of ring signal file
CN111541905B (en) Live broadcast method and device, computer equipment and storage medium
US20090006581A1 (en) Method and System For Downloading Streaming Content
US11050499B1 (en) Audience response collection and analysis
CN111164982A (en) Method and apparatus for determining a source of a media presentation
KR100991264B1 (en) Method and system for playing and sharing music sources on an electric device

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

AS Assignment

Owner name: COMCAST CABLE COMMUNICATIONS, LLC, PENNSYLVANIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ROSE, DAVE;WEBUYE, WHITE;OHARE, DAVE;SIGNING DATES FROM 20150702 TO 20150804;REEL/FRAME:050133/0680

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER