US10656901B2 - Automatic audio level adjustment during media item presentation - Google Patents

Automatic audio level adjustment during media item presentation

Info

Publication number
US10656901B2
Authority
US
United States
Prior art keywords
media, media item, audio level, audio, user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US15/841,259
Other versions
US20180107448A1 (en)
Inventor
Christian Weitenberner
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC
Priority to US15/841,259
Assigned to GOOGLE INC. (assignment of assignors interest). Assignors: WEITENBERNER, CHRISTIAN
Publication of US20180107448A1
Assigned to GOOGLE LLC (change of name). Assignors: GOOGLE INC.
Application granted
Publication of US10656901B2
Legal status: Active (current)
Adjusted expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L 25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L 25/57 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for processing of video signals
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B 27/19 Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B 27/28 Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B 27/34 Indicating arrangements

Abstract

A media item that was presented in media players of computing devices at a first audio level may be identified, each of the media players having a corresponding user of a first set of users. A second audio level value corresponding to an amplitude setting selected by a user of the set of users during playback of the media item may be determined for each of the media players. An audio level difference (ALD) value for each of the media players may be determined based on a corresponding second audio level value. A third audio level value for an amplitude setting to be provided for the media item in response to a request of a second user to play the media item may be determined based on the determined ALD values.

Description

PRIORITY CLAIM
This continuation application claims priority to U.S. patent application Ser. No. 14/937,752, filed on Nov. 10, 2015, which is hereby incorporated by reference herein.
BACKGROUND
The disclosure generally relates to presentation of media content on a computing device, and more specifically to automatic audio level adjustment during presentation.
Video streaming websites and other media servers allow users access to millions of items of media content (media items). High user engagement is an important goal of content creators, advertisers, and other affiliates of a media server. Thus, it is desirable for users to watch multiple videos in one sitting. When users watch multiple videos, ensuring a good user experience is critical, and that experience depends in part on good transitions between media items.
However, when watching media items back-to-back, the loudness of the audio perceived by a user can often vary dramatically between media items. The experience of moving from one media item to the next can be jarring, especially when the subsequent media item's audio component is significantly louder or quieter than the previous one. Many creators who upload media items to media servers do not normalize sound strength before uploading, or otherwise process audio according to any known industry standard. Further, creators of audio cannot always be sure of the order in which media items will be played back to a user, and thus although their own uploads may be consistent in terms of loudness, they will not necessarily match those of other users. Consequently, when a user plays multiple media items back-to-back, that user may have to constantly adjust the audio level to keep the loudness at a reasonable level. This results in a sub-par user experience and can cause users to abandon watch sessions.
BRIEF DESCRIPTION OF DRAWINGS
Figure (FIG.) 1 illustrates a computing environment for automatic audio level adjustment (ALA) in a media player application.
FIG. 2 is a flowchart of the steps for an example process for presenting two media items in sequence and collecting playback data that may be stored in an audio level index and used to determine ALA instructions.
FIG. 3 illustrates example audio level index entries, which list media item IDs and primary and secondary audio levels for media items viewed in sequence.
FIG. 4 is a flowchart of the steps for an example process for determining an ALA value to be included in ALA instructions associated with a first media item and a second media item presented in sequence.
FIG. 5 is a flowchart of the steps for an example process for sending ALA instructions to a media player application that cause an automatic audio level adjustment when a second media item is provided for presentation after a first media item.
FIG. 6 is a high-level block diagram illustrating physical components of a computer used as part or all of one or more of the entities described herein in one embodiment.
The Figures (FIGS.) and the following description relate to example embodiments by way of illustration only. It should be noted that from the following discussion, alternative embodiments of the structures and methods disclosed herein will be readily recognized as viable alternatives that may be employed without departing from the principles of what is claimed.
DETAILED DESCRIPTION
I. Configuration Overview
A media server facilitates automatic audio level adjustment during playback of a media item by a media player application running on a computing device. The media server provides media items, such as videos, to client computing devices, such as desktop computers or mobile phones, via a network. A media player application on the client presents the media items to the user. The media player application may also collect playback data, such as adjustments of the audio level (or amplitude) of the player application, and send the data back to the media server. Such adjustments are also used to change the sound strength of the audio that comes out of an audio output device of the client, such as headphones or a loudspeaker.
The adjustable audio level controlled by the user is not the actual sound strength as experienced by the user listening to the audio component; however, it does relate to and control that value. When audio output data is converted to sound by the audio output device, it has a sound strength which corresponds to a loudness of the audio component of the media item, for example, in decibels. When audio output data is presented at an audio level, objective measures of sound strength (e.g., sound pressure level, sound intensity, sound power, etc.) may vary based on the audio output device, intrinsic characteristics of the audio component of the media item, and other factors. As the actual sound strength will vary between media items (e.g., those professionally recorded vs. those recorded via home video camera), audio level is not entirely determinative of the sound strength experienced by the user. Because sound strength may vary based on characteristics of the audio component of the media item, if two media items are played in sequence at the same audio level through the same audio output device, the sound strength of the media items, and by extension, the loudness perceived by a user, may differ.
An audio module within the client adjusts the audio level responsive to receiving audio control commands. Audio control commands include commands to increase the audio level, decrease the audio level, or set the audio level to a particular value, and may be initiated automatically (e.g., by software code) or via a user input. To enhance the user experience of the media player application, the audio level may be automatically adjusted when a second media item is presented after a first media item to mitigate the difference in sound strength between the two media items, which would otherwise result in a difference in loudness to the user absent such a modification to the audio level. The automatic adjustment may be based on an ALA value, which may be determined from user information, media item metadata, or data regarding user-initiated ALAs for the media item pair consisting of the first and second media items.
To collect data regarding user-initiated ALAs for the media item pair, the media server may provide the first media item and the second media item for sequential presentation by the media player application to one or more different users. During presentation of the first media item, the user may change the audio level to correspond to an appropriate sound strength for the presentation of the first media item. When presentation of the first media item ends, the audio level may be set to a primary audio level. When presentation of the second media item begins, the primary audio level may not correspond to an appropriate sound strength for the presentation of the second media item. Thus, the user may send an audio command to change the audio level to a secondary audio level to correspond to an appropriate sound strength. The media player application may register and store the audio command and may send a data entry to the media server comprising media item identifiers of the first and second media items, and the primary and secondary audio levels. The data entry may further comprise audio output device information and audio control command information. The media server may store the data entry elements in an audio level index.
To facilitate an automatic ALA, the media server may generate ALA instructions (e.g., computer software code) based on audio level index entries. ALA instructions may cause the audio level to change automatically when the second media item is played after the first media item. The automatic change of the audio level enhances the user experience by automatically setting the audio level to correspond to a more appropriate sound strength for the second media item. Subsequent changes to the audio level may be registered and stored by the media player application and sent to the media server to determine updated ALA values.
II. Computing Environment
FIG. 1 illustrates a computing environment for automatic audio level adjustment in a media player application. The environment includes a client 110 connected by a network 150 to a media server 120. FIG. 1 illustrates one client 110 and one media server 120, but there may be multiple instances of each of these entities. For example, there may be thousands or millions of clients 110 in communication with multiple media servers 120.
The network 150 may comprise any combination of local area and/or wide area networks, the internet, or one or more intranets, using both wired and wireless communication systems.
The media server 120 includes one or more computer servers that provide media items to the client 110. In some embodiments, the media server 120 may be a video streaming website (e.g., YouTube®). Media items may be of different types (e.g., video media items or audio media items, etc.), formats (4:3 aspect ratio, 16:9 aspect ratio, etc.), and be encoded/compressed using different techniques (H.264, MPEG, etc.). A video media item includes a picture component and an audio component. A video media item may be a video data file and/or a portion thereof. An audio media item includes an audio component, but does not include a picture component. An audio media item may be an audio data file and/or a portion thereof.
Clients 110 are computing devices that execute computer program modules—e.g., a web browser, e-book reader, media player, or other client application—which allow a user to consume audio and/or video data. A client 110 might be, for example, a personal computer, a tablet computer, a smart phone, a laptop computer, a dedicated e-reader including at least audio playback functionality, or other type of network-capable device such as a networked television or set-top box.
A user of the client 110 may have an account with the media server 120. An account module 126 provides functionality allowing a user to manage his or her account with the media server 120. The account module 126 further receives user information corresponding to a user's activities related to the media server 120. User information may comprise identifiers of media items provided to a client 110 associated with the user, user preferences, and playback data associated with the user, including the order of playback of media items. User information and other account information may be stored in an account data store 130 of the media server 120. Depending upon the embodiment, the account data store 130 may include one or more types of non-transitory computer-readable persistent storage media.
The client 110 may include a media player application 114. The media player application 114 may be a software application executed by a processor of the client 110 for presenting media items to a user operating the client 110. For example, a video media item may be presented to the user by presenting the picture component via a display of the client 110 and presenting the audio component, through the audio module 116 described below, as audible audio signals via an audio output device 118 of the client 110.
The media player application 114 may execute in conjunction with an operating system of the client 110. In one embodiment, the media player application 114 is a dedicated software application designed to work specifically with the media server 120. In another embodiment, the media player application 114 is provided via a more general application for accessing many types of content, such as a web browser. The browser may provide access to the media server 120, for example, via a web interface. In some embodiments, the media player application 114 presents the media item as it is being streamed over the network 150 from, for example, the media server 120.
The media player application 114 may present a user interface, for example, on a display device of client 110. The user interface may include control elements with which the user of the client 110 may send control commands via a user input device (e.g., mouse, keyboard, touchscreen, trackpad, buttons, etc.). Control commands may also be received by the client 110 or the operating system of the client 110 via physical buttons on the client 110 or a device communicatively coupled to the client 110. Control commands may be received via executed software code (e.g., API call). Control commands may further be sent to the client 110 by media server 120 in the form of playback instructions, including audio level adjustment (ALA) instructions, as discussed below with respect to FIG. 4 in Section IV.
The media player application 114 and/or the client 110 may be configured to receive control commands. Control commands may include commands for controlling playback of a media item being presented by the media player application 114, including stopping playback of a media item, beginning playback of a media item, and requesting a media item from media server 120. Control commands may further include audio control commands such as increasing the audio level, decreasing the audio level, setting the audio level to a particular value, or muting the audio component.
The audio module 116 may receive input audio data representing the audio component of a media item from the media player application 114, change the amplitude of the audio component, and send audio output data representing the adjusted audio component to the audio output device 118. Audio output data may be converted to sound by the audio output device 118.
The audio module 116 may have an associated audio level, which corresponds to a relationship (e.g., ratio, percentage, linear or non-linear function, etc.) between the amplitude of the audio output signal and the amplitude of the input audio data, regardless of how that audio data was originally recorded or encoded. In one implementation, the audio level value does not correspond to any specific numerical value (e.g., in decibels) for the actual sound strength as would be perceived by a user. The audio module 116 may adjust the audio level responsive to receiving audio control commands, either automatically (e.g., by software code) or via a user input. When audio output data is converted to sound by an audio output device 118, it has a sound strength which corresponds to a loudness of the audio component of the media item. There are various objective measures for sound strength, including for example, sound pressure (in Pascals), sound pressure level (in decibels), sound intensity (in watts per square meter), and sound power (in watts). The audio level may have an associated audio level value (e.g., within a range from 0-10, 1-100, etc.).
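For illustration only, the following Python sketch shows one way an audio module of this kind might map an audio level value to a gain applied to input samples. The 0-100 scale, the linear mapping, and all class and method names are assumptions for the sketch, not details taken from the disclosure.

    # Minimal sketch of an audio module applying an audio level as a gain.
    # The 0-100 scale and the linear mapping are illustrative assumptions.
    from typing import List

    class AudioModule:
        def __init__(self, audio_level: int = 50):
            self.audio_level = audio_level  # abstract level value, not decibels

        def set_audio_level(self, value: int) -> None:
            # Clamp to the assumed 0-100 range.
            self.audio_level = max(0, min(100, value))

        def adjust_audio_level(self, delta: int) -> None:
            self.set_audio_level(self.audio_level + delta)

        def process(self, input_samples: List[float]) -> List[float]:
            # Relationship between input amplitude and output amplitude:
            # here a simple ratio derived from the audio level value.
            gain = self.audio_level / 100.0
            return [s * gain for s in input_samples]

    module = AudioModule(audio_level=80)
    output = module.process([0.1, -0.4, 0.9])   # each sample scaled by 0.8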
When audio output data is presented at an audio level, objective measures of sound strength (e.g., sound pressure level, sound intensity, sound power, etc.) may vary based on the audio output device 118, intrinsic characteristics of the audio component of the media item, and other factors. Because sound strength may vary based on characteristics of the audio component of the media item, if two media items are played in sequence at the same audio level value through the same audio output device 118, the sound strength of the media items, and by extension, the loudness perceived by a user, may differ.
To enhance the user experience of the media player application 114, the audio level may be automatically adjusted according to ALA instructions when a second media item is presented after a first media item to mitigate the difference in loudness perceived by the user due to differences between the underlying audio data of the two media items. ALA instructions may be determined from user information, media item metadata, or data regarding user-initiated ALAs for the media item pair. Collecting data regarding user-initiated ALAs is discussed in more detail with respect to FIG. 2 in Section III, below.
The audio module 116 may be a component of the media player application 114, the operating system, the client 110, a separate software application, or some combination thereof. Audio output devices include devices for producing sound that are communicatively coupled to the client 110. The audio output device 118 may be a component of the client 110 (e.g. a loudspeaker). Other example audio output devices include headphones, external speakers, gramophones, etc. The audio output device 118 may be communicatively coupled to the client 110 via a wired or wireless connection.
The audio module 116 may be configured to determine a type of the audio output device 118 (e.g., internal speaker, external speaker, headphones, etc.). The format of audio output data may differ depending on the audio output device 118. In one embodiment, the output audio data is an audio signal that represents sound using voltage. The audio signal may be converted to sound by the audio output device 118 such as a loudspeaker or headphones. In another embodiment, the output audio data is an audio signal in a digital format. When the audio output data is converted to sound by the audio output device 118, the sound has an associated sound strength. Sound strength may correspond to an amplitude of a sound wave, and is closely related to the level at which a person experiences sound. A relatively low sound strength may be perceived as quiet, while a relatively high sound strength may be perceived as loud.
The media server 120 maintains information relating to media items. Information relating to a media item may include a media item identifier (ID), a media item address, metadata associated with a media item, or some combination thereof. The media item ID uniquely identifies a media item. The media item address is a computer network address where the media item is physically stored and may be downloaded or streamed from. The metadata describes different aspects of the media item. The metadata may include, for example, author, date of publishing, reviews, genre information, publisher, ratings, and a media item identifier.
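As a purely hypothetical illustration of the per-media-item record described above, the information might be represented as follows; the field names and the example address are invented for the sketch.

    # Hypothetical shape of the information the media server keeps per media item.
    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class MediaItemInfo:
        media_item_id: str            # uniquely identifies the media item
        media_item_address: str       # network address for download or streaming
        metadata: Dict[str, str] = field(default_factory=dict)  # author, genre, etc.

    item = MediaItemInfo(
        media_item_id="Cat.mov",
        media_item_address="https://media.example.com/Cat.mov",   # invented address
        metadata={"genre": "music", "author": "example_uploader"},
    )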
Information relating to media items may further include playback data including, for example, control commands received during playback of the media item such as commands received from a user to adjust the audio level of the audio module 116. Playback data may further include a set of audio level values corresponding to various times during playback of a media item when the audio level is to be adjusted. Playback data may be collected by media player application 114 and sent to media server 120 as described with respect to FIG. 2 in Section III, below.
Information relating to media items may further include audio level adjustment (ALA) instructions to automatically adjust the audio level of the audio module 116 with a media item that are sent by media server 120 to the client 110 for playback. For example, ALA instructions may comprise software code that causes the audio level of audio module 116 to be adjusted when the media item is presented after a particular other media item. The audio level adjustment may occur automatically at the start of playback of the media item or at another time during playback. During playback, audio control commands initiated by a user may override or alter the ALA value, for example, by scaling the ALA value to correspond to a user-specified value. Automatic ALAs improve the consistency of audio playback between media items, which may increase average watch time, viewership, advertising revenue, subscription revenue, and engagement on the media server platform.
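A hedged sketch of how ALA instructions might be packaged and applied on the client follows. The JSON field names, the apply_ala helper, and the idea of delivering the instructions as a JSON payload are assumptions; the disclosure only requires that the instructions cause the audio level to be adjusted when the media item follows a particular other media item.

    # Illustrative ALA instruction payload delivered with a media item.
    import json

    ala_instructions = {
        "first_media_item_id": "Cat.mov",
        "second_media_item_id": "Pig.mov",
        "ala_value": 13,          # signed change to apply to the audio level
        "apply_at_seconds": 0.0,  # start of playback of the second item
    }

    def apply_ala(audio_module, instructions: dict) -> None:
        # Automatic adjustment at the start of playback of the second item;
        # a later user-initiated command may override or scale this value.
        audio_module.adjust_audio_level(instructions["ala_value"])

    payload = json.dumps(ala_instructions)  # e.g., sent alongside the media item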
ALA instructions may be generated by an audio level adjustment module 122 of the media server 120. ALA instructions may be based on analysis of playback data, including determined audio level difference (ALD) values, as discussed in more detail below with respect to FIG. 4 in Section IV. ALA instructions may be generated responsive to a request from a client or at pre-determined intervals.
Media items, playback data, ALA instructions, and other information relating to media items may be stored in a media data store 128 of the media server 120. Depending upon the embodiment, the media data store 128 may include one or more types of non-transitory computer-readable persistent storage media.
III. Playback Data Collection and Indexing
For a particular pair of media items, ALA instructions may be based on audio level adjustments made by users who were previously presented the pair of media items in sequence. FIG. 2 is a flowchart of the steps for an example process for presenting two media items in sequence and collecting playback data that may be stored in an audio level index and used to determine ALA instructions. The media player application 114 of the client 110 begins 205 presentation of a first media item comprising a first audio component. During presentation, the user of the client 110 may decide to change the audio level and provide a control command to adjust the audio level to a more appropriate sound strength for presentation of the first media item. When presentation of the first media item ends, either at the end of the item or upon user or external command, the audio module 116 records 210 a primary audio level, either as initially set upon the beginning of presentation or as adjusted based on input from the user.
The media player application 114 then begins 215 presentation of a second media item, either responsive to a user input or automatically as determined and initiated by server 120. When presentation of the second media item begins, the audio module 116 may remain set to the primary audio level. The primary audio level may not correspond to an appropriate sound strength for presentation of the second media item to the user. For example, intrinsic characteristics of the audio component of the second media item may result in the sound strength during presentation of the second media item being greater or less than the sound strength during presentation of the first media item. For example, if the first media item contains a relatively loud heavy metal song and the second media item contains a relatively quiet piece of classical music, the user may not be able to hear the audio component of the second media item well. This difference may cause the user to send an audio command to change the audio level for the second media item to correspond to a more appropriate sound strength. The audio module 116 receives 220 the audio control command, and changes the audio level to the secondary audio level. The audio module 116 may register and store the audio control command at a storage location on the client 110 or the media server 120.
The media player application 114 sends 225 a data entry to the media server 120 including the media item ID of the first media item, the media item ID of the second media item, the primary audio level, and the secondary audio level. The data entry may further comprise a list of audio control commands received during presentation of the first and the second media items, including both the timestamp at which each command occurred during presentation of the associated media item and its change to the audio level. The data entry may also include information about the audio output device 118 such as an audio output device identifier (ID) and information about whether the audio output device had a wired or wireless connection.
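For illustration, one possible shape for such a data entry is sketched below. Every field name, every numeric value, and the commented-out endpoint are assumptions made for the sketch.

    # Illustrative playback data entry sent from the client to the media server
    # after the user changes the audio level during the second media item.
    playback_data_entry = {
        "first_media_item_id": "Cat.mov",
        "second_media_item_id": "Pig.mov",
        "primary_audio_level": 40,     # level in effect when the first item ended
        "secondary_audio_level": 73,   # level set by the user during the second item
        "audio_control_commands": [
            # (seconds into the associated item, change to the audio level)
            {"media_item_id": "Pig.mov", "timestamp_s": 8.2, "delta": +33},
        ],
        "audio_output_device": {"device_id": "headphones-01", "wired": True},
    }

    # e.g., client.post("https://media-server.example.com/playback-data",
    #                   json=playback_data_entry)  # hypothetical endpoint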
The media server 120 receives the data entry from the media player application 114. The media server 120 may store data entry elements in one or more indices in media data store 128 and/or account data store 130 for use in generating ALA instructions. For example, the ALA module 122 may store data entry elements in an audio level index, which is independent of the user from which the data entry was received and may contain data entries from multiple users. If the user has an account with the media server 120, the data entry elements may be stored in an account index associated with the user in account data store 130.
FIG. 3 illustrates example audio level index entries, for example as collected by the process described with respect to FIG. 2. These example audio level index entries list media item IDs and primary and secondary audio levels for media items viewed in sequence. For example, as shown in FIG. 3, a key 310 of the audio level index 300 may be a pair of media item IDs created, for example, by combining a first media item ID entry 312 and a second media item ID entry 314. Values 320 of the audio level index may include a primary audio level entry 322 and a secondary audio level entry 324. Values 320 may further include a difference value 326 for each entry representing a difference between the audio levels. The difference value may be positive (e.g., representing a user command to increase the audio level), negative (e.g., representing a user command to decrease the audio level), or zero, and may be calculated by audio level adjustment module 122. Audio level index entries may be received from multiple users. Audio level index entries may further comprise audio output device information and the list of audio control commands.
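A minimal sketch of such an index, assuming an in-memory dictionary keyed by the media item ID pair, is shown below. The container types, field names, and the specific audio level numbers are illustrative only, chosen to match the difference values used in the FIG. 3 discussion.

    # Sketch of an audio level index keyed by a (first ID, second ID) pair, as in FIG. 3.
    from collections import defaultdict

    audio_level_index = defaultdict(list)

    def add_index_entry(first_id, second_id, primary_level, secondary_level,
                        device_type=None):
        difference = secondary_level - primary_level  # positive, negative, or zero
        audio_level_index[(first_id, second_id)].append({
            "primary_audio_level": primary_level,
            "secondary_audio_level": secondary_level,
            "difference": difference,
            "device_type": device_type,
        })

    # Entries from multiple users for the same pair accumulate under one key.
    add_index_entry("Cat.mov", "Pig.mov", 40, 73)   # difference +33
    add_index_entry("Cat.mov", "Pig.mov", 50, 73)   # difference +23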
IV. Audio Level Adjustment Determination
ALA instructions may cause an automatic ALA based on an ALA value, which may be determined from multiple sources of data, including (A) user information, (B) media item metadata, (C) audio level index entries for user-initiated ALAs for the media item pair, or some combination of these data sources. If multiple data sources exist, the determination of which sources to use to determine an ALA value may be hierarchical (e.g., data from source A is preferred, data from source B is used in the absence of data from source A, and data from source C is used in the absence of data from A or B), additive (e.g., data from source A, source B, and source C is used), or some combination thereof. The example process of FIG. 4 is an example of additive data use.
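A small sketch of the hierarchical and additive combinations described above follows, assuming each source yields a signed audio level change or nothing at all; the function and argument names are invented for illustration.

    # Combining the three data sources: "hierarchical" takes the first source
    # that is available, "additive" sums the contributions of all of them.
    def combine_sources(user_based=None, metadata_based=None, index_based=None,
                        mode="hierarchical"):
        sources = [user_based, metadata_based, index_based]   # A, B, C in that order
        available = [s for s in sources if s is not None]
        if not available:
            return 0  # no adjustment
        if mode == "hierarchical":
            return available[0]
        return sum(available)  # additive use, as in the FIG. 4 example

    combine_sources(user_based=None, metadata_based=-5, index_based=13)   # -> -5
    combine_sources(metadata_based=-5, index_based=13, mode="additive")   # -> 8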
FIG. 4 is a flowchart of the steps for an example process for determining an ALA value to be included in ALA instructions associated with a first media item and a second media item presented in sequence. The ALA module 122 sets 405 a default ALA value. The default ALA value may be based on the type of media item, genre information, or other metadata. For example, if the media item is a video of a person making a speech, the default ALA value may correspond to an increase in the audio level. Similarly, if the media item is a video of a concert, the default ALA value may correspond to a decrease in the audio level. There may be rules stored in media data store 128 that cause ALA module 122 to set default ALA values if other instructions are not available. If the user associated with the requesting media player application 114 has an account with the media server 120, the ALA module 122 may adjust the default ALA value according to user information stored in account data store 130. For example, if a user sends audio commands to turn down a certain type of media item more often than other users, the ALA value may be changed accordingly.
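For illustration, a possible form of the default-value rules for step 405 is sketched below; the genres, numeric values, and the per-user offset are invented examples rather than values from the disclosure.

    # Hypothetical metadata-based rules for the default ALA value (step 405).
    DEFAULT_ALA_RULES = {
        "speech": 5,     # e.g., a video of a person making a speech: raise the level
        "concert": -5,   # e.g., a video of a concert: lower the level
    }

    def default_ala_value(metadata, user_profile=None):
        value = DEFAULT_ALA_RULES.get(metadata.get("genre"), 0)
        # Optional per-user adjustment from account data, if the user has an account.
        if user_profile and metadata.get("genre") in user_profile.get("turns_down", []):
            value -= 2   # invented offset for users who often turn this genre down
        return value

    default_ala_value({"genre": "concert"})                             # -> -5
    default_ala_value({"genre": "speech"}, {"turns_down": ["speech"]})  # -> 3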
The ALA module 122 determines 410 an audio level difference (ALD) value based on audio level index entries that correspond to a particular first and second media item pair. The ALD value is a numerical representation of the collective difference between the primary audio level and the secondary audio level for each of the audio level index entries for that particular media item pair. The ALD value may be determined, for example, by taking the mean, median, or mode of the difference values for each of the entries that correspond to the first and second media item. For example, returning to FIG. 3, if the first media item ID is ‘Cat.mov’ and the second media item ID is ‘Pig.mov,’ the ALD may be determined by taking the mean of the four difference values 326 that have ‘Cat.mov’ as the first media item ID and ‘Pig.mov’ as the second media item ID. The mean would be (33+23+0−4)/4=+13. Thus, the ALD for the media item sequence may be +13. Depending upon the implementation, the ALD may be calculated in a variety of different ways beyond those mentioned above. For example, a median, a mode, or a more complicated function may be used, outlier data may be thrown out to reduce variability in the result, etc.
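The worked example above can be reproduced in a few lines of Python; using the statistics module is, of course, just one possible implementation.

    # Computing the ALD for a media item pair as the mean of the recorded
    # difference values, reproducing the worked example above.
    from statistics import mean, median

    differences = [33, 23, 0, -4]          # difference values 326 for the pair
    ald_mean = mean(differences)           # (33 + 23 + 0 - 4) / 4 = 13
    ald_median = median(differences)       # an alternative aggregate, 11.5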
In one implementation, the ALA module 122 uses a subset of the audio level index entries for a media item pair to determine the ALD value. For example, the ALA module 122 may only use entries in which the audio level was changed from the primary audio level to the secondary audio level during a certain time period. The time period may be, for example, the first 30 seconds of presentation of the second media item. This implementation rests on an assumption that if the sound strength for the second media item is not appropriate, a user is more likely to adjust the audio level closer to the beginning of presentation of the second media item. In contrast, adjustments later in presentation are less likely to be the result of an inappropriate sound strength. Thus, analyzing entries within a prescribed time period allows the ALA module 122 to determine ALA values that are more likely to lead to a more appropriate sound strength for the second media item.
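A minimal sketch of this time-window filter, assuming each index entry records when the user made the adjustment, is shown below; the field names and the 30-second default are illustrative only.

    # Keep only entries where the level was changed within an assumed window
    # after the second media item began playing.
    def early_adjustments(entries, window_s=30.0):
        return [e for e in entries if e.get("adjustment_time_s", 0.0) <= window_s]

    entries = [
        {"difference": 33, "adjustment_time_s": 8.2},     # counted toward the ALD
        {"difference": -10, "adjustment_time_s": 215.0},  # ignored: late adjustment
    ]
    usable = early_adjustments(entries)   # only the first entry remains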
In addition to being based on the default value derived from media item metadata and user information, the ALA value may be based on the determined ALD value. In one embodiment, the ALA value may be equal to the ALD value. In another embodiment, ALA module 122 determines whether the determined ALD value exceeds a threshold for adjusting the ALA value. If the ALD value exceeds the threshold, the ALA module 122 adjusts 415 the ALA value to account for the audio level difference. The ALA value adjustment may be proportional to the ALD. If the ALD value does not exceed the threshold, no adjustment is made to the ALA value. Requiring that the ALD value exceed a threshold may conserve computing resources in cases where the adjustment would be so minute as to be indiscernible by a user, or where data regarding user-initiated ALAs does not show a clear pattern of user-initiated adjustments.
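One hedged way to express step 415 as a function follows; the threshold and the proportionality factor are invented placeholders, not values from the disclosure.

    # Adjust the ALA value only when the ALD is large enough to matter.
    def determine_ala_value(default_value, ald, threshold=2, scale=1.0):
        if abs(ald) <= threshold:
            return default_value              # too small to be discernible
        return default_value + scale * ald    # adjustment proportional to the ALD

    determine_ala_value(default_value=0, ald=13)   # -> 13.0
    determine_ala_value(default_value=0, ald=1)    # -> 0 (below the threshold)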
The ALA module 122 adds 420 a new ALA index entry to an ALA index stored in media data store 128. In addition to default ALA index entries and user-specific ALA values, ALA index entries based on ALD values include data such as the media item IDs of the first and second media item pair, and the determined ALA value.
The ALA index entry corresponding to a particular pair of media items may have multiple possible ALA values. A particular media item pair may have different ALA values to be used with different audio output devices 118 or corresponding to different users. For example, the ALA module 122 may determine separate ALD values for audio level index entries corresponding to different audio output devices 118 and store different ALA values in the ALA index entry corresponding to the media item pair. This may result in a better user experience by accounting for sound strength variations among different users and different types of audio output devices 118.
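A sketch of an ALA index entry holding multiple possible ALA values, keyed here by audio output device type, follows; the key scheme and the numbers are assumptions for illustration.

    # ALA index entry with device-specific ALA values for one media item pair.
    ala_index = {
        ("Cat.mov", "Pig.mov"): {
            "default": 13,
            "by_device": {"headphones": 9, "internal_speaker": 16},
        },
    }

    def lookup_ala(first_id, second_id, device_type=None):
        entry = ala_index.get((first_id, second_id), {})
        if device_type and device_type in entry.get("by_device", {}):
            return entry["by_device"][device_type]
        return entry.get("default", 0)

    lookup_ala("Cat.mov", "Pig.mov", device_type="headphones")   # -> 9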
Various steps in the process of determining an ALA value may be performed in a different order than the order illustrated in FIG. 4. The steps in the process may be performed at determined time intervals, responsive to a request from a media player application 114 for one of the media items in a pair, or at the behest of the server 120 or another logic process.
V. Audio Level Adjustment Application
FIG. 5 is a flowchart of the steps for an example process for sending ALA instructions to a media player application that cause an automatic audio level adjustment when a second media item is provided for presentation after a first media item. The media server 120 receives 505 a request to provide a second media item for presentation after a first media item. The ALA module 122 retrieves 510 the ALA value associated with the first media item ID and the second media item ID from the ALA index.
The ALA module 122 generates 515 the ALA instructions to be sent to the requesting media player application 114. The ALA instructions include the ALA value for the requested first/second media item pair, and may further include instructions (e.g., computer software code) that cause the audio module 116 to automatically adjust the audio level based on the ALA value.
The media server 120 sends 520 the ALA instructions to the requesting client 110. The ALA instructions may be sent to the requesting client 110 along with the content of the second media item for presentation on the client 110, or they may be sent separately. When the media player application 114 of the requesting client 110 begins presentation of the second media item after the first media item, the audio module 116 automatically adjusts the audio level according to the ALA value. If, at any point, the audio module 116 receives an audio control command from the user to change the audio level, the audio module 116 may register and store the audio control command according to the process of FIG. 2 for use in generating updated ALA instructions in the future.
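Putting the FIG. 5 steps together, a self-contained server-side sketch might look like the following; the handler signature, index layout, and response format are assumptions rather than the patented interface.

    # End-to-end sketch of the FIG. 5 flow on the server side.
    ALA_INDEX = {("Cat.mov", "Pig.mov"): 13}   # media item ID pair -> ALA value

    def handle_media_request(first_id, second_id):
        # Step 510: retrieve the ALA value for this media item pair.
        ala_value = ALA_INDEX.get((first_id, second_id), 0)
        # Step 515: generate ALA instructions for the requesting media player.
        instructions = {
            "first_media_item_id": first_id,
            "second_media_item_id": second_id,
            "ala_value": ala_value,
        }
        # Step 520: return the instructions with the content location; they could
        # equally be sent separately from the media item itself.
        return {"media_item_address": f"https://media.example.com/{second_id}",
                "ala_instructions": instructions}

    response = handle_media_request("Cat.mov", "Pig.mov")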
VI. Additional Considerations
FIG. 6 is a high-level block diagram illustrating physical components of a computer 600 used as part or all of one or more of the entities described herein in one embodiment. For example, instances of the illustrated computer 600 may be used as the client 110 or the media server 120. Illustrated is at least one processor 602 coupled to a chipset 604. Also coupled to the chipset 604 are a memory 606, a storage device 608, a keyboard 610, a graphics adapter 612, a pointing device 614, and a network adapter 616. A display 618 is coupled to the graphics adapter 612. In one embodiment, the functionality of the chipset 604 is provided by a memory controller hub 620 and an I/O controller hub 622. In another embodiment, the memory 606 is coupled directly to the processor 602 instead of the chipset 604. In one embodiment, one or more audio output devices are coupled to the chipset 604.
The storage device 608 is any non-transitory computer-readable storage medium, such as a hard drive, compact disk read-only memory (CD-ROM), DVD, or a solid-state memory device. The memory 606 holds instructions and data used by the processor 602. The pointing device 614 may be a mouse, track ball, or other type of pointing device, and is used in combination with the keyboard 610 to input data into the computer 600. The graphics adapter 612 displays images and other information on the display 618. The network adapter 616 couples the computer system 600 to a local or wide area network.
As is known in the art, a computer 600 can have different and/or other components than those shown in FIG. 6. In addition, the computer 600 can lack certain illustrated components. In one embodiment, a computer 600 may lack a keyboard 610, pointing device 614, graphics adapter 612, and/or display 618. Moreover, the storage device 608 can be local and/or remote from the computer 600 (such as embodied within a storage area network (SAN)).
As is known in the art, the computer 600 is adapted to execute computer program modules for providing functionality described herein. As used herein, the term “module” refers to computer program logic utilized to provide the specified functionality. Thus, a module can be implemented in hardware, firmware, and/or software. In one embodiment, program modules are stored on the storage device 608, loaded into the memory 606, and executed by the processor 602.
Some portions of this description describe the embodiments in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like.
It will be understood that the named components represent one embodiment, and other embodiments may include other components. In addition, other embodiments may lack some of the components described herein and/or distribute the described functionality among the components in a different manner. Additionally, the functionalities attributed to more than one component can be incorporated into a single component.
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments described is intended to be illustrative, but not limiting, of the scope of what is protectable, which is set forth in the following claims.

Claims (20)

What is claimed is:
1. A method comprising:
identifying a media item that was presented in a plurality of first media players of first computing devices at a first audio level value, each of the plurality of first media players having a respective first user of a set of first users;
determining, for each of the plurality of first media players, a second audio level value corresponding to an amplitude setting selected by the respective first user of the set of first users during playback of the media item on a respective first computing device;
determining, by a processor, an audio level difference (ALD) value for each of the plurality of first media players of a respective first computing device based on a determined second audio level value corresponding to the amplitude setting selected by the respective first user during playback of the media item on the respective first computing device; and
determining, based on ALD values determined for respective first media players of respective first computing devices, a third audio level value for an amplitude setting to be provided for the media item to be played on a second media player of a second computing device for a second user, in response to a request of the second user to play the media item on the second media player of the second computing device, the second user not being part of the set of first users.
2. The method of claim 1, further comprising:
comparing the ALD value to a threshold; and
responsive to the ALD value exceeding the threshold, adding, to an audio level adjustment (ALA) index, a new ALA index entry comprising a media identifier associated with the media item.
3. The method of claim 1, wherein determining the ALD value comprises:
determining at least one of a mean, median, or mode of the ALD values determined for the plurality of first media players.
4. The method of claim 1, further comprising:
determining, for each of the plurality of first media players, a change time indicating a time, during playback of the media item, at which an audio level associated with the respective first media player was set to the second audio level value.
5. The method of claim 4, wherein the ALD value is determined using a portion of the plurality of first media players having the change time within a defined range.
6. The method of claim 1, further comprising:
determining, for each of the first computing devices, an audio output device identifier (ID) identifying an audio output device operatively coupled to a respective first computing device during playback of the media item.
7. The method of claim 6, further comprising:
determining an output device-specific ALD value based on ALD values determined for a portion of the first computing devices having the audio output device ID.
8. A system comprising:
a memory; and
a processor, operatively coupled to the memory, to:
identify a media item that was presented in a plurality of first media players of first computing devices at a first audio level value, each of the plurality of first media players having a respective first user of a set of first users;
determine, for each of the plurality of first media players, a second audio level value corresponding to an amplitude setting selected by the respective first user of the set of first users during playback of the media item on a respective first computing device;
determine an audio level difference (ALD) value for each of the plurality of first media players of a respective first computing device based on a determined second audio level value corresponding to the amplitude setting selected by the respective first user during playback of the media item on the respective first computing device; and
determine, based on ALD values determined for respective first media players of respective first computing devices, a third audio level value for an amplitude setting to be provided for the media item to be played on a second media player of a second computing device for a second user, in response to a request of the second user to play the media item on the second media player of the second computing device, the second user not being part of the set of first users.
9. The system of claim 8, wherein the processor is further to:
compare the ALD value to a threshold; and
responsive to the ALD value exceeding the threshold, add, to an audio level adjustment (ALA) index, a new ALA index entry comprising a media identifier associated with the media item.
10. The system of claim 8, wherein to determine the ALD value, the processor is further to:
determine at least one of a mean, median, or mode of the ALD values determined for the plurality of first media players.
11. The system of claim 8, wherein the processor is further to:
determine, for each of the plurality of first media players, a change time indicating a time during playback of the media item, at which an audio level associated with the respective first media player was set to the second audio level value.
12. The system of claim 8, wherein the ALD value is determined using a portion of the plurality of first media players having a change time within a defined range.
13. The system of claim 8, wherein the processor is further to:
determine, for each of the first computing devices, an audio output device identifier (ID) identifying an audio output device operatively coupled to a respective first computing device during playback of the media item.
14. The system of claim 13, wherein the processor is further to:
determine an output device-specific ALD value based on ALD values determined for a portion of the first computing devices having the audio output device ID.
15. A non-transitory computer-readable storage medium having instructions stored therein, which when executed, cause a processor to:
identify a media item that was presented in a plurality of first media players of first computing devices at a first audio level value, each of the plurality of first media players having a respective first user of a set of first users;
determine, for each of the plurality of first media players, a second audio level value corresponding to an amplitude setting selected by the respective first user of the set of first users during playback of the media item on a respective first computing device;
determine an audio level difference (ALD) value for each of the plurality of first media players on a respective first computing device based on a determined second audio level value corresponding to the amplitude setting selected by the respective first user during playback of the media item on the respective first computing device; and
determine, based on ALD values determined for respective first media players of respective first computing devices, a third audio level value for an amplitude setting to be provided for the media item to be played on a second media player of a second computing device for a second user, in response to a request of the second user to play the media item on the second media player of the second computing device, the second user not being part of the set of first users.
16. The non-transitory computer-readable storage medium of claim 15, wherein the processor is further to:
compare the ALD value to a threshold; and
responsive to the ALD value exceeding the threshold, add, to an audio level adjustment (ALA) index, a new ALA index entry comprising a media identifier associated with the media item.
17. The non-transitory computer-readable storage medium of claim 15, wherein to determine the ALD value, the processor is further to:
determine at least one of a mean, median, or mode of the ALD values determined for the plurality of first media players.
18. The non-transitory computer-readable storage medium of claim 15, wherein the processor is further to:
determine, for each of the plurality of first media players, a change time indicating a time during playback of the media item, at which an audio level associated with the respective first media player was set to the second audio level value.
19. The non-transitory computer-readable storage medium of claim 15, wherein the ALD value is determined using a portion of the plurality of first media players having a change time within a defined range.
20. The non-transitory computer-readable storage medium of claim 15, wherein the processor is further to:
determine, for each of the first computing devices, an audio output device identifier (ID) identifying an audio output device operatively coupled to a respective first computing device during playback of the media item.
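For illustration only and not as part of the claims, the approach recited in claims 1, 3, 5 and 7 might be sketched as follows; the record keys, the change-time window, and the use of the median as the aggregate are assumptions chosen for the example.

from statistics import median

def third_audio_level(first_audio_level, playback_records, change_window=None,
                      device_id=None, aggregate=median):
    # playback_records: one record per first media player, with assumed keys
    # 'second_audio_level' (the amplitude the first user selected),
    # 'change_time' (when it was selected during playback), and
    # 'device_id' (the audio output device used during playback).
    ald_values = []
    for record in playback_records:
        # Optionally keep only changes made within a defined time range (claim 5).
        if change_window and not (change_window[0] <= record["change_time"] <= change_window[1]):
            continue
        # Optionally keep only playbacks on a particular output device (claim 7).
        if device_id and record.get("device_id") != device_id:
            continue
        # Per-player audio level difference relative to the first audio level (claim 1).
        ald_values.append(record["second_audio_level"] - first_audio_level)
    if not ald_values:
        return first_audio_level  # no usable data; keep the original audio level
    # Aggregate the ALD values, e.g., by mean, median, or mode (claim 3).
    return first_audio_level + aggregate(ald_values)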
US15/841,259 2015-11-10 2017-12-13 Automatic audio level adjustment during media item presentation Active 2036-06-16 US10656901B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/841,259 US10656901B2 (en) 2015-11-10 2017-12-13 Automatic audio level adjustment during media item presentation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/937,752 US9858036B2 (en) 2015-11-10 2015-11-10 Automatic audio level adjustment during media item presentation
US15/841,259 US10656901B2 (en) 2015-11-10 2017-12-13 Automatic audio level adjustment during media item presentation

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/937,752 Continuation US9858036B2 (en) 2015-11-10 2015-11-10 Automatic audio level adjustment during media item presentation

Publications (2)

Publication Number Publication Date
US20180107448A1 US20180107448A1 (en) 2018-04-19
US10656901B2 true US10656901B2 (en) 2020-05-19

Family

ID=57396262

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/937,752 Active US9858036B2 (en) 2015-11-10 2015-11-10 Automatic audio level adjustment during media item presentation
US15/841,259 Active 2036-06-16 US10656901B2 (en) 2015-11-10 2017-12-13 Automatic audio level adjustment during media item presentation

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/937,752 Active US9858036B2 (en) 2015-11-10 2015-11-10 Automatic audio level adjustment during media item presentation

Country Status (3)

Country Link
US (2) US9858036B2 (en)
EP (1) EP3168740B1 (en)
CN (1) CN106970773B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10992719B2 (en) 2017-11-14 2021-04-27 Rovi Guides, Inc. Systems and methods for establishing a voice link between users accessing media
US11347470B2 (en) * 2018-11-16 2022-05-31 Roku, Inc. Detection of media playback loudness level and corresponding adjustment to audio during media replacement event
US11785386B2 (en) * 2019-01-03 2023-10-10 Harman International Industries, Incorporated Multistep sound preference determination

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7702014B1 (en) 1999-12-16 2010-04-20 Muvee Technologies Pte. Ltd. System and method for video production
US7158624B1 (en) 2002-06-17 2007-01-02 Cisco Technology, Inc. Methods and apparatus for selectively including an audio signal component within an audio output signal
US20060094474A1 (en) * 2002-10-15 2006-05-04 Peter Zatloukal Mobile digital communication/computing device having a context sensitive audio system
US20070256014A1 (en) 2006-03-31 2007-11-01 General Instrument Corporation Multimedia Processing Apparatus an Method for Adjusting the Audio Level of Multimedia Content
US20080130958A1 (en) * 2006-11-30 2008-06-05 Motorola, Inc. Method and system for vision-based parameter adjustment
US7720238B2 (en) 2008-03-31 2010-05-18 Kabushiki Kaisha Toshiba Video-audio output device and video/audio method
US20090304205A1 (en) 2008-06-10 2009-12-10 Sony Corporation Of Japan Techniques for personalizing audio levels
CN102498664A (en) 2009-07-23 2012-06-13 斯灵媒体有限公司 Adaptive gain control for digital audio samples in a media stream
WO2011011219A1 (en) 2009-07-23 2011-01-27 Sling Media Pvt Ltd. Adaptive gain control for digital audio samples in a media stream
US20110019839A1 (en) 2009-07-23 2011-01-27 Sling Media Pvt Ltd Adaptive gain control for digital audio samples in a media stream
CN102033776A (en) 2009-09-29 2011-04-27 联想(北京)有限公司 Audio playing method and computing device
CN103959286A (en) 2011-08-26 2014-07-30 谷歌公司 System and method for identifying availability of media items
CN103124165A (en) 2011-11-14 2013-05-29 谷歌公司 Automatic gain control
CN103931199A (en) 2011-11-14 2014-07-16 苹果公司 Generation of multi -views media clips
CN102567468A (en) 2011-12-06 2012-07-11 上海聚力传媒技术有限公司 Method for adjusting player volume of media files and equipment utilizing same
US9431980B2 (en) * 2012-01-30 2016-08-30 Echostar Ukraine Llc Apparatus, systems and methods for adjusting output audio volume based on user location
US20150243163A1 (en) * 2012-12-14 2015-08-27 Biscotti Inc. Audio Based Remote Control Functionality
US20140173437A1 (en) 2012-12-19 2014-06-19 Bitcentral Inc. Nonlinear proxy-based editing system and method having improved audio level controls
CN103823654A (en) 2014-02-24 2014-05-28 联想(北京)有限公司 Information processing method and electronic device
CN104932681A (en) 2014-03-21 2015-09-23 意美森公司 Automatic tuning of haptic effects
US8874448B1 (en) * 2014-04-01 2014-10-28 Google Inc. Attention-based dynamic audio level adjustment
US20160259497A1 (en) * 2015-03-08 2016-09-08 Apple Inc. Devices, Methods, and Graphical User Interfaces for Manipulating User Interface Objects with Visual and/or Haptic Feedback

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Extended European Search Report dated Mar. 23, 2017 issued by the European Patent Office in European Application No. 16197998.4.

Also Published As

Publication number Publication date
CN106970773B (en) 2020-06-16
EP3168740A1 (en) 2017-05-17
US20180107448A1 (en) 2018-04-19
EP3168740B1 (en) 2019-01-09
US9858036B2 (en) 2018-01-02
US20170131966A1 (en) 2017-05-11
CN106970773A (en) 2017-07-21

Similar Documents

Publication Publication Date Title
JP7071508B2 (en) Methods for volume control, computer-readable storage media and equipment
US9686586B2 (en) Interstitial audio control
US10656901B2 (en) Automatic audio level adjustment during media item presentation
US11611800B2 (en) Methods and apparatus for audio equalization
US20090062943A1 (en) Methods and apparatus for automatically controlling the sound level based on the content
US20120308196A1 (en) System and method for uploading and downloading a video file and synchronizing videos with an audio file
US20220159088A1 (en) Media player for receiving media content from a remote server
US9053710B1 (en) Audio content presentation using a presentation profile in a content header
WO2015144243A1 (en) Image display device with automatic sound enhancement function
US10075140B1 (en) Adaptive user interface configuration
US9197920B2 (en) Shared media experience distribution and playback
KR20210027707A (en) Method and system for applying loudness normalization
US10110943B2 (en) Flexible output of streaming media
US10924079B2 (en) Intelligent power reduction in audio amplifiers
US20180285358A1 (en) Media recommendations based on media presentation attributes
US20230103596A1 (en) Systems and methods for customizing media player playback speed
US20230283843A1 (en) Systems and methods for detecting and analyzing audio in a media presentation environment to determine whether to replay a portion of the media
TWI717596B (en) Volume control method and volume control device

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: GOOGLE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:WEITENBERNER, CHRISTIAN;REEL/FRAME:044548/0118

Effective date: 20151213

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

AS Assignment

Owner name: GOOGLE LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:GOOGLE INC.;REEL/FRAME:052280/0089

Effective date: 20170929

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4