CN113348438A - Output of content on a remote control device - Google Patents


Publication number
CN113348438A
CN113348438A (Application No. CN202080010242.2A)
Authority
CN
China
Prior art keywords
content
condition
output
electronic device
user
Prior art date
Legal status
Pending
Application number
CN202080010242.2A
Other languages
Chinese (zh)
Inventor
C·德卡马戈·巴尔塞维修斯
T·阿尔西纳
E·T·施密特
A·班达汉
N·C·海内斯
J·T·酷-拉蒂格
C·D·乔根森
J·A·贝内特
T·G·卡瑞根
P·L·考夫曼
Current Assignee
Apple Inc
Original Assignee
Apple Inc
Priority date
Filing date
Publication date
Priority claimed from U.S. Application No. 16/705,073 (published as US 2020/0233573 A1)
Application filed by Apple Inc filed Critical Apple Inc
Publication of CN113348438A

Abstract

A device implementing a system that provides a set of controls for remotely controlling output of content includes at least one processor configured to determine that a user interaction with respect to the device satisfies a first condition, the user interaction being associated with a second device. The at least one processor is further configured to determine that a state of output of the content on the second device satisfies a second condition. The at least one processor is further configured to provide, on the device, a set of controls for remotely controlling output of the content on the second device based on determining that the first condition and the second condition have been satisfied.

Description

Output of content on a remote control device
Cross Reference to Related Applications
This application claims the benefit of U.S. Provisional Patent Application Serial No. 62/795,499, entitled "REMOTE CONTROL OF OUTPUT OF CONTENT ON A DEVICE," filed January 22, 2019, and U.S. Provisional Patent Application Serial No. 62/825,618, entitled "REMOTE CONTROL OF OUTPUT OF CONTENT ON A DEVICE," filed March 28, 2019, both of which are hereby incorporated by reference in their entirety and made part of the present U.S. patent application for all purposes.
Technical Field
This specification relates generally to remotely controlling the output of content, including providing a set of controls for the output of content on a remote control device.
Background
The user may be able to select between multiple devices for outputting content. In some cases, a user may use a first device to control the output of content on a second device.
Drawings
Some of the features of the subject technology are set forth in the appended claims. However, for purposes of explanation, several embodiments of the subject technology are set forth in the following figures.
FIG. 1 illustrates an exemplary network environment for providing a set of controls for remotely controlling output of content in accordance with one or more implementations.
FIG. 2 illustrates an exemplary device in which a system for providing a set of controls for remotely controlling output of content can be implemented in accordance with one or more implementations.
FIG. 3 illustrates a flow diagram of an exemplary process for providing a set of controls for remotely controlling output of content in accordance with one or more implementations.
FIG. 4 illustrates an exemplary user interface of a control application for selecting a device to output content according to one or more implementations.
FIG. 5 illustrates an example of a user interface with a set of controls for remotely controlling output of content in accordance with one or more implementations.
FIG. 6 illustrates an example of a user interface of a virtual assistant application having a set of controls for remotely controlling output of content according to one or more implementations.
FIG. 7 illustrates a flow diagram of another exemplary process for providing a set of controls for remotely controlling output of content in accordance with one or more implementations.
FIG. 8 illustrates a flow diagram of an exemplary process for providing a user interface to output content to a proximity device in accordance with one or more implementations.
Fig. 9 illustrates an exemplary electronic system that may be used to implement various aspects of the subject technology in accordance with one or more implementations.
Detailed Description
The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology may be practiced. The accompanying drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a thorough understanding of the subject technology. However, the subject technology is not limited to the specific details described herein, and may be practiced using one or more other implementations. In one or more implementations, structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology.
As described above, a user may be able to remotely select between multiple devices for outputting content. In some cases, a user may wish to use a first device, such as their mobile device, to control the output of content on a second device, such as a digital media player. For example, the first device and the second device may be connected to a network (e.g., a local area network), and the second device (e.g., a digital media player connected to a television) may output content such as music and/or video. A user may wish to remotely control the output of content from his/her first device (e.g., a mobile device such as a smartphone and/or a smartwatch) using a set of controls disposed within a user interface on the first device.
The subject system enables automatically providing (or rendering) a control based on a condition being satisfied with respect to playback of the first device, the second device, and/or the content. For example, the condition may indicate that the user may prefer and/or intend to use his/her first device to remotely control the output of content on the second device.
The first device may determine that a user interaction with respect to the first device satisfies a first condition, where the user interaction is associated with the second device. For example, the user interaction may involve user input received on a first device (e.g., within a remote control application, a virtual assistant application, and/or a control application running on the first device) to initiate output of content (e.g., music, video, etc.) on a second device.
The first device may further determine (e.g., independent of determining that the user interaction satisfies the first condition) that a state of output of the content on the second device satisfies the second condition. For example, the second condition may correspond to the second device currently outputting content or having paused outputting content for less than a predefined period of time. In accordance with a determination that the first condition and the second condition have been satisfied, the first device may provide (e.g., within a lock screen of the first device) a set of controls for remotely controlling output of content on the second device. On the other hand, in accordance with a determination that the first condition and the second condition have not been met, the first device may forgo providing a set of controls for controlling output of content on the second device.
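The two-condition gate described above can be sketched in Python. This is an illustrative reading of the patent text, not an implementation from it; the function names, the state enum, and the 8-minute pause threshold (mentioned later as one configurable example) are assumptions:

```python
from enum import Enum

class PlaybackState(Enum):
    PLAYING = "playing"
    PAUSED = "paused"
    TERMINATED = "terminated"

# Hypothetical value; the patent only requires "a predefined period of time".
PAUSE_THRESHOLD_SECONDS = 8 * 60

def first_condition_met(user_targeted_second_device: bool) -> bool:
    # First condition: a user interaction on this device was associated with
    # the second device (e.g., selecting it for output in a control app).
    return user_targeted_second_device

def second_condition_met(state: PlaybackState, paused_seconds: float = 0.0) -> bool:
    # Second condition: the second device is outputting content, or output
    # has been paused for less than the predefined period.
    if state is PlaybackState.PLAYING:
        return True
    if state is PlaybackState.PAUSED:
        return paused_seconds < PAUSE_THRESHOLD_SECONDS
    return False

def should_show_remote_controls(user_targeted_second_device: bool,
                                state: PlaybackState,
                                paused_seconds: float = 0.0) -> bool:
    # Controls are provided only when BOTH conditions hold; otherwise the
    # first device forgoes providing them.
    return (first_condition_met(user_targeted_second_device)
            and second_condition_met(state, paused_seconds))
```

For example, a long-paused second device (`PAUSED` beyond the threshold) fails the second condition, so no controls would surface even after a qualifying user interaction.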
By providing controls based on meeting these predefined conditions, remote control of playback of content for a user can be facilitated.
In one or more implementations, the subject system further provides for surfacing an interface element that allows a user to select a device to output content, such as when the user is in the process of selecting content for output but has not yet initiated output of the content. For example, a user may be interacting with a media player application running on a first device. The user may be in the process of selecting a content item from a list of content items, wherein selecting a content item will result in playback of the content item. Alternatively, the first device may be displaying an interface in which the content item is presented separately (e.g., as part of a summary page for the content item) and available for playback (e.g., by the user clicking a "play" button). In each of these cases, the subject system may be able to infer the user's intent to output the content soon, even if the user has not selected any particular content to output.
Thus, prior to the device initiating output of the content, the first device renders an interface element that is selectable by a user to output the content on the second device when one or more conditions are satisfied. For example, the first device may determine whether the received user input satisfies the first condition, such as when the received user input is associated with content selected for output and the received user input is separate from initiating output of the content. Additionally, the first device may further determine whether its position relative to the second device satisfies a second condition. For example, the second condition may correspond to the first device and the second device being close to each other (e.g., in the same room or within a predefined distance of each other).
In response to detecting that the first condition and the second condition have been satisfied, the first device provides an interface element for outputting content on the second device. For example, an interface element may be provided for a user to select an output device (e.g., a second device) from a list of multiple devices. As another example, an interface element may be provided for a user to output content, and the second device may be automatically selected for output (e.g., based on a history of the user selecting the second device for output). Playback of content on a different device for a user may be facilitated by presenting an interface element for outputting the content on a second device.
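The interface-element surfacing decision can be sketched the same way. Again this is a hedged reading of the description above; the distance threshold and parameter names are invented for illustration (the patent speaks only of "a predefined distance" or being in the same room):

```python
# Hypothetical threshold standing in for "a predefined distance".
PROXIMITY_THRESHOLD_METERS = 10.0

def should_surface_output_picker(input_relates_to_content: bool,
                                 input_initiates_playback: bool,
                                 distance_meters: float) -> bool:
    # First condition: the user input is associated with content selected
    # for output, but is separate from actually initiating playback (e.g.,
    # the user is browsing a summary page, not pressing "play").
    first = input_relates_to_content and not input_initiates_playback
    # Second condition: the two devices are near each other.
    second = distance_meters <= PROXIMITY_THRESHOLD_METERS
    return first and second
```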
FIG. 1 illustrates an exemplary network environment for providing a set of controls for remotely controlling output of content in accordance with one or more implementations. However, not all of the depicted components may be used in all implementations, and one or more implementations may include additional or different components than those shown in the figures. Variations in the arrangement and type of these components may be made without departing from the spirit or scope of the claims set forth herein. Additional components, different components, or fewer components may be provided.
Network environment 100 includes electronic devices 102, 104, 106, 108, and 110 (hereinafter 102-110), a network 112, and a server 114. Network 112 may communicatively couple (directly or indirectly) any two or more of the electronic devices 102-110 and the server 114. In one or more implementations, the network 112 may be an interconnected network that may include the internet and/or devices communicatively coupled to the internet. In one or more implementations, the network 112 may correspond to a local area network (e.g., a WiFi network) that connects one or more of the electronic devices 102-110. For purposes of explanation, network environment 100 is shown in FIG. 1 as including the electronic devices 102-110 and a single server 114; however, network environment 100 may include any number of electronic devices and any number of servers.
One or more of the electronic devices 102-110 may be, for example, a portable computing device such as a laptop computer, a smartphone, a smart speaker, a digital media player, a peripheral device (e.g., a digital camera, a headset), a tablet device, a wearable device such as a smart watch, a band, etc., or any other suitable device that includes, for example, one or more wireless interfaces, such as a WLAN radio, a cellular radio, a bluetooth radio, a Zigbee radio, a Near Field Communication (NFC) radio, and/or other radios. In fig. 1, by way of example, electronic device 102 is depicted as a smartphone, electronic device 104 is depicted as a laptop computer, electronic device 106 is depicted as a smartwatch, electronic device 108 is depicted as a digital media player (e.g., configured to receive and stream digital data such as music and/or video to a television or other video display), and electronic device 110 is depicted as a smart speaker.
The electronic devices 102-110 can be configured to communicate or otherwise interact with the server 114, such as to receive digital content from the server 114 for output on the respective electronic devices 102-110. Each of the electronic devices 102-110 may be and/or may include all or a portion of the device discussed below with respect to fig. 2 and/or the electronic system discussed below with respect to fig. 9.
Server 114 may be and/or may include all or a portion of the electronic system discussed below with respect to fig. 9. Server 114 may include one or more servers, such as a server cloud. For purposes of explanation, a single server 114 is shown and discussed with respect to various operations. However, these operations and other operations discussed herein may be performed by one or more servers, and each different operation may be performed by the same or different servers.
FIG. 2 illustrates an exemplary device in which a system for providing a set of controls for remotely controlling output of content can be implemented in accordance with one or more implementations. For purposes of explanation, FIG. 2 is described herein primarily with reference to electronic device 102; however, FIG. 2 may correspond to any of the electronic devices 102-110 of FIG. 1. Not all of the depicted components may be used in all implementations, and one or more implementations may include additional or different components than those shown in the figure. Variations in the arrangement and type of these components may be made without departing from the spirit or scope of the claims set forth herein. Additional components, different components, or fewer components may be provided.
The electronic device 102 may include a processor 202, a memory 204, and a communication interface 206. The processor 202 may comprise suitable logic, circuitry, and/or code that may enable processing data and/or controlling the operation of the electronic device 102. In this regard, the processor 202 may be enabled to provide control signals to various other components of the electronic device 102. The processor 202 may also control data transfer between various portions of the electronic device 102. Additionally, the processor 202 may enable an operating system to be implemented or code to be otherwise executed to manage the operation of the electronic device 102.
The memory 204 may comprise suitable logic, circuitry, and/or code that may enable storage of various types of information, such as received data, generated data, code, and/or configuration information. Memory 204 may include, for example, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, and/or a magnetic storage device.
In one or more implementations, memory 204 may store one or more applications for initiating playback of content, including but not limited to a control application, a virtual assistant application, and/or a remote control application. Memory 204 may further store logic for enabling the lock screen feature and for providing a set of controls within a user interface of the lock screen.
The communication interface 206 may comprise suitable logic, circuitry, and/or code that may enable wired or wireless communication over the network 112, such as between any of the electronic devices 102-110 and the server 114. The communication interface 206 may include, for example, one or more of a bluetooth communication interface, a cellular interface, an NFC interface, a Zigbee communication interface, a WLAN communication interface, a USB communication interface, or generally any communication interface.
In one or more implementations, one or more of the processor 202, the memory 204, the communication interface 206, and/or one or more portions thereof, may be implemented in software (e.g., subroutines and code), in hardware (e.g., an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a state machine, gated logic, discrete hardware components, or any other suitable device), and/or a combination of both.
FIG. 3 illustrates a flow diagram of an exemplary process for providing a set of controls for remotely controlling output of content in accordance with one or more implementations. For purposes of explanation, process 300 is described herein primarily with reference to electronic device 102, electronic device 108, and server 114 of fig. 1. However, the process 300 is not limited to the electronic device 102, the electronic device 108, and the server 114 of fig. 1, and one or more blocks (or operations) of the process 300 may be performed by one or more other components (e.g., of the electronic device 102) and/or other suitable devices (e.g., any of the electronic devices 102-110). For further explanation purposes, the blocks of process 300 are described herein as occurring sequentially or linearly. However, multiple blocks of process 300 may occur in parallel. Further, the blocks of the process 300 need not be performed in the order shown, and/or one or more blocks of the process 300 need not be performed and/or may be replaced by other operations.
As described above, the electronic device 102 can provide a set of controls for remotely controlling the output of content on another device (e.g., the electronic device 108). The electronic device 108 may be "proximate" to the electronic device 102, i.e., on the same local area network as the electronic device 102. The content may be provided by the electronic device 102 or another device, such as the server 114, to the electronic device 108 for output thereon. For example, server 114 may implement a cloud-based service for providing media content to the electronic devices 102-110.
The set of controls may be provided in response to one or more conditions being met with respect to user interaction on the electronic device 102. Conditions include, but are not limited to: user selection of the electronic device 108 (e.g., a proximity device) via a control application running on the electronic device 102 (e.g., corresponding to block 304); a user selection of the electronic device 108 via a virtual assistant application running on the electronic device 102 (e.g., corresponding to block 306); and/or user selection of the electronic device 108 via a remote control application running on the electronic device 102 (e.g., corresponding to block 308).
The control application may be implemented as part of an operating system running on the electronic device 102 or may be a third party application. In one or more implementations, the control application may provide direct access to predefined settings of the electronic device 102. The control application may be activated, for example, via a predefined user gesture (e.g., swiping up from the bottom of the display of the electronic device 102).
In one or more implementations, the control application may indicate content (e.g., media content) that may be output locally (e.g., on the electronic device 102) and/or on a proximate device (e.g., the electronic device 108). In this regard, fig. 4 illustrates an exemplary user interface 400 that may be included as part of a control application running on the electronic device 102, where the user interface 400 provides for selecting a proximate device (e.g., via a WiFi connection) to output media content.
The user interface 400 may include graphical elements 402 that identify content (e.g., audio, image, and/or video content) for output. In the example of fig. 4, the graphical elements 402 indicate a song album, song title, and corresponding album image. The user interface 400 also includes a volume control 416 and a play/pause button 404 (e.g., to initiate output of content to a selected one of the proximity devices).
In addition, the user interface 400 lists proximate devices that are selectable for outputting content. The selectable devices may correspond to those connected to a local area network (e.g., a WiFi network). In the example of fig. 4, user interface 400 includes an option 406 for selecting a smart phone (e.g., electronic device 102), an option 408 for selecting a smart speaker (e.g., electronic device 110), an option 410 for selecting a first digital media player (e.g., electronic device 108) coupled to a living room television, and an option 412 for selecting a second digital media player (not shown) coupled to a master bedroom television.
In one or more implementations, the ordering of options 406-412 within user interface 400 may be based on the type of content (e.g., song or movie) provided for output and the available devices for outputting the content. For example, if the content for output is video content (e.g., a movie, show, or other type of video), the electronic device 102 may sort the devices based on the following priority order (highest priority first): a local device (e.g., electronic device 102); personal devices (e.g., a smart watch); digital media players (e.g., for a TV); and smart speakers.
In another example, if the content for output is audio content (e.g., music or another audio recording), the electronic device 102 may sort the devices based on the following priority order (highest priority first): a local device (e.g., electronic device 102); personal devices (e.g., headphones, earbuds); smart speakers; and digital media players (e.g., for a TV). Thus, the electronic device 102 may prioritize audio-specific (and/or audio-only) output devices (headphones, speakers, etc.) for output of audio content.
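The content-type-dependent ordering above can be expressed as a simple rank-based sort. This is a sketch of the described behavior only; the `kind` labels and data shapes are invented for illustration:

```python
# Priority orders as described above (highest priority first); the "kind"
# labels are assumptions made for this sketch.
VIDEO_PRIORITY = ["local", "personal", "media_player", "smart_speaker"]
AUDIO_PRIORITY = ["local", "personal", "smart_speaker", "media_player"]

def sort_output_devices(devices: list, content_type: str) -> list:
    # Pick the priority list for the content type, then sort devices by
    # their rank in that list; unknown kinds sort last.
    order = AUDIO_PRIORITY if content_type == "audio" else VIDEO_PRIORITY
    rank = {kind: i for i, kind in enumerate(order)}
    return sorted(devices, key=lambda d: rank.get(d["kind"], len(order)))

devices = [
    {"name": "Living Room TV", "kind": "media_player"},
    {"name": "Kitchen Speaker", "kind": "smart_speaker"},
    {"name": "This Phone", "kind": "local"},
    {"name": "Earbuds", "kind": "personal"},
]
```

For audio content, this ordering places the smart speaker ahead of the TV-connected digital media player; for video content, the order of those two is reversed.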
Thus, after the start block 302 in fig. 3, the electronic device 102 determines whether the user has selected a proximity device (e.g., electronic device 108) for outputting content via a control application running on the electronic device 102 (e.g., via one of options 406 and 412) (304). If so, the electronic device 102 determines whether the state of the output of the content on the electronic device 108 satisfies a predefined condition (310).
The electronic device 102 may be configured to determine an indication of a playback status (e.g., currently playing, paused, terminated) of content on the electronic device 108 even if the content is provided to the electronic device 108 by a device other than the electronic device 102 (e.g., the server 114). For example, the electronic device 102 can receive an indication (e.g., via a local area network shared with the electronic device 108) that the content is currently playing, paused (e.g., together with the elapsed pause time), and/or terminated (e.g., where the application playing the content was closed by a user or has otherwise been closed, such as due to a system crash).
Based on this information, the electronic device 102 may determine whether the state of the output of the content on the electronic device 108 satisfies a predefined condition. For example, the condition may be satisfied based on one or more of the following: content is currently being output on the electronic device 108; playback of the content on the electronic device 108 has been paused for less than a predefined amount of time (e.g., less than 8 minutes, or less than any configurable amount of time); and/or playback of the content has not been terminated (e.g., by a user or otherwise, such as by a crash of the playback application on the electronic device 108).
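One plausible way to evaluate such a received status indication is to parse a notification message and apply the condition to it. The JSON message format here is purely hypothetical (the patent does not specify a wire format), as is the 8-minute default:

```python
import json

def status_satisfies_condition(message: str, now: float,
                               max_pause_seconds: float = 8 * 60) -> bool:
    # Parse a (hypothetical) JSON status notification from the second
    # device and apply the predefined condition: playing, or paused for
    # less than the threshold, satisfies it; terminated does not.
    status = json.loads(message)
    state = status.get("state")
    if state == "playing":
        return True
    if state == "paused" and "paused_at" in status:
        return (now - status["paused_at"]) < max_pause_seconds
    return False
```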
If the determination at block 310 is positive (the state of output of the content on the electronic device 108 satisfies the predefined condition), the electronic device 102 provides a set of controls for remotely controlling the output of the content on the electronic device 108 (312).
FIG. 5 illustrates an example of a user interface 500 with a set of controls for remotely controlling output of content in accordance with one or more implementations. The user interface 500 includes graphical elements 502 that identify content (e.g., audio, image, and/or video content) for output. In the example of fig. 5, graphical element 502 indicates the device (e.g., a living room TV) on which content is being output, the name of the TV series, and the episode number of the TV series.
The user interface also includes a TV control element 504 (e.g., for selecting a living room TV or other device), a playback control bar 506 for selecting a playback position for the content, a graphical element 508 for selecting supplemental/add-on tools for playing back the content, a skip back (e.g., 15 seconds) button 510, a pause/play toggle 512, a skip forward (e.g., 15 seconds) button 514, a message element 516 (e.g., which may invoke a messaging interface for sharing the content and/or delivering messages to others), and an audio volume control 518.
In one or more implementations, the set of controls is provided within a lock screen of the electronic device 102. For example, the lock screen may correspond to a visual interface on the electronic device 102 that is available before the user has entered a password or otherwise activated all functions of the device, and/or that appears after a predefined period of inactivity.
If the determination at block 310 is negative (output of content is not detected on the proximity device), the process ends (block 314). In other words, the electronic device 102 does not provide the set of controls.
With respect to the virtual assistant application, if the determination at block 304 of fig. 3 is negative, the electronic device 102 determines whether the user has selected the electronic device 108 for output of content via the virtual assistant application running on the electronic device 102 and/or via the virtual assistant application running on a proximate electronic device (306).
The virtual assistant application may be implemented as part of an operating system running on the electronic device 102 or may be a third party application. In one or more implementations, the virtual assistant application can use voice queries and natural language user interfaces to answer questions, make recommendations, and perform actions by delegating requests to a set of services (e.g., internet services and/or services within a local area network).
In one or more implementations, a virtual assistant application (e.g., running on the electronic device 102) can receive, at the electronic device 102, a voice query that outputs content on the electronic device 108. As described above, the voice query may alternatively be received at a proximate device (e.g., a smart watch such as electronic device 106, or a smart speaker such as electronic device 110). The query may indicate to play music content (e.g., by song title, album title, band title, music genre, etc.), video content (e.g., by movie or show title, episode title, genre, etc.), and/or other media content. Further, the voice query may indicate a device (e.g., a living room TV, a master bedroom TV, a kitchen smart speaker, etc.) on which the content is to be played.
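A voice query like the one above must be split into the content to play and the target device. The following is only a toy sketch of that step, assuming a fixed "play X on Y" phrasing; a real virtual assistant would use natural-language understanding rather than a regular expression:

```python
import re

def parse_play_query(query: str):
    # Split "play <content> on <device>"; returns None if the query does
    # not match. Queries whose content itself contains " on " are
    # ambiguous under this naive pattern.
    m = re.match(r"(?i)play\s+(?P<content>.+?)\s+on\s+(?P<device>.+?)\s*$",
                 query.strip())
    if not m:
        return None
    return {"content": m.group("content"), "device": m.group("device")}
```

Applied to the sample query of fig. 6, this would yield the content "episode 1 of series ABC" and the target device "living room TV".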
In this regard, fig. 6 illustrates an example of a user interface 600 of a virtual assistant application having a set of controls for remotely controlling the output of content. In the example of fig. 6, the user interface 600 indicates a sample voice query 602 input by the user requesting "play episode 1 of series ABC on living room TV". Thus, the voice query 602 is an example of a user having selected the electronic device 108 for outputting content by the virtual assistant application. In response to the voice query 602, the virtual assistant application can provide a visual and/or audio confirmation 604 (e.g., "OK") that the voice query has been received.
Thus, if the determination at block 306 is positive (the user has selected the electronic device 108 for outputting content by the virtual assistant application), the electronic device 102 determines whether output of the content is detected on the electronic device 108 (310). As described above, if the determination at block 310 is positive, the electronic device 102 provides a set of controls for remotely controlling the output of content on the electronic device 108 (312). The set of controls can be provided at the electronic device 102 even if the voice query is initially received at the user's smart watch (e.g., electronic device 106) or smart speaker (e.g., electronic device 110). The set of controls may be the same as or different from those shown in FIG. 5. For example, the set of controls may vary based on the type of content being output (e.g., audio or video) and/or the device on which the content is being output (e.g., a TV output via a digital media player, or a smartphone).
In the example of fig. 6, the set of controls 606 includes a graphical element 608 indicating the device on which the content is being output, a graphical element 610 indicating the content being output, a playback control bar 612 for selecting a playback position of the content, a skip back (e.g., 15 seconds) button 614, a pause/play toggle button 616, a skip forward (e.g., 15 seconds) button 618, and an audio volume control 620. In one or more implementations, the set of controls 606 may be provided within a lock screen of the electronic device 102.
If the determination at block 310 is negative (output of content is not detected on the proximity device), the process ends (block 314). In other words, the electronic device 102 does not provide the set of controls 606.
With respect to the remote control application, if the determination at block 306 of fig. 3 is negative, the electronic device 102 determines whether the user has selected the electronic device 108 for outputting content via the remote control application running on the electronic device 102 (308).
The remote control application may be implemented as part of an operating system running on the electronic device 102 or may be a third party application. In one or more implementations, the remote control application may allow remote control of, for example, a proximity device (e.g., smart speakers, digital media player) connected on the same local area network (e.g., WiFi) as the electronic device 102.
In one or more implementations, a remote control application (e.g., running on the electronic device 102) can receive user input at the electronic device 102 to play content on the electronic device 108. The user input may indicate that music content (e.g., songs, albums, bands, music genres, etc.), video content (e.g., movies, shows, titles, genres, etc.), and/or other media content is played. Further, the user input may indicate a device (e.g., a living room TV, a master bedroom TV, a kitchen smart speaker, etc.) on which the content is to be played.
Accordingly, if the determination at block 308 is positive (the user has selected the electronic device 108 for outputting content by the remote control application), the electronic device 102 determines whether output of the content is detected on the electronic device 108 (310). As described above, if the determination at block 310 is positive, the electronic device 102 provides a set of controls for remotely controlling the output of content on the electronic device 108 (312). The set of controls may be the same as or different from those shown in fig. 5 and/or 6 (e.g., the set of controls 606). The set of controls may vary based on the type of content being output (e.g., audio or video) and/or the device on which the content is being output (e.g., a TV output via a digital media player, or a smartphone).
If the determination at block 310 is negative (output of content is not detected on the proximity device), the process ends (block 314). In other words, the electronic device 102 does not provide the set of controls 606.
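The branch logic of blocks 308-314 can be sketched as follows (a hypothetical illustration; the function and argument names are not part of the disclosed system):

```python
def remote_control_flow(device_selected: bool, output_detected: bool):
    """Sketch of blocks 308-314: decide whether to surface remote controls."""
    if not device_selected:   # block 308 negative: user did not select device 108
        return None
    if not output_detected:   # block 310 negative: process ends (block 314)
        return None
    return "controls"         # block 312: provide the set of controls

# Controls are surfaced only when both determinations are positive.
```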
In one or more implementations, and although not shown in the example of fig. 3, the subject system can provide for surfacing the set of controls (e.g., as in fig. 5 or 6) if the user has selected a proximity device (e.g., the electronic device 108) via a media player application (e.g., different from the control application, virtual assistant application, and/or remote control application corresponding to respective operations 304, 306, and 308). For example, the user may have used a media player application (e.g., running on the electronic device 102) to initiate output of content (e.g., audio and/or video) on the electronic device 108, as described below with respect to fig. 8.
As described above, a set of controls (e.g., as in fig. 5 or 6) may be provided for remotely controlling content output on the electronic device 108, for example, if the content is playing, has been paused for less than a predefined amount of time, and/or has not expired. Thus, the set of controls may not be provided in the event that the content has been paused for more than a predefined amount of time and/or has expired. Thus, the set of controls can be initially provided on the electronic device 102 (e.g., before a predefined amount of time has elapsed and/or before playback has terminated) and can be removed at a later time (e.g., after the predefined amount of time has elapsed and/or after playback has terminated).
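The playback-state condition described above can be expressed as a small predicate (an illustrative sketch; the 8-minute figure borrows the example threshold given later in the description, and all names are hypothetical):

```python
PAUSE_THRESHOLD_SECONDS = 8 * 60  # example "predefined amount of time"

def state_warrants_controls(is_playing: bool, paused_seconds: float,
                            expired: bool) -> bool:
    """Return True if the remote output state still warrants showing controls."""
    if expired:
        return False          # content has expired: do not provide controls
    if is_playing:
        return True           # content is actively playing
    # Paused content qualifies only within the predefined window.
    return paused_seconds < PAUSE_THRESHOLD_SECONDS
```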
As also described above, in the case where the electronic devices 102 and 108 are connected to the same local area network (e.g., a WiFi network), the set of controls (e.g., as in fig. 5 or 6) may be provided for remotely controlling the output of content on the electronic device 108. Thus, the set of controls may not be provided in the event that the electronic device 102 is no longer connected to the same local area network via WiFi. Thus, the set of controls may be initially provided on the electronic device 102 (e.g., when connected via WiFi), and may be removed at a later time (e.g., when the electronic device 102 is out of WiFi range and/or otherwise not connected via WiFi).
In one or more implementations, the user can initiate playback of content on the electronic device 102 while providing the set of controls for remotely controlling playback of content on the electronic device 108. In this case, the set of controls for remotely controlling playback may be removed (or otherwise de-prioritized, such as by switching the set of controls to background) on the electronic device 102. In this way, a set of controls for controlling local content (e.g., on the electronic device 102) may override a set of controls for remotely controlling content provided on the electronic device 108.
In one or more implementations, the location of the electronic device 102 (e.g., relative to the electronic device 108) can be used as an alternative or additional parameter to determine whether to provide the set of controls. For example, micro-location techniques may provide for determining the location of the electronic device 102 within a structure (e.g., a home, a building) with a level of accuracy, in order to determine which room (e.g., or which portion of the room) the electronic device 102 is in. For example, the electronic device 102 may determine a time of arrival and/or an angle of arrival with respect to one or more signals exchanged with the electronic device 108, such as a wideband signal and/or an ultra-wideband signal.
In this way, a location at which the user performs certain user interactions may be determined (e.g., the location at which the user is when he/she selects content for output via the control application, the virtual assistant application, and/or the remote control application). The micro-location data may be used to determine whether to provide the set of controls. For example, if the user initiates playback of content on the electronic device 108 located in a first room, but the user has been in a second room for a predefined amount of time (e.g., based on the location of the electronic device 102), the set of controls may be provided when the user returns to the first room. In another example, the user may not have initiated playback of the content via the electronic device 102, but the set of controls may be provided if the user enters a room where the electronic device 108 is outputting the content.
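As a rough numerical sketch of the micro-location idea, a time-of-flight measurement over an ultra-wideband exchange can be converted to a distance, and that distance compared against an assumed room radius (all values and names here are illustrative, not taken from the disclosure):

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def estimate_distance_m(time_of_flight_s: float) -> float:
    """Convert a measured one-way time of flight to a distance in meters."""
    return SPEED_OF_LIGHT_M_PER_S * time_of_flight_s

def in_same_room(distance_m: float, room_radius_m: float = 5.0) -> bool:
    """Illustrative same-room test: within an assumed room radius."""
    return distance_m <= room_radius_m
```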
Thus, the provision of the set of controls may be based on whether one or more predefined conditions are met (e.g., user input received at the electronic device 102 via the control application, virtual assistant application, and/or remote control application; the state of content output on the electronic device 108; micro-location). In one or more implementations, a machine learning model can be generated and/or trained to determine whether the set of controls should be provided.
For example, a machine learning model may be generated and/or trained with input parameters including, but not limited to: previous user interactions received with respect to the control application, the virtual assistant application, and/or the remote control application (e.g., with respect to outputting content to the electronic device 108); user activity indicating a degree to which the set of controls provided to the user for remotely controlling playback of content on the electronic device 108 is used by and/or helpful to the user; micro-location data of the electronic device 102; a playback state; the type of content being played; the time of playing the content; a room where the content is played; and/or the person initiating playback of the content. In one or more implementations, a machine learning model may be trained using interaction data associated with a set of users.
After training, the machine learning model may be configured to receive similar input parameters in real-time. Based on the input parameters, the machine learning model may be configured to output an indication of whether to provide the set of controls. Alternatively or additionally, the machine learning model may output an indication of which controls to provide for a particular proximity device and/or a retention time for the set of controls.
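A trained model of the kind described might reduce, at inference time, to scoring a feature vector and thresholding the result. The toy logistic scorer below uses hand-picked weights purely for illustration; a real model would learn its parameters from the interaction data enumerated above:

```python
import math

# Hand-picked illustrative weights; a trained model would learn these.
WEIGHTS = {
    "initiated_playback": 2.0,    # user started output via device 102
    "controls_used_before": 1.5,  # prior controls were used/helpful
    "same_room": 1.0,             # micro-location places devices together
    "is_playing": 1.0,            # current playback state
}
BIAS = -2.0

def should_provide_controls(features: dict) -> bool:
    """Sigmoid-threshold decision on whether to surface the set of controls."""
    score = BIAS + sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-score)) > 0.5
```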
FIG. 7 illustrates a flow diagram of an exemplary process for providing a set of controls for remotely controlling output of content in accordance with one or more implementations. For purposes of explanation, process 700 is described herein primarily with reference to electronic device 102, electronic device 108, and server 114 of fig. 1. However, the process 700 is not limited to the electronic device 102, the electronic device 108, and the server 114 of fig. 1, and one or more blocks (or operations) of the process 700 may be performed by one or more other components (e.g., of the electronic device 102) and/or other suitable devices (e.g., any of the electronic devices 102 and 110). For further explanation purposes, the blocks of process 700 are described herein as occurring sequentially or linearly. However, multiple blocks of process 700 may occur in parallel. Further, the blocks of process 700 need not be performed in the order shown, and/or one or more blocks of process 700 need not be performed and/or may be replaced by other operations.
The electronic device 102 determines that a user interaction with respect to the electronic device 102, the user interaction being associated with the electronic device 108, satisfies a first condition (702). The electronic device 102 and the electronic device 108 may be connected to a local area network. The machine learning model may be used to determine that a user interaction with respect to the electronic device 102 satisfies a first condition.
The user interaction may include a user input received at the electronic device 102 to initiate output of content on the electronic device 108. The user input may be received within at least one of a remote control application, a virtual assistant application, or a control application running on the electronic device 102. Alternatively or additionally, the positioning of the electronic device 102 relative to the electronic device 108 (e.g., using the micro-position of the electronic device 102) may be used to determine that the user interaction relative to the electronic device 102 satisfies the first condition.
The electronic device 102 determines (e.g., independent of determining that the user interaction satisfies the first condition) that the state of the output of the content on the electronic device 108 satisfies the second condition (704). In one or more implementations, the content output on the electronic device 108 is not provided by the electronic device 102 to the electronic device 108. For example, content may be provided (e.g., streamed) from server 114 to electronic device 108.
Determining that the state satisfies the second condition may include determining that content is currently being output on the electronic device 108. Alternatively or additionally, determining that the state satisfies the second condition may include determining that output of content on the electronic device 108 has been paused for less than a predefined amount of time (e.g., less than 8 minutes).
The electronic device 102 provides a set of controls on the electronic device 102 for remotely controlling output of content on the electronic device 108 based on determining that the first condition and the second condition have been satisfied (706). The set of controls may be provided within a lock screen of the electronic device 102. In another aspect, in accordance with a determination that the first condition and the second condition have not been met, the electronic device 102 can forgo providing the set of controls on the electronic device 102 for controlling output of content on the electronic device 108.
The electronic device 102 may remove the set of controls on the electronic device 102 (e.g., from the lock screen) based on determining that the electronic device 102 is no longer connected to the local area network. Alternatively or in addition, the electronic device 102 can remove the set of controls on the electronic device 102 (e.g., from the lock screen) based on determining that the output of the content on the electronic device 108 has terminated (e.g., in the event that an application playing the content is closed by a user or the application crashes).
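Blocks 702-706, together with the removal cases just described, can be sketched as a small state holder (hypothetical names; the surfacing-versus-removal policy for other transitions is an assumption):

```python
class RemoteControlSurface:
    """Sketch of process 700: surface controls when both conditions hold;
    remove them when the LAN connection or the remote output goes away."""

    def __init__(self) -> None:
        self.controls_visible = False

    def update(self, first_condition: bool, second_condition: bool,
               on_lan: bool, output_terminated: bool) -> bool:
        if not on_lan or output_terminated:
            self.controls_visible = False            # removal cases
        elif first_condition and second_condition:
            self.controls_visible = True             # block 706
        return self.controls_visible
```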
FIG. 8 illustrates a flow diagram of an exemplary process for providing a user interface to output content to a proximity device in accordance with one or more implementations. For purposes of explanation, process 800 is described herein primarily with reference to electronic device 102, electronic device 108, and server 114 of fig. 1. However, the process 800 is not limited to the electronic device 102, the electronic device 108, and the server 114 of fig. 1, and one or more blocks (or operations) of the process 800 may be performed by one or more other components (e.g., of the electronic device 102) and/or other suitable devices (e.g., any of the electronic devices 102 and 110). For further explanation purposes, the blocks of process 800 are described herein as occurring sequentially or linearly. However, multiple blocks of process 800 may occur in parallel. Further, the blocks of the process 800 need not be performed in the order shown, and/or one or more blocks of the process 800 need not be performed and/or may be replaced by other operations.
As described above, the subject system can provide for surfacing an interface element for selecting an output device (e.g., electronic device 108) prior to a user initiating output of content. The appearance of such interface elements may be based on the satisfaction of one or more predefined conditions. Electronic device 102 and electronic device 108 may be in proximity to each other, e.g., connected to the same local area network.
As shown in fig. 8, the electronic device 102 determines that the received user input satisfies a first condition (802). The first condition may correspond to the user input being associated with content selected for output, but independent of initiating output of the content (e.g., prior to the initiation). For example, the user may be interacting with a media player application running on the electronic device 102. The electronic device 102 may determine that there is a high likelihood that the user may be about to select a content item (e.g., a song, a playlist, a movie, a show, a series, etc.) from a list of content items, where selecting the content item will result in playback of the content item. Alternatively, the electronic device 102 can be displaying an interface in which the content item is presented separately (e.g., as part of a summary page for the content item) and available for playback (e.g., by the user clicking a "play" button). For example, in each of these cases, the electronic device 102 may determine that there is a high likelihood that the user may be about to output the content (or may have an intent to output the content), but has not yet initiated output of the content.
The electronic device 102 then determines that its position relative to the electronic device 108 satisfies a second condition (804). For example, the second condition may correspond to the electronic device 102 and the electronic device 108 being close to each other (e.g., in the same room).
As described above, the micro-location technique may be used to determine, with a level of accuracy, a location of the electronic device 102 (e.g., and/or the electronic device 108) within a structure (e.g., a home, a building), in order to determine which room (e.g., or which portion of a room) the electronic device 102 is in. For example, the electronic device 102 may determine a time of arrival and/or an angle of arrival with respect to one or more signals exchanged with the electronic device 108, such as a wideband signal and/or an ultra-wideband signal. In this way, a location at which the user performs certain user interactions may be determined (e.g., the location at which the user was when he/she provided user input associated with selecting content for output (such as navigating through content items or selecting a profile page for a content item), but prior to initiating output of the content). Thus, the micro-location data may be used to determine whether the second condition has been met (e.g., whether the electronic device 102 is in the same room as the electronic device 108 or within a predefined distance of the electronic device 108).
Further, a machine learning model may be used to determine whether the first condition and/or the second condition has been satisfied. For example, a machine learning model may be generated and/or trained with input parameters from a set of users, including but not limited to: previous user interactions received with respect to the media player application; user activity indicating a degree to which an interface element provided to a user for outputting content to a proximity device is used by and/or helpful to the user; a playback state; the type of content being played; the date/time the content was played (e.g., time of day); an elapsed time since playback (e.g., relative to electronic device 102 and/or electronic device 108); micro-location data (e.g., a room where the content is played); and/or the person initiating playback of the content. In one or more implementations, a machine learning model may be trained using interaction data associated with a set of users.
After training, the machine learning model may be configured to receive similar input parameters in real-time. Based on the input parameters, the machine learning model may be configured to output an indication of whether the first condition and/or the second condition has been satisfied.
The electronic device 102 provides, on the electronic device 102, an interface element for outputting content on the electronic device 108 based on determining that the first condition and the second condition have been satisfied (e.g., in conjunction with output from the machine learning model) (806).
In one or more implementations, the interface element may appear as a list from which the user may select the electronic device 108 from a plurality of devices. For example, the first condition and the second condition may have been satisfied with respect to electronic device 108 and one or more other devices (e.g., electronic device 110). In this case, the electronic device 102 may present selectable options for the user to select a proximity device (e.g., electronic device 108 or electronic device 110) for outputting the content. For example, such selectable options may be displayed in a manner similar to options 408 and 410 of fig. 4 (e.g., where option 408 is used to select a smart speaker, such as electronic device 110, and option 410 is used to select a digital media player, such as electronic device 108). Upon selection of one of these options, output of the content may be initiated on the selected electronic device (e.g., electronic device 108 or 110).
Alternatively or in addition, the interface element may provide for selection of simply outputting content (e.g., where the user does not select a particular device from a plurality of devices), and the electronic device 102 may automatically select the electronic device 108 for output. For example, the user may have previously selected (e.g., a predefined number of times) electronic device 108 instead of electronic device 110 for outputting content. The predefined number may correspond to a static value and/or may be based on output from a machine learning model (e.g., that has been trained with a corresponding data set). In such cases, the interface element may correspond to presenting (e.g., or remaining displayed if presented) the following: a list of content items, wherein selection of a content item will result in playback of the content item; or an interface in which the content items are presented individually and available for playback (e.g., by clicking a "play" button within the profile/summary). Upon selection of one of the options, output of the content may be initiated on the electronic device 108 (e.g., where it is automatically determined that the electronic device 108 is selected as the output device as discussed above).
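The automatic-selection behavior, in which the system falls back to the device the user has chosen most often, might look like the following (the threshold value and names are illustrative assumptions):

```python
from collections import Counter

AUTO_SELECT_THRESHOLD = 3  # illustrative "predefined number" of prior selections

def auto_select_device(selection_history: list):
    """Return a device to auto-select if one has been chosen often enough."""
    if not selection_history:
        return None
    device, count = Counter(selection_history).most_common(1)[0]
    return device if count >= AUTO_SELECT_THRESHOLD else None
```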
With respect to outputting content, the content may be provided by the electronic device 102 to the electronic device 108 (e.g., where the content is wirelessly streamed from the electronic device 102 to the electronic device 108). Alternatively or in addition, content output on the electronic device 108 can be provided to the electronic device 108 by a device other than the electronic device 102 (e.g., where the server 114 implements a cloud-based service for providing media content to the electronic device 108, e.g., in association with a user account).
As described above with respect to the description of fig. 3, while content is being output on the electronic device 108, the subject system can provide for surfacing (e.g., on the electronic device 102) a set of controls for remotely controlling the output of content on the electronic device 108. In one or more implementations, the appearance of such controls may be based on the satisfaction of predefined conditions, as described herein with respect to fig. 3-7.
In one or more implementations, in the event that content is being output on electronic device 102 or electronic device 108 and the relative positioning of the devices satisfies a predefined condition (e.g., the devices are in the same room and/or within a predefined distance of each other), the subject system can provide for surfacing an interface element for switching output of the content to another device. For example, if a predefined condition with respect to relative positioning is satisfied and the electronic device 102 has locally output content, the electronic device 102 may display an interface element for switching output from the electronic device 102 to the electronic device 108. In another example, if a predefined condition with respect to relative positioning is satisfied and the electronic device 108 is already outputting content, the electronic device 102 can display an interface element for switching output from the electronic device 108 to the electronic device 102.
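The hand-off rule described in this paragraph reduces to a simple three-way decision (sketch only; the option names are invented for illustration):

```python
def handoff_option(local_outputting: bool, remote_outputting: bool,
                   proximity_ok: bool):
    """Which switch-output element (if any) to surface, per the relative-position rule."""
    if not proximity_ok:
        return None                 # devices too far apart: no element shown
    if local_outputting:
        return "switch-to-remote"   # e.g., device 102 -> device 108
    if remote_outputting:
        return "switch-to-local"    # e.g., device 108 -> device 102
    return None
```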
As described above, one aspect of the subject technology is the collection and use of data available from specific and legitimate sources for output of media content. The present disclosure contemplates that, in some instances, the collected data may include personal information data that uniquely identifies or may be used to identify a particular person. Such personal information data may include demographic data, location-based data, online identifiers, phone numbers, email addresses, home addresses, data or records related to the user's health or fitness level (e.g., vital signs measurements, medication information, exercise information), date of birth, or any other personal information.
The present disclosure recognizes that the use of such personal information data in the present technology may be useful to benefit the user. For example, the personal information data may be used to output media content. Accordingly, using such personal information data may facilitate transactions (e.g., online transactions). In addition, the present disclosure also contemplates other uses for which personal information data is beneficial to a user. For example, health and fitness data may be used according to a user's preferences to provide insight into their overall health status, or may be used as positive feedback to individuals using technology to pursue a health goal.
The present disclosure contemplates that entities responsible for collecting, analyzing, disclosing, transmitting, storing, or otherwise using such personal information data will comply with established privacy policies and/or privacy practices. In particular, it would be desirable for such entities to implement and consistently apply privacy practices generally recognized as meeting or exceeding industry or government requirements to maintain user privacy. Such information regarding usage of personal data should be prominently and conveniently accessible to users and should be updated as data is collected and/or used. The user's personal information should be collected for legitimate use only. In addition, such collection/sharing should only occur after receiving user consent or other legal grounds as set forth in applicable law. Furthermore, such entities should consider taking any necessary steps to defend and secure access to such personal information data, and to ensure that others who have access to the personal information data comply with their privacy policies and procedures. In addition, such entities may subject themselves to third party evaluations to prove compliance with widely accepted privacy policies and practices. In addition, policies and practices should be adapted to the particular type of personal information data being collected and/or accessed, and adapted to applicable laws and standards, including jurisdiction-specific considerations that may serve to impose higher standards. For example, in the united states, the collection or acquisition of certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); while other countries may have health data subject to other regulations and policies and should be treated accordingly.
Regardless of the foregoing, the present disclosure also contemplates embodiments in which a user selectively prevents use of, or access to, personal information data. That is, the present disclosure contemplates that hardware elements and/or software elements may be provided to prevent or block access to such personal information data. For example, in the case of outputting media content, the subject technology may be configured to allow a user to "opt in" or "opt out" of participation in the collection of personal information data during registration for the service or at any time thereafter. In addition to providing "opt in" and "opt out" options, the present disclosure contemplates providing notifications related to accessing or using personal information. For example, the user may be notified that their personal information data is to be accessed when the application is downloaded, and then be reminded again just before the personal information data is accessed by the application.
Further, it is the intent of the present disclosure that personal information data should be managed and processed so as to minimize the risk of inadvertent or unauthorized access or use. The risk can be minimized by limiting data collection and deleting data once it is no longer needed. In addition, and when applicable, including in certain health-related applications, data de-identification may be used to protect the privacy of the user. De-identification may be facilitated, where appropriate, by removing identifiers, controlling the amount or specificity of stored data (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods such as differential privacy.
Thus, while the present disclosure broadly covers the use of personal information data to implement one or more of the various disclosed embodiments, the present disclosure also contemplates that various embodiments may be implemented without the need to access such personal information data. That is, various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data.
Fig. 9 illustrates an electronic system 900 that may be utilized to implement one or more implementations of the subject technology. The electronic system 900 may be, and/or may be part of, one or more of the electronic devices 102-110 shown in fig. 1 and/or the server 114. Electronic system 900 may include various types of computer-readable media and interfaces for various other types of computer-readable media. Electronic system 900 includes a bus 908, one or more processing units 912, system memory 904 (and/or cache), ROM 910, persistent storage 902, an input device interface 914, an output device interface 906, and one or more network interfaces 916, or subsets and variations thereof.
Bus 908 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of electronic system 900. In one or more implementations, bus 908 communicatively connects one or more processing units 912 with ROM 910, system memory 904, and persistent storage 902. One or more processing units 912 retrieve instructions to be executed and data to be processed from these various memory units in order to perform the processes of the subject disclosure. In different implementations, one or more processing units 912 may be a single processor or a multi-core processor.
ROM 910 stores static data and instructions for the one or more processing units 912, as well as other modules of the electronic system 900. Persistent storage 902, on the other hand, may be a read-write memory device. Persistent storage 902 may be a non-volatile memory unit that stores instructions and data even when electronic system 900 is turned off. In one or more implementations, a mass storage device (such as a magnetic disk or optical disc and its corresponding magnetic disk drive) may be used as persistent storage 902.
In one or more implementations, a removable storage device (such as a floppy disk, a flash drive, and their corresponding disk drives) may be used as persistent storage 902. Like the persistent storage 902, the system memory 904 may be a read-write memory device. However, unlike persistent storage 902, system memory 904 may be a volatile read-and-write memory, such as a random access memory. System memory 904 may store any of the instructions and data that may be needed by one or more processing units 912 during runtime. In one or more implementations, the processes of the subject disclosure are stored in system memory 904, persistent storage 902, and/or ROM 910. One or more processing units 912 retrieve instructions to be executed and data to be processed from these various memory units in order to perform one or more embodied processes.
Bus 908 is also connected to an input device interface 914 and an output device interface 906. The input device interface 914 enables a user to communicate information and select commands to the electronic system 900. Input devices that may be used with input device interface 914 may include, for example, an alphanumeric keyboard and a pointing device (also referred to as a "cursor control device"). The output device interface 906 may, for example, enable display of images generated by the electronic system 900. Output devices that may be used with output device interface 906 may include, for example, printers and display devices, such as Liquid Crystal Displays (LCDs), Light Emitting Diode (LED) displays, Organic Light Emitting Diode (OLED) displays, flexible displays, flat panel displays, solid state displays, projectors, or any other device for outputting information. One or more implementations may include a device that acts as both an input device and an output device, such as a touch screen. In these implementations, the feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
Finally, as shown in fig. 9, bus 908 also couples electronic system 900 to one or more networks and/or to one or more network nodes, such as server 114 shown in fig. 1, through one or more network interfaces 916. In this manner, electronic system 900 may be part of a computer network, such as a LAN, wide area network ("WAN"), or intranet, or may be part of a network of networks, such as the internet. Any or all of the components of electronic system 900 may be used with the subject disclosure.
Implementations within the scope of the present disclosure may be realized, in part or in whole, by a tangible computer-readable storage medium (or multiple tangible computer-readable storage media of one or more types) having one or more instructions written thereon. The tangible computer readable storage medium may also be non-transitory in nature.
A computer-readable storage medium may be any storage medium that can be read, written, or otherwise accessed by a general purpose or special purpose computing device and that includes any processing electronics and/or processing circuitry capable of executing instructions. For example, without limitation, the computer-readable medium may include any volatile semiconductor memory, such as RAM, DRAM, SRAM, T-RAM, Z-RAM, and TTRAM. The computer readable medium may also include any non-volatile semiconductor memory, such as ROM, PROM, EPROM, EEPROM, NVRAM, flash memory, nvSRAM, FeRAM, FeTRAM, MRAM, PRAM, CBRAM, SONOS, RRAM, NRAM, racetrack memory, FJG, and Millipede memory.
Further, the computer-readable storage medium may include any non-semiconductor memory, such as optical disk storage, magnetic tape, other magnetic storage devices, or any other medium capable of storing one or more instructions. In one or more implementations, the tangible computer-readable storage medium may be directly coupled to the computing device, while in other implementations, the tangible computer-readable storage medium may be indirectly coupled to the computing device, e.g., via one or more wired connections, one or more wireless connections, or any combination thereof.
The instructions may be directly executable or may be used to develop executable instructions. For example, the instructions may be implemented as executable or non-executable machine code, or may be implemented as high-level language instructions that may be compiled to produce executable or non-executable machine code. Further, instructions may also be implemented as, or may include, data. Computer-executable instructions may also be organized in any format, including routines, subroutines, programs, data structures, objects, modules, applications, applets, functions, and the like. As those skilled in the art will recognize, details including, but not limited to, number, structure, sequence, and organization of instructions may vary significantly without changing the underlying logic, function, processing, and output.
Although the above discussion has primarily referred to microprocessor or multi-core processors executing software, one or more implementations are performed by one or more integrated circuits, such as ASICs or FPGAs. In one or more implementations, such integrated circuits execute instructions stored on the circuit itself.
Those skilled in the art will appreciate that the various illustrative blocks, modules, elements, components, methods, and algorithms described herein may be implemented as electronic hardware, computer software, or combinations of both. To illustrate this interchangeability of hardware and software, various illustrative blocks, modules, elements, components, methods, and algorithms have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application. The various components and blocks may be arranged differently (e.g., arranged in a different order, or divided in a different manner) without departing from the scope of the subject technology.
It is to be understood that the specific order or hierarchy of blocks in the processes disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes may be rearranged, or that not all illustrated blocks be performed. Some of the blocks may be performed simultaneously. In one or more implementations, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
As used in this specification and any claims of this patent application, the terms "base station," "receiver," "computer," "server," "processor," and "memory" all refer to electronic or other technical devices. These terms exclude a person or group of persons. For the purposes of this specification, the term "display" or "displaying" means displaying on an electronic device.
As used herein, the phrase "at least one of," preceding a series of items, with the term "and" or "or" separating any of the items, modifies the list as a whole rather than each member of the list (i.e., each item). The phrase "at least one of" does not require selection of at least one of each item listed; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, the phrases "at least one of A, B, and C" or "at least one of A, B, or C" each refer to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C.
The predicate words "configured to," "operable to," and "programmed to" do not imply any particular tangible or intangible modification of a subject but, rather, are intended to be used interchangeably. In one or more implementations, a processor configured to monitor and control an operation or a component may also mean the processor being programmed to monitor and control the operation or the processor being operable to monitor and control the operation. Likewise, a processor configured to execute code can be construed as a processor programmed to execute code or operable to execute code.
Phrases such as an aspect, the aspect, another aspect, some aspects, one or more aspects, a specific implementation, the specific implementation, another specific implementation, some specific implementation, one or more specific implementations, embodiments, the embodiment, another embodiment, some embodiments, one or more embodiments, configurations, the configuration, other configurations, some configurations, one or more configurations, the subject technology, the disclosure, the present disclosure, other variations thereof, and the like are for convenience and do not imply that a disclosure relating to such phrase or phrases is essential to the subject technology, nor that such disclosure applies to all configurations of the subject technology. Disclosure relating to such one or more phrases may apply to all configurations or one or more configurations. Disclosure relating to such one or more phrases may provide one or more examples. Phrases such as an aspect or some aspects may refer to one or more aspects and vice versa and this applies similarly to the other preceding phrases.
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, to the extent that the term "include," "have," or the like is used in the description or the claims, such term is intended to be inclusive in a manner similar to the term "comprise" as "comprise" is interpreted when employed as a transitional word in a claim.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase "means for" or, in the case of a method claim, the element is recited using the phrase "step for."
The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." The term "some" refers to one or more unless specifically stated otherwise. Pronouns in the masculine (e.g., his) include the feminine and neuter gender (e.g., her and its) and vice versa. Headings and subheadings, if any, are used for convenience only and do not limit the subject disclosure.

Claims (32)

1. A method, comprising:
determining that a user interaction with respect to a first device satisfies a first condition, the user interaction being associated with a second device;
determining that a state of output of content on the second device satisfies a second condition independent of determining that the user interaction satisfies the first condition;
in accordance with a determination that the first condition and the second condition have been satisfied, providing, on the first device, a set of controls for controlling output of the content on the second device; and
in accordance with a determination that the first condition and the second condition have not been met, forgoing providing the set of controls on the first device for controlling output of the content on the second device.
2. The method of claim 1, wherein the content output on the second device is not provided to the second device by the first device.
3. The method of claim 1, wherein the set of controls is provided within a lock screen of the first device.
4. The method of claim 1, wherein a machine learning model is used to determine that the user interaction with respect to the first device satisfies the first condition.
5. The method of claim 1, wherein the user interaction comprises a user input received on the first device, the user input being for initiating output of the content on the second device.
6. The method of claim 5, wherein the user input is received within at least one of a remote control application, a virtual assistant application, or a control application running on the first device.
7. The method of claim 1, wherein the positioning of the first device relative to the second device is used to determine that the user interaction relative to the first device satisfies the first condition.
8. The method of claim 1, wherein the first device and the second device are connected to a local area network.
9. The method of claim 8, further comprising:
removing the set of controls on the first device based on determining that the first device is no longer connected to the local area network.
10. The method of claim 1, further comprising:
removing the set of controls on the first device based on determining that output of the content on the second device has terminated.
11. The method of claim 1, wherein determining that the state satisfies the second condition comprises:
determining that the content is currently being output on the second device.
12. The method of claim 1, wherein determining that the state satisfies the second condition comprises:
determining that output of the content on the second device has been paused for less than a predefined amount of time.
13. An apparatus, comprising:
at least one processor; and
a memory comprising instructions that, when executed by the at least one processor, cause the at least one processor to:
determining that a user interaction with respect to the device satisfies a first condition, the user interaction being associated with a second device;
determining that a state of output of content on the second device satisfies a second condition independent of determining that the user interaction satisfies the first condition;
in accordance with a determination that the first condition and the second condition have been met, providing, on the device, a set of controls for controlling output of the content on the second device; and
in accordance with a determination that the first condition and the second condition have not been met, forgoing providing, on the device, the set of controls for controlling output of the content on the second device.
14. The device of claim 13, wherein the content output on the second device is not provided by the device to the second device.
15. The device of claim 13, wherein the set of controls is provided within a lock screen of the device.
16. The device of claim 13, wherein a machine learning model is used to determine that the user interaction with respect to the device satisfies the first condition.
17. The device of claim 13, wherein the user interaction comprises a user input received on the device, the user input being for initiating output of the content on the second device.
18. The device of claim 17, wherein the user input is received within at least one of a remote control application, a virtual assistant application, or a control application running on the device.
19. The device of claim 13, wherein the positioning of the device relative to the second device is used to determine that the user interaction relative to the device satisfies the first condition.
20. A computer program product comprising code stored in a non-transitory computer-readable storage medium, the code comprising:
code for determining that a user activity satisfies a first condition, the user activity associated with a first device;
code for determining that a state of output of content on a second device satisfies a second condition independent of determining that the user activity satisfies the first condition;
code for providing, on the first device, a set of controls for controlling output of the content on the second device in accordance with a determination that the first condition and the second condition have been satisfied; and
code for, in accordance with a determination that the first condition and the second condition have not been met, forgoing providing the set of controls on the first device for controlling output of the content on the second device.
21. The computer program product of claim 20, wherein the user activity comprises an interaction with at least one of the second device or a third device.
22. The computer program product of claim 20, wherein the content is stored locally on the first device or the content is provided to the first device by a server.
23. A method, comprising:
determining that a user input received on a first device satisfies a first condition, the user input being associated with content selected for output and being independent of initiating output of the content;
determining that a position of the first device relative to a second device satisfies a second condition; and
providing, on the first device, an interface element for outputting the content on the second device based on determining that the first condition and the second condition have been satisfied.
24. The method of claim 23, further comprising:
receiving a user selection of the interface element; and
facilitating output of the content on the second device in response to receiving the user selection.
25. The method of claim 24, wherein the content output on the second device is provided to the second device by the first device.
26. The method of claim 24, wherein the content output on the second device is provided to the second device by a device other than the first device.
27. The method of claim 23, wherein a machine learning model is used to determine at least one of: whether the user input received on the first device satisfies the first condition or whether the positioning of the first device relative to the second device satisfies the second condition.
28. The method of claim 23, wherein the interface element provides for selecting the second device from a plurality of devices for outputting the content.
29. The method of claim 23, wherein the interface element provides a selection for outputting the content, the method further comprising:
automatically selecting the second device from a plurality of devices for outputting the content.
30. The method of claim 29, wherein automatically selecting the second device is based on a previous user interaction corresponding to selecting the second device from the plurality of devices.
31. The method of claim 23, wherein the first device and the second device are connected to a local area network.
32. The method of claim 23, wherein the user input is received within a media player application running on the first device.
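The two-condition gating recited in claims 1-12 — provide the set of remote controls on the first device only when both the user-interaction condition and the output-state condition are satisfied, and otherwise forgo providing it — can be sketched as follows. This is a minimal, hypothetical Python illustration, not the claimed implementation: all names (`OutputState`, `controls_to_show`, and the 300-second pause window standing in for the claim 12 "predefined amount of time") are assumptions for illustration only.

```python
# Hypothetical sketch of the gating logic in claims 1-12; every identifier
# and threshold here is an illustrative assumption, not part of the claims.
import time
from dataclasses import dataclass
from typing import Optional

PAUSE_WINDOW_SECONDS = 300.0  # assumed "predefined amount of time" (claim 12)


@dataclass
class OutputState:
    """State of content output on the second device."""
    is_playing: bool
    paused_at: Optional[float] = None  # epoch seconds when playback paused


def first_condition_met(interaction_initiated_output: bool,
                        devices_on_same_lan: bool) -> bool:
    # First condition (cf. claims 5 and 8): the user interaction on the
    # first device initiated output on the second device, and both devices
    # are connected to the same local area network.
    return interaction_initiated_output and devices_on_same_lan


def second_condition_met(state: OutputState, now: float) -> bool:
    # Second condition (cf. claims 11 and 12): the content is currently
    # being output, or output has been paused for less than the
    # predefined amount of time.
    if state.is_playing:
        return True
    if state.paused_at is not None:
        return (now - state.paused_at) < PAUSE_WINDOW_SECONDS
    return False


def controls_to_show(interaction_initiated_output: bool,
                     devices_on_same_lan: bool,
                     state: OutputState,
                     now: Optional[float] = None) -> bool:
    # Claim 1: provide the control set only when both conditions hold;
    # otherwise forgo providing it.
    now = time.time() if now is None else now
    return (first_condition_met(interaction_initiated_output,
                                devices_on_same_lan)
            and second_condition_met(state, now))
```

Under this sketch, losing the LAN connection (claim 9) or termination of output (claim 10) would simply make the corresponding condition evaluate false on the next check, removing the control set.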
CN202080010242.2A 2019-03-28 2020-02-21 Output of content on a remote control device Pending CN113348438A (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201962825618P 2019-03-28 2019-03-28
US62/825,618 2019-03-28
US16/705,073 US20200233573A1 (en) 2019-01-22 2019-12-05 Remotely controlling the output of content on a device
US16/705,073 2019-12-05
PCT/US2020/019336 WO2020154747A1 (en) 2019-01-22 2020-02-21 Remotely controlling the output of content on a device

Publications (1)

Publication Number Publication Date
CN113348438A true CN113348438A (en) 2021-09-03

Family

ID=77468664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080010242.2A Pending CN113348438A (en) 2019-03-28 2020-02-21 Output of content on a remote control device

Country Status (1)

Country Link
CN (1) CN113348438A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160378269A1 (en) * 2015-06-24 2016-12-29 Spotify Ab Method and an electronic device for performing playback of streamed media including related media content
CN106415476A (en) * 2014-06-24 2017-02-15 苹果公司 Input device and user interface interactions
CN106462617A (en) * 2014-06-30 2017-02-22 苹果公司 Intelligent automated assistant for tv user interactions
CN108882159A (en) * 2017-05-16 2018-11-23 苹果公司 Transfer plays queue between devices
CN109314795A (en) * 2016-06-12 2019-02-05 苹果公司 Equipment, method and graphic user interface for media playback



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination