WO2023113945A1 - Generating notifications that provide context for predicted content interruptions - Google Patents

Generating notifications that provide context for predicted content interruptions

Info

Publication number
WO2023113945A1
Authority
WO
WIPO (PCT)
Prior art keywords
event
notification
application
rendered
user
Application number
PCT/US2022/049444
Other languages
French (fr)
Inventor
Cliff Kuang
Jesse Kaczmarek
Andy Gugel
Jonathan Lee
Original Assignee
Google LLC
Priority claimed from US17/572,433 (US11523190B1)
Application filed by Google LLC
Publication of WO2023113945A1

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06F: ELECTRIC DIGITAL DATA PROCESSING
          • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
            • G06F 3/16: Sound input; Sound output
              • G06F 3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
          • G06F 9/00: Arrangements for program control, e.g. control units
            • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
              • G06F 9/44: Arrangements for executing specific programs
                • G06F 9/451: Execution arrangements for user interfaces
                  • G06F 9/453: Help systems
    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
            • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; operations thereof
              • H04N 21/45: Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
                • H04N 21/466: Learning process for intelligent management, e.g. learning user preferences for recommending movies
              • H04N 21/47: End-user applications
                • H04N 21/472: End-user interface for requesting content, additional data or services; end-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
                  • H04N 21/47217: … for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
                • H04N 21/478: Supplemental services, e.g. displaying phone caller identification, shopping application
                • H04N 21/488: Data services, e.g. news ticker
                  • H04N 21/4882: … for displaying messages, e.g. warnings, reminders

Definitions

  • the user may attempt to retrieve details about the incoming phone call from another device (e.g., their cellular phone). As a result, this can waste resources — particularly when there are no additional details available at the other device, except for a phone number and/or a name.
  • the user may not instantly recall any relevant context for the doorbell activation. As a result, the user may bypass stepping away from their television to answer their door. For instance, when the doorbell activation relates to a delivery that requires the user to provide a signature, resources can be wasted during duplicative attempts to deliver packages.
  • Implementations disclosed herein relate to providing, during rendering of streaming content, notification(s) of an event. Some of those implementations relate to preemptively rendering a notification to reflect a predicted time, within the streaming content, that the event will occur. Some of those implementations additionally or alternatively relate to utilizing contextual data (e.g., application data) in determining whether to render a notification, preemptively or responsive to occurrence of the event, and/or in generating content, for the notification, that provides further details for the event and that may not be directly derivable from the event itself.
  • Some implementations relate to one or more applications that can provide preemptive notifications for events that may interrupt rendering of streaming content.
  • an application such as an automated assistant application can process contextual data to determine whether a particular event is predicted to interrupt consumption of content by a user.
  • the contextual data can include, for example, application data from one or more applications that a user has interacted with prior to, or during, the rendering of certain content.
  • the user can interact with a food delivery application in furtherance of ordering a food delivery to their home while they are watching a movie.
  • Application data associated with the food delivery application can provide an estimation of a time for arrival of the food delivery.
  • the application data can include a push notification from the food delivery application and/or an email generated based on the interaction with the food delivery application.
  • the automated assistant can determine, based on this time for arrival, that the time for arrival may occur while the user is watching a movie on their smart television (e.g., a television that includes one or more internet applications). Based on this determination, the automated assistant can generate the notification and/or can cause the notification to be rendered over a timeline (e.g., a scrubber user interface (UI)) corresponding to content streaming at the smart television.
  • the notification can include content that provides further details about the food delivery and/or the food delivery application, thereby putting the user on notice of the type of interruption that may occur at a certain point during the content stream.
  • an icon utilized by the food delivery application can be rendered over the scrubber UI of a movie as a timestamp and/or callout “bubble.”
  • the notification can be rendered with textual content and/or graphical content that characterizes the type of interruption that can occur (e.g., an image of a food delivery person that has been captured by a smart doorbell camera or other internet of things (IoT) device).
  • the notification can be rendered at a location on the scrubber UI that is subsequent to a current instance of time during which a portion of the content stream is being rendered. For example, when the user initially turns on their smart television, a streaming application can start rendering a movie from an instance of time in the movie where the user had previously stopped watching the movie the night before. At this instance of time, the notification can be rendered at a location on the scrubber UI (or other timeline UI) that corresponds to the estimation of the time for arrival of the food delivery.
  • the notification can be rendered at a subsequent instance of time and/or a range of time (e.g., on a point in the timeline corresponding to 47:40 or 45:00-50:00).
  • the location of the notification at the scrubber UI can be adjusted according to the operation.
  • the notification can be rendered in response to the user adjusting playback of content, thereby giving an update for the predicted time that the user can expect the predicted event to occur relative to a current time within the rendering duration.
  • the notification can be rendered with one or more frames of the content, and the one or more frames of the content can correspond to a point in time, or a range of time, when the predicted event is expected to occur and/or before the predicted event is expected to occur.
  • a later frame in a movie can be rendered for the user, with prior permission from the user, during an earlier portion of the movie, when the user is being put on notice that the predicted event may interrupt a later portion of the movie. Thereafter, when the later frame of the movie is rendered during playback of the movie, a recollection of the notification can be triggered, thereby allowing the user to better anticipate the interruption.
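
The preemptive placement described in the bullets above reduces to a small calculation: the predicted wall-clock time of the event, offset against the wall-clock time of the current play-point, gives a position (or range) on the content timeline. The following is a minimal sketch of that mapping; the names (`PlaybackState`, `timeline_marker`) are illustrative assumptions rather than anything from the patent, and 1x playback speed is assumed between samples.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, Tuple


@dataclass
class PlaybackState:
    position_s: float      # current play-point within the content, in seconds
    duration_s: float      # overall rendering duration of the content
    sampled_at: datetime   # wall-clock time at which position_s was sampled


def timeline_marker(playback: PlaybackState,
                    eta_start: datetime,
                    eta_end: Optional[datetime] = None) -> Optional[Tuple[float, float]]:
    """Map a predicted wall-clock event time (or window) onto the scrubber UI.

    Returns (start_s, end_s) in content-timeline seconds, or None when the
    event is not expected to fall within the remaining rendering duration.
    The caller re-invokes this after any seek or pause, so the marker is
    adjusted relative to the new "current" time.
    """
    eta_end = eta_end or eta_start
    start_s = playback.position_s + (eta_start - playback.sampled_at).total_seconds()
    end_s = playback.position_s + (eta_end - playback.sampled_at).total_seconds()
    if end_s < playback.position_s or start_s > playback.duration_s:
        return None  # event not predicted to interrupt this rendering duration
    return (max(start_s, playback.position_s), min(end_s, playback.duration_s))
```

For example, with playback 40:00 into a two-hour movie and an arrival predicted 7 minutes 40 seconds from now, the marker lands at 47:40, matching the example above; a predicted window maps to a range such as 45:00-50:00. Re-running the mapping after the user adjusts playback implements the marker update described earlier.
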
  • implementations can cause rendering, within an interface of a playback application via which playback of content is occurring, of a preemptive notification of an event that is yet to occur and that is determined based on user interaction with a separate application.
  • the notification of the event can be rendered in conjunction with a timeline, of the interface, that illustrates an overall duration of the content and a current point of playback of the content.
  • the rendering of the notification in conjunction with the timeline can reflect a currently anticipated time or time range, during yet to occur playback of the content, at which the event is anticipated to occur and/or can reflect detail(s) of the event.
  • a user viewing the playback of the content can quickly ascertain, through a single interface and in advance of an event occurring, when the event is likely to occur relative to playback of the content.
  • This obviates the need for utilization of a separate interface (separate from the interface via which playback of content is occurring) and/or utilization of a separate device (separate from the device via which playback of content is occurring) in ascertaining when the event is likely to occur relative to playback of the content and/or in ascertaining details of the event when it does occur during playback of the content.
  • certain events can result in a notification being rendered with content that predicts the context for the event.
  • a notification being rendered in response to occurrence of an event and, optionally, without also first preemptively rendering a notification for the event.
  • a user can have a variety of different smart devices within an environment, such as their home, and those devices can include a computerized doorbell.
  • the doorbell can be activated by various neighborhood children more frequently than the doorbell is activated by, for example, someone delivering an important order placed by the user.
  • the automated assistant, or other application, can assist the user with providing context for such events (e.g., a doorbell “ring”) by processing contextual data to predict the occurrence of certain events (with prior permission from the user).
  • an event notification, such as a doorbell ring, that is predicted to be associated with a particular user in the home can be directed to that particular user instead of interrupting all users within the home.
  • the event notification can be rendered with the content that predicts the context for the event, enabling a user that is consuming (e.g., viewing) the notification to quickly ascertain whether the event is one that deserves further attention by the user or is one that can be quickly dismissed by the user.
  • the automated assistant can predict, based on application data and with prior permission from the user, that the user has ordered an item from a website and that the item will be delivered between 2:00 p.m. and 3:00 p.m. of a certain date.
  • a notification can be rendered with content characterizing the delivery, the website, and/or the product that was ordered.
  • the notification can be rendered with an image or text that reflects the product and/or with an icon that reflects the website and/or a delivery service for the delivery.
  • the notification can be rendered with such content characterizing the delivery based on the determination, from the application data, that the item will be delivered between 2:00 p.m. and 3:00 p.m. and based on the event (the doorbell ringing) occurring during that time window. Further, such content can be generated based on the application data. For example, the image or text that reflects the product can be extracted from the application data and/or the icon can be selected based on the application data being mapped to a corresponding website and/or delivery service. In these and other manners, application data can be utilized to generate a particularized notification and map that particularized notification to a predicted event (e.g., a doorbell ring) and predicted temporal condition(s) for the predicted event.
  • the content of the particularized notification includes content that is not directly derivable from the event itself. For example, even if the computerized doorbell includes a camera, the website, the delivery service, and/or the product(s) (e.g., within a nondescript box) will not be directly derivable from processing image(s) from the camera.
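
As a hedged sketch of the matching and enrichment logic just described (all names and fields below are hypothetical): an event that occurs inside a time window predicted from application data inherits application-derived content; otherwise the notification falls back to what is directly derivable from the event itself.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List


@dataclass
class PredictedEvent:
    """Derived from application data (e.g., a delivery confirmation)."""
    event_type: str         # e.g., "doorbell_ring"
    window_start: datetime  # e.g., 2:00 p.m.
    window_end: datetime    # e.g., 3:00 p.m.
    app_name: str           # application the prediction came from
    app_icon_uri: str       # icon to render with the notification
    detail_text: str        # e.g., the product that was ordered


def enrich_notification(event_type: str,
                        occurred_at: datetime,
                        predictions: List[PredictedEvent]) -> dict:
    """Attach application-derived context to an event that just occurred.

    A match requires both the event type and the predicted time window to
    line up; otherwise the notification is built from the event alone.
    """
    for p in predictions:
        if p.event_type == event_type and p.window_start <= occurred_at <= p.window_end:
            return {"title": p.app_name, "icon": p.app_icon_uri, "body": p.detail_text}
    return {"title": "Doorbell", "icon": "doorbell_default",
            "body": "Someone is at the door."}
```
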
  • an automated assistant can identify a particular computing device that the user is interacting with when the doorbell rings and cause that particular computing device to render the notification — without the doorbell indiscriminately causing all devices within the home to render a doorbell notification.
  • the automated assistant can, in response to determining playback of media content is occurring at a particular computing device, cause the notification to be rendered within an interface of the application providing the playback of media content, or overlaid on the interface, and can suppress provisioning of the notification at any other computing device(s).
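
A minimal sketch of this routing behavior, under the assumption that the assistant tracks which devices are rendering media and for whom; `send_to_device` is a hypothetical stand-in for whatever transport delivers the notification:

```python
def send_to_device(device_id: str, notification: dict) -> None:
    # Hypothetical transport (e.g., a push over the home Wi-Fi network).
    print(f"-> {device_id}: {notification['title']}")


def route_notification(notification: dict, devices: list) -> list:
    """Render the notification only where the associated user is watching.

    `devices` holds dicts like:
      {"id": "living_room_tv", "playing_media": True, "active_user": "user_a"}
    Devices not rendering media for that user are suppressed.
    """
    targets = [d for d in devices
               if d.get("playing_media") and d.get("active_user") == notification.get("user")]
    if not targets:
        targets = devices  # no playback detected anywhere: notify all devices
    for d in targets:
        send_to_device(d["id"], notification)
    return [d["id"] for d in targets]


devices = [
    {"id": "living_room_tv", "playing_media": True, "active_user": "user_a"},
    {"id": "kitchen_display", "playing_media": False, "active_user": "user_a"},
]
route_notification({"title": "Help App: contractor arriving", "user": "user_a"}, devices)
# Only living_room_tv receives the notification; kitchen_display is suppressed.
```
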
  • implementations can include a non-transitory computer readable storage medium storing instructions executable by one or more processors (e.g., central processing unit(s) (CPU(s)), graphics processing unit(s) (GPU(s)), and/or tensor processing unit(s) (TPU(s))) to perform a method such as one or more of the methods described above and/or elsewhere herein.
  • implementations can include a system of one or more computers that include one or more processors operable to execute stored instructions to perform a method such as one or more of the methods described above and/or elsewhere herein.
  • FIG. 1A, FIG. 1B, FIG. 1C, and FIG. 1D illustrate views of a user that can view a stream of content and receive notifications for a predicted event.
  • FIG. 2 illustrates a system that can render notifications for events that can interrupt content streams being viewed by a user, and/or can provide predictive context for the event notifications.
  • FIG. 3 illustrates a method for rendering a notification over streaming content to put a user on notice of the context in which an interruption to the content may occur.
  • FIG. 4 is a block diagram of an example computer system.
  • FIG. 1A, FIG. 1B, FIG. 1C, and FIG. 1D illustrate a view 100, a view 120, a view 140, and a view 160 of a user 102 that can view a stream of content and receive a notification for a predicted event.
  • the notification can include content that indicates the type of event and/or an application associated with the event.
  • the user 102 can be interacting with a contractor application 106 via a computing device 104 in furtherance of hiring a contractor to repair their dishwasher, as illustrated in view 100 of FIG. 1A.
  • the user 102 can use their hand 114 to select a selectable GUI element 110 rendered at a display interface 108 of the computing device 104 to cause the contractor application 106 to communicate with one or more other persons.
  • the one or more other persons can include a contractor, which can be associated with the application as a candidate contractor for the user 102 to hire.
  • the contractor can receive a notification via another computing device for confirming an appointment with the user 102. This appointment can be characterized by application data that is stored at the computing device 104 and/or is otherwise accessible to the computing device 104.
  • FIG. 1B illustrates a view 120 of the user 102 after the user 102 has interacted with the contractor application 106 to create the appointment with the contractor. Subsequent to creating the appointment, the user 102 can take an action 122 of watching a movie from where they last paused the movie.
  • the movie can be rendered at a smart television 126 (e.g., a television with network connectivity), which can provide access to one or more applications, such as an automated assistant application.
  • the automated assistant application and/or other application can determine whether an event is occurring or is predicted to occur, and selectively interrupt the streaming of content at the smart television 126.
  • the automated assistant can access various sources of data, such as the application data, to determine an estimated time of arrival for the contractor.
  • the automated assistant can process contextual data (e.g., image data captured by a standalone smart display 130) with prior permission from the user 102, to determine a context in which the content is streaming at the smart television 126 within an environment 128.
  • the automated assistant can determine, for example, that the user 102 is the person who is viewing the content rendered at the smart television 126.
  • a notification 148 can be rendered at a display interface 150 of the smart television 126, as illustrated in view 140 of FIG. 1C.
  • the notification 148 can be rendered at a point on a scrubber UI 152 that is subsequent to a current point 142 in time in which content is being rendered at the display interface 150.
  • the notification 148 can be rendered as a point in time and/or a range of time, as illustrated in FIG. 1C.
  • the notification 148 can be rendered between a first point 144 and a second point 146, which can represent a beginning and an end of an estimated range of time that an event will occur during the movie (e.g., a movie about the White House).
  • the notification 148 can be rendered with data that can provide context for the notification 148.
  • the notification 148 can include an identifier for the contractor application (i.e., “Help App”), and/or an image that can be selected based on the event that is predicted to occur (e.g., an image of the contractor that is expected to ring the smart doorbell and/or be viewable via a smart doorbell application).
  • a separate notification 162 can be rendered at the smart television 126 with an image captured by an internet of things (IoT) device, such as a smart doorbell 164.
  • the smart doorbell 164 can be located at an outer portion 168 of a home of the user 102, and can capture an image of a contractor 166 that has arrived at the home.
  • the contractor 166 can arrive at the appointment time generated using the contractor application 106, and, as a result, cause the notification 162 to be rendered with an indication of the contractor application 106.
  • the notification 162 can be rendered with a logo corresponding to the contractor application 106 and/or other image provided by the contractor application 106, as illustrated in view 160 of FIG. 1D.
  • a doorbell notification can be rendered with content associated with the contractor application 106 responsive to determining that the doorbell is activated within the range of time predicted by the automated assistant and/or other application. For example, when the doorbell rings between the first point 144 and the second point 146 (i.e., at the new “current” time), a notification 162 can be rendered with content (e.g., an image of the contractor 166) associated with the contractor application 106. In contrast, if the doorbell instead rings outside the range of time predicted by the automated assistant and/or other application, the doorbell notification would be rendered without content associated with the contractor application. This can be due to the doorbell being activated outside of the range of time.
  • the notification 162 can be rendered without pre-emptively providing a prior notification to the user 102.
  • FIG. 1D can represent a standalone instance in which an event occurs when the user 102 is viewing content, but the user 102 is not provided a notification prior to the event.
  • the notification 162 that is rendered can include content that can be based on one or more sources of data (e.g., application data) to provide context for the notification 162.
  • generating and/or rendering of a notification can be facilitated using one or more application programming interfaces (APIs).
  • for example, an application (e.g., an automated assistant) can communicate notification data, via an API, to another application that renders streaming content (e.g., a TV series).
  • the communicated notification data can include textual content and/or images to be included with a notification, as well as a particular time that the notification should be rendered and/or a time that the event is predicted to occur.
  • the application that receives this notification data via an API can then utilize the notification data to render the notification preemptively (according to the notification data) and/or at the predicted time for the event (at or near a time when the notification data is received).
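
One plausible shape for the notification data exchanged over such an API is sketched below; the payload fields are assumptions for illustration, not a published schema.

```python
import json
from dataclasses import asdict, dataclass
from typing import Optional


@dataclass
class NotificationPayload:
    """Notification data one application (e.g., an assistant) passes, via an
    API, to the application rendering the streaming content."""
    text: str                        # textual content for the notification
    image_uri: Optional[str]         # optional image/icon to render with it
    render_at_s: Optional[float]     # when to render preemptively, in content seconds
    predicted_event_at_s: float      # where on the timeline the event is predicted
    predicted_event_end_s: Optional[float] = None  # end of a predicted range, if any


payload = NotificationPayload(
    text="Help App: contractor expected",
    image_uri="content://help_app/icon.png",
    render_at_s=0.0,                 # show as soon as playback resumes
    predicted_event_at_s=45 * 60.0,
    predicted_event_end_s=50 * 60.0,
)
# The playback application would receive this (e.g., as JSON over the API)
# and place the marker on its scrubber UI accordingly.
print(json.dumps(asdict(payload), indent=2))
```
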
  • FIG. 2 illustrates a system 200 that can render notifications for events that can interrupt content streams being viewed by a user, and/or can provide predictive context for event notifications.
  • the notifications can be provided by an application, such as an automated assistant 204, which can operate as part of an assistant application that is provided at one or more computing devices, such as a computing device 202 and/or a server device.
  • a user can interact with the automated assistant 204 via assistant interface(s) 220, which can be a microphone, a camera, a touch screen display, a user interface, and/or any other apparatus capable of providing an interface between a user and an application.
  • a user can initialize the automated assistant 204 by providing a verbal, textual, and/or a graphical input to an assistant interface 220 to cause the automated assistant 204 to initialize one or more actions (e.g., provide data, control a peripheral device, access an agent, generate an input and/or an output, etc.).
  • the automated assistant 204 can be initialized based on processing of contextual data 236 using one or more trained machine learning models.
  • the contextual data 236 can characterize one or more features of an environment in which the automated assistant 204 is accessible, and/or one or more features of a user that is predicted to be intending to interact with the automated assistant 204.
  • the computing device 202 can include a display device, which can be a display panel that includes a touch interface for receiving touch inputs and/or gestures for allowing a user to control applications 234 of the computing device 202 via the touch interface.
  • the computing device 202 can lack a display device, thereby providing an audible user interface output, without providing a graphical user interface output.
  • the computing device 202 can provide a user interface, such as a microphone, for receiving spoken natural language inputs from a user.
  • the computing device 202 can include a touch interface and can lack a camera, but can optionally include one or more other sensors.
  • the computing device 202 and/or other third party client devices can be in communication with a server device over a network, such as the internet. Additionally, the computing device 202 and any other computing devices can be in communication with each other over a local area network (LAN), such as a Wi-Fi network.
  • the computing device 202 can offload computational tasks to the server device in order to conserve computational resources at the computing device 202.
  • the server device can host the automated assistant 204, and/or computing device 202 can transmit inputs received at one or more assistant interfaces 220 to the server device.
  • the automated assistant 204 can be hosted at the computing device 202, and various processes that can be associated with automated assistant operations can be performed at the computing device 202.
  • all or less than all aspects of the automated assistant 204 can be implemented on the computing device 202.
  • aspects of the automated assistant 204 are implemented via the computing device 202 and can interface with a server device, which can implement other aspects of the automated assistant 204.
  • the server device can optionally serve a plurality of users and their associated assistant applications via multiple threads.
  • the automated assistant 204 can be an application that is separate from an operating system of the computing device 202 (e.g., installed “on top” of the operating system), or can alternatively be implemented directly by the operating system of the computing device 202 (e.g., considered an application of, but integral with, the operating system).
  • the automated assistant 204 can include an input processing engine 206, which can employ multiple different modules for processing inputs and/or outputs for the computing device 202 and/or a server device.
  • the input processing engine 206 can include a speech processing engine 208, which can process audio data received at an assistant interface 220 to identify the text embodied in the audio data.
  • the audio data can be transmitted from, for example, the computing device 202 to the server device in order to preserve computational resources at the computing device 202. Additionally, or alternatively, the audio data can be exclusively processed at the computing device 202.
  • the process for converting the audio data to text can include a speech recognition algorithm, which can employ neural networks, and/or statistical models for identifying groups of audio data corresponding to words or phrases.
  • the text converted from the audio data can be parsed by a data parsing engine 210 and made available to the automated assistant 204 as textual data that can be used to generate and/or identify command phrase(s), intent(s), action(s), slot value(s), and/or any other content specified by the user.
  • output data provided by the data parsing engine 210 can be provided to a parameter engine 212 to determine whether the user provided an input that corresponds to a particular intent, action, and/or routine capable of being performed by the automated assistant 204 and/or an application or agent that is capable of being accessed via the automated assistant 204.
  • assistant data 238 can be stored at the server device and/or the computing device 202, and can include data that defines one or more actions capable of being performed by the automated assistant 204, as well as parameters necessary to perform the actions.
  • the parameter engine 212 can generate one or more parameters for an intent, action, and/or slot value, and provide the one or more parameters to an output generating engine 214.
  • the output generating engine 214 can use the one or more parameters to communicate with an assistant interface 220 for providing an output to a user, and/or communicate with one or more applications 234 for providing an output to one or more applications 234.
  • the automated assistant 204 can be an application that can be installed “on top of” an operating system of the computing device 202 and/or can itself form part of (or the entirety of) the operating system of the computing device 202.
  • the automated assistant application includes, and/or has access to, on-device speech recognition, on-device natural language understanding, and on-device fulfillment.
  • on-device speech recognition can be performed using an on-device speech recognition module that processes audio data (detected by the microphone(s)) using an end-to-end speech recognition machine learning model stored locally at the computing device 202.
  • the on-device speech recognition generates recognized text for a spoken utterance (if any) present in the audio data.
  • on-device natural language understanding (NLU) can be performed using an on-device NLU module that processes recognized text, generated using the on-device speech recognition, and optionally contextual data, to generate NLU data.
  • NLU data can include intent(s) that correspond to the spoken utterance and optionally parameter(s) (e.g., slot values) for the intent(s).
  • On-device fulfillment can be performed using an on-device fulfillment module that utilizes the NLU data (from the on-device NLU), and optionally other local data, to determine action(s) to take to resolve the intent(s) of the spoken utterance (and optionally the parameter(s) for the intent). This can include determining local and/or remote responses (e.g., answers) to the spoken utterance, interaction(s) with locally installed application(s) to perform based on the spoken utterance, command(s) to transmit to internet-of-things (IoT) device(s) (directly or via corresponding remote system(s)) based on the spoken utterance, and/or other resolution action(s) to perform based on the spoken utterance.
  • the on-device fulfillment can then initiate local and/or remote performance/execution of the determined action(s) to resolve the spoken utterance.
  • remote speech processing, remote NLU, and/or remote fulfillment can at least selectively be utilized.
  • recognized text can at least selectively be transmitted to remote automated assistant component(s) for remote NLU and/or remote fulfillment.
  • the recognized text can optionally be transmitted for remote performance in parallel with on-device performance, or responsive to failure of on-device NLU and/or on-device fulfillment.
  • on-device speech processing, on-device NLU, on-device fulfillment, and/or on-device execution can be prioritized at least due to the latency reductions they provide when resolving a spoken utterance (due to no client-server roundtrip(s) being needed to resolve the spoken utterance).
  • on-device functionality can be the only functionality that is available in situations with no or limited network connectivity.
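
The on-device-first prioritization described above amounts to: run local speech recognition, NLU, and fulfillment, and fall back to remote components only when local resolution fails (or run both in parallel). Below is a sketch with every component stubbed out; none of these function names come from the patent.

```python
class FulfillmentError(Exception):
    """Raised when local NLU/fulfillment cannot resolve the utterance."""


def on_device_speech_recognition(audio: bytes) -> str:
    # Stub: a local end-to-end ASR model would run here.
    return "turn on the hallway light"


def on_device_nlu(text: str) -> dict:
    # Stub: local NLU producing intent(s) and slot value(s).
    return {"intent": "light_on", "slots": {"room": "hallway"}}


def on_device_fulfillment(nlu: dict) -> str:
    # Stub: local action resolution (may raise FulfillmentError).
    return f"ok: {nlu['intent']}"


def remote_fulfillment(text: str) -> str:
    # Stub: remote NLU/fulfillment, costing a client-server round trip.
    return "ok (resolved remotely)"


def resolve_utterance(audio: bytes) -> str:
    """On-device-first resolution with selective remote fallback."""
    text = on_device_speech_recognition(audio)
    try:
        nlu = on_device_nlu(text)
        return on_device_fulfillment(nlu)
    except FulfillmentError:
        # Fall back only when local resolution fails; with no network
        # connectivity, on-device functionality is the only path available.
        return remote_fulfillment(text)
```
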
  • the computing device 202 can include one or more applications 234 which can be provided by a third-party entity that is different from an entity that provided the computing device 202 and/or the automated assistant 204.
  • An application state engine of the automated assistant 204 and/or the computing device 202 can access application data 230 to determine one or more actions capable of being performed by one or more applications 234, as well as a state of each application of the one or more applications 234 and/or a state of a respective device that is associated with the computing device 202.
  • a device state engine of the automated assistant 204 and/or the computing device 202 can access device data 232 to determine one or more actions capable of being performed by the computing device 202 and/or one or more devices that are associated with the computing device 202.
  • the application data 230 and/or any other data can be accessed by the automated assistant 204 to generate contextual data 236, which can characterize a context in which a particular application 234 and/or device is executing, and/or a context in which a particular user is accessing the computing device 202, accessing an application 234, and/or any other device or module.
  • contextual data 236 can characterize a context in which a particular application 234 and/or device is executing, and/or a context in which a particular user is accessing the computing device 202, accessing an application 234, and/or any other device or module.
  • the device data 232 can characterize a current operating state of each application 234 executing at the computing device 202.
  • the application data 230 can characterize one or more features of an executing application 234, such as content of one or more graphical user interfaces being rendered at the direction of one or more applications 234.
  • the application data 230 can characterize an action schema, which can be updated by a respective application and/or by the automated assistant 204, based on a current operating status of the respective application.
  • one or more action schemas for one or more applications 234 can remain static, but can be accessed by the application state engine in order to determine a suitable action to initialize via the automated assistant 204.
  • the computing device 202 can further include an assistant invocation engine 222 that can use one or more trained machine learning models to process application data 230, device data 232, contextual data 236, and/or any other data that is accessible to the computing device 202.
  • the assistant invocation engine 222 can process this data in order to determine whether or not to wait for a user to explicitly speak an invocation phrase to invoke the automated assistant 204, or consider the data to be indicative of an intent by the user to invoke the automated assistant — in lieu of requiring the user to explicitly speak the invocation phrase.
  • the one or more trained machine learning models can be trained using instances of training data that are based on scenarios in which the user is in an environment where multiple devices and/or applications are exhibiting various operating states.
  • the instances of training data can be generated in order to capture training data that characterizes contexts in which the user invokes the automated assistant and other contexts in which the user does not invoke the automated assistant.
  • the assistant invocation engine 222 can cause the automated assistant 204 to detect, or limit detecting, spoken invocation phrases from a user based on features of a context and/or an environment.
  • the automated assistant 204 can be invoked when one or more other persons attempt to communicate with a user. During such an invocation, the automated assistant 204 can solicit the user to provide instructions for responding to the one or more other persons (e.g., when someone is knocking at a door of a home of the user).
  • the system 200 can include a priority threshold engine 216 that can generate one or more priority thresholds for determining whether or not to render a notification over content being viewed by a user.
  • the priority threshold engine 216 can determine whether the one or more priority thresholds have been satisfied for a particular notification and/or a particular portion of content.
  • the automated assistant 204 can employ an event prediction engine 226 for determining whether a predicted event will occur, and/or whether the predicted event will occur when a user is viewing content and/or otherwise consuming content. Based on this determination, the automated assistant 204 can process event data for determining whether the priority threshold has been satisfied.
  • an event such as a food delivery person ringing a doorbell can satisfy a priority threshold for rendering a doorbell notification over content that a user is viewing.
  • an event such as a neighborhood kid ringing a doorbell immediately after school is let out may not satisfy the priority threshold.
  • the priority threshold can be established using one or more heuristic processes and/or one or more trained machine learning models. For instance, prior interactions between the user and certain notifications can provide a basis for establishing a stricter priority threshold for a user that does not typically respond to those certain notifications.
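
A heuristic sketch of how prior interactions could tighten such a threshold; the class, weights, and caps below are illustrative assumptions rather than anything specified by the patent.

```python
from collections import defaultdict


class PriorityThresholdEngine:
    """A user who rarely responds to a notification type earns a stricter
    (higher) threshold for interrupting their content with that type."""

    def __init__(self, base_threshold: float = 0.5):
        self.base = base_threshold
        self.shown = defaultdict(int)      # notification type -> times rendered
        self.responded = defaultdict(int)  # notification type -> times acted on

    def record(self, ntype: str, responded: bool) -> None:
        self.shown[ntype] += 1
        if responded:
            self.responded[ntype] += 1

    def threshold(self, ntype: str) -> float:
        if self.shown[ntype] == 0:
            return self.base
        response_rate = self.responded[ntype] / self.shown[ntype]
        # Lower response rate -> stricter threshold, capped at 0.95.
        return min(0.95, self.base + (1.0 - response_rate) * 0.4)

    def should_render(self, ntype: str, priority: float) -> bool:
        return priority >= self.threshold(ntype)


engine = PriorityThresholdEngine()
engine.record("doorbell_ring", responded=False)
engine.record("doorbell_ring", responded=False)
# Threshold has risen to 0.9, so a priority of 0.6 no longer interrupts:
print(engine.should_render("doorbell_ring", priority=0.6))  # False
```
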
  • an event duration engine 218 can be utilized to predict a range of time within which the event may occur.
  • a delivery application can store application data that indicates a range of time that a user can expect to receive an order delivery.
  • the event duration engine 218 can utilize this application data to establish the range of time (relative to when the user is predicted to be viewing the content) that the event (e.g., the order delivery) may occur.
  • a notification content engine 224 of the system can utilize this determined range of time for generating content that will embody the notification, and how often and/or how long to render the notification.
  • the notification content engine 224 can generate a notification for a predicted event using contextual data 236, application data 230, and/or any other data that can be accessed by the automated assistant 204.
  • image data stored in association with a particular application 234 and/or device can be utilized, with prior permission from the user, by the notification content engine 224 for determining one or more images that can be rendered with a predicted event notification.
  • a predicted event such as an order delivery can have a corresponding notification that includes a logo for an order delivery application, a sample image of an item that was ordered, and/or text characterizing the order.
  • the notification content engine 224 can render the notification at a scrubber UI with a portion of the notification indicating that the predicted event may occur at an instance in time and/or within a range of time in the future. In this way, when the user is viewing certain content, they can be notified that a predicted event may interrupt their content, and that they can view information (e.g., images, text, video, etc.) that can provide context for the predicted event.
  • FIG. 3 illustrates a method 300 for rendering a notification to put a user on notice of a context in which an event may occur. The method 300 can be performed by one or more devices, applications, and/or any other apparatus or module capable of providing notifications to a user.
  • the method 300 can include an operation 302 of determining whether content is being rendered at a device interface.
  • the device interface can be, for example, a display interface of a computing device, such as a smart television, and the content can be, for example, a streaming television series.
  • the method 300 can proceed from the operation 302 to an operation 304. Otherwise, the method 300 may not proceed until a computing device is rendering content.
  • the operation 304 includes determining whether an event has occurred, or is predicted to occur, during a rendering duration of the content.
  • the rendering duration can be an amount of time that the content is rendered at the device interface.
  • the television series can be rendered for an amount of time that corresponds to a full episode for the television series.
  • the content can be a live stream that may not have a finite duration, but may nonetheless have a duration during which the content will be rendered.
  • a live stream can be rendered with a scrubber UI that shows an amount of time in the future and in the past, and a selectable element that allows a user to view different portions of the content at different portions of the live stream.
  • the event that occurs, or is predicted to occur can be an event that can potentially interrupt the user viewing the content.
  • the event can be associated with one or more other persons that may attempt to communicate with the user via a separate computing device (e.g., separate from the computing device with the display interface rendering the content).
  • the event can include a person ringing a doorbell that is attached to a home of the user. This event can cause an interruption to the streaming of the content, especially when the doorbell can communicate with the computing device over a home network (e.g., a home Wi-Fi network).
  • the method 300 can proceed from the operation 304 to an operation 306 when an event has occurred or is predicted to occur. Otherwise, when no event is predicted to occur, or has occurred, the method 300 can return to the operation 302 for determining whether content is being rendered at the device interface.
  • the operation 306 can include determining whether the predicted event is associated with a particular application and/or instance of data.
  • the particular application can be a delivery application that can be utilized by the user to place an order that can be delivered to the home of the user.
  • Application data can characterize the order, thereby allowing an automated assistant, or other application, to process the application data to determine when the delivery is estimated to arrive at the home of the user. Therefore, when the doorbell rings within the estimated time of arrival for the delivery, the event of the doorbell ringing can be considered associated with the particular application and/or instance of application data.
  • the method 300 can proceed from the operation 306 to an operation 310. Otherwise, the method 300 can proceed from the operation 306 to an operation 308.
  • the operation 308 can include generating a notification for the event based at least on the predicted event. For example, when the event is a doorbell ring that is not determined to be associated with an application, notification data can be generated to characterize the doorbell ring. For instance, when the doorbell device has a camera, an image captured by the camera can be utilized when generating the notification. The notification can therefore include a preview and/or depiction of a person who rang the doorbell. The method 300 can then proceed from the operation 308 to an operation 312 for causing the notification to be rendered with or without an indication of the associated application.
  • the operation 310 can include generating a notification for the event based on the particular application and/or device. For example, when the event occurs at a time, or within a time range, associated with a delivery ordered by the user via a delivery application, the notification can be generated with an identifier for the delivery application and/or any other content associated with the order. In this way, the user can be put on notice of what the purpose of the event is, rather than having no context for the event.
  • the method 300 can then proceed from the operation 310 to an operation 312, which can include causing the notification to be rendered with or without an indication of the application and/or instance of relevant data.
  • the notification can be rendered at a scrubber UI (e.g., a timeline GUI element), to indicate an approximate time and/or range of time that the event is predicted to occur.
  • the notification can be rendered at the scrubber UI at a location that indicates a relative amount of time from the beginning of the content (e.g., since the user selected to “play” the content) and/or to an end of the content (e.g., the end of a movie).
  • the notification can be rendered over a current display frame of the content, and can include graphics that are based on the application and/or a type of event.
  • the notification can extend from a point on the scrubber UI and can include graphics depicting a person ringing a doorbell and/or a logo for the delivery application.
  • the method 300 can proceed from the operation 312 to an optional operation 314, which can include causing the notification to be rendered at a particular, temporal location on the scrubber UI of the content.
  • the notification can be rendered at the scrubber UI at a point in time (e.g., on a timeline for the scrubber UI) and/or a range of time that is temporally subsequent to a current time that content is being rendered at the display interface. In this way, the user can have a visual estimation of an amount of time until the event will occur.
  • the notification can be rendered with an indication of a particular user that the notification and/or event (e.g., a phone call from a school) may be associated with.
  • the notification can be rendered prior to an estimated time for the predicted event, and can be rendered with an indication of the particular user that is associated with the event (e.g., a university student expecting a call from a professor) and/or an application that may have been utilized to initialize the event (e.g., a homework help application for a particular university). Thereafter, the method 300 can return to the operation 302 for determining whether another event has occurred or is predicted to occur.
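
For reference, the control flow of method 300 (operations 302 through 314) condenses to a few branches. The helper functions below are hypothetical stubs standing in for the notification-generation and rendering steps described above.

```python
from typing import Optional


def generate_from_application(event: dict, app: str) -> dict:
    # Stub for operation 310: notification carries the application's identifier.
    return {"body": f"{app}: expected arrival", "source": app}


def generate_from_event(event: dict) -> dict:
    # Stub for operation 308: notification built from the event alone.
    return {"body": "Someone is at the door", "source": None}


def render_notification(n: dict) -> None:
    # Stub for operation 312.
    print("render:", n)


def place_on_scrubber(n: dict, t_s: float) -> None:
    # Stub for optional operation 314: marker at a temporal location.
    print(f"marker at {t_s:.0f}s:", n["body"])


def method_300_step(rendering: bool, event: Optional[dict], app: Optional[str]) -> None:
    """One pass through the control flow of method 300."""
    if not rendering:          # operation 302: is content being rendered?
        return
    if event is None:          # operation 304: event occurred/predicted?
        return
    if app is not None:        # operation 306: associated with an application?
        n = generate_from_application(event, app)    # operation 310
    else:
        n = generate_from_event(event)               # operation 308
    render_notification(n)                           # operation 312
    place_on_scrubber(n, event["predicted_time_s"])  # operation 314 (optional)


method_300_step(rendering=True,
                event={"predicted_time_s": 47 * 60 + 40},
                app="Help App")
```
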
  • FIG. 4 is a block diagram 400 of an example computer system 410.
  • Computer system 410 typically includes at least one processor 414 which communicates with a number of peripheral devices via bus subsystem 412. These peripheral devices may include a storage subsystem 424, including, for example, a memory 425 and a file storage subsystem 426, user interface output devices 420, user interface input devices 422, and a network interface subsystem 416. The input and output devices allow user interaction with computer system 410.
  • Network interface subsystem 416 provides an interface to outside networks and is coupled to corresponding interface devices in other computer systems.
  • User interface input devices 422 may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touchscreen incorporated into the display, audio input devices such as voice recognition systems, microphones, and/or other types of input devices.
  • use of the term “input device” is intended to include all possible types of devices and ways to input information into computer system 410 or onto a communication network.
  • User interface output devices 420 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices.
  • the display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image.
  • the display subsystem may also provide non-visual display such as via audio output devices.
  • use of the term “output device” is intended to include all possible types of devices and ways to output information from computer system 410 to the user or to another machine or computer system.
  • Storage subsystem 424 stores programming and data constructs that provide the functionality of some or all of the modules described herein.
  • the storage subsystem 424 may include the logic to perform selected aspects of method 300, and/or to implement one or more of system 200, computing device 104, automated assistant, and/or any other application, device, apparatus, and/or module discussed herein.
  • Memory 425 used in the storage subsystem 424 can include a number of memories including a main random access memory (RAM) 430 for storage of instructions and data during program execution and a read only memory (ROM) 432 in which fixed instructions are stored.
  • a file storage subsystem 426 can provide persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges.
  • the modules implementing the functionality of certain implementations may be stored by file storage subsystem 426 in the storage subsystem 424, or in other machines accessible by the processor(s) 414.
  • Bus subsystem 412 provides a mechanism for letting the various components and subsystems of computer system 410 communicate with each other as intended. Although bus subsystem 412 is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple busses.
  • Computer system 410 can be of varying types including a workstation, server, computing cluster, blade server, server farm, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computer system 410 depicted in FIG. 4 is intended only as a specific example for purposes of illustrating some implementations. Many other configurations of computer system 410 are possible having more or fewer components than the computer system depicted in FIG. 4.
  • In situations in which the systems described herein collect personal information about users (or as often referred to herein, “participants”), or may make use of personal information, the users may be provided with an opportunity to control whether programs or features collect user information (e.g., information about a user’s social network, social actions or activities, profession, a user’s preferences, or a user’s current geographic location), or to control whether and/or how to receive content from the content server that may be more relevant to the user.
  • certain data may be treated in one or more ways before it is stored or used, so that personal identifiable information is removed.
  • a user’s identity may be treated so that no personal identifiable information can be determined for the user, or a user’s geographic location may be generalized where geographic location information is obtained (such as to a city, ZIP code, or state level), so that a particular geographic location of a user cannot be determined.
  • the user may have control over how information is collected about the user and/or used.
  • a method implemented by one or more processors includes determining that content, that has a rendering duration, is being rendered at a display interface of a computing device that is located within an environment. The method further includes determining, in response to content being rendered at the display interface, whether a user has interacted with an application in furtherance of causing a subsequent event to occur during the rendering duration. The method further includes, when the user has interacted with the application to cause the subsequent event to occur during the rendering duration: generating a predicted time for the subsequent event to occur during the rendering duration, and causing, prior to the predicted time for the subsequent event to occur, a notification to be rendered at the display interface of the computing device. The notification indicates the predicted time, within the rendering duration, for the subsequent event to occur.
  • causing the notification to be rendered at the display interface of the computing device prior to the predicted time for the subsequent event to occur includes: causing the notification to be rendered with an identifier for the application that the user interacted with to cause the subsequent event to occur. In some implementations, causing the notification to be rendered at the display interface of the computing device prior to the predicted time for the subsequent event to occur includes: causing the notification to be rendered with an image that is captured by an additional computing device that is different than the computing device. In some implementations, the subsequent event involves one or more persons attempting to communicate with the user during the rendering duration, and the predicted time is based on whether the user had adjusted playback of the content during the rendering duration.
  • determining whether the user has interacted with the application in furtherance of causing the subsequent event to occur includes: determining whether the subsequent event satisfies a priority threshold for rendering the notification over the content being rendered at the display interface.
  • the predicted time indicated by the notification is a range of time within the rendering duration that the event is predicted to occur.
  • the range of time is based on a type of event that corresponds to the subsequent event that is predicted to occur.
  • the range of time for certain types of events can be shorter than the range of time used when the type of event is a third party event.
  • causing the notification to be rendered at the display interface of the computing device includes: causing the notification to be rendered at a location on a timeline.
  • the timeline corresponds to the duration of time for the rendering of the content.
  • the location on the timeline corresponds to an instance of time subsequent to a current instance of time during which a portion of the content is being rendered.
  • causing the notification to be rendered at the display interface of the computing device includes: causing the notification to be rendered at a location on a timeline, wherein the timeline corresponds to the duration of time for the rendering of the content.
  • the location on the timeline corresponds to a range of time subsequent to a current instance of time during which a portion of the content is being rendered.
  • a method implemented by one or more processors includes determining, at a computing device, that one or more persons are attempting to contact a user via one or more computing devices within an environment of the user. The method further includes determining, in response to the one or more persons attempting to contact the user, whether a context in which the one or more persons are attempting to contact the user is associated with a prior interaction between the user and an application. The method further includes, when the context is associated with the prior interaction between the user and the application: generating notification data that characterizes content that is based on the prior interaction between the user and the application, and causing, based on the notification data, a notification to be rendered for the user with the content at a particular computing device.
  • the one or more computing devices include a doorbell computing device and the content includes an identifier for the application.
  • determining whether the context in which the one or more persons are attempting to contact the user is associated with the prior interaction between the user and the application includes: determining that the prior interaction resulted in event data that identifies the one or more persons as participants in an event characterized by the event data.
  • determining whether the context in which the one or more persons are attempting to contact the user is associated with the prior interaction between the user and the application includes: determining that the prior interaction resulted in event data that identifies a particular time that corresponds to a current time in which the one or more persons are attempting to contact the user.
  • a method implemented by one or more processors includes determining that an event is predicted to occur during a subsequent duration of time when content is streaming via an interface of a computing device.
  • the content is rendered by an application that is different than an additional application associated with the event that is predicted to occur.
  • the method further includes determining, based on the event, whether the event satisfies a priority threshold for rendering a notification for the event during the subsequent duration of time.
  • the priority threshold is based on contextual data that is available via one or more different computing devices and indicates a relative importance of the event.
  • the method further includes, when the event is determined to satisfy the priority threshold for rendering the notification for the event: generating notification data that identifies the additional application associated with the event, and causing, based on the notification data, the notification to be rendered during the subsequent duration of time when the content is streaming at the interface of the computing device.
  • the notification is rendered via the interface of the computing device and indicates the additional application associated with the event.
  • the priority threshold is based on contextual data that is available via one or more different computing devices and indicates the relative importance of the event.
  • the contextual data includes an identifier for the user that is viewing the content streaming at the interface of the computing device.
  • the contextual data characterizes a type of event that is predicted to occur during the subsequent duration of time.
  • determining that the event is predicted to occur is performed when prior content is being rendered by the application at the interface of the computing device.
  • causing the notification to be rendered during the subsequent duration of time when the content is streaming at the interface of the computing device includes: causing the notification to be rendered simultaneous to a scrubber user interface (UI) being rendered at the interface of the computing device.
  • a method implemented by processor(s) includes determining, based on a prior user interaction with an application, a predicted time for an application event to occur.
  • the application event corresponds to the prior user interaction with the application.
• the method further includes receiving, from an internet of things (IoT) device and during rendering of streaming content at a display interface of a computing device, an indication that an IoT event has occurred.
• the method further includes determining that the IoT event corresponds to the application event and, in response to determining that the IoT event corresponds to the application event, causing a notification to be rendered, at the computing device and during the rendering of the streaming content, that includes application content that is based on the prior user interaction with the application.
• determining that the IoT event corresponds to the application event is based on determining that the indication, that the IoT event has occurred, corresponds to the predicted time.
• the predicted time is a range of time during which the application event is predicted to occur and determining that the indication, that the IoT event has occurred, corresponds to the predicted time, comprises determining that the indication that the IoT event has occurred was received within the range of time.
• the IoT device is a smart doorbell and/or the application is a food delivery application.
  • causing the notification to be rendered during the rendering of the streaming content includes causing the notification to be rendered along with an image captured by the smart doorbell.
  • the method further includes determining the range of time based on a type classification of the application event.
  • determining the predicted time is based on application data that is provided by the application and that is based on the prior user interaction.
• causing the notification to be rendered during the rendering of the streaming content includes causing the notification to be rendered in a scrubber user interface of the streaming application.
  • the application content includes an icon that corresponds to the application.
  • the application event is a delivery and the application content includes product content characterizing a product to be delivered.
• the method further includes causing, prior to causing the notification to be rendered and prior to receiving the indication that the IoT event has occurred, a predictive notification to be rendered at the computing device during the rendering of the streaming content.
  • the predictive notification can include the application content and is rendered, within a scrubber user interface of the streaming application, at a relative time, within the scrubber user interface, that corresponds to the predicted time.
  • causing the notification to be rendered at the computing device and during the rendering of the streaming content is further in response to determining that the application event satisfies a priority threshold.
  • a method implemented by processor(s) includes determining, based on data from an application that is in addition to an active application rendering streaming content at a computing device, that a future event is predicted to occur during a subsequent duration of time.
  • the method further includes determining, based on the data, whether the future event satisfies a priority threshold for rendering a notification, for the future event, during the subsequent duration of time.
• the method further includes, in response to determining that the future event satisfies the priority threshold for rendering the notification, causing a notification of the future event to be rendered, at the computing device and during rendering of the streaming content at the computing device, but before actual occurrence of the future event.
  • the notification characterizes the application based on which the future event is determined.
  • the priority threshold is based on contextual data that is available via one or more additional computing devices and indicates a relative importance of the event.
  • the contextual data includes an identifier for the user that is viewing the streaming content at the computing device and/or the contextual data characterizes a type classification for the future event.
  • determining that the event is predicted to occur is performed when prior content is being rendered by the application at the interface of the computing device.
  • causing the notification of the future event to be rendered includes causing the notification to be rendered simultaneous to a scrubber user interface being rendered by the active application at the interface of the computing device. In some of those implementations, causing the notification of the future event to be rendered includes causing the notification to be rendered within the scrubber user interface at a relative time, within the scrubber user interface, that corresponds to the subsequent duration of time of the future event.

Abstract

Implementations set forth herein relate to providing notifications regarding events that may interrupt content being rendered at an interface. The notifications can be preemptive and/or can indicate a predicted time and/or source for the events. The event can be, for example, a person attempting to contact a user who is viewing content at a display interface. The person can be associated with a food delivery that has been ordered by the user via a delivery application. An application, such as an automated assistant application, can predict when the person is expected to arrive with the food delivery, and generate a notification ahead of the person arriving. In some implementations, the notification can be rendered at a scrubber user interface (UI) at a location corresponding to the time that the food delivery is expected to arrive, thereby putting the user on notice of when the streaming content may be interrupted.

Description

Generating Notifications that Provide Context for Predicted Content Interruptions
Background
[0001] The recent proliferation of computing devices within homes has led to an increase in the number of interfaces via which a user can view media content, interact with applications, and receive notifications. For example, a user can view a streaming movie through a television that is connected to the internet, and also receive notifications regarding incoming calls via the same television. When a content stream is interrupted by certain events, such as a phone call or an activation of a doorbell, a user may not consider every event important enough to interrupt their content stream. Regardless, the user and/or television may respond to each event indiscriminately, thereby wasting resources such as power and network bandwidth by frequently stopping and starting the content stream. This issue can be particularly problematic when an event, such as a phone call or doorbell ring, is unexpected by a user. For example, when an incoming phone call unexpectedly interrupts a streaming episode of a television series, the user may attempt to retrieve details about the incoming phone call from another device (e.g., their cellular phone). As a result, this can waste resources — particularly when there are no additional details available at the other device, except for a phone number and/or a name. In some instances, even when a doorbell activation is otherwise important enough for a user to direct their attention away from the television, the user may not instantly recall any relevant context for the doorbell activation. As a result, the user may bypass stepping away from their television to answer their door. For instance, when the doorbell activation relates to a delivery that requires the user to provide a signature, resources can be wasted during duplicative attempts to deliver packages.
Summary
[0002] Implementations disclosed herein relate to providing, during rendering of streaming content, notification(s) of an event. Some of those implementations relate to preemptively rendering a notification to reflect a predicted time, within the streaming content, that the event will occur. Some of those implementations additionally or alternatively relate to utilizing contextual data (e.g., application data) in determining whether to render a notification, preemptively or responsive to occurrence of the event, and/or in generating content, for the notification, that provides further details for the event and that may not be directly derivable from the event itself.
[0003] Some implementations relate to one or more applications that can provide preemptive notifications for events that may interrupt rendering of streaming content. In order to provide the notifications preemptively, an application such as an automated assistant application can process contextual data to determine whether a particular event is predicted to interrupt consumption of content by a user. The contextual data can include, for example, application data from one or more applications that a user has interacted with prior to, or during, the rendering of certain content. For example, the user can interact with a food delivery application in furtherance of ordering a food delivery to their home while they are watching a movie. Application data associated with the food delivery application can provide an estimation of a time for arrival of the food delivery. For example, the application data can include a push notification from the food delivery application and/or an email generated based on the interaction with the food delivery application. The automated assistant can determine, based on this time for arrival, that the time for arrival may occur while the user is watching a movie on their smart television (e.g., a television that includes one or more internet applications). Based on this determination, the automated assistant can generate the notification and/or can cause the notification to be rendered over a timeline (e.g., a scrubber user interface (UI)) corresponding to content streaming at the smart television.
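To make the prediction step above concrete, the following is a minimal Python sketch. This disclosure contains no source code, so the function and field names (e.g., predict_interruption, delivery_eta) are illustrative assumptions: the sketch merely checks whether an estimated arrival time derived from application data falls within the remaining rendering duration of the content.

```python
from datetime import datetime, timedelta

# Hypothetical application data, e.g. parsed from a delivery app's push
# notification or confirmation email (names are illustrative only).
delivery_eta = {"app": "Food Delivery", "eta": datetime(2022, 5, 1, 19, 45)}

def predict_interruption(eta: datetime, playback_start: datetime,
                         remaining: timedelta) -> bool:
    """Return True if the event is expected while content is still playing."""
    return playback_start <= eta <= playback_start + remaining

now = datetime(2022, 5, 1, 19, 10)   # user resumes the movie
remaining = timedelta(minutes=55)    # movie time left from the current point
if predict_interruption(delivery_eta["eta"], now, remaining):
    print(f"Preemptive notification: {delivery_eta['app']} "
          f"expected at {delivery_eta['eta']:%H:%M}")
```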
[0004] In some implementations, the notification can include content that provides further details about the food delivery and/or the food delivery application, thereby putting the user on notice of the type of interruption that may occur at a certain point during the content stream. For example, an icon utilized by the food delivery application can be rendered over the scrubber UI of a movie as a timestamp and/or callout “bubble.” Alternatively, or additionally, the notification can be rendered with textual content and/or graphical content that characterizes the type of interruption that can occur (e.g., an image of a food delivery person that has been captured by a smart doorbell camera or other internet of things (IoT) device). In some implementations, the notification can be rendered at a location on the scrubber UI that is subsequent to a current instance of time during which a portion of the content stream is being rendered. For example, when the user initially turns on their smart television, a streaming application can start rendering a movie from an instance of time in the movie where the user had previously stopped watching the movie the night before. At this instance of time, the notification can be rendered at a location on the scrubber UI (or other timeline UI) that corresponds to the estimation of the time for arrival of the food delivery. For example, if the current instance of time that the user is watching the movie is 32:28 (i.e., 32 minutes and 28 seconds into the movie), the notification can be rendered at a subsequent instance of time and/or a range of time (e.g., on a point in the timeline corresponding to 47:40 or 45:00-50:00). When the user performs an operation to modify playback of content, such as by rewinding or “fast forwarding” their streaming content, the location of the notification at the scrubber UI can be adjusted according to the operation. In some implementations, the notification can be rendered in response to the user adjusting playback of content, thereby giving an update for the predicted time that the user can expect the predicted event to occur relative to a current time within the rendering duration. In some implementations, the notification can be rendered with one or more frames of the content, and the one or more frames of the content can correspond to a point in time, or a range of time, when the predicted event is expected to occur and/or before the predicted event is expected to occur. For instance, a later frame in a movie can be rendered for the user, with prior permission from the user, during an earlier portion of the movie, when the user is being put on notice that the predicted event may interrupt a later portion of the movie. Thereafter, when the later frame of the movie is rendered during playback of the movie, a recollection of the notification can be triggered, thereby allowing the user to better anticipate the interruption.
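The timeline placement described in the preceding paragraph can be illustrated with a small, hedged sketch: a predicted wall-clock event time is converted into an offset on the content's scrubber timeline, assuming uninterrupted normal-speed playback. The helper name scrubber_position is hypothetical.

```python
from datetime import datetime, timedelta

def scrubber_position(current_position: timedelta,
                      now: datetime,
                      event_time: datetime) -> timedelta:
    """Map a predicted wall-clock event time to a point on the content
    timeline, assuming playback continues at normal speed."""
    return current_position + (event_time - now)

now = datetime(2022, 5, 1, 20, 0)
pos = scrubber_position(timedelta(minutes=32, seconds=28), now,
                        now + timedelta(minutes=15, seconds=12))
print(pos)  # 0:47:40 -> marker rendered at 47:40 on the scrubber UI

# If the user rewinds or fast-forwards, recompute with the new playback
# position so the marker stays aligned with the predicted event.
```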
[0005] Accordingly, implementations can cause rendering, within an interface of a playback application via which playback of content is occurring, of a preemptive notification of an event that is yet to occur and that is determined based on user interaction with a separate application. Further, in some of those implementations the event can be rendered in conjunction with a timeline, of the interface, that illustrates an overall duration of the content and a current point of playback of the content. Yet further, the rendering of the content in conjunction with the timeline can reflect a currently anticipated time or time range, during yet to occur playback of the content, at which the notification is anticipated to occur and/or can reflect detail(s) of the event. In these and other manners, a user viewing the playback of the content can quickly ascertain, through a single interface and in advance of an event occurring, when the event is likely to occur relative to playback of the content. This obviates the need for utilization of a separate interface (separate from the interface via which playback of content is occurring) and/or utilization of a separate device (separate from the device via which playback of content is occurring) in ascertaining when the event is likely to occur relative to playback of the content and/or in ascertaining details of the event when it does occur during playback of the content.
[0006] In some implementations, certain events can result in a notification being rendered with content that predicts the context for the event. Those implementations can result in a notification being rendered in response to occurrence of an event and, optionally, without also first preemptively rendering a notification for the event. For example, a user can have a variety of different smart devices within an environment, such as their home, and those devices can include a computerized doorbell. The doorbell can be activated by various neighborhood children more frequently than the doorbell is activated by, for example, someone delivering an important order placed by the user. The automated assistant, or other application, can assist the user with providing context for such events (e.g., a doorbell “ring”) by processing contextual data to predict the occurrence of certain events (with prior permission from the user). In these and other manners, an event notification, such as a doorbell, that is predicted to be associated with a particular user in the home, can be directed to that particular user instead of interrupting all users within the home. Additionally or alternatively, the event notification can be rendered with the content that predicts the context for the event, enabling a user that is consuming (e.g., viewing) the notification to quickly ascertain whether the event is one that deserves further attention by the user or is one that can be quickly dismissed by the user.
[0007] For example, the automated assistant can predict, based on application data and with prior permission from the user, that the user has ordered an item from a website and that the item will be delivered between 2:00 p.m. and 3:00 p.m. of a certain date. When the doorbell rings between 2:00 p.m. and 3:00 p.m. on that date, a notification can be rendered with content characterizing the delivery, the website, and/or the product that was ordered. For example, the notification can be rendered with an image or text that reflects the product and/or with an icon that reflects the website and/or a delivery service for the delivery. The notification can be rendered with such content characterizing the delivery based on the determination, from the application data, that the item will be delivered between 2:00 p.m. and 3:00 p.m. and based on the event (the doorbell ringing) occurring during that time window. Further, such content can be generated based on the application data. For example, the image or text that reflects the product can be extracted from the application data and/or the icon can be selected based on the application data being mapped to a corresponding website and/or delivery service. In these and other manners, application data can be utilized to generate a particularized notification and map that particularized notification to a predicted event (e.g., a doorbell ring) and predicted temporal condition(s) for the predicted event. This enables providing of the particularized notification when the predicted event occurs with the predicted temporal condition(s). Providing the particularized notification in such situations enables a user consuming the particularized notification to quickly ascertain that the event pertains to the separate application data. It is noted that in many implementations the content of the particularized notification includes content that is not directly derivable from the event itself. For example, even if the computerized doorbell includes a camera, the website, the delivery service, and/or the product(s) (e.g., within a nondescript box) will not be directly derivable from processing image(s) from the camera. In some implementations, an automated assistant can identify a particular computing device that the user is interacting with when the doorbell rings and cause that particular computing device to render the notification — without the doorbell indiscriminately causing all devices within the home to render a doorbell notification. For example, the automated assistant can, in response to determining playback of media content is occurring at a particular computing device, cause the notification to be rendered within an interface of the application providing the playback of media content, or overlaid on the interface, and can suppress provisioning of the notification at any other computing device(s).
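A hedged sketch of the matching logic described above follows; the order record, the window boundaries, and the fallback notification content are illustrative assumptions rather than the disclosed implementation. A doorbell ring inside the predicted delivery window yields a particularized notification; otherwise a generic one is used.

```python
from datetime import datetime, time

# Hypothetical order record derived from application data; the window and
# content fields are assumptions for illustration.
order = {
    "window": (time(14, 0), time(15, 0)),
    "icon": "shopping_site_icon.png",
    "text": "Package delivery: wireless headphones",
}

def notification_for(event_time: datetime, order: dict) -> dict:
    """Particularize a doorbell notification when the ring falls inside the
    predicted delivery window; otherwise fall back to a generic one."""
    start, end = order["window"]
    if start <= event_time.time() <= end:
        return {"icon": order["icon"], "text": order["text"]}
    return {"icon": "doorbell_icon.png", "text": "Someone is at the door"}

print(notification_for(datetime(2022, 5, 1, 14, 20), order))
```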
[0008] The above description is provided as an overview of some implementations of the present disclosure. Further description of those implementations, and other implementations, are described in more detail below.
[0009] Other implementations can include a non-transitory computer readable storage medium storing instructions executable by one or more processors (e.g., central processing unit(s) (CPU(s)), graphics processing unit(s) (GPU(s)), and/or tensor processing unit(s) (TPU(s))) to perform a method such as one or more of the methods described above and/or elsewhere herein. Yet other implementations can include a system of one or more computers that include one or more processors operable to execute stored instructions to perform a method such as one or more of the methods described above and/or elsewhere herein.
[0010] It should be appreciated that all combinations of the foregoing concepts and additional concepts described in greater detail herein are contemplated as being part of the subject matter disclosed herein. For example, all combinations of claimed subject matter appearing at the end of this disclosure are contemplated as being part of the subject matter disclosed herein.
Brief Description of the Drawings
[0011] FIG. 1A, FIG. 1B, FIG. 1C, and FIG. 1D illustrate views of a user that can view a stream of content and receive notifications for a predicted event.
[0012] FIG. 2 illustrates a system that can render notifications for events that can interrupt content streams being viewed by a user, and/or can provide predictive context for the event notifications.
[0013] FIG. 3 illustrates a method for rendering a notification over streaming content to put a user on notice of the context in which an interruption to the content may occur.
[0014] FIG. 4 is a block diagram of an example computer system.
Detailed Description
[0015] FIG. 1A, FIG. 1B, FIG. 1C, and FIG. 1D illustrate a view 100, a view 120, a view 140, and a view 160 of a user 102 that can view a stream of content and receive a notification for a predicted event. In some implementations, the notification can include content that indicates the type of event and/or an application associated with the event. For example, the user 102 can be interacting with a contractor application 106 via a computing device 104 in furtherance of hiring a contractor to repair their dishwasher, as illustrated in view 100 of FIG. 1A. The user 102 can use their hand 114 to select a selectable GUI element 110 rendered at a display interface 108 of the computing device 104 to cause the contractor application 106 to communicate with one or more other persons. The one or more other persons can include a contractor, who can be associated with the application as a candidate contractor for the user 102 to hire. In response to selecting the selectable GUI element 110, the contractor can receive a notification via another computing device for confirming an appointment with the user 102. This appointment can be characterized by application data that is stored at the computing device 104 and/or is otherwise accessible to the computing device 104.
[0016] FIG. 1B illustrates a view 120 of the user 102 after the user 102 has interacted with the contractor application 106 to create the appointment with the contractor. Subsequent to creating the appointment, the user 102 can take an action 122 of watching a movie from where they last paused the movie. The movie can be rendered at a smart television 126 (e.g., a television with network connectivity), which can provide access to one or more applications, such as an automated assistant application. When the smart television 126 is performing an action 124 of playing the movie, the automated assistant application and/or other application can determine whether an event is occurring or is predicted to occur, and selectively interrupt the streaming of content at the smart television 126. For example, the automated assistant can access various sources of data, such as the application data, to determine an estimated time of arrival for the contractor. Alternatively, or additionally, the automated assistant can process contextual data (e.g., image data captured by a standalone smart display 130) with prior permission from the user 102, to determine a context in which the content is streaming at the smart television 126 within an environment 128. The automated assistant can determine, for example, that the user 102 is the person who is viewing the content rendered at the smart television 126.
[0017] Based on the data processed by the automated assistant and/or another application, a notification 148 can be rendered at a display interface 150 of the smart television 126, as illustrated in view 140 of FIG. 1C. In some implementations, the notification 148 can be rendered at a point on a scrubber UI 152 that is subsequent to a current point 142 in time in which content is being rendered at the display interface 150. In some instances, the notification 148 can be rendered as a point in time and/or a range of time, as illustrated in FIG. 1C. For instance, the notification 148 can be rendered between a first point 144 and a second point 146, which can represent a beginning and an end of an estimated range of time that an event will occur during the movie (e.g., a movie about the White House). In some implementations, the notification 148 can be rendered with data that can provide context for the notification 148. For example, the notification 148 can include an identifier for the contractor application (i.e., “Help App”), and/or an image that can be selected based on the event that is predicted to occur (e.g., an image of the contractor that is expected to ring the smart doorbell and/or be viewable via a smart doorbell application).
[0018] In some implementations, when the event occurs, a separate notification 162 can be rendered at the smart television 126 with an image captured by an internet of things (IoT) device, such as a smart doorbell 164. In accordance with the aforementioned example, the smart doorbell 164 can be located at an outer portion 168 of a home of the user 102, and can capture an image of a contractor 166 that has arrived at the home. The contractor 166 can arrive at the appointment time generated using the contractor application 106, and, as a result, cause the notification 162 to be rendered with an indication of the contractor application 106. For instance, the notification 162 can be rendered with a logo corresponding to the contractor application 106 and/or other image provided by the contractor application 106, as illustrated in view 160 of FIG. 1D. In some implementations, a doorbell notification can be rendered with content associated with the contractor application 106 responsive to determining that the doorbell is activated within the range of time predicted by the automated assistant and/or other application. For example, when the doorbell rings between the first point 144 and the second point 146 (i.e., at the new “current” time), a notification 162 can be rendered with content (e.g., an image of the contractor 166) associated with the contractor application 106. In contrast, if the doorbell instead rings outside the range of time predicted by the automated assistant and/or other application, the doorbell notification would be rendered without content associated with the contractor application. This can be due to the doorbell being activated outside of the range of time.
[0019] In some implementations, the notification 162 can be rendered without pre-emptively providing a prior notification to the user 102. In other words, FIG. 1D can represent a standalone instance in which an event occurs when the user 102 is viewing content, but the user 102 is not provided a notification prior to the event. Regardless, the notification 162 that is rendered can include content that can be based on one or more sources of data (e.g., application data) to provide context for the notification 162. In some implementations, generating and/or rendering of a notification can be facilitated using one or more application programming interfaces (APIs). For example, an application (e.g., an automated assistant) that generates notification data can communicate with another application that renders streaming content (e.g., a TV series). Communications between these applications can be facilitated using one or more APIs and/or any other module(s) and/or hardware that can facilitate communications between applications. For instance, the communicated notification data can include textual content and/or images to be included with a notification, as well as a particular time that the notification should be rendered and/or a time that the event is predicted to occur. The application that receives this notification data via an API can then utilize the notification data to render the notification preemptively (according to the notification data) and/or at the predicted time for the event (at or near a time when the notification data is received).
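As an illustration of the API-mediated exchange described in the preceding paragraph, the sketch below shows one plausible shape for the communicated notification data; the field names and the send_to_streaming_app() helper are assumptions, not an actual platform API.

```python
import json
from datetime import datetime

# A minimal sketch of the notification data one application (e.g., an
# automated assistant) might pass to a streaming application.
notification_data = {
    "source_app": "contractor_app",
    "text": "Contractor arriving soon",
    "image_uri": "content://doorbell/last_frame",
    "predicted_event_time": datetime(2022, 5, 1, 18, 30).isoformat(),
    "render_preemptively": True,
}

def send_to_streaming_app(payload: dict) -> None:
    # Stand-in for an IPC/API call between the two applications.
    print("POST /notifications", json.dumps(payload))

send_to_streaming_app(notification_data)
```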
[0020] FIG. 2 illustrates a system 200 that can render notifications for events that can interrupt content streams being viewed by a user, and/or can provide predictive context for event notifications. The notifications can be provided by an application, such as an automated assistant 204, which can operate as part of an assistant application that is provided at one or more computing devices, such as a computing device 202 and/or a server device. A user can interact with the automated assistant 204 via assistant interface(s) 220, which can be a microphone, a camera, a touch screen display, a user interface, and/or any other apparatus capable of providing an interface between a user and an application. For instance, a user can initialize the automated assistant 204 by providing a verbal, textual, and/or a graphical input to an assistant interface 220 to cause the automated assistant 204 to initialize one or more actions (e.g., provide data, control a peripheral device, access an agent, generate an input and/or an output, etc.). Alternatively, the automated assistant 204 can be initialized based on processing of contextual data 236 using one or more trained machine learning models. The contextual data 236 can characterize one or more features of an environment in which the automated assistant 204 is accessible, and/or one or more features of a user that is predicted to be intending to interact with the automated assistant 204. The computing device 202 can include a display device, which can be a display panel that includes a touch interface for receiving touch inputs and/or gestures for allowing a user to control applications 234 of the computing device 202 via the touch interface. In some implementations, the computing device 202 can lack a display device, thereby providing an audible user interface output, without providing a graphical user interface output. Furthermore, the computing device 202 can provide a user interface, such as a microphone, for receiving spoken natural language inputs from a user. In some implementations, the computing device 202 can include a touch interface and can be void of a camera, but can optionally include one or more other sensors.
[0021] The computing device 202 and/or other third party client devices can be in communication with a server device over a network, such as the internet. Additionally, the computing device 202 and any other computing devices can be in communication with each other over a local area network (LAN), such as a Wi-Fi network. The computing device 202 can offload computational tasks to the server device in order to conserve computational resources at the computing device 202. For instance, the server device can host the automated assistant 204, and/or computing device 202 can transmit inputs received at one or more assistant interfaces 220 to the server device. However, in some implementations, the automated assistant 204 can be hosted at the computing device 202, and various processes that can be associated with automated assistant operations can be performed at the computing device 202.
[0022] In various implementations, all or less than all aspects of the automated assistant 204 can be implemented on the computing device 202. In some of those implementations, aspects of the automated assistant 204 are implemented via the computing device 202 and can interface with a server device, which can implement other aspects of the automated assistant 204. The server device can optionally serve a plurality of users and their associated assistant applications via multiple threads. In implementations where all or less than all aspects of the automated assistant 204 are implemented via computing device 202, the automated assistant 204 can be an application that is separate from an operating system of the computing device 202 (e.g., installed “on top” of the operating system) - or can alternatively be implemented directly by the operating system of the computing device 202 (e.g., considered an application of, but integral with, the operating system).
[0023] In some implementations, the automated assistant 204 can include an input processing engine 206, which can employ multiple different modules for processing inputs and/or outputs for the computing device 202 and/or a server device. For instance, the input processing engine 206 can include a speech processing engine 208, which can process audio data received at an assistant interface 220 to identify the text embodied in the audio data. The audio data can be transmitted from, for example, the computing device 202 to the server device in order to preserve computational resources at the computing device 202. Additionally, or alternatively, the audio data can be exclusively processed at the computing device 202.
[0024] The process for converting the audio data to text can include a speech recognition algorithm, which can employ neural networks, and/or statistical models for identifying groups of audio data corresponding to words or phrases. The text converted from the audio data can be parsed by a data parsing engine 210 and made available to the automated assistant 204 as textual data that can be used to generate and/or identify command phrase(s), intent(s), action(s), slot value(s), and/or any other content specified by the user. In some implementations, output data provided by the data parsing engine 210 can be provided to a parameter engine 212 to determine whether the user provided an input that corresponds to a particular intent, action, and/or routine capable of being performed by the automated assistant 204 and/or an application or agent that is capable of being accessed via the automated assistant 204. For example, assistant data 238 can be stored at the server device and/or the computing device 202, and can include data that defines one or more actions capable of being performed by the automated assistant 204, as well as parameters necessary to perform the actions. The parameter engine 212 can generate one or more parameters for an intent, action, and/or slot value, and provide the one or more parameters to an output generating engine 214. The output generating engine 214 can use the one or more parameters to communicate with an assistant interface 220 for providing an output to a user, and/or communicate with one or more applications 234 for providing an output to one or more applications 234.
[0025] In some implementations, the automated assistant 204 can be an application that can be installed “on top of” an operating system of the computing device 202 and/or can itself form part of (or the entirety of) the operating system of the computing device 202. The automated assistant application includes, and/or has access to, on-device speech recognition, on-device natural language understanding, and on-device fulfillment. For example, on-device speech recognition can be performed using an on-device speech recognition module that processes audio data (detected by the microphone(s)) using an end-to-end speech recognition machine learning model stored locally at the computing device 202. The on-device speech recognition generates recognized text for a spoken utterance (if any) present in the audio data. Also, for example, on-device natural language understanding (NLU) can be performed using an on-device NLU module that processes recognized text, generated using the on-device speech recognition, and optionally contextual data, to generate NLU data.
[0026] NLU data can include intent(s) that correspond to the spoken utterance and optionally parameter(s) (e.g., slot values) for the intent(s). On-device fulfillment can be performed using an on-device fulfillment module that utilizes the NLU data (from the on-device NLU), and optionally other local data, to determine action(s) to take to resolve the intent(s) of the spoken utterance (and optionally the parameter(s) for the intent). This can include determining local and/or remote responses (e.g., answers) to the spoken utterance, interaction(s) with locally installed application(s) to perform based on the spoken utterance, command(s) to transmit to internet-of-things (IoT) device(s) (directly or via corresponding remote system(s)) based on the spoken utterance, and/or other resolution action(s) to perform based on the spoken utterance. The on-device fulfillment can then initiate local and/or remote performance/execution of the determined action(s) to resolve the spoken utterance.
[0027] In various implementations, remote speech processing, remote NLU, and/or remote fulfillment can at least selectively be utilized. For example, recognized text can at least selectively be transmitted to remote automated assistant component(s) for remote NLU and/or remote fulfillment. For instance, the recognized text can optionally be transmitted for remote performance in parallel with on-device performance, or responsive to failure of on-device NLU and/or on-device fulfillment. However, on-device speech processing, on-device NLU, on-device fulfillment, and/or on-device execution can be prioritized at least due to the latency reductions they provide when resolving a spoken utterance (due to no client-server roundtrip(s) being needed to resolve the spoken utterance). Further, on-device functionality can be the only functionality that is available in situations with no or limited network connectivity.
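The on-device-first strategy described above can be summarized in a short, hedged sketch; the helper functions stand in for the on-device and remote components and are not real APIs.

```python
# A simplified sketch of the on-device-first strategy: try local NLU and
# fulfillment, and fall back to remote components only on failure.

def resolve_utterance(text: str) -> str:
    try:
        nlu = on_device_nlu(text)          # may raise on failure
        return on_device_fulfillment(nlu)  # no client-server round trip
    except RuntimeError:
        return remote_fulfillment(text)    # used only when local fails

def on_device_nlu(text: str) -> dict:
    if "doorbell" not in text:
        raise RuntimeError("intent not recognized locally")
    return {"intent": "show_doorbell_feed"}

def on_device_fulfillment(nlu: dict) -> str:
    return f"executing {nlu['intent']} locally"

def remote_fulfillment(text: str) -> str:
    return "sent to remote assistant components"

print(resolve_utterance("show me the doorbell"))
print(resolve_utterance("what's the weather"))
```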
[0028] In some implementations, the computing device 202 can include one or more applications 234 which can be provided by a third-party entity that is different from an entity that provided the computing device 202 and/or the automated assistant 204. An application state engine of the automated assistant 204 and/or the computing device 202 can access application data 230 to determine one or more actions capable of being performed by one or more applications 234, as well as a state of each application of the one or more applications 234 and/or a state of a respective device that is associated with the computing device 202. A device state engine of the automated assistant 204 and/or the computing device 202 can access device data 232 to determine one or more actions capable of being performed by the computing device 202 and/or one or more devices that are associated with the computing device 202.
Furthermore, the application data 230 and/or any other data (e.g., device data 232) can be accessed by the automated assistant 204 to generate contextual data 236, which can characterize a context in which a particular application 234 and/or device is executing, and/or a context in which a particular user is accessing the computing device 202, accessing an application 234, and/or any other device or module.
[0029] While one or more applications 234 are executing at the computing device 202, the device data 232 can characterize a current operating state of each application 234 executing at the computing device 202. Furthermore, the application data 230 can characterize one or more features of an executing application 234, such as content of one or more graphical user interfaces being rendered at the direction of one or more applications 234. Alternatively, or additionally, the application data 230 can characterize an action schema, which can be updated by a respective application and/or by the automated assistant 204, based on a current operating status of the respective application. Alternatively, or additionally, one or more action schemas for one or more applications 234 can remain static, but can be accessed by the application state engine in order to determine a suitable action to initialize via the automated assistant 204.
[0030] The computing device 202 can further include an assistant invocation engine 222 that can use one or more trained machine learning models to process application data 230, device data 232, contextual data 236, and/or any other data that is accessible to the computing device 202. The assistant invocation engine 222 can process this data in order to determine whether or not to wait for a user to explicitly speak an invocation phrase to invoke the automated assistant 204, or consider the data to be indicative of an intent by the user to invoke the automated assistant — in lieu of requiring the user to explicitly speak the invocation phrase. For example, the one or more trained machine learning models can be trained using instances of training data that are based on scenarios in which the user is in an environment where multiple devices and/or applications are exhibiting various operating states. The instances of training data can be generated in order to capture training data that characterizes contexts in which the user invokes the automated assistant and other contexts in which the user does not invoke the automated assistant. When the one or more trained machine learning models are trained according to these instances of training data, the assistant invocation engine 222 can cause the automated assistant 204 to detect, or limit detecting, spoken invocation phrases from a user based on features of a context and/or an environment. In some implementations, the automated assistant 204 can be invoked when one or more other persons attempt to communicate with a user. During such an invocation, the automated assistant 204 can solicit the user to provide instructions for responding to the one or more other persons (e.g., when someone is knocking at a door of a home of the user).
[0031] In some implementations, the system 200 can include a priority threshold engine 216 that can generate one or more priority thresholds for determining whether or not to render a notification over content being viewed by a user. Alternatively, or additionally, the priority threshold engine 216 can determine whether the one or more priority thresholds have been satisfied for a particular notification and/or a particular portion of content. For example, the automated assistant 204 can employ an event prediction engine 226 for determining whether a predicted event will occur, and/or whether the predicted event will occur when a user is viewing content and/or otherwise consuming content. Based on this determination, the automated assistant 204 can process event data for determining whether the priority threshold has been satisfied. For instance, an event such as a food delivery person ringing a doorbell can satisfy a priority threshold for rendering a doorbell notification over content that a user is viewing. Alternatively, or additionally, an event such as a neighborhood kid ringing a doorbell immediately after school is let out may not satisfy the priority threshold. In some implementations, the priority threshold can be established using one or more heuristic processes and/or one or more trained machine learning models. For instance, prior interactions between the user and certain notifications can provide a basis for establishing a stricter priority threshold for a user that does not typically respond to those certain notifications.
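One plausible, simplified reading of the priority threshold engine 216 is sketched below; the event types, scores, weights, and dismissal-rate adjustment are illustrative assumptions, not disclosed values.

```python
# A heuristic sketch of the priority check described above.
PRIORITY = {"delivery": 0.8, "scheduled_visit": 0.7, "unknown_ring": 0.2}

def satisfies_priority_threshold(event_type: str,
                                 linked_to_user_order: bool,
                                 dismissal_rate: float) -> bool:
    """Render an interrupting notification only for sufficiently important
    events; frequent past dismissals raise the effective threshold."""
    score = PRIORITY.get(event_type, 0.1)
    if linked_to_user_order:
        score += 0.2
    threshold = 0.5 + 0.3 * dismissal_rate  # stricter for often-ignored users
    return score >= threshold

print(satisfies_priority_threshold("delivery", True, 0.1))       # True
print(satisfies_priority_threshold("unknown_ring", False, 0.6))  # False
```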
[0032] When an event is predicted to occur when a user is predicted to be viewing content (e.g., a calendar entry indicates that the user will turn on the television to watch a show in 2 hours), an event duration engine 218 can be utilized to predict a range of time within which the event may occur. For instance, a delivery application can store application data that indicates a range of time that a user can expect to receive an order delivery. The event duration engine 218 can utilize this application data to establish the range of time (relative to when the user is predicted to be viewing the content) that the event (e.g., the order delivery) may occur. A notification content engine 224 of the system can utilize this determined range of time for generating content that will embody the notification, and how often and/or how long to render the notification.
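The event duration engine 218 might derive its range of time as in the following hedged sketch, where the per-type widths are invented for illustration (the disclosure states only that the range can depend on a type classification of the event).

```python
from datetime import datetime, timedelta

# Illustrative range widths keyed by an assumed type classification.
RANGE_WIDTH = {
    "first_party_reminder": timedelta(minutes=2),
    "third_party_delivery": timedelta(minutes=15),
}

def event_time_range(eta: datetime, event_type: str):
    """Center a type-dependent window on the estimated event time."""
    half = RANGE_WIDTH.get(event_type, timedelta(minutes=10)) / 2
    return eta - half, eta + half

start, end = event_time_range(datetime(2022, 5, 1, 14, 30),
                              "third_party_delivery")
print(start.time(), "-", end.time())  # 14:22:30 - 14:37:30
```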
[0033] In some implementations, the notification content engine 224 can generate a notification for a predicted event using contextual data 236, application data 230, and/or any other data that can be accessed by the automated assistant 204. For example, image data stored in association with a particular application 234 and/or device can be utilized, with prior permission from the user, by the notification content engine 224 for determining one or more images that can be rendered with a predicted event notification. For instance, a predicted event such as an order delivery can have a corresponding notification that includes a logo for an order delivery application, a sample image of an item that was ordered, and/or text characterizing the order. In some implementations, the notification content engine 224 can render the notification at a scrubber UI with a portion of the notification indicating that the predicted event may occur at an instance in time and/or within a range of time in the future. In this way, when the user is viewing certain content, they can be notified that a predicted event may interrupt their content, and that they can view information (e.g., images, text, video, etc.) that can provide context for the predicted event.
[0034] FIG. 3 illustrates a method 300 for rendering a notification to put a user on notice of a context in which an event may occur. The method 300 can be performed by one or more devices, applications, and/or any other apparatus or module capable of providing notifications to a user. The method 300 can include an operation 302 of determining whether content is being rendered at a device interface. The device interface can be, for example, a display interface of a computing device, such as a smart television, and the content can be, for example, a streaming television series. When content is determined to be rendered at the device interface, the method 300 can proceed from the operation 302 to an operation 304. Otherwise, the method 300 may not proceed until a computing device is rendering content.
[0035] The operation 304 includes determining whether an event has occurred, or is predicted to occur, during a rendering duration of the content. The rendering duration can be an amount of time that the content is rendered at the device interface. For example, the television series can be rendered for an amount of time that corresponds to a full episode for the television series. Alternatively, or additionally, the content can be a live stream that may not have a finite duration, but may nonetheless have a duration during which the content will be rendered. For example, a live stream can be rendered with a scrubber UI that shows an amount of time in the future and in the past, and a selectable element that allows a user to view different portions of the content at different portions of the live stream. The event that occurs, or is predicted to occur, can be an event that can potentially interrupt the user viewing the content. In some implementations, the event can be associated with one or more other persons that may attempt to communicate with the user via a separate computing device (e.g., separate from the computing device with the display interface rendering the content). For example, the event can include a person ringing a doorbell that is attached to a home of the user. This event can cause an interruption to the streaming of the content, especially when the doorbell can communicate with the computing device over a home network (e.g., a home Wi-Fi network).
[0036] The method 300 can proceed from the operation 304 to an operation 306 when an event has occurred or is predicted to occur. Otherwise, when no event is predicted to occur, or has occurred, the method 300 can return to the operation 302 for determining whether content is being rendered at the device interface. The operation 306 can include determining whether the predicted event is associated with a particular application and/or instance of data. For example, the particular application can be a delivery application that can be utilized by the user to place an order that can be delivered to the home of the user. Application data can characterize the order, thereby allowing an automated assistant, or other application, to process the application data to determine when the delivery is estimated to arrive at the home of the user. Therefore, when the doorbell rings within the estimated time of arrival for the delivery, the event of the doorbell ringing can be considered associated with the particular application and/or instance of application data.
[0037] When the event is predicted to be associated with the particular application, the method 300 can proceed from the operation 306 to an operation 310. Otherwise, the method 300 can proceed from the operation 306 to an operation 308. The operation 308 can include generating a notification for the event based at least on the predicted event. For example, when the event is a doorbell ring that is not determined to be associated with an application, notification data can be generated to characterize the doorbell ring. For instance, when the doorbell device has a camera, an image captured by the camera can be utilized when generating the notification. The notification can therefore include a preview and/or depiction of a person who rang the doorbell. The method 300 can then proceed from the operation 308 to an operation 312 for causing the notification to be rendered with or without an indication of the associated application.
[0038] When the event is predicted to be associated with a particular application and/or device, the method 300 can proceed from the operation 306 to an operation 310. The operation 310 can include generating a notification for the event based on the particular application and/or device. For example, when the event occurs at a time, or within a time range, associated with a delivery ordered by the user via a delivery application, the notification can be generated with an identifier for the delivery application and/or any other content associated with the order. In this way, the user can be put on notice of what the purpose of the event is, rather than having no context for the event. The method 300 can then proceed from the operation 310 to an operation 312, which can include causing the notification to be rendered with or without an indication of the application and/or instance of relevant data.
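Operations 306 through 312 can be condensed into the following hedged Python sketch; the data shapes and the build_notification helper are illustrative assumptions, not the disclosed implementation.

```python
from typing import Optional

def build_notification(event: dict, app_record: Optional[dict]) -> dict:
    """Choose between a generic notification and one particularized with
    application data (operations 306, 308, and 310)."""
    if app_record is not None:                       # operation 310
        return {"text": f"{app_record['app_name']}: {app_record['summary']}",
                "icon": app_record["icon"]}
    notification = {"text": "Doorbell ring"}         # operation 308
    if event.get("camera_image"):
        notification["image"] = event["camera_image"]
    return notification

ring = {"type": "doorbell", "camera_image": "frame_001.jpg"}
order = {"app_name": "Delivery App", "summary": "Order arriving now",
         "icon": "delivery_icon.png"}
print(build_notification(ring, order))  # rendered per operation 312
print(build_notification(ring, None))
```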
[0039] The notification can be rendered at a scrubber UI (e.g., a timeline GUI element), to indicate an approximate time and/or range of time that the event is predicted to occur. Alternatively, or additionally, when the event has occurred and/or is currently occurring, the notification can be rendered at the scrubber UI at a location that indicates a relative amount of time from the beginning of the content (e.g., since the user selected to “play” the content) and/or to an end of the content (e.g., the end of a movie). When the event is predicted to occur in the future, the notification can be rendered over a current display frame of the content, and can include graphics that are based on the application and/or a type of event. For instance, when the predicted event is a doorbell ringing for a delivery initialized via a delivery application, the notification can extend from a point on the scrubber UI and can include graphics depicting a person ringing a doorbell and/or a logo for the delivery application.
[0040] The method 300 can proceed from the operation 312 to an optional operation 314, which can include causing the notification to be rendered at a particular, temporal location on the scrubber UI of the content. For example, the notification can be rendered at the scrubber UI at a point in time (e.g., on a timeline for the scrubber UI) and/or a range of time that is temporally subsequent to a current time that content is being rendered at the display interface. In this way, the user can have a visual estimation of an amount of time until the event will occur. In some implementations, the notification can be rendered with an indication of a particular user that the notification and/or event may be associated with. For instance, when an entire family is watching a movie and a predicted event is expected to occur (e.g., a phone call from a school), the notification can be rendered prior to an estimated time for the predicted event, and can be rendered with an indication of the particular user that is associated with the event (e.g., a university student expecting a call from a professor) and/or an application that may have been utilized to initialize the event (e.g., a homework help application for a particular university). Thereafter, the method 300 can return to the operation 302 for determining whether another event has occurred or is predicted to occur.
[0041] FIG. 4 is a block diagram 400 of an example computer system 410. Computer system 410 typically includes at least one processor 414 which communicates with a number of peripheral devices via bus subsystem 412. These peripheral devices may include a storage subsystem 424, including, for example, a memory 425 and a file storage subsystem 426, user interface output devices 420, user interface input devices 422, and a network interface subsystem 416. The input and output devices allow user interaction with computer system 410. Network interface subsystem 416 provides an interface to outside networks and is coupled to corresponding interface devices in other computer systems.
[0042] User interface input devices 422 may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touchscreen incorporated into the display, audio input devices such as voice recognition systems, microphones, and/or other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and ways to input information into computer system 410 or onto a communication network.
[0043] User interface output devices 420 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem may also provide non-visual display such as via audio output devices. In general, use of the term “output device” is intended to include all possible types of devices and ways to output information from computer system 410 to the user or to another machine or computer system.
[0044] Storage subsystem 424 stores programming and data constructs that provide the functionality of some or all of the modules described herein. For example, the storage subsystem 424 may include the logic to perform selected aspects of method 300, and/or to implement one or more of system 200, computing device 104, automated assistant, and/or any other application, device, apparatus, and/or module discussed herein.
[0045] These software modules are generally executed by processor 414 alone or in combination with other processors. Memory 425 used in the storage subsystem 424 can include a number of memories including a main random access memory (RAM) 430 for storage of instructions and data during program execution and a read only memory (ROM) 432 in which fixed instructions are stored. A file storage subsystem 426 can provide persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations may be stored by file storage subsystem 426 in the storage subsystem 424, or in other machines accessible by the processor(s) 414.
[0046] Bus subsystem 412 provides a mechanism for letting the various components and subsystems of computer system 410 communicate with each other as intended. Although bus subsystem 412 is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple busses.

[0047] Computer system 410 can be of varying types including a workstation, server, computing cluster, blade server, server farm, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computer system 410 depicted in FIG. 4 is intended only as a specific example for purposes of illustrating some implementations. Many other configurations of computer system 410 are possible having more or fewer components than the computer system depicted in FIG. 4.
[0048] In situations in which the systems described herein collect personal information about users (or as often referred to herein, “participants”), or may make use of personal information, the users may be provided with an opportunity to control whether programs or features collect user information (e.g., information about a user’s social network, social actions or activities, profession, a user’s preferences, or a user’s current geographic location), or to control whether and/or how to receive content from the content server that may be more relevant to the user. Also, certain data may be treated in one or more ways before it is stored or used, so that personal identifiable information is removed. For example, a user’s identity may be treated so that no personal identifiable information can be determined for the user, or a user’s geographic location may be generalized where geographic location information is obtained (such as to a city, ZIP code, or state level), so that a particular geographic location of a user cannot be determined. Thus, the user may have control over how information is collected about the user and/or used.
[0049] While several implementations have been described and illustrated herein, a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of such variations and/or modifications is deemed to be within the scope of the implementations described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary, and the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific implementations described herein. It is, therefore, to be understood that the foregoing implementations are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, implementations may be practiced otherwise than as specifically described and claimed. Implementations of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.
[0050] In some implementations, a method implemented by one or more processors is provided and includes determining that content, that has a rendering duration, is being rendered at a display interface of a computing device that is located within an environment. The method further includes determining, in response to content being rendered at the display interface, whether a user has interacted with an application in furtherance of causing a subsequent event to occur during the rendering duration. The method further includes, when the user has interacted with the application to cause the subsequent event to occur during the rendering duration: generating a predicted time for the subsequent event to occur during the rendering duration, and causing, prior to the predicted time for the subsequent event to occur, a notification to be rendered at the display interface of the computing device. The notification indicates the predicted time, within the rendering duration, for the subsequent event to occur.
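The overall flow of this method can be sketched as follows. Every interface here (the interaction records, the predictor, the eligibility check) is a stand-in chosen for illustration; the claims do not prescribe these data structures.

```python
import time

def predict_event_offset(interaction: dict, playback_start_ts: float) -> float:
    """Stand-in predictor: offset of the expected event from playback start."""
    return interaction["expected_event_ts"] - playback_start_ts

def notices_for(content_duration_s: float, interactions: list) -> list:
    playback_start_ts = time.time()
    notices = []
    for interaction in interactions:
        offset = predict_event_offset(interaction, playback_start_ts)
        # Only events predicted to land inside the rendering duration qualify.
        if 0 < offset < content_duration_s:
            notices.append({"app": interaction["app"], "offset_s": offset})
    return notices

notices = notices_for(
    content_duration_s=2 * 60 * 60,
    interactions=[{"app": "DeliveryApp",
                   "expected_event_ts": time.time() + 40 * 60}],
)
```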
[0051] These and other implementations of the technology disclosed herein can include one or more of the following features.
[0052] In some implementations, causing the notification to be rendered at the display interface of the computing device prior to the predicted time for the subsequent event to occur includes: causing the notification to be rendered with an identifier for the application that the user interacted with to cause the subsequent event to occur. In some implementations, causing the notification to be rendered at the display interface of the computing device prior to the predicted time for the subsequent event to occur includes: causing the notification to be rendered with an image that is captured by an additional computing device that is different than the computing device. In some implementations, the subsequent event involves one or more persons attempting to communicate with the user during the rendering duration, and the predicted time is based on whether the user had adjusted playback of the content during the rendering duration.
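The note in the preceding paragraph that the predicted time can be based on playback adjustments admits a simple interpretation: a wall-clock event lands later on the content timeline when the viewer has paused. A minimal sketch under that assumption:

```python
def in_content_offset(event_wallclock_ts: float, playback_start_ts: float,
                      total_paused_s: float) -> float:
    """Offset on the content timeline at which the event will land."""
    elapsed_wallclock_s = event_wallclock_ts - playback_start_ts
    return elapsed_wallclock_s - total_paused_s

# An event 45 minutes after playback began lands at the 40-minute mark on the
# timeline if the viewer paused for a total of 5 minutes.
in_content_offset(45 * 60, 0.0, 5 * 60)  # -> 2400.0
```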
[0053] In some implementations, determining whether the user has interacted with the application in furtherance of causing the subsequent event to occur includes: determining whether the subsequent event satisfies a priority threshold for rendering the notification over the content being rendered at the display interface. In some implementations, the predicted time indicated by the notification is a range of time, within the rendering duration, during which the event is predicted to occur. In some implementations, the range of time is based on a type of event that corresponds to the subsequent event that is predicted to occur. In some implementations, when the type of event is a user-created event, the range of time is shorter than when the type of event is a third party event. In some implementations, causing the notification to be rendered at the display interface of the computing device includes: causing the notification to be rendered at a location on a timeline. In some of those implementations, the timeline corresponds to the duration of time for the rendering of the content, and the location on the timeline corresponds to an instance of time subsequent to a current instance of time during which a portion of the content is being rendered. In some implementations, causing the notification to be rendered at the display interface of the computing device includes: causing the notification to be rendered at a location on a timeline, wherein the timeline corresponds to the duration of time for the rendering of the content. In some of those implementations, the location on the timeline corresponds to a range of time subsequent to a current instance of time during which a portion of the content is being rendered.
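One plausible way to realize the type-dependent range of time described above, with a narrower window for user-created events than for third-party events, is sketched below; the specific window widths are invented.

```python
def predicted_range(center_offset_s: float, event_type: str) -> tuple:
    # Assumed widths: a user-created event (e.g., a reminder the user set) is
    # predictable to minutes; a third-party event (e.g., a delivery) is not.
    half_width_s = 2 * 60 if event_type == "user_created" else 10 * 60
    return (max(center_offset_s - half_width_s, 0.0),
            center_offset_s + half_width_s)

predicted_range(40 * 60, "user_created")  # -> (2280.0, 2520.0)
predicted_range(40 * 60, "third_party")   # -> (1800.0, 3000.0)
```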
[0054] In some implementations, a method implemented by one or more processors is provided and includes determining, at a computing device, that one or more persons are attempting to contact a user via one or more computing devices within an environment of the user. The method further includes determining, in response to the one or more persons attempting to contact the user, whether a context in which the one or more persons are attempting to contact the user is associated with a prior interaction between the user and an application. The method further includes, when the context is associated with the prior interaction between the user and the application: generating notification data that characterizes content that is based on the prior interaction between the user and the application, and causing, based on the notification data, a notification to be rendered for the user with the content at a particular computing device.
[0055] These and other implementations of the technology disclosed herein can include one or more of the following features.
[0056] In some implementations, the one or more computing devices include a doorbell computing device and the content includes an identifier for the application. In some implementations, determining whether the context in which the one or more persons are attempting to contact the user is associated with the prior interaction between the user and the application includes: determining that the prior interaction resulted in event data that identifies the one or more persons as participants in an event characterized by the event data. In some implementations, determining whether the context in which the one or more persons are attempting to contact the user is associated with the prior interaction between the user and the application includes: determining that the prior interaction resulted in event data that identifies a particular time that corresponds to a current time in which the one or more persons are attempting to contact the user.
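Both context checks described in this paragraph, participant identity and temporal proximity, can be combined in a single predicate. The structures and the 15-minute tolerance below are assumptions for illustration:

```python
def context_matches(event_data: dict, person: str, attempt_ts: float,
                    tolerance_s: float = 15 * 60) -> bool:
    # Match on identity: the contacting person was named as a participant in
    # the event that resulted from the prior interaction.
    if person in event_data.get("participants", []):
        return True
    # Match on time: the contact attempt falls near the event's scheduled time.
    scheduled = event_data.get("scheduled_ts")
    return scheduled is not None and abs(attempt_ts - scheduled) <= tolerance_s

event_data = {"participants": ["courier_42"], "scheduled_ts": 1_700_000_000.0}
context_matches(event_data, "courier_42", 1_700_000_500.0)  # -> True
```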
[0057] In some implementations, a method implemented by one or more processors is provided and includes determining that an event is predicted to occur during a subsequent duration of time when content is streaming via an interface of a computing device. The content is rendered by an application that is different than an additional application associated with the event that is predicted to occur. The method further includes determining, based on the event, whether the event satisfies a priority threshold for rendering a notification for the event during the subsequent duration of time. The priority threshold is based on contextual data that is available via one or more different computing devices and indicates a relative importance of the event. The method further includes, when the event is determined to satisfy the priority threshold for rendering the notification for the event: generating notification data that identifies the additional application associated with the event, and causing, based on the notification data, the notification to be rendered during the subsequent duration of time when the content is streaming at the interface of the computing device. The notification is rendered via the interface of the computing device and indicates the additional application associated with the event.
[0058] In some implementations, the priority threshold is based on contextual data that is available via one or more different computing devices and indicates the relative importance of the event. In some implementations, the contextual data includes an identifier for the user that is viewing the content streaming at the interface of the computing device. In some implementations, the contextual data characterizes a type of event that is predicted to occur during the subsequent duration of time. In some implementations, determining that the event is predicted to occur is performed when prior content is being rendered by the application at the interface of the computing device. In some implementations, causing the notification to be rendered during the subsequent duration of time when the content is streaming at the interface of the computing device includes: causing the notification to be rendered simultaneous to a scrubber user interface (UI) being rendered at the interface of the computing device.
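A priority threshold driven by contextual data could be realized as a simple score, as sketched below; the signals and weights are invented for illustration and are not taken from the disclosure.

```python
def satisfies_priority(event: dict, context: dict, threshold: float = 0.5) -> bool:
    score = 0.0
    if context.get("viewer_id") == event.get("target_user"):
        score += 0.4   # the event concerns the person actually watching
    if event.get("type") in ("delivery", "incoming_call"):
        score += 0.3   # time-sensitive event types weigh more
    if context.get("other_viewers_present"):
        score -= 0.1   # interrupting a group viewing costs more
    return score >= threshold

satisfies_priority({"target_user": "u1", "type": "delivery"},
                   {"viewer_id": "u1", "other_viewers_present": True})  # -> True
```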
[0059] In some implementations, a method implemented by processor(s) is provided and includes determining, based on a prior user interaction with an application, a predicted time for an application event to occur. The application event corresponds to the prior user interaction with the application. The method further includes receiving, from an Internet of Things (IoT) device and during rendering of streaming content at a display interface of a computing device, an indication that an IoT event has occurred. The method further includes determining that the IoT event corresponds to the application event and, in response to determining that the IoT event corresponds to the application event, causing a notification to be rendered, at the computing device and during the rendering of the streaming content, that includes application content that is based on the prior user interaction with the application.
[0060] These and other implementations of the technology disclosed herein can include one or more of the following features.
[0061] In some implementations, determining that the IoT event corresponds to the application event is based on determining that the indication, that the IoT event has occurred, corresponds to the predicted time. In some of those implementations, the predicted time is a range of time during which the application event is predicted to occur, and determining that the indication, that the IoT event has occurred, corresponds to the predicted time comprises determining that the indication that the IoT event has occurred was received within the range of time.
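The correspondence test in these implementations reduces to an interval check: the IoT indication's timestamp must fall within the predicted range. A minimal sketch, with times assumed to be offsets in seconds on the content timeline:

```python
def iot_event_matches(iot_ts: float, predicted_range_s: tuple) -> bool:
    start, end = predicted_range_s
    return start <= iot_ts <= end

iot_event_matches(2400.0, (2280.0, 2520.0))  # -> True
```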
[0062] In some implementations, the IoT device is a smart doorbell and/or the application is a food delivery application. In some versions of those implementations, causing the notification to be rendered during the rendering of the streaming content includes causing the notification to be rendered along with an image captured by the smart doorbell. In some of those versions, the method further includes determining the range of time based on a type classification of the application event.
[0063] In some implementations, determining the predicted time is based on application data that is provided by the application and that is based on the prior user interaction.

[0064] In some implementations, causing the notification to be rendered during the rendering of the streaming content includes causing the notification to be rendered in a scrubber user interface of the streaming application.
[0065] In some implementations, the application content includes an icon that corresponds to the application.
[0066] In some implementations, the application event is a delivery and the application content includes product content characterizing a product to be delivered.
[0067] In some implementations, the method further includes causing, prior to causing the notification to be rendered and prior to receiving the indication that the IoT event has occurred, a predictive notification to be rendered at the computing device during the rendering of the streaming content. The predictive notification can include the application content and is rendered, within a scrubber user interface of the streaming application, at a relative time, within the scrubber user interface, that corresponds to the predicted time.
[0068] In some implementations, causing the notification to be rendered at the computing device and during the rendering of the streaming content is further in response to determining that the application event satisfies a priority threshold.
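The preceding two paragraphs together suggest a two-phase lifecycle: a predictive marker rendered at the predicted time, upgraded to a confirmed notification once the IoT indication arrives within the range and the priority threshold is met. A sketch of the upgrade step, with invented field names:

```python
def on_iot_indication(marker: dict, iot_ts: float, priority_ok: bool) -> dict:
    start, end = marker["range_s"]
    if start <= iot_ts <= end and priority_ok:
        # e.g., swap in a frame captured by the smart doorbell.
        return {**marker, "state": "confirmed", "image": "doorbell_frame.jpg"}
    return marker

marker = {"range_s": (2280.0, 2520.0), "state": "predictive", "app": "DeliveryApp"}
marker = on_iot_indication(marker, iot_ts=2400.0, priority_ok=True)
```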
[0069] In some implementations, a method implemented by processor(s) is provided and includes determining, based on data from an application that is in addition to an active application rendering streaming content at a computing device, that a future event is predicted to occur during a subsequent duration of time. The method further includes determining, based on the data, whether the future event satisfies a priority threshold for rendering a notification, for the future event, during the subsequent duration of time. The method further includes, in response to determining that the future event satisfies the priority threshold for rendering the notification, causing a notification of the future event to be rendered, at the computing device and during rendering of the streaming content at the computing device, but before actual occurrence of the future event. The notification characterizes the application based on which the future event is determined.

[0070] These and other implementations of the technology disclosed herein can include one or more of the following features.
[0071] In some implementations, the priority threshold is based on contextual data that is available via one or more additional computing devices and indicates a relative importance of the event. In some of those implementations, the contextual data includes an identifier for the user that is viewing the streaming content at the computing device and/or the contextual data characterizes a type classification for the future event.
[0072] In some implementations, determining that the event is predicted to occur is performed when prior content is being rendered by the application at the interface of the computing device.
[0073] In some implementations, causing the notification of the future event to be rendered includes causing the notification to be rendered simultaneous to a scrubber user interface being rendered by the active application at the interface of the computing device. In some of those implementations, causing the notification of the future event to be rendered includes causing the notification to be rendered within the scrubber user interface at a relative time, within the scrubber user interface, that corresponds to the subsequent duration of time of the future event.

Claims

We claim:
1. A method implemented by one or more processors, the method comprising:
determining that content is being rendered at a display interface of a computing device that is located within an environment, wherein the content has a rendering duration;
determining, in response to content being rendered at the display interface, whether a user has interacted with an application in furtherance of causing a subsequent event to occur during the rendering duration; and
when the user has interacted with the application to cause the subsequent event to occur during the rendering duration:
    generating a predicted time for the subsequent event to occur during the rendering duration, and
    causing, prior to the predicted time for the subsequent event to occur, a notification to be rendered at the display interface of the computing device, wherein the notification indicates the predicted time, within the rendering duration, for the subsequent event to occur.
2. The method of claim 1, wherein causing the notification to be rendered at the display interface of the computing device prior to the predicted time for the subsequent event to occur includes: causing the notification to be rendered with an identifier for the application that the user interacted with to cause the subsequent event to occur.
3. The method of claim 1 or claim 2, wherein causing the notification to be rendered at the display interface of the computing device prior to the predicted time for the subsequent event to occur includes: causing the notification to be rendered with an image that is captured by an additional computing device that is different than the computing device.
4. The method of any preceding claim, wherein the subsequent event involves one or more persons attempting to communicate with the user during the rendering duration, and wherein the predicted time is based on whether the user had adjusted playback of the content during the rendering duration.
5. The method of any preceding claim, wherein determining whether the user has interacted with the application in furtherance of causing the subsequent event to occur includes: determining whether the subsequent event satisfies a priority threshold for rendering the notification over the content being rendered at the display interface.
6. The method of any preceding claim, wherein the predicted time indicated by the notification is a range of time, within the rendering duration, during which the event is predicted to occur.
7. The method of claim 6, wherein the range of time is based on a type of event that corresponds to the subsequent event that is predicted to occur.
8. The method of claim 7, wherein, when the type of event is a user-created event, the range of time is shorter than when the type of event is a third party event.
9. The method of any preceding claim, wherein causing the notification to be rendered at the display interface of the computing device includes: causing the notification to be rendered at a location on a timeline, wherein the timeline corresponds to the duration of time for the rendering of the content, and the location on the timeline corresponds to an instance of time subsequent to a current instance of time during which a portion of the content is being rendered.
10. The method of any of claims 1 to 8, wherein causing the notification to be rendered at the display interface of the computing device includes: causing the notification to be rendered at a location on a timeline, wherein the timeline corresponds to the duration of time for the rendering of the content, and the location on the timeline corresponds to a range of time subsequent to a current instance of time during which a portion of the content is being rendered.
11. A method implemented by one or more processors, the method comprising:
determining that one or more persons are attempting to contact a user via one or more computing devices within an environment of the user;
determining, in response to the one or more persons attempting to contact the user, whether a context in which the one or more persons are attempting to contact the user is associated with a prior interaction between the user and an application; and
when the context is associated with the prior interaction between the user and the application:
    generating notification data that characterizes content that is based on the prior interaction between the user and the application, and
    causing, based on the notification data, a notification to be rendered for the user with the content at a particular computing device.
12. The method of claim 11, wherein the one or more computing devices include a doorbell computing device and the content includes an identifier for the application.
13. The method of claim 11 or claim 12, wherein determining whether the context in which the one or more persons are attempting to contact the user is associated with the prior interaction between the user and the application includes: determining that the prior interaction resulted in event data that identifies the one or more persons as participants in an event characterized by the event data.
14. The method of claim 11 or claim 12, wherein determining whether the context in which the one or more persons are attempting to contact the user is associated with the prior interaction between the user and the application includes: determining that the prior interaction resulted in event data that identifies a particular time that corresponds to a current time in which the one or more persons are attempting to contact the user.
15. A method implemented by one or more processors, the method comprising:
determining that an event is predicted to occur during a subsequent duration of time when content is streaming via an interface of a computing device, wherein the content is rendered by an application that is different than an additional application associated with the event that is predicted to occur;
determining, based on the event, whether the event satisfies a priority threshold for rendering a notification for the event during the subsequent duration of time, wherein the priority threshold is based on contextual data that is available via one or more different computing devices and indicates a relative importance of the event; and
when the event is determined to satisfy the priority threshold for rendering the notification for the event:
    generating notification data that identifies the additional application associated with the event, and
    causing, based on the notification data, the notification to be rendered during the subsequent duration of time when the content is streaming at the interface of the computing device, wherein the notification is rendered via the interface of the computing device and indicates the additional application associated with the event.
16. The method of claim 15, wherein the priority threshold is based on contextual data that is available via one or more different computing devices and indicates the relative importance of the event.
17. The method of claim 16, wherein the contextual data includes an identifier for the user that is viewing the content streaming at the interface of the computing device.
18. The method of claim 15 or claim 16, wherein the contextual data characterizes a type of event that is predicted to occur during the subsequent duration of time.
19. The method of any of claims 15 to 18, wherein determining that the event is predicted to occur is performed when prior content is being rendered by the application at the interface of the computing device.
20. The method of claim 19, wherein causing the notification to be rendered during the subsequent duration of time when the content is streaming at the interface of the computing device includes: causing the notification to be rendered simultaneous to a scrubber user interface (UI) being rendered at the interface of the computing device.
21. A method implemented by one or more processors, the method comprising:
determining, based on a prior user interaction with an application, a predicted time for an application event to occur, wherein the application event corresponds to the prior user interaction with the application;
receiving, from an Internet of Things (IoT) device and during rendering of streaming content at a display interface of a computing device, an indication that an IoT event has occurred;
determining that the IoT event corresponds to the application event, wherein determining that the IoT event corresponds to the application event is based on determining that the indication, that the IoT event has occurred, corresponds to the predicted time; and
in response to determining that the IoT event corresponds to the application event:
    causing a notification to be rendered, at the computing device and during the rendering of the streaming content, that includes application content that is based on the prior user interaction with the application.
22. The method of claim 21, wherein the IoT device is a smart doorbell.
23. The method of claim 21 or 22, wherein the application is a food delivery application.
24. The method of any of claims 21 to 23, wherein determining the predicted time is based on application data that is provided by the application and that is based on the prior user interaction.
25. The method of any of claims 21 to 24, wherein causing the notification to be rendered during the rendering of the streaming content comprises causing the notification to be rendered in a scrubber user interface of the streaming application.
26. The method of any of claims 22 to 25, wherein causing the notification to be rendered during the rendering of the streaming content comprises causing the notification to be rendered along with an image captured by the smart doorbell.
27. The method of any of claims 21 to 26, wherein the application content includes an icon that corresponds to the application.
28. The method of any of claims 21 to 27, wherein the application event is a delivery and wherein the application content includes product content characterizing a product to be delivered.
29. The method of any of claims 21 to 28, further comprising: causing, prior to causing the notification to be rendered and prior to receiving the indication that the IoT event has occurred, a predictive notification to be rendered at the computing device during the rendering of the streaming content; wherein the predictive notification includes the application content and is rendered, within a scrubber user interface of the streaming application, at a relative time, within the scrubber user interface, that corresponds to the predicted time.
30. The method of any of claims 21 to 29, wherein causing the notification to be rendered at the computing device and during the rendering of the streaming content is further in response to: determining that the application event satisfies a priority threshold.
31. The method of any of claims 21 to 30, wherein the predicted time is a range of time during which the application event is predicted to occur and wherein determining that the indication, that the IoT event has occurred, corresponds to the predicted time, comprises determining that the indication that the IoT event has occurred was received within the range of time.
32. The method of claim 31, further comprising determining the range of time based on a type classification of the application event.
33. A method implemented by one or more processors, the method comprising:
determining, based on data from an application that is in addition to an active application rendering streaming content at a computing device, that a future event is predicted to occur during a subsequent duration of time;
determining, based on the data, whether the future event satisfies a priority threshold for rendering a notification, for the future event, during the subsequent duration of time; and
in response to determining that the future event satisfies the priority threshold for rendering the notification:
    causing a notification of the future event to be rendered, at the computing device and during rendering of the streaming content at the computing device, but before actual occurrence of the future event, wherein the notification characterizes the application based on which the future event is determined.
34. The method of claim 33, wherein the priority threshold is based on contextual data that is available via one or more additional computing devices and indicates a relative importance of the event.
35. The method of claim 34, wherein the contextual data includes an identifier for the user that is viewing the streaming content at the computing device.
36. The method of claim 34 or claim 35, wherein the contextual data characterizes a type classification for the future event.
37. The method of any of claims 33 to 36, wherein determining that the event is predicted to occur is performed when prior content is being rendered by the application at the interface of the computing device.
38. The method of any of claims 33 to 37, wherein causing the notification of the future event to be rendered includes: causing the notification to be rendered simultaneous to a scrubber user interface being rendered by the active application at the interface of the computing device.
39. The method of claim 38, wherein causing the notification of the future event to be rendered includes: causing the notification to be rendered within the scrubber user interface at a relative time, within the scrubber user interface, that corresponds to the subsequent duration of time of the future event.
40. A system comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations of any one of claims 1 to 39.
41. A non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations of any one of claims 1 to 39.
PCT/US2022/049444 2021-12-17 2022-11-09 Generating notifications that provide context for predicted content interruptions WO2023113945A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202163290723P 2021-12-17 2021-12-17
US63/290,723 2021-12-17
US17/572,433 US11523190B1 (en) 2021-12-17 2022-01-10 Generating notifications that provide context for predicted content interruptions
US17/572,433 2022-01-10

Publications (1)

Publication Number Publication Date
WO2023113945A1 (en)

Family

ID=84820223

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/049444 WO2023113945A1 (en) 2021-12-17 2022-11-09 Generating notifications that provide context for predicted content interruptions

Country Status (1)

Country Link
WO (1) WO2023113945A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160127765A1 (en) * 2014-10-31 2016-05-05 Echostar Uk Holdings Limited Pausing playback of media content based on user presence
US20180310045A1 (en) * 2017-04-19 2018-10-25 Rovi Guides, Inc. Systems and methods for modifying playback of a media asset based on an event that will interrupt playback of the media asset
US20200037038A1 (en) * 2018-07-25 2020-01-30 Rovi Guides, Inc. Systems and methods for displaying notifications of anticipated dynamic events on a media progress indicator

Similar Documents

Publication Publication Date Title
US11727930B2 (en) Pre-emptively initializing an automated assistant routine and/or dismissing a scheduled alarm
US20210089125A1 (en) Invoking automated assistant function(s) based on detected gesture and gaze
US11741959B2 (en) Isolating a device, from multiple devices in an environment, for being responsive to spoken assistant invocation(s)
US11664028B2 (en) Performing subtask(s) for a predicted action in response to a separate user interaction with an automated assistant prior to performance of the predicted action
JP2022119878A (en) Generating and/or adapting contents of automated assistant according to distance between user and automated assistant interface
US11830491B2 (en) Determining whether to automatically resume first automated assistant session upon cessation of interrupting second session
US11238868B2 (en) Initializing non-assistant background actions, via an automated assistant, while accessing a non-assistant application
US20240144927A1 (en) Mitigation of client device latency in rendering of remotely generated automated assistant content
US11032418B2 (en) Generating and/or prioritizing pre-call content for rendering when awaiting acceptance of an incoming call
US20240040037A1 (en) Generating and/or prioritizing pre-call content for rendering when awaiting acceptance of an incoming call
US20240064110A1 (en) Selective delaying of provisioning, to assistant device(s), assistant data that is locally utilizable by a corresponding local assistant client
US11882339B2 (en) Generating notifications that provide context for predicted content interruptions
WO2023113945A1 (en) Generating notifications that provide context for predicted content interruptions
US11775324B2 (en) Automated content switching rule for controlling access to interactive content on a computing device
WO2020226667A1 (en) Performing subtask(s) for a predicted action in response to a separate user interaction with an automated assistant prior to performance of the predicted action
US20240062757A1 (en) Generating and/or causing rendering of video playback-based assistant suggestion(s) that link to other application(s)
US20240038246A1 (en) Non-wake word invocation of an automated assistant from certain utterances related to display content

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22836357

Country of ref document: EP

Kind code of ref document: A1