US20200342569A1 - Dynamic adaptation of device interfaces in a voice-based system - Google Patents

Dynamic adaptation of device interfaces in a voice-based system

Info

Publication number
US20200342569A1
Authority
US
United States
Prior art keywords
user
image
digital component
client device
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/924,661
Inventor
Jian Wei LEONG
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US15/955,297 external-priority patent/US10853911B2/en
Application filed by Google LLC filed Critical Google LLC
Priority to US16/924,661 priority Critical patent/US20200342569A1/en
Assigned to GOOGLE LLC reassignment GOOGLE LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LEONG, Jian Wei
Publication of US20200342569A1 publication Critical patent/US20200342569A1/en
Abandoned legal-status Critical Current

Classifications

    • G06T3/18
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/0093Geometric image transformation in the plane of the image for image warping, i.e. transforming by individually repositioning each pixel
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G06K9/00369
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/60Rotation of a whole image or part thereof
    • G06T3/608Skewing or deskewing, e.g. by two-pass or three-pass rotation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/205Parsing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/40Processing or translation of natural language

Definitions

  • An automated assistant (also known as a “personal assistant”, “mobile assistant”, etc.) can be interacted with by a user via a variety of client devices such as projectors, smart phones, tablet computers, wearable devices, automobile systems, and/or standalone personal assistant devices.
  • An automated assistant receives input from the user such as typed input, touch input, and/or spoken natural language input.
  • the automated assistant can respond with responsive content such as visual and/or audible natural language output.
  • An automated assistant interacted with via a client device can be implemented via the client device itself and/or via one or more remote computing devices, such as (but not limited to) computing device(s) in “the cloud”, that are connected to the client device via a network.
  • This disclosure relates to systems, methods, and apparatus for dynamic adaptation of images for projection by a projector, and/or of projection parameters, based on one or more properties of user(s) that are in an environment with the projector.
  • a base image can be transformed, in dependence on a pose of a user, to generate a transformed image.
  • a “pose” references a position of a user, and optionally also an orientation of the user.
  • the transformed image is different from the base image, but is generated such that when projected it appears to be substantially similar to the base image, when viewed from the pose of the user. Stated differently, if viewed from the same pose, the projected base image and the projected transformed image would be perceivable as different by the user. However, if the projected base image were viewed by the user from a first pose (e.g., “straight on”) and the projected transformed image were viewed by the user from a second pose (e.g., at a 70° angle relative to “straight on”), they would be perceived as the same.
  • various implementations disclosed herein can selectively and dynamically transform base images, in dependence on a pose of a user.
  • transformed images can selectively be projected in lieu of their base image counterparts, such that projected images, when viewed by the user, appear to be substantially similar to their base image counterparts. It will be understood that some user poses will require no dynamic transformations, and the base image itself can be projected to the user.
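
As a minimal sketch of the pose-dependent transformation described above (assuming OpenCV and NumPy are available; the function name, corner coordinates, and the way the viewer-facing corners are estimated are illustrative placeholders, not the patent's actual implementation):

```python
import cv2
import numpy as np

def warp_for_pose(base_image, perceived_corners):
    """Pre-distort base_image so that, once projected and viewed from the
    user's pose, it appears substantially similar to the base image."""
    h, w = base_image.shape[:2]
    # Corners of the base image in its own pixel space.
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Where those corners must land in the projector's frame so that the
    # user, viewing from an off-axis pose, perceives an undistorted rectangle.
    dst = np.float32(perceived_corners)
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(base_image, M, (w, h))

# Example: compensate for a viewer standing well to the side of the surface.
base = np.zeros((480, 640, 3), dtype=np.uint8)
transformed = warp_for_pose(base, [[40, 20], [600, 0], [640, 480], [0, 460]])
```

A pose close to perpendicular would yield corner targets near the original corners, in which case the base image itself can be projected unchanged.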
  • a “base image” references a single image frame, which can optionally be part of a sequence of images that form a video or other dynamic sequence of images.
  • an automated assistant can identify active user(s) of the automated assistant in determining how to dynamically transform an image. For example, where multiple users are present, the automated assistant can identify a subset of those users as active users, determine at least one pose of the subset, and transform an image in dependence on the at least one pose of the subset.
  • Active users can be identified by an automated assistant in a number of ways including by movement, location, pose, facial identification, voice identification, and/or gaze.
  • rooms can contain more than one person and various numbers of active users. As an illustrative example, a room can contain one person and no active users, one person and one active user, several people and one active user, and/or several people and several active users. The number of active users in a room can change over time, and a redetermination of active users by the automated assistant can be used to determine new image transformation parameters to use in transforming base images for projection.
  • images can be dynamically transformed (or “warped”) by the automated assistant so the image appears the same as an active user moves within a room.
  • Image warping can be a linear transformation, and can include a variety of processes including rotating the image, scaling the image, and skew adjusting the image.
  • As an example of image warping, assume a base image that includes a pair of parallel lines. If the base image is projected, the lines would appear parallel to a user viewing the projection from a pose that is perpendicular to (e.g., directly in front of) the surface on which the projection is provided.
  • However, if the projection were viewed from a non-perpendicular pose, the lines would appear non-parallel.
  • Generating a transformed image based on warping the base image, and projecting the transformed image in lieu of the base image can lead to the user still perceiving the lines as parallel even when the user is at a non-perpendicular angle (e.g., from the side).
  • the user's perception of the projection of the transformed image can be more similar to the base image than would be the user's perception of a projection of the base image itself.
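
The linear building blocks named above (rotation, scaling, skew adjustment) can each be written as a small matrix and composed into a single warp; the sketch below uses NumPy, and the specific numeric values are illustrative only:

```python
import numpy as np

def rotation(theta_rad):
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    return np.array([[c, -s], [s, c]])

def scaling(sx, sy):
    return np.array([[sx, 0.0], [0.0, sy]])

def skew(kx, ky):
    return np.array([[1.0, kx], [ky, 1.0]])

# Compose a small rotation, a horizontal stretch, and a skew adjustment into
# one linear transform; a point p in the base image maps to A @ p.
A = skew(0.15, 0.0) @ scaling(1.4, 1.0) @ rotation(np.radians(2))
p = np.array([100.0, 50.0])
print(A @ p)
```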
  • a base image can be generated and/or identified based on a distance of the user, where the distance of the user is indicative of the distance between the user and the surface upon which the image is projected (e.g., the distance can be based on a distance between the user and the projector, and optionally the distance from the projector to the surface).
  • UI elements can be included or excluded in a base image depending on the distance of the user from the projected image. For instance, when a user is relatively far away from a projection surface (e.g., more than 5 feet away or other “unreachable” distance), a base image can be identified or generated that lacks any interactive UI elements.
  • when a user is relatively close to a projection surface (e.g., within “reach” of the projection surface), a base image can be identified or generated that includes interactive UI elements.
  • a first base image can be identified or generated when a user is within a first range of distances of the projection surface
  • a second base image can be identified or generated when the user is instead within a second range of distances of the projection surface.
  • in response to a user's request for “weather”, either the first base image or the second base image can be identified and/or generated for projection, in dependence on a distance of the user.
  • the first range of distances can include farther distances and the first base image can include less information such as only today's weather report.
  • the second range of distances can include closer distances and the second base image can include more information such as today's weather report and the weather report for one or more additional days.
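
A possible sketch of the distance-dependent selection just described, for the “weather” example; the 1.5 m “reach” threshold, the dictionary layout, and the element names are assumptions for illustration:

```python
REACH_THRESHOLD_M = 1.5  # assumed "reachable" distance

def build_weather_base_image(distance_to_surface_m):
    if distance_to_surface_m > REACH_THRESHOLD_M:
        # Farther range: only today's report, no interactive UI elements.
        return {"content": ["today_forecast"], "interactive_elements": []}
    # Closer range: today's report plus additional days and touch elements.
    return {
        "content": ["today_forecast", "multi_day_forecast"],
        "interactive_elements": ["hourly_toggle", "location_picker"],
    }

print(build_weather_base_image(3.2))  # far: less information
print(build_weather_base_image(0.8))  # within reach: more information + UI
```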
  • the projected image can be touch sensitive, giving a user close enough to touch the projection a modality via which to interact with the automated assistant (e.g., in addition to voice and/or gesture modalities).
  • Implementations disclosed herein can enhance the usability of an automated assistant through dynamic adaptation of what content is projected and/or how it is projected. Such dynamic adaptations can enable more accurate and/or more comprehensible representations of projected content from a variety of viewing poses. Such dynamic adaptations can benefit, for example, users with low dexterity who may be constrained with respect to the poses from which they can view projected automated assistant content.
  • generating and projecting a transformed image can reduce the duration for which a projector must project the image, thereby conserving power resources that would otherwise be required to project it for a longer duration.
  • when a user views a projected transformed image, the user can comprehend the information presented in the image more quickly (i.e., relative to if a base image were instead projected) and/or is less likely to need to move to understand the information in the projected image.
  • This enables the projector to cease projecting of the transformed image more quickly.
  • the projector can cease projecting the transformed image, for example, in response to a user command to dismiss the image (e.g., a command that requests additional content that will supplant the image) or as a time-out after determining the user is no longer viewing the image.
  • a cloud-based automated assistant component can send a base image and the client device can generate transformation(s) of the base image locally, obviating the need for further network communications between the client and the cloud-based automated assistant component to request and transmit transformation(s). This can lessen the amount of data exchanged between the cloud-based automated assistant component and the client, since the cloud-based automated assistant component only needs to send a single base image instead of sending multiple image transformations along with each base image.
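
One way to picture the bandwidth saving described above is a client that caches the single base image it receives and regenerates transforms locally as the pose changes; the class, helpers, and parameter mapping below are illustrative stand-ins rather than the actual client API:

```python
def compute_transformation_parameters(pose):
    # Stand-in: derive a warp parameter (here, a skew factor) from the pose.
    return {"skew_x": 0.1 * pose.get("angle_deg", 0) / 45.0}

def warp(base_image, params):
    # Stand-in: apply the parameters to the cached base image.
    return {"source": base_image, "params": params}

class AssistantClient:
    def __init__(self):
        self.base_image = None  # populated once per cloud response

    def on_cloud_response(self, base_image):
        self.base_image = base_image  # single image sent over the network

    def on_pose_update(self, pose):
        # No further cloud requests: transform locally for the new pose.
        return warp(self.base_image, compute_transformation_parameters(pose))

client = AssistantClient()
client.on_cloud_response("weather_card.png")
print(client.on_pose_update({"angle_deg": 30}))
```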
  • In some implementations, a method is provided that includes identifying, by an automated assistant client of a computing device in an environment, a base image for projecting onto a surface via a projector accessible to the automated assistant client.
  • the method further includes determining, using sensor data from at least one sensor, a first pose of a user in the environment.
  • the sensor data is accessible to the automated assistant client.
  • the method further includes determining, using the first pose of the user, first image transformation parameters for warping images.
  • the method further includes generating a first transformed image that is a transformation of the base image, and causing the projector to project the transformed image onto the surface. Generating the first transformed image includes using the first image transformation parameters to warp the base image.
  • the first transformed image, when projected onto the surface and viewed from the first pose of the user, mitigates perceived differences relative to the base image.
  • the method further includes determining, using additional sensor data from the at least one sensor, a second pose of the user in the environment, where the second pose of the user indicates the user has moved.
  • the method further includes determining, using the second pose of the user, second image transformation parameters for warping images.
  • the method further includes generating a second transformed image that is a transformation of the base image or of an additional base image, and causing the projector to project the second transformed image onto the surface.
  • Generating the second transformed image includes using the second image transformation parameters to warp the base image or the additional base image.
  • the second transformed image, when projected onto the surface and viewed from the second pose of the user, mitigates perceived differences relative to the base image or the additional base image.
  • the base image is received, via a network interface of the computing device, from a remote automated assistant component that interfaces with the automated assistant client.
  • the base image is generated by the automated assistant client based on data received, via a network interface of the computing device, from a remote automated assistant component that interfaces with the automated assistant client.
  • the method further includes determining a distance of the surface to the projector using second sensor data from a second sensor.
  • the second sensor data is accessible to the automated assistant client.
  • determining the first image transformation parameters for warping images includes determining the first image transformation parameters using the first pose of the user and using the distance of the surface to the projector.
  • generating the second transformed image includes using the second pose of the user and the distance from the surface to the projector to warp the base image.
  • generating the first transformed image that is the transformation of the base image includes performing at least one linear transformation on the base image.
  • the at least one linear transformation is selected from a group consisting of rotation of the base image, scaling of the base image, and skew adjustment of the base image.
  • the base image has first dimensions and the first transformed image has the same first dimensions.
  • the base image includes base image pixels each having corresponding values assigned thereto, and the transformed image includes transformed image pixels.
  • the transformed image pixels have the same corresponding values as the base image pixels, but the assignment of the same corresponding values to the transformed image pixels differs from the assignment of the corresponding values to the base image pixels in the base image.
  • a given transformed image pixel, having a given X and Y position in the transformed image, can have the same values as a given base image pixel of the base image, where the given base image pixel has a different X and Y position in the base image.
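
A tiny sketch of that pixel reassignment, using NumPy and a nearest-neighbour mapping; the shear mapping at the end is purely illustrative:

```python
import numpy as np

def remap_pixels(base_image, mapping):
    """mapping(x, y) -> (x_src, y_src): where each transformed-image pixel's
    value comes from in the base image (no interpolation)."""
    h, w = base_image.shape[:2]
    transformed = np.zeros_like(base_image)  # same dimensions as the base image
    for y in range(h):
        for x in range(w):
            xs, ys = mapping(x, y)
            if 0 <= xs < w and 0 <= ys < h:
                # Same value as a base image pixel, assigned to a new position.
                transformed[y, x] = base_image[ys, xs]
    return transformed

base = np.arange(12, dtype=np.uint8).reshape(3, 4)
sheared = remap_pixels(base, lambda x, y: (x - y, y))  # simple horizontal shear
print(sheared)
```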
  • the method further includes determining, based on the first pose of the user, a desired size for the projection of the first transformed image. In some of those implementations, causing the projector to project the first transformed image onto the surface includes causing the projector to project the first transformed image to achieve the desired size for the projection.
  • identifying the base image includes selecting the base image, from a plurality of candidate base images, based on the first pose of the user.
  • selecting the base image based on the first pose of the user includes: determining a distance of the user based on the first pose of the user; and selecting the base image based on the distance corresponding to a distance measure assigned to the base image.
  • the distance can be, for example, relative to the projector or relative to the surface.
  • the method further includes generating, by the automated assistant client, the base image based on the pose of the user.
  • generating the base image based on the pose of the user includes: determining the pose of the user is within a threshold distance of the surface; and based on determining the pose of the user is within the threshold distance of the surface: generating the base image to include one or more interactive interface elements.
  • In some implementations, a method is provided and includes identifying, by an automated assistant client of a computing device, a base image for projecting onto a surface via a projector accessible to the automated assistant client.
  • the method further includes identifying a plurality of users that are in an environment with the computing device, and determining, using sensor data from at least one sensor accessible to the computing device, that a subset of the users are active users for the automated assistant client.
  • the method further includes determining at least one pose for the subset of the users that are determined to be active users. Determining the at least one pose is based on the sensor data or additional sensor data from at least one additional sensor accessible to the computing device.
  • the method further includes, based on determining that the subset of the users are active users, using the at least one pose for the subset of the users in generating a transformed image of the base image.
  • the method further includes causing the projector to project the transformed image onto the surface.
  • the method further includes determining, using additional sensor data from the at least one sensor, that a second subset of the users are active users for the automated assistant client.
  • the additional sensor data is generated at a time subsequent to the sensor data, and the second subset of the users varies from the first subset of the users.
  • the method further includes: determining, based on the additional sensor data, at least one second pose for the second subset of the users that are determined to be active users; and, based on determining that the second subset of the users are active users, using the at least one second pose for the second subset of the users in generating a second transformed image of the base image or of an additional image.
  • the method further includes causing the projector to project the second transformed image onto the surface.
  • the method further includes: determining, based on the sensor data or additional sensor data, a gaze for the subset of the users that are determined to be active users; and generating the transformed image using the gaze of the one or more active users.
  • the method further includes: determining, based on the pose of the subset of the users, a desired size for the projection of the transformed image.
  • causing the projector to project the transformed image onto the surface includes causing the projector to project the first transformed image to achieve the desired size for the projection.
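
A rough sketch of selecting the active-user subset and taking their poses, as described above; the Person fields and the gaze/speech heuristic are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Person:
    pose: tuple                  # e.g., (x_m, y_m, orientation_deg)
    gazing_at_projection: bool
    spoke_recently: bool

def active_user_poses(people):
    # Treat anyone gazing at the projection or who recently spoke as active.
    active = [p for p in people if p.gazing_at_projection or p.spoke_recently]
    return [p.pose for p in active]

room = [
    Person((1.0, 2.0, 10.0), gazing_at_projection=True, spoke_recently=False),
    Person((4.0, 1.0, 95.0), gazing_at_projection=False, spoke_recently=False),
]
print(active_user_poses(room))  # only the first person's pose drives the warp
```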
  • a method includes identifying, by an automated assistant client of a computing device, a base image for projecting via a projector accessible to the automated assistant client.
  • the method further includes identifying a plurality of users that are in an environment with the computing device.
  • the method further includes determining, using sensor data from at least one sensor accessible to the computing device, that a subset of the users are active users for the automated assistant client.
  • the method further includes determining at least one pose for the subset of the users that are determined to be active users. Determining the at least one pose is based on the sensor data or additional sensor data from at least one additional sensor accessible to the computing device.
  • the method further includes, based on determining that the subset of the users are active users, using the at least one pose for the subset of the users in determining one or more projection parameters for a projection that includes the base image, or a transformed image that is a transform of the base image.
  • the method further includes causing the projector to provide the projection.
  • the one or more projection parameters include one or multiple of: a size of the base image or the transformed image in the projection, a size of the projection, a location of the base image or the transformed image in the projection, and a location of the projection.
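
A hedged sketch of deriving such projection parameters from an active user's pose; the grow-with-distance rule and the placement policy string are illustrative assumptions:

```python
def projection_parameters(user_distance_m, base_size_m=(0.6, 0.4)):
    # Project larger for users farther from the surface, never smaller.
    scale = max(1.0, user_distance_m / 2.0)
    width, height = base_size_m[0] * scale, base_size_m[1] * scale
    return {
        "projection_size_m": (round(width, 2), round(height, 2)),
        "projection_location": "eye_level_center",  # placeholder policy
    }

print(projection_parameters(1.0))  # close user: default size
print(projection_parameters(4.0))  # distant user: roughly twice as large
```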
  • a system to generate interfaces in an audio-based networked system can include a computing device that can include one or more processors and a memory.
  • the one or more processors can be configured to execute a natural language processor, a content selector component, and a transformation component.
  • the computing device can receive an input audio signal that is detected by a sensor at a client device.
  • the computing device can parse the input audio signal to identify a first request in the input audio signal and a keyword associated with the first request.
  • the computing device can select a first base digital component based on at least the first digital component request.
  • the computing device can select a second base digital component based on at least the keyword associated with the first digital component request.
  • the computing device can determine a distance between the client device and a projection surface.
  • the computing device can determine, based on the distance between the client device and the projection surface, transformation parameters for the first base digital component and the second base digital component.
  • the transformation parameters can be configured to correct a skew of images projected onto the projection surface.
  • the computing device can generate a first transformed image based at least on the transformation parameters and the first base digital component and a second transformed image based at least on the transformation parameters and the second base digital component.
  • the computing device can transmit the first transformed image and the second transformed image to the client device for projection onto the projection surface.
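
The end-to-end flow described in this block (parse the request, select a primary and a keyword-based supplemental component, compute a skew correction from the measured distance, and return two transformed images) might look roughly like the sketch below. Every helper is an illustrative stand-in: the real parsing, selection, and transformation components are far more involved, and the input is assumed to be already-transcribed text.

```python
def parse(transcribed_audio):
    # Stand-in parser: first word as the request, remaining words as keywords.
    words = transcribed_audio.lower().split()
    return words[0], words[1:]

def select_component(key):
    return f"component_for_{key}"  # stand-in content selection

def skew_correction_parameters(distance_m):
    return {"skew_x": round(0.05 * distance_m, 3)}  # stand-in mapping

def apply_transform(component, params):
    return {"component": component, "transform": params}

def handle_request(transcribed_audio, distance_m):
    request, keywords = parse(transcribed_audio)
    primary = select_component(request)
    supplemental = select_component(keywords[0]) if keywords else None
    params = skew_correction_parameters(distance_m)
    return apply_transform(primary, params), apply_transform(supplemental, params)

print(handle_request("weather tomorrow", distance_m=2.4))
```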
  • a method to generate interfaces in an audio-based networked system can include receiving, by a natural language processor executed by one or more processors of a computing device, an input audio signal detected by a sensor at a client device.
  • the method can include parsing, by the natural language processor, the input audio signal to identify a first request in the input audio signal and a keyword associated with the first request.
  • the method can include selecting, by a content selector component of the computing device, a first base digital component based on at least the first digital component request.
  • the method can include selecting, by the content selector component, a second base digital component based on at least the keyword associated with the first digital component request.
  • the method can include determining, by a transformation component executed by the one or more processors of the computing device and based on sensor data from the client device, a distance between the client device and a projection surface.
  • the method can include determining, by the transformation component, based on the distance between the client device and the projection surface, transformation parameters for the first base digital component and the second base digital component.
  • the transformation parameters are configured to correct a skew of images projected onto the projection surface.
  • the method can include generating, by the transformation component, a first transformed image based at least on the transformation parameters and the first base digital component and a second transformed image based at least on the transformation parameters and the second base digital component.
  • the method can include transmitting, by the transformation component, the first transformed image and the second transformed image to the client device for projection onto the projection surface.
  • implementations may include a non-transitory computer readable storage medium storing instructions executable by one or more processors (e.g., central processing unit(s) (CPU(s)), graphics processing unit(s) (GPU(s)), and/or tensor processing unit(s) (TPU(s))) to perform a method such as one or more of the methods described above and/or elsewhere herein.
  • implementations may include a system of one or more computers and/or one or more robots that include one or more processors operable to execute stored instructions to perform a method such as one or more of the methods described above and/or elsewhere herein.
  • FIG. 1 is a block diagram illustrating an example environment in which various implementations can be implemented.
  • FIG. 2A is a diagram illustrating an example scene of a user in a first pose in a room looking at an image projected onto a wall by a projector using an automated assistant.
  • FIG. 2B is a diagram illustrating an example scene of a user in a second pose in a room looking at an image projected at the same location onto the wall by a projector using an automated assistant.
  • FIG. 2C is a diagram illustrating an example of a projected image, as perceived by a user from a perspective that is directly perpendicular to the projected image.
  • FIG. 2D illustrates an example of the projected image of FIG. 2C , but as perceived by a user from a perspective that is not directly perpendicular to the projected image.
  • FIG. 3 is a flowchart illustrating an example process according to implementations disclosed herein.
  • FIG. 4 is a flowchart illustrating an example process according to implementations disclosed herein.
  • FIGS. 5A, 5B, 5C, and 5D are diagrams illustrating examples of image processing adjustments.
  • FIG. 6 is a flowchart illustrating an example process according to implementations disclosed herein.
  • FIG. 7 is a flowchart illustrating an example process according to implementations disclosed herein.
  • FIG. 8A is a diagram illustrating an example scene of a user in a first pose in a room looking at an image projected onto a wall by a projector using an automated assistant.
  • FIG. 8B is a diagram illustrating an example scene of a user in a second pose in a room looking at an image projected at the same location onto the wall by a projector.
  • FIG. 9 is a flowchart illustrating an example process according to implementations disclosed herein.
  • FIG. 10 illustrates a block diagram of an example method to generate interfaces in an audio-based, networked system according to implementations disclosed herein.
  • FIG. 11 is a block diagram illustrating an example architecture of a computing device.
  • FIG. 1 illustrates an example environment 100 in which various implementations can be implemented.
  • the example environment 100 includes one or more client devices 102 .
  • Each client device 102 may execute a respective instance of an automated assistant client 108 .
  • One or more cloud-based automated assistant components 116 such as natural language processor 122 and digital component selector 126 , may be implemented on one or more computing systems (collectively referred to as a “cloud” computing system) that are communicatively coupled with client devices 102 via one or more local and/or wide area networks 114 (e.g., the internet).
  • the system 100 can include one or more digital component providers 128 that can provide digital components to the client device 102 via the cloud-based automated assistant components 116 and networks 114 .
  • the digital component providers 128 can provide audio, visual, or multimedia based digital components (which can also be referred to as content, images, or base images) for presentation by the client device 102 or the projector 106 as an audio and visual based output digital component.
  • the digital component can be or include other digital components.
  • the digital component can be or include a digital object.
  • the digital component can be configured for a parametrically driven text to speech technique.
  • the digital component can be configured for text-to-speech (TTS) implementations that convert normal language text into speech.
  • the digital component can include an image that is displayed on a projection surface as, via TTS, text related to the displayed image is presented to the user.
  • the digital component can be input to an application programming interface that utilizes a speech-synthesis capability to synthesize text into natural-sounding speech in a variety of languages, accents, and voices.
  • the digital component can be coded as plain text or a speech synthesis markup language (SSML).
  • SSML can include parameters that can be set to control aspects of speech, such as pronunciation, volume, pitch, or rate that can form an acoustic fingerprint or native voice.
  • the digital component provider 128 can provide selection criteria for the digital component, such as a value, keyword, concept, or other metadata or information to facilitate a content selection process.
  • the digital component provider 128 can provide video based digital components (or other digital components) to the content selector component 126 where they can be stored in a data repository.
  • the content selector component 126 can select the digital components from the data repository and provide the selected digital components to the client device 102 .
  • the digital component provider 128 can provide the digital component to the content selector component 126 for storage in the data repository in a content data structure.
  • the content selector component 126 can retrieve the digital component responsive to a request for content from the client device 102 or otherwise determining to provide the digital component.
  • the digital component provider 128 can establish a digital component campaign (or electronic content campaign).
  • a digital component campaign can refer to one or more content groups that correspond to a common theme.
  • a content campaign can include a hierarchical data structure that includes content groups, digital component data objects (e.g., digital components or digital objects), and content selection criteria.
  • digital component provider 128 can specify values for campaign level parameters of the digital component campaign.
  • the campaign level parameters can include, for example, a campaign name, a preferred content network for placing digital component objects, a value of resources to be used for the digital component campaign, start and end dates for the content campaign, a duration for the digital component campaign, a schedule for digital component object placements, language, geographical locations, type of computing devices on which to provide digital component objects.
  • an impression can refer to when a digital component object is fetched from its source and is countable. Due to the possibility of click fraud, robotic activity can be filtered and excluded from being counted as an impression. Thus, an impression can refer to a measurement of responses from a Web server to a page request from a browser, which is filtered from robotic activity and error codes, and is recorded at a point as close as possible to the opportunity to render the digital component object for display on the computing device 104 .
  • an impression can refer to a viewable or audible impression; e.g., the digital component object or digital component is at least partially (e.g., 20%, 30%, 40%, 50%, 60%, 70%, or more) viewable on a display device of the client device 102 , or audible via a speaker of the client device 102 .
  • a click or selection can refer to a user interaction with the digital component object, such as a voice response to an audible impression, a mouse-click, touch interaction, gesture, shake, audio interaction, or keyboard click.
  • a conversion can refer to a user taking a desired action with respect to the digital component object; e.g., purchasing a product or service, completing a survey, visiting a physical store corresponding to the digital component, or completing an electronic transaction.
  • the digital component provider 128 can establish one or more content groups for a digital component campaign.
  • a content group includes one or more digital component objects and corresponding content selection criteria, such as keywords, words, terms, phrases, geographic locations, type of computing device, time of day, interest, topic, or vertical.
  • Content groups under the same content campaign can share the same campaign level parameters, but may have tailored specifications for content group level parameters, such as keywords, negative keywords (e.g., that block placement of the digital component in the presence of the negative keyword on main content), or parameters associated with the content campaign.
  • the digital component provider 128 can provide values for the content group level parameters of the content group.
  • the content group level parameters include, for example, a content group name or content group theme, and bids for different content placement opportunities (e.g., automatic placement or managed placement) or outcomes (e.g., clicks, impressions, or conversions).
  • a content group name or content group theme can be one or more terms that the digital component provider 128 can use to capture a topic or subject matter for which digital component objects of the content group are to be selected for display.
  • a food and beverage company can create a different content group for each brand of food or beverage it carries, and may further create a different content group for each product it carries.
  • Examples of the content group themes that the food and beverage company can use include, for example, “Brand A cola”, “Brand B ginger ale,” “Brand C orange juice,” “Brand D sports drink,” or “Brand E purified water.”
  • An example content campaign theme can be “soda” and include content groups for both “Brand A cola” and “Brand B ginger ale”, for example.
  • the digital component (or digital component object or digital component) can include “Brand A”, “Brand B”, “Brand C”, “Brand D” or “Brand E”.
  • the digital component provider 128 can provide one or more keywords and digital component objects to each content group.
  • the keywords can include terms that are relevant to the products or services associated with or identified by the digital component objects.
  • a keyword can include one or more terms or phrases.
  • the food and beverage company can include “soda,” “cola,” “soft drink,” as keywords for a content group or content campaign that can be descriptive of the goods or services the brand provides.
  • negative keywords can be specified by the content provider to avoid, prevent, block, or disable content placement on certain terms or keywords.
  • the content provider can specify a type of matching, such as exact match, phrase match, or broad match, used to select digital component objects.
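
The three match types named above could be approximated over simple token comparisons as in the sketch below; the matching rules are simplified assumptions, not the behaviour of any real selection service:

```python
def exact_match(keyword, query):
    return keyword.lower() == query.lower()

def phrase_match(keyword, query):
    return keyword.lower() in query.lower()

def broad_match(keyword, query):
    kw_tokens = set(keyword.lower().split())
    # All keyword tokens present in the query, in any order.
    return kw_tokens <= set(query.lower().split())

query = "best cola soft drink brands"
print(exact_match("soft drink", query))   # False
print(phrase_match("soft drink", query))  # True
print(broad_match("drink soft", query))   # True: order is ignored
```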
  • the digital component provider 128 can provide the one or more keywords to be used by the content selector component 126 to select a digital component object provided by the digital component provider 128 .
  • the digital component provider 128 can provide additional content selection criteria to be used by the content selector component 126 to select digital component objects.
  • the content selector component 126 can run a content selection process involving multiple content providers 128 responsive to receiving an indication of a keyword of an electronic message.
  • the digital component provider 128 can provide one or more digital component objects for selection by the content selector component 126 .
  • the digital component objects can be a digital component or a collection of digital components.
  • the content selector component 126 can select the digital component objects when a content placement opportunity becomes available that matches the resource allocation, content schedule, maximum bids, keywords, and other selection criteria specified for the content group.
  • Different types of digital component objects can be included in a content group, such as a voice digital component, audio digital component, a text digital component, an image digital component, video digital component, multimedia digital component, or digital component link.
  • the content selector component 126 can transmit the digital component object for presentation or rendering on a client device 102 or display device of the client device 102 .
  • Presenting or rendering can include displaying the digital component on a display device or playing the digital component via a speaker of the client device 102 .
  • the content selector component 126 can provide instructions to the client device 102 to present or render the digital component object.
  • the content selector component 126 can instruct the client device 102 to generate audio signals, acoustic waves, or visual output.
  • the automated assistant client 108 can present the selected digital component via an audio output.
  • the instance of an automated assistant client 108 , by way of its interactions with one or more cloud-based automated assistant components 116 , may form what appears to be, from the user's perspective, a logical instance of an automated assistant 112 with which the user may engage in a dialogue.
  • a logical instance of an automated assistant 112 is depicted in FIG. 1 by a dashed line. It thus should be understood that each user that engages with an automated assistant client 108 executing on a client device 102 may, in effect, engage with his or her own logical instance of an automated assistant 112 .
  • An automated assistant described herein as “serving” a particular user may often refer to the combination of an automated assistant client 108 operated by the user and one or more cloud-based automated assistant components 116 (which may be shared amongst multiple automated assistant clients 108 ). It should also be understood that, in some implementations, automated assistant 112 may respond to a request from any user regardless of whether the user is actually “served” by that particular instance of automated assistant 112 .
  • Client device 102 may include, for example, one or more of: a desktop computing device, a laptop computing device, a tablet computing device, a touch sensitive computing device (e.g., a computing device which can receive input via touch from a user), a mobile phone computing device, a computing device of a vehicle of the user (e.g., an in-vehicle communications system, an in-vehicle entertainment system, an in-vehicle navigation system), a standalone interactive speaker, a smart appliance such as a smart television, a projector, and/or a wearable apparatus of the user that includes a computing device (e.g., a watch of the user having a computing device, glasses of the user having a computing device, a virtual or augmented reality computing device). Additional and/or alternative client computing devices may be provided.
  • the client device 102 can interface with a projector 106 or can include the projector 106 .
  • the projector 106 can be a “smart” projector (e.g., the “smart” projector can either simply display images it receives from client device 102 , or receive relevant data and generate image transformations at the projector before projecting a transformed image).
  • the projector 106 may include, for example, liquid crystal display (LCD) projectors, digital light processing (DLP) projectors, light emitting diode (LED) projectors, hybrid LED and laser diode projectors, and/or laser diode projectors.
  • the projector 106 can be a short throw or ultra-short throw projector.
  • a projected image can be touch sensitive and include a touch interface which can receive touch inputs and/or gestures, allowing a user to control the automated assistant via the projected image.
  • Projectors displaying touch sensitive images can include a variety of infrared sensors, cameras, and/or other sensor(s) to detect a user's gestures and taps to determine how a user is interacting with the projected image.
  • the automated assistant client 108 can utilize either the projector integrated within client device 102 or a stand-alone projector 106 .
  • automated assistant client 108 can utilize both projectors, for example using a different projector for a different situation.
  • automated assistant client 108 can utilize the projector integrated within client device 102 to project still images and stand-alone projector 106 to project a video sequence.
  • the automated assistant client 108 can use different projectors in different lighting conditions depending on the specifications of the specific projectors, for example stand-alone projector 106 might project better in lower lighting conditions.
  • the client device 102 may include one or more presence sensors 104 that are configured to provide signals indicative of detected presence, particularly human presence. Presence sensors may come in various forms and can collect a variety of types of input to the automated assistant 112 such as verbal, textual, graphical, physical (e.g., a touch on a display device including a touch sensitive projector and/or a touch sensitive screen of a computing device), and/or visual (e.g., a gesture) based input. Some client devices 102 may be equipped with one or more digital cameras that are configured to capture and provide signal(s) indicative of movement detected in the fields of view. The client devices 102 may be equipped with presence sensors 104 that detect acoustic (or pressure) waves, such as one or more microphones.
  • the presence sensors 104 may be configured to detect indications associated with human presence.
  • a client device 102 may be equipped with a presence sensor 104 that detects various types of waves (e.g., radio, ultrasonic, electromagnetic, etc.) emitted by, for instance, a mobile client device 102 carried/operated by a particular user.
  • some client devices 102 may be configured to emit waves that are imperceptible to humans, such as ultrasonic waves or infrared waves, that may be detected by other client devices 102 (e.g., via ultrasonic/infrared receivers such as ultrasonic-capable microphones).
  • the various client devices 102 may emit other types of human-imperceptible waves, such as radio waves (e.g., Wi-Fi, Bluetooth, cellular, etc.) that may be detected by one or more client devices 102 and used to determine an operating user's particular position.
  • Wi-Fi triangulation may be used to detect a user's position, e.g., based on Wi-Fi signals to/from a client device 102 , utilizing any of a variety of Wi-Fi SLAM methods.
  • other wireless signal characteristics, such as signal strength, may be used by various client devices 102 , alone or collectively, to determine a particular person's pose based on signals emitted by a client device 102 they carry.
  • Time-of-flight cameras can be used independently as presence sensors 104 to locate the pose of user(s) in an environment.
  • the automated assistant 112 may engage in dialog sessions with one or more users via user interface input and output devices of one or more client devices 102 .
  • the dialog sessions can be audio-based, image-based, or a combination of audio and images.
  • the one or more client devices 102 can present selected digital components, such as images, videos, text, or audio to the user.
  • automated assistant 112 may engage in dialog sessions with a user in response to user interface input provided by the user via one or more user interface input devices of one of the client devices 102 . In some of those implementations, the user interface input is explicitly directed to automated assistant 112 .
  • a user may speak a predetermined invocation phrase, such as “OK, Assistant,” or “Hey, Assistant,” to cause automated assistant 112 to enter a state where the automated assistant 112 can receive inputs, such as input audio signals, text-based inputs, or touch-based inputs.
  • the inputs can include content requests.
  • the automated assistant 112 may engage in a dialog session in response to user interface input, even when that user interface input is not explicitly directed to automated assistant 112 .
  • automated assistant 112 may examine the contents of user interface input and engage in a dialog session in response to certain terms being present in the user interface input and/or based on other cues.
  • automated assistant 112 may utilize speech recognition to convert utterances from users into text, and respond to the text accordingly, e.g., by providing visual information in the form of a base image and/or a transformed image, by providing search results, general information, and/or taking one or more response actions (e.g., playing media, launching a game, ordering food, etc.).
  • the automated assistant 112 can additionally or alternatively respond to utterances without converting the utterances into text.
  • the automated assistant 112 can convert voice input into an embedding, into entity representation(s) (that indicate entity/entities present in the voice input), and/or other “non-textual” representations and operate on such non-textual representations. Accordingly, implementations described herein as operating based on text converted from voice input may additionally and/or alternatively operate on the voice input directly and/or other non-textual representations of the voice input.
  • Each of the client computing devices 102 and computing device(s) operating cloud-based automated assistant components 116 may include one or more memories for storage of data and software applications, one or more processors for accessing data and executing applications, and other components that facilitate communication over a network.
  • the operations performed by one or more computing devices 102 and/or automated assistant 112 may be distributed across multiple computer systems.
  • Automated assistant 112 may be implemented as, for example, computer programs running on one or more computers running in one or more locations that are coupled to each other through a network.
  • the client computing device 102 may operate an automated assistant client 108 .
  • each automated assistant client 108 may include a corresponding speech capture/text-to-speech (“TTS”)/speech-to-text (“STT”) module 110 .
  • one or more aspects of speech capture/TTS/STT module 110 may be implemented separately from the automated assistant client 108 .
  • Each speech capture/TTS/STT module 110 may be configured to perform one or more functions: capture a user's speech, e.g., via a microphone (which in some cases may include presence sensor 104 ); convert that captured audio to text (and/or to other representations or embeddings); and/or convert text to speech.
  • Because a client device 102 may be relatively constrained in terms of computing resources (e.g., processor cycles, memory, battery, etc.), the speech capture/TTS/STT module 110 that is local to each client device 102 may be configured to convert a finite number of different spoken phrases (particularly phrases that invoke automated assistant 112 ) to text (or other forms, such as lower dimensionality embeddings).
  • Other speech input may be sent to cloud-based automated assistant components 116 , which may include cloud-based TTS module 118 and/or cloud-based STT module 120 .
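
A minimal sketch of that local/cloud split: the on-device module only recognizes a small fixed set of invocation phrases and defers everything else to cloud STT. The phrase list and routing logic are assumptions for illustration:

```python
LOCAL_PHRASES = {"ok, assistant", "hey, assistant"}  # assumed invocation set

def route_speech(transcription_attempt):
    text = transcription_attempt.strip().lower()
    if text in LOCAL_PHRASES:
        return ("handled_locally", text)   # wake the assistant on-device
    return ("send_to_cloud_stt", text)     # full recognition happens remotely

print(route_speech("Hey, Assistant"))
print(route_speech("What's the weather tomorrow?"))
```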
  • Cloud-based STT module 120 may be configured to leverage the resources of the cloud to convert audio data captured by speech capture/TTS/STT module 110 into text (which may then be provided to natural language processor 122 ).
  • Cloud-based TTS module 118 may be configured to leverage the virtually limitless resources of the cloud to convert textual data (e.g., natural language responses formulated by automated assistant 112 ) into computer-generated speech output.
  • the TTS module 118 may provide the computer-generated speech output to client device 102 to be output directly, e.g., using one or more speakers.
  • In some implementations, textual data (e.g., natural language responses) formulated by automated assistant 112 may be provided to speech capture/TTS/STT module 110 , which may then convert the textual data into computer-generated speech that is output locally.
  • Automated assistant 112 may include a natural language processor 122 , the TTS module 118 , the STT module 120 , transformation parameters 124 , the digital component selector 126 , and other components.
  • one or more of the engines and/or modules of automated assistant 112 may be omitted, combined, and/or implemented in a component that is separate from automated assistant 112 .
  • one or more of the components of automated assistant 112 , such as natural language processor 122 , speech capture/TTS/STT module 110 , etc., may be implemented at least in part on client device 102 (e.g., to the exclusion of the cloud).
  • the automated assistant 112 can generate or select responsive content (e.g., digital components) in response to various inputs generated by a user of client device 102 during a human-to-computer dialog session with automated assistant 112 .
  • Automated assistant 112 may provide the responsive content (e.g., over one or more networks 114 when separate from a client device of a user) for presentation to the user as part of the dialog session.
  • automated assistant 112 may generate responsive content in response to free-form natural language input provided via client device 102 .
  • free-form input is input that is formulated by the user and is not constrained to a group of options presented for selection by the user.
  • Natural language processor 122 of automated assistant 112 processes natural language input generated by users via client device 102 and may generate annotated output for use by one or more components of automated assistant 112 .
  • the natural language processor 122 may process natural language free-form input that is generated by a user via one or more user interface input devices of client device 102 .
  • the generated annotated output includes one or more annotations of the natural language input and optionally one or more (e.g., all) of the terms of the natural language input.
  • Natural language processor 122 can parse the input to identify the content request and one or more keywords in the input.
  • the natural language processor 122 can identify and annotate various types of grammatical information in natural language input.
  • the natural language processor 122 may include a part of speech tagger configured to annotate terms with their grammatical roles.
  • the natural language processor 122 may additionally and/or alternatively include a dependency parser configured to determine syntactic relationships between terms in natural language input.
  • the natural language processor 122 can include an entity tagger configured to annotate entity references in one or more segments, such as references to people (including, for instance, literary characters, celebrities, public figures, etc.), organizations, locations (real and imaginary), and so forth.
  • The entity tagger of the natural language processor 122 may annotate references to an entity at a high level of granularity (e.g., to enable identification of all references to an entity class such as people) and/or a lower level of granularity (e.g., to enable identification of all references to a particular entity such as a particular person).
  • the entity tagger may rely on content of the natural language input to resolve a particular entity and/or may optionally communicate with a knowledge graph or other entity database to resolve a particular entity.
  • the natural language processor 122 may additionally and/or alternatively include a coreference resolver configured to group, or “cluster”, references to the same entity based on one or more contextual cues.
  • the coreference resolver may be utilized to resolve the term “there” to “Hypothetical Café” in the natural language input “I liked Hypothetical Café last time we ate there.”
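
For the example sentence above, the kind of annotated output these components produce might be pictured as below; the data structure is an illustrative assumption, not the processor's actual format:

```python
sentence = "I liked Hypothetical Café last time we ate there."
annotations = {
    "entities": [{"text": "Hypothetical Café", "type": "location", "span": (8, 25)}],
    "coreference_clusters": [["Hypothetical Café", "there"]],  # "there" resolved
    "pos_tags": [("I", "PRON"), ("liked", "VERB"), ("Hypothetical", "PROPN"),
                 ("Café", "PROPN"), ("last", "ADJ"), ("time", "NOUN"),
                 ("we", "PRON"), ("ate", "VERB"), ("there", "ADV")],
}
print(annotations["coreference_clusters"])
```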
  • One or more components of the natural language processor 122 can use annotations from one or more other components of the natural language processor 122 .
  • the named entity tagger may rely on annotations from the coreference resolver and/or dependency parser in annotating all mentions of a particular entity.
  • the coreference resolver may rely on annotations from the dependency parser in clustering references to the same entity.
  • one or more components of the natural language processor 122 may use related prior input and/or other related data outside of the particular natural language input to determine one or more annotations.
  • the natural language processor 122 can determine a request, such as a content request, within an audio input request received from the client device 102 .
  • the digital component selector 126 can be a part of the cloud-based automated assistant component 116 or separate from the cloud-based automated assistant component 116 .
  • the digital component selector 126 can receive the content request or an indication thereof.
  • the content selector component 126 can receive prior audio inputs (or packaged data object) for the selection of a digital component based on the content request.
  • the content selector component 126 can execute a real-time digital component selection process to select the digital component.
  • the content selector component 126 can select additional or supplemental digital components based on the input request.
  • the real-time digital component selection process can refer to, or include, selecting digital component objects (which may include sponsored digital component objects) provided by third party content providers 128 .
  • the real-time content selection process can include a service in which digital components provided by multiple content providers are parsed, processed, weighted, or matched based on the packaged data object in order to select one or more digital components to provide to the client device 102 .
  • a plurality of content provider devices can provide a digital component with an associated bid to the digital component selector 126 . Based on a ranking of the bids from each of the content provider devices, the digital component selector 126 can select one of the provided digital components.
  • the digital component selector 126 can perform the content selection process in real-time.
  • Performing the content selection process in real-time can refer to performing the content selection process responsive to the request for content received via the client device 102 .
  • the real-time content selection process can be performed (e.g., initiated or completed) within a time interval of receiving the request (e.g., 5 seconds, 10 seconds, 20 seconds, 30 seconds, 1 minute, 2 minutes, 3 minutes, 5 minutes, 10 minutes, or 20 minutes).
  • the real-time content selection process can be performed during a communication session with the client device 102 , or within a time interval after the communication session is terminated.
  • the digital component selector 126 can be designed, constructed, configured or operational to select digital component objects based on the content request in the input audio signal.
  • the digital component selector 126 can identify, analyze, or recognize voice, audio, terms, characters, text, symbols, or images of the candidate digital components using an image processing technique, character recognition technique, natural language processing technique, or database lookup.
  • the candidate digital components can include metadata indicative of the subject matter of the candidate digital components, in which case digital component selector 126 can process the metadata to determine whether the subject matter of the candidate digital component corresponds to the content request.
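  • A minimal sketch, under assumed data structures, of how a real-time selection process of the kind described above might rank candidate digital components by metadata relevance and provider bid; the Candidate class, its field names, and the scoring rule are illustrative only and are not the specific logic of the digital component selector 126.

```python
# Hypothetical sketch of a real-time digital component selection process:
# candidates carry a provider bid and subject-matter metadata, and are ranked
# by keyword overlap with the parsed content request, with bid as a tiebreaker.
from dataclasses import dataclass, field

@dataclass
class Candidate:
    provider_id: str
    bid: float                                   # bid submitted by the content provider device
    keywords: set = field(default_factory=set)   # metadata describing subject matter
    payload: str = ""                            # reference to image/video/text/audio content

def select_digital_component(candidates, request_keywords):
    """Return the highest-scoring candidate whose metadata matches the request."""
    def score(c):
        overlap = len(c.keywords & request_keywords)   # relevance to the request
        return (overlap, c.bid)                        # relevance first, then bid
    matching = [c for c in candidates if c.keywords & request_keywords]
    return max(matching, key=score) if matching else None

# Example: request keywords parsed from "what is the weather in San Francisco"
chosen = select_digital_component(
    [Candidate("p1", 0.50, {"weather"}),
     Candidate("p2", 0.75, {"restaurants", "san francisco"})],
    request_keywords={"weather", "san francisco"},
)
```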
  • the content selector component 126 can select a digital component object from a database associated with the digital component provider 128 and provide the digital component for presentation via the client device 102 .
  • the digital component object can be provided by a digital component provider 128 .
  • the content selector component 126 can select multiple digital components.
  • the multiple digital components can be provided by different digital component providers 128 .
  • a first digital component provider 128 can provide a primary digital component responsive to the request and a second digital component provider 128 can provide a supplemental digital component that is associated with or relates to the primary digital component.
  • the client device 102 or a user thereof can interact with the digital component object.
  • the client device 102 can receive an audio, touch, or other input response to the digital component.
  • the client device 102 can receive an indication to select a hyperlink or other button associated with the digital component object that causes or allows the client device 102 to identify digital component provider 128 , request a service from the digital component provider 128 , instruct the digital component provider 128 to perform a service, transmit information to the digital component provider 128 , or otherwise identify a good or service associated with digital component provider 128 .
  • the digital component selector 126 can select a digital component that includes text, strings, or characters that can be processed by a text to speech system or presentable via a display.
  • the digital component selector 126 can select a digital component that is in a parameterized format configured for a parametrically driven text to speech technique.
  • the digital component selector 126 can select a digital component that is in a format configured for display via client device 102 or the projector 106 .
  • the digital component selector 126 can select a digital component that can be re-formatted to match a native output format of the client device 102 , application, or projector 106 to which the digital component is transmitted.
  • the digital component selector 126 can provide the selected digital component to the client device 102 or automated assistant client 108 or application executing on the client device 102 for presentation by the client device 102 or the projector 106 .
  • the automated assistant 112 can generate dynamic image transformations to display a base image and/or transformed version of the base image, such as the digital components selected by the content selector component 126 .
  • the automated assistant 112 can identify one or more active users, generate image transformations to display for active user(s), and/or generate a base image to project based on the distance of a user from the projected image.
  • the cloud-based automated assistant components 116 may generate transformation parameters 124 .
  • transformation parameters 124 may be generated separately from cloud-based automated assistant components 116 , e.g., on client device 102 , by automated assistant client 108 and/or on another computer system (e.g., in the so-called “cloud”).
  • the transformation parameters 124 can be used by automated assistant client 108 or cloud-based automated assistant components 116 to generate a transformed digital component from a base digital component.
  • the transformation parameters can be used to generate a transformed image from a base image.
  • transformation parameters 124 can include identification information for a user's position within an environment.
  • Warping parameters can be a specific type of transformation parameters which can be used by automated assistant client 108 and/or cloud-based automated assistant components 116 to warp a base image into a transformed image.
  • Warping parameters may include, for example, one or more of: the pose of a user, the gaze of a user, the facial identification of a user (with approval of the user), the voice identification of a user (with approval of the user), the distance from a projector to the surface an image is projected onto, the shape of the surface the image is projected onto, or any combination thereof.
  • automated assistant 112 can perform image warping, a linear transformation similar to image rectification (described in more detail below), using transformation parameters 124 to generate a transformed image from a base image and/or an additional base image.
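  • A minimal sketch of image warping with a planar homography, assuming the warping parameters (user pose, projector-to-surface distance, surface shape) have already been reduced to four corner correspondences; it relies on OpenCV's standard perspective-transform routines, and the corner values are placeholders, not the system's actual implementation.

```python
# Sketch of warping a base image with a single planar homography. The base image
# here is synthetic and the target corners are placeholder values, not corners
# computed from a real user pose.
import cv2
import numpy as np

# Synthetic stand-in for a base digital component (height x width x channels).
h, w = 480, 640
base_image = np.zeros((h, w, 3), dtype=np.uint8)
cv2.putText(base_image, "72 F, Sunny", (50, 240),
            cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 3)

# Corners of the base image and where they should land in projector coordinates
# so that the projection appears rectangular from the user's pose.
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
dst = np.float32([[40, 20], [w - 10, 0], [w - 30, h - 15], [20, h]])

H = cv2.getPerspectiveTransform(src, dst)          # 3x3 transformation parameters
transformed = cv2.warpPerspective(base_image, H, (w, h))
```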
  • a user's pose can be determined via presence sensors 104 , and the automated assistant 112 can use the distance from the user (which can be generated from the user's pose) to the projection surface (e.g., the location where the image is projected) to identify a base image to use for that particular user.
  • the information contained in the base image can be dependent on the distance from the user to the projection surface. For example, a base image identified for a user located far from a surface can contain limited information from the user's calendar such as only the next item on the user's calendar. In contrast, a base image identified for a user located near the projection surface can contain more detailed information from the user's calendar such as the user's schedule for the entire day. If the user is close enough to the projection surface to touch the projection, in many implementations, the base image can also contain touch sensitive elements, for example, the ability for the user to scroll through calendar events for the entire week.
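  • The calendar example above amounts to a tiering of base image content by user-to-surface distance; a minimal sketch follows, in which the distance thresholds and content identifiers are assumptions for illustration.

```python
# Illustrative tiering of base image content by the user's distance to the
# projection surface; the thresholds and content identifiers are assumptions.
def identify_base_image(distance_m: float) -> dict:
    if distance_m <= 0.6:        # close enough to touch the projection
        return {"content": "calendar_week", "touch_elements": True}
    if distance_m <= 3.0:        # near the projection surface
        return {"content": "calendar_day", "touch_elements": False}
    return {"content": "calendar_next_event", "touch_elements": False}
```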
  • Client device 102 and/or cloud-based automated assistant components 116 can be in communication with one or more third party agents hosted by remote device(s) (e.g., another cloud-based component). For instance, a user voice command “order a large pepperoni pizza from Agent X” can cause the automated assistant client 108 (or cloud-based automated assistant component(s) 116 ) to send an agent command to a third party agent “Agent X”.
  • the third party agent can be a digital component provider 128 .
  • the third party agent can provide, to the automated assistant 112 , responsive content that includes (or enables generation of) base digital components relevant to the pizza order.
  • the base digital component can be a base image that can include graphical representations of the order being confirmed, as well as of a status of the pizza order.
  • the content selector component 126 can also select supplemental or additional digital components to display in association with the base digital component.
  • the content selector component 126 can select an additional digital component that can include a video sequence of real time tracking of the pizza delivery driver on a map as the pizza is being delivered.
  • the automated assistant client 108 and/or the cloud-based automated assistant components 116 can generate a transformation of the base image(s) and a transformed image can be projected onto the wall for the user.
  • FIGS. 2A and 2B illustrate an example of a user viewing an image projected onto a wall with respect to different user locations.
  • Image 200 contains a scene of a room at a first time and is illustrated in FIG. 2A .
  • Image 200 contains user 202 , projected image 204 , client device 206 (that includes an integrated projector and/or is in communication with a locally accessible separate projector), and table 208 .
  • image 200 is merely illustrative: for example, the client device and/or projector can be separate devices, the client device and/or projector can be on a surface other than a table (such as a desk or a dresser) and/or mounted onto surfaces such as a wall and/or ceiling, more than one client device can be present within the room, more than one projector can be within the room, and/or more than one user can be in the room.
  • Client device 206 can determine the pose of user 202 . If client device 206 determines the user 202 is an active user, the client device can project a variety of images 204 for the user including a base image and/or a transformed image that is a transformation of the base image.
  • Image 225 is a scene of the room captured at a second time.
  • Image 225 contains the same client device 206 (that includes an integrated projector and/or is in communication with a locally accessible separate projector) and table 208 , but the pose of the user has changed to a second pose 226 .
  • the user has moved to the right, from previous pose 228 , to the second pose 226 .
  • Client device 206 can detect the user in the second pose 226 and make a determination if the user is an active user in the second pose. If the user is an active user in the second pose, projected image 230 can be projected onto the wall for the user in the second pose 226 .
  • Projected image 230 can be projected onto the same position of the wall as projected image 204 despite any change of content within the images themselves through image transformation processes.
  • contents of projected image 230 can change as the user moves to the second pose 226 .
  • projected image 204 can be a first transformed image and projected image 230 can be a second transformed image both of the same base image.
  • client device 206 can generate image transformations such that the user in the first pose 202 and the user in the second pose 226 will perceive the same projected image even though the perspective of the user relative to the position of the image on the wall has changed.
  • client device 206 can use a base image and can perform image transformations to project transformed images for both user poses such that projected image 204 for user pose 202 is a first transformed image and projected image 230 for the user in the second pose 226 is a second transformed image.
  • client device 206 can project the untransformed base image as projected image 204 (and similarly client device 206 can project a transformed image as projected image 230 for the user in the second pose 226 ).
  • the client device 206 can project additional digital components for viewing by the users near the client device 206 .
  • the client device 206 can project an additional image 232 onto the wall.
  • the subject matter of the additional image 232 can be related to subject matter of the projected image 230 .
  • the additional image 232 can provide additional information regarding the content of the projected image 230 .
  • the additional image 232 can be a duplicate of the projected image 230 that is projected onto a different location of the wall.
  • the client device 206 can project the additional image 232 as a duplicate of the projected image 230 to the different location to improve the ability of a second user to view the projected content.
  • the client device 206 can generate the additional image 232 and project the additional image 232 to a location selected to be viewed by a user pose other than the user pose 226 for which the location of the projected image 230 is selected.
  • the client device 206 can select the location of the additional image 232 such that the additional image is displayed in a non-prominent location.
  • the pizza tracker information can be included in the additional image 232 .
  • the pizza tracker information can be displayed on a non-prominent location of the wall, such as, near a corner, intersection with the floor, intersection with the wall, or near an object positioned near the wall (such as near a lamp positioned in front of the wall).
  • the content selector component 126 can select digital components for the client device 206 to project onto the wall that are not related to a request parsed from an input signal.
  • the content selector component 126 can select digital components to be projected on the wall between times that the client device 206 receives input signals.
  • content selector component 126 can select supplemental digital components based on the location, context, or action of the client device, or based on the preferences of a user associated with the client device 206 .
  • the client device can project supplemental digital components onto the wall between the times that the client device 206 receives input signals, such as weather information, time information, or other digital components selected or configured by the user of the client device 206 .
  • Example images further illustrating a base image, and the same base image viewed by a user from a non-perpendicular angle (e.g., from the side) without dynamic image transformation, are illustrated in FIGS. 2C and 2D .
  • a base image can be directly projected onto a wall for a user who has a perspective of the image directly perpendicular to the wall.
  • a base image can be transformed using a variety of dynamic image transformations such that a client device can project a transformed image that will appear substantially similar (if not identical) to the base image as the user's perspective relative to the location of the projected image within the room changes.
  • FIG. 2C contains image 250 which contains dashed line 252 and dashed line 254 .
  • Dashed line 252 and dashed line 254 are parallel lines (i.e., lines that are equidistant and will never meet).
  • image 250 can be an example of a base image used by a client device which can be processed using image transformations such that dashed line 252 and dashed line 254 can appear parallel to a user when projected onto a wall regardless of the pose of a user in the room. For example, when a user is viewing the projected image from a non-perpendicular angle (e.g., the side) image transformations can make the lines in the transformed base image still appear parallel.
  • dashed line 252 and dashed line 254 appear parallel.
  • FIG. 2D contains image 275 made of dashed line 276 and dashed line 278 .
  • Dashed line 276 and dashed line 278 are non-parallel and slant towards each other on the left hand side of the image.
  • lines 276 and 278 are an example of how a base image of parallel lines (for example if the image 250 containing dashed parallel lines was utilized as a base image) could be viewed by a user from a perspective in the room other than perpendicular to the image if no image transformation was used on the image projection.
  • a pair of lines closer on the right side and further apart on the left side can indicate a user is standing to the right of a base image of parallel lines without image transformation.
  • This user pose can be similar to the user in the second pose 226 after the user has moved to the right side of the room in FIG. 2B .
  • Client device 206 can generate an image transformation on a base image similar to the parallel lines of image 250 in FIG. 2C such that a user in the second pose 226 will see a transformed image generated by client device 206 with parallel lines, which looks like image 250, instead of the non-parallel lines in image 275 the user would normally see from a non-perpendicular viewing position as illustrated in FIG. 2D . Additionally, the user in the second pose 226 will see an image projected that is the same (or substantially similar) size as the base image and that is in the same (or substantially similar) location as the base image.
  • the process 300 can be performed by one or more client devices, and/or any other apparatus capable of interacting with an automated assistant.
  • the process includes identifying ( 302 ) a base image.
  • the base image can be used by an automated assistant (at the client device and/or at remote device(s)) to generate a transformed image in dependence on a pose of a user.
  • a base image can include any of a variety of information that a client device can cause a projector to project for presentation to a user, such as weather information for a particular day.
  • a user can request information that can be included in a base image through interaction with the client device through verbal, textual, graphical, and/or visual input to the client device. For example, a user can ask the client device “OK Assistant—what is the weather tomorrow” and a base image can be identified containing weather information for the following day.
  • base images can be provided to a client device by a third party agent (optionally provided via the cloud-based automated assistant component(s) 116 ). For example, if a user orders a delivery from a restaurant, the third party agent associated with the restaurant can send the automated assistant a base image which can include status updates relating to the delivery.
  • the automated assistant can generate base images and provide them for projection independent of explicit user input. For example, an image that contains a weather forecast can be projected in response to detecting presence of a user, but without explicit input from the user.
  • a first pose of a user can be determined ( 304 ) within a room.
  • sensors in the client device, such as presence sensors 104 in FIG. 1 (e.g., a microphone, a camera, an infrared camera, a time of flight camera, and/or a touch screen display on the client device), can be used to determine the first pose of the user.
  • a client device can use a camera to detect a user and generate the pose of the user within the room.
  • a client device with many sensors can determine which sensors to use individually and/or in combination based on previously known information regarding the pose of a user.
  • a sensor which gathers information about the entire room can be used such as a time of flight camera.
  • a shorter range sensor could be used to determine the first pose of the user.
  • sensors integrated into third party agents can be used to determine a pose of a user. It will be understood that these examples are merely illustrative and any of a variety of ways to detect user pose in a room can be utilized as appropriate in accordance with various implementations.
  • First image transformation parameters can be generated ( 306 ) using the first pose of the user to warp the base image.
  • Image warping can include a linear transformation process which can use image warping parameters such as a user's pose, position, gaze, facial identification (with approval of the user), voice identification (with approval of the user) and/or distance from the projector to the surface the image is projected onto to generate image transformation parameters.
  • image transformation parameters can include for example (but are not limited to) base image data, precalculated image transformation data for a base image, first user pose data, and/or any other information relevant to generating a transformed image.
  • first image transformation parameters can include image warping parameters.
  • a first transformed image can be generated ( 308 ) using the first image transformation parameters where the first transformed image is a transformation of the base image.
  • a transformed image can differ from the base image in content, but is generated in such a way that it appears to be substantially similar in size and location to the base image when projected onto the surface.
  • an image transformation is not necessary when the first pose of the user is viewing the image perpendicular to (e.g., directly in front of) the surface the image is projected onto.
  • first image transformation parameters can indicate to the client device that the generation of a first transformed image is unnecessary and the base image will take the place of the first transformed image.
  • the base image, being identical to itself, will take up an identical size and position on the wall.
  • while a transformed image will differ from its base image counterpart, it can be of the same size as the base image and/or can, when projected, be projected in the same location and at the same size as would the base image.
  • when a base image and a transformed image are projected, they can be of an identical size on a projection surface and in identical locations on the projection surface.
  • the transformed image is projected in a “substantially similar” position as the base image.
  • the first transformed image and the base image can be identical in size, but the two images are not projected onto exactly the same position of the wall and the transformed image can take up 95% of the same position on the wall as the base image.
  • the transformed image can be slightly smaller than the base image, and while the transformed image is taking up the same position on the wall, because the transformed image is slightly smaller it is only taking up 90% of the wall space of the base image (but is not taking up any wall space outside of the area of the original base image).
  • a transformed image can be slightly larger than a base image and can take 105% of the wall space of the base image and still take up a substantially similar position on the wall.
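  • One way to make the "substantially similar" size and position criterion concrete is an area-overlap ratio between the wall footprint of the transformed image and that of the base image, with the 90%, 95%, and 105% figures above read as thresholds on that ratio; the sketch below uses axis-aligned rectangles for simplicity and is not the system's actual test.

```python
# Sketch of checking that a transformed image occupies a "substantially similar"
# position on the wall as its base image, using axis-aligned bounding rectangles
# in wall coordinates.
def overlap_fraction(base, transformed):
    """Fraction of the base image's wall area also covered by the transformed image.

    Each rectangle is (x_min, y_min, x_max, y_max) in wall coordinates.
    """
    bx0, by0, bx1, by1 = base
    tx0, ty0, tx1, ty1 = transformed
    ix = max(0.0, min(bx1, tx1) - max(bx0, tx0))   # intersection width
    iy = max(0.0, min(by1, ty1) - max(by0, ty0))   # intersection height
    base_area = (bx1 - bx0) * (by1 - by0)
    return (ix * iy) / base_area if base_area else 0.0

# e.g. a transformed image covering ~90% of the base image's wall footprint
# would still count as "substantially similar" under the discussion above.
```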
  • the projector can be caused ( 310 ) by the client device to project the first transformed image onto a surface.
  • the projector can be integrated within a client device similar to client device 102 and/or a separate projector similar to projector 106 .
  • the surface can include various areas within a room including a wall, the ceiling, and/or the floor. For example, if the first pose of a user is determined to be reclining on a couch, projecting information on the ceiling (instead of onto the wall) might be a more useful location for the user to view the projected transformed image.
  • a second pose of the user can be determined ( 312 ) after the user has moved.
  • User movement can be determined using many of the same sensors used to detect the first pose of the user in step ( 304 ) including a microphone, a camera, an infrared camera, a time of flight camera, and/or a touch screen display on the client device.
  • a microphone that detected a user speaking in one pose in the room can detect when the user's voice is coming from a different direction, and thus the user has moved within the room.
  • the second pose of the user can be determined in a manner similar to determining the first pose of the user in step ( 304 ) including using a sensor and/or a combination of sensors.
  • Second image transformation parameters can be generated ( 314 ) using the second pose of the user to warp the image.
  • image warping can be a linear transformation process which can utilize image warping parameters to generate image transformation parameters.
  • a process for determining a transformation to warp an image in accordance with many implementations will be discussed below in FIG. 4 .
  • second image transformation parameters can include for example (but are not limited to) base image data, precalculated image transformation data for the base image, first user pose data, second user pose data, and/or any other information relevant about a user to generate a transformed image.
  • second image transformation parameters can include image warping parameters.
  • a second transformed image can be generated ( 316 ) using the second image transformation parameters where the second transformed image is a transformation of the base image or an additional base image.
  • Generating a second transformed image can be performed in a manner similar to step ( 308 ) described above.
  • an additional base image can be used in place of the base image to generate the second transformed image when a base image changes over time.
  • a base image of a projected calendar event image can change once the event has ended and an additional base image for the next calendar event image can be projected.
  • individual frames of a video sequence can make a corresponding sequence of base images.
  • the base image can change to an additional base image independent of user movement and additional image transformations can be calculated for a first pose of the user for additional base images.
  • the projector can be caused ( 318 ) by the client device to project the second transformed image onto the surface.
  • the second image can be projected in a manner similar to the projection of the first transformed image in step ( 310 ) above.
  • the client device can instruct the projector to project the second transformed image onto a different surface. For example, if the first pose of a user is determined to be reclining on a couch and the first transformed image is projected onto the ceiling, and the client device detects the user gets up from the couch and has moved within the room as the second pose, the second transformed image can be projected onto a wall instead of the ceiling.
  • the first transformed image can be projected onto the north wall.
  • the second transformed image can be projected onto the south wall.
  • the process 400 can be performed by one or more client devices, and/or any other apparatus capable of interacting with an automated assistant for generating image transformation parameters by image warping.
  • the process 400 can include identifying ( 402 ) image warping parameters.
  • Image warping parameters can include (but are not limited to) the pose of a user, the gaze of a user, the facial identification of a user (with approval of the user), the voice identification of a user (with approval of the user), the distance from a projector to the surface an image is projected onto, and/or any other of a variety of user and/or hardware related parameters which can be used as image warping parameters.
  • a client device can use a single image warping parameter such as the gaze of the user, to determine a transformation to warp an image.
  • a client device can use a combination of image warping parameters.
  • the sensors available to a client device can impact which warping parameters are available to the client device. For example, if a client device has sensors which can determine the gaze of a user, the client device can use gaze as an image warping parameter. Additionally, in some implementations, a client device can receive data to use as warping parameters from sensors in third party agents such as a “smart” thermostat and/or other “smart” devices located within the room.
  • Image warping parameters can be identified by a client device individually and/or in combination.
  • a combination of warping parameters can be identified by particular sensors available to the client device, the client device itself can determine the combination, and/or the user can identify image warping parameters by predetermining which combination of warping parameters the client device should use.
  • a client device can use the gaze of a user as a warping parameter.
  • the client device can make the choice to not use the gaze of the user as a warping parameter at night when the lights are off in the room (and thus the client device can have a harder time determining the gaze of a user), and instead the client device can identify other warping parameters such as determining a position of a user using voice identification (which is often less sensitive to lighting conditions in the room).
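  • A hypothetical sketch of how a client device might choose which warping parameters to rely on given its available sensors and conditions, such as falling back from gaze to voice-based position at night; the sensor names and the lux threshold are assumptions, not values from the disclosure.

```python
# Hypothetical selection of warping parameters based on available sensors and
# ambient conditions; sensor names and the 10-lux threshold are assumptions.
def choose_warping_parameters(available_sensors, ambient_lux):
    params = []
    if "camera" in available_sensors and ambient_lux > 10:
        params.append("gaze")             # gaze tracking needs usable lighting
    if "microphone_array" in available_sensors:
        params.append("voice_position")   # less sensitive to lighting conditions
    if "time_of_flight_camera" in available_sensors:
        params.append("pose")
    return params or ["pose"]             # always fall back to coarse pose
```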
  • Determining transformation parameters to warp a base image can depend on (but is not limited to) which warping parameters are used by the client device and the values of the data contained in those warping parameters.
  • different adjustments and/or combinations of adjustments can be made using image warping parameters to generate image transformation parameters which can warp a base image as an image transformation.
  • a base image with a particular set of image warping parameters can require only one adjustment to determine transformation parameters, such as only requiring a vertical rotation of the base image.
  • when a user is in a different position, a base image can have a different set of image warping parameters and can require a combination of adjustments such as a horizontal rotation of the base image and a scaling of the base image. It will be understood that these examples are merely illustrative and any number of combinations of image adjustments can be made by a client device as appropriate to determine transformation parameters in accordance with various implementations.
  • a rotation of the base image can optionally be determined ( 404 ) by the client device.
  • Image rotations can spin an image on the same plane as the wall the image is projected onto (i.e., on a plane parallel with the projection surface).
  • a base image can be a rectangle that when viewed by the user perpendicular to (e.g. directly in front of) a square wall, the rectangular base image has lines parallel with the ceiling, floor, and walls.
  • Image warping parameters can determine an appropriate rotation of the base image to determine transformation parameters to generate a transformed image that is also a rectangle with lines parallel to the ceiling, floor, and walls when viewed from the pose of the user from a non-perpendicular pose (e.g., to the side of) of the square wall.
  • a scaling of the base image can optionally be determined ( 406 ) by the client device.
  • Image scaling adjusts the size of an image.
  • image warping parameters can determine an appropriate scaling of a base image to generate image transformation parameters which can increase and/or decrease the size of the base image when the image transformation parameters are utilized to generate a transformed image, such that the transformed image takes up the same and/or a substantially similar amount of space on the wall when projected as the base image when viewed from the pose of the user from a non-perpendicular pose.
  • a skew adjustment of the base image can optionally be determined ( 408 ) by the client device.
  • a skewed image is an image at an oblique angle (i.e., at a slant).
  • Image warping parameters can determine how to skew an image to generate image transformation parameters to use in generating a transformed image, which can change the angles within the base image (i.e., slant portions of the image) such that the determined transformation parameters can generate a transformed image that appears non-skewed when viewed from the pose of the user from a non-perpendicular pose.
  • Transformation parameters to warp the base image as an image transformation can be determined ( 410 ) by the client device.
  • the number of image adjustments necessary to determine transformation parameters to warp a particular base image as an image transformation with a particular set of image warping parameters can vary based on a number of factors including the base image and/or one or more of the warping parameters.
  • image warping can be viewed in some ways as similar to image rectification with various differences.
  • Image rectification generally projects two images, each image having a different optical viewpoint, onto a common image plane.
  • a matching pair of transformations, H and H′ can be used to rectify the pair of images.
  • while image warping also involves two images (a base image and a transformed image), only one image in the pair (the transformed image) is being transformed.
  • the base image never changes and a single base image can be associated with many image transformations (and thus many transformed images) as the “optical viewpoint” of the user changes, such as when the pose of the user within the room changes.
  • Image warping generates a single image transformation using a known “optical viewpoint” (which in this context can be viewed as determined image warping parameters) of a user to match a transformed image to a known base image.
  • similar mathematical techniques used in image rectification can be utilized in image warping including planar rectification, cylindrical rectification, and/or polar rectification.
  • a base image can contain a calendar event for a user.
  • Image warping can generate a single image transformation (in contrast to the matching pair of image transformations generated with image rectification) using a transformation parameter such as the pose of the user within a room in place of the “optical viewpoint” to generate a transformed image to correspond with the known base image for the user.
  • other transformation parameters can be utilized individually and/or in combination; for example, the pose and the gaze of a user can be utilized as the “optical viewpoint” of the user when generating the single image transformation to generate a transformed image.
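  • In homogeneous coordinates, the contrast drawn above can be summarized as follows: rectification applies a matching pair of homographies H and H′ to two views, while warping applies a single homography, determined by the warping parameters, to the base image alone. The notation below is ours, not the patent's.

```latex
% Rectification: a matching pair of transformations applied to two views
\tilde{x}_1' = H\,\tilde{x}_1, \qquad \tilde{x}_2' = H'\,\tilde{x}_2

% Warping: a single transformation W, determined by the warping parameters
% (user pose, gaze, projector-to-surface distance), applied to the base image
I_{\mathrm{transformed}}(\tilde{x}) = I_{\mathrm{base}}\!\left(W^{-1}\tilde{x}\right),
\qquad W \in \mathbb{R}^{3 \times 3}
```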
  • Image 500 contains an example of image rotation as illustrated in FIG. 5A .
  • Image 500 contains a square 502 .
  • a rotation to the left of square 502 is illustrated as rotated square 504 . It will be understood that this is merely an illustrative example, and rotated square 504 could be located in any of a variety of positions where rotated square 504 turns around an axis within image 500 .
  • Image 525 contains an example of image scaling as illustrated in FIG. 5B .
  • Image 525 similarly contains square 502 .
  • square 502 is scaled to be larger and is illustrated as scaled square 526 .
  • Scaled square 526 is merely an illustrative example, as scaling can increase or decrease the size of an object.
  • Image 550 contains an example of image skewing as illustrated in FIG. 5C .
  • Square 502 is similarly contained in image 550 .
  • a skew transformation is performed on square 502 where square 502 is skewed to the right as skewed square 552 .
  • This example of image skewing is merely illustrative and skewing can occur in any direction including to the right, to the left, up, and/or down.
  • Image 575 contains an example of image translation as illustrated in FIG. 5D .
  • Square 502 as illustrated in image 575 is translated up and to the right to generate translated square 576 .
  • This example is merely illustrative and image translations can occur in any direction. Additionally, image transformations discussed in FIGS. 5A-5D can be performed on an image individually and/or in any of a variety of combinations.
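  • The elementary adjustments of FIGS. 5A-5D correspond to standard 2D transformation matrices in homogeneous coordinates that can be composed into a single set of transformation parameters; the numeric values in the sketch below are arbitrary examples, not values from the disclosure.

```python
# The elementary adjustments of FIGS. 5A-5D as homogeneous 2D matrices
# (rotation, scaling, skew/shear, translation); values are arbitrary examples.
import numpy as np

theta = np.deg2rad(15)
rotation = np.array([[np.cos(theta), -np.sin(theta), 0.0],   # FIG. 5A: rotate
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0,            0.0,           1.0]])
scaling = np.diag([1.5, 1.5, 1.0])                            # FIG. 5B: enlarge
skew = np.array([[1.0, 0.3, 0.0],                             # FIG. 5C: shear right
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
translation = np.array([[1.0, 0.0, 20.0],                     # FIG. 5D: move up/right
                        [0.0, 1.0, 10.0],
                        [0.0, 0.0, 1.0]])

# The individual adjustments can be applied individually and/or composed into a
# single transformation, as noted for FIGS. 5A-5D.
combined = translation @ skew @ scaling @ rotation
corner = combined @ np.array([10.0, 10.0, 1.0])               # transform one image point
```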
  • a process for dynamic image transformation including detecting an active user in accordance with various implementations is illustrated in FIG. 6 .
  • the process 600 can be performed by one or more client devices, and/or any other apparatus capable of interacting with an automated assistant.
  • the process includes identifying ( 602 ) a base image.
  • a base image can be transformed in dependence on a pose of a user to generate a transformed image.
  • identification of a base image can be performed in a manner similar to step ( 302 ) in FIG. 3 .
  • An active user can be detected ( 604 ) by the client device.
  • an active user is a user who is actively engaged with the automated assistant, and can be detected in a number of ways including by movement, location, pose, facial identification (with approval of the user), voice identification (with approval of the user), and/or gaze.
  • Active engagement can include viewing a projected image, listening to rendered audible content provided by the automated assistant, and/or providing input to the automated assistant (e.g., voice input, touch input, gestures, etc.).
  • Sensors such as any of a variety of sensors included in presence sensors 104 and/or sensors included in third party agents such as “smart” devices can collect sensor data to detect an active user.
  • a user detected by a microphone giving the client device a command “OK Assistant—show me my calendar for tomorrow” could be identified as an active user.
  • a user detected by a camera, based on the user's gaze, looking at an image projected onto a surface by the client device can be identified as an active user.
  • a combination of techniques can be combined to detect an active user such as identifying pose and facial identification (with approval of the user) of someone in a room to detect the person is an active user.
  • multiple active users can be detected by a client device in the same room. Detecting an active user can map the detected active user to a user profile or some other sort of identification of the user. Additionally or alternatively, detecting an active user can indicate only that an active user is engaged with the automated assistant.
  • a first transformed image can be generated ( 606 ) for the active user by the client device. Any of a variety of dynamic image transformations can be used to generate the first transformed image including image warping. In many implementations, generating a first transformed image can performed in a manner similar to steps ( 304 )-( 308 ) of FIG. 3 .
  • the client device can cause ( 608 ) the projector to project the first transformed image onto a surface.
  • the first transformed image is a transformation of the base image. Projecting the first transformed image can be performed in a manner similar to step ( 310 ) of FIG. 3 including using a projector integrated with the client device and/or using a separate projector.
  • User movement can be detected ( 610 ) by the client device.
  • the detection of movement can be performed in a manner similar to determining user movement in step ( 312 ) of FIG. 3 .
  • a threshold of movement can be met before a client device determines the second pose of the user. For example, if a user moves less than a millimeter, it is unlikely this small amount of movement will generate a new transformed image that is different enough from the first transformed image to be worth generating, and it can save system resources to wait until a user moves a greater distance before making a determination of the second pose of the user. On the other hand, if a user moves 3 meters to the right, similar to how the user moved to the second pose 226 in FIG. 2B , a second transformed image can be substantially different than the first transformed image (depending on the content of the image). This can be especially true in implementations where gaze is being utilized to determine and/or as part of determining user pose, as a user's eyes frequently make very small shifts. Computational resources could be saved if a threshold requiring a larger shift in gaze is required before a second transformed image for the active user is generated. Alternatively, in many implementations when user gaze is being utilized, any projected images can be stabilized using a variety of image processing techniques as a way to compensate for these small shifts a user's eyes are making.
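  • A minimal sketch of the movement threshold discussed above, gating regeneration of the transformed image on a minimum change in user position; the 0.25 meter threshold is an assumption, not a value from the disclosure.

```python
# Sketch of gating transformed-image regeneration on a minimum pose change;
# the 0.25 m threshold is an illustrative assumption.
import math

MOVEMENT_THRESHOLD_M = 0.25

def should_regenerate(previous_pose, current_pose):
    """previous_pose / current_pose are (x, y) user positions in meters."""
    moved = math.dist(previous_pose, current_pose)
    return moved >= MOVEMENT_THRESHOLD_M   # skip sub-threshold (e.g., millimeter) moves
```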
  • a second transformed image can be generated ( 612 ) for the moved active user by the client device. Any of a variety of dynamic image transformations can be used to generate the second transformed image including image warping. In some implementations, generating a second transformed image can be performed in a manner similar to steps ( 312 )-( 316 ) of FIG. 3 .
  • the client device can cause ( 614 ) the projector to project the second transformed image onto the surface.
  • the second transformed image is a transformation of the base image or is a transformation of an additional image. Projecting the second transformed image can be performed in a manner similar to step ( 608 ) and/or step ( 318 ) of FIG. 3 .
  • A process for dynamic image transformation for multiple active users in accordance with various implementations is illustrated in FIG. 7 .
  • the process 700 can be performed by one or more client devices, and/or any other apparatus capable of interacting with an automated assistant.
  • the process includes identifying ( 702 ) a base image.
  • a base image can be transformed in dependence on a pose of a user to generate a transformed image.
  • identification of a base image can be performed in a manner similar to step ( 302 ) in FIG. 3 .
  • Multiple active users in a group of users can be detected ( 704 ) by a client device.
  • many people can be in a room, but not all of them can be actively engaged with the client device.
  • a room can have two users and neither user is engaged with the client device (and thus the room has no active users), one of the two users can be interacting with the client device and be an active user, and/or both of the two users can be engaged with the client device as active users.
  • multiple active users can be detected in a manner similar to how individual active users can be detected in step ( 604 ) of FIG. 6 .
  • multiple active users can be detected by sensors in groups, and/or all active users can be detected in a room simultaneously.
  • sensors integrated into a client device could detect a cluster of active users in the same portion of the room at the same time.
  • some types of sensors could process an entire room at the same time and could detect all active users in a room simultaneously.
  • a camera with a 360 degree view can detect which users are looking at a projected image in an entire room and thus detect active users simultaneously.
  • a combination of sensors can be used to detect a group of active users.
  • furniture in a room can block a camera from detecting one particular active user, but a microphone could be used to detect voice commands to the automated assistant from the user behind the furniture to determine that user is in the group of active users. It should be readily appreciated that these examples are merely illustrative and any of a number of ways to detect active users can be utilized in accordance with various implementations.
  • a first transformed image can be generated ( 706 ) for the multiple active users by the client device. Any of a variety of dynamic image transformations can be used to generate the first transformed image including image warping. In some implementations, generating a first transformed image can be performed in a manner similar to generating an image for a single active user such as steps ( 304 )-( 308 ) of FIG. 3 . Additionally, image warping parameters can take multiple user poses into account when generating image warping to generate a first transformed image. For example, if all the active users are clustered in a single area of a room, a client device can decide to treat the multiple active users in a way similar to a single active user when generating a transformed image.
  • if most of the active users are clustered in a first area of the room and only one or a few active users are in a second area of the room, the client device might largely ignore the active users in the second area of the room and still generate a transformed image in a way similar to that for a single active user.
  • the client device can make decisions during the image warping process to generate the first transformed image. For example, the client device could combine the poses of multiple active users in a meaningful way to generate a pose which can take the place of a single active user pose for use in generating a first transformed image. For example the poses of multiple active users in a room can be averaged into a single pose.
  • a client device can perform a weighted averaging of multiple active users, giving more weight to the pose of predetermined users who can be identified using facial identification (with approval of the user) and/or voice identification (with approval of the user). For example, if the first transformed image is an event from a group calendar, the client device can determine which active users have access to that group calendar, and give greater weight to their pose when generating a weighted averaging of multiple active users. In many implementations, there can be no “best” first transformed image to display for the entire group of active users and a client device can simply display the base image in place of a transformed image.
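  • The weighted averaging of multiple active user poses described above might look like the following sketch; the weighting scheme (for example, extra weight for users with access to the projected group calendar) is an illustrative assumption.

```python
# Sketch of combining multiple active user poses into a single effective pose by
# weighted averaging; the weights (e.g., more weight for users with access to the
# projected group calendar) are illustrative assumptions.
def combine_poses(poses, weights=None):
    """poses: list of (x, y) positions; weights: optional per-user weights."""
    if weights is None:
        weights = [1.0] * len(poses)
    total = sum(weights)
    x = sum(w * p[0] for p, w in zip(poses, weights)) / total
    y = sum(w * p[1] for p, w in zip(poses, weights)) / total
    return (x, y)

# Two clustered users plus one user with calendar access given extra weight.
effective_pose = combine_poses([(1.0, 2.0), (1.2, 2.1), (4.0, 0.5)], [1.0, 1.0, 2.0])
```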
  • the client device can cause ( 708 ) the projector to project the first transformed image onto a surface.
  • the first transformed image is a transformation of the base image. Projecting the first transformed image can be performed in a manner similar to step ( 310 ) of FIG. 3 including using a projector integrated with the client device and/or using a separate projector.
  • Movement can be detected ( 710 ) in the group of users by the client device.
  • the detection of movement in a group of users can be performed in a manner similar to determining user movement in step ( 312 ) of FIG. 3 .
  • a threshold of movement can be met before movement is determined to have occurred in the group of users, in a manner similar to determining a threshold of movement for a single user in step ( 610 ) of FIG. 6 .
  • a second group of multiple active users can be detected ( 712 ) by the client device.
  • detecting a second group of multiple active users can be performed in a manner similar to detecting multiple active users in step ( 704 ).
  • Active users in the second group of active users can be different than the active users in the first group.
  • there can be overlap between active users in the two groups of active users, but it should be noted that it is not a requirement to have any overlap between active users in the first group and active users in the second group.
  • a second transformed image for the second group of multiple active users can be generated ( 714 ) using the client device.
  • generating the second transformed image can be performed in a manner similar to generating the first transformed image for the multiple active users described in step ( 706 ).
  • the client device can cause ( 716 ) the projector to project the second transformed image onto the surface.
  • the second transformed image is a transformation of the base image or is a transformation of an additional image. Projecting the second transformed image can be performed in a manner similar to step ( 614 ) and/or step ( 318 ) of FIG. 3 .
  • Generating the base image can be based on the distance of the user from the projected image (i.e., the distance from the user to the surface the image is projected on).
  • An example of a user viewing different base images projected onto a wall, where the projected base image is determined based on the user's distance from the projected image is illustrated in FIGS. 8A and 8B .
  • Image 800 contains a first scene of a room at a first time and is illustrated in FIG. 8A .
  • Image 800 contains user 802 , projected image 804 , client device 806 (that includes an integrated projector and/or is in communication with a locally accessible separate projector), and table 808 .
  • the contents of the room in image 800 are merely illustrative and, for example, the client device and/or projector can be separate devices, the client device and/or projector can be on a surface other than a table (such as a desk or a dresser) and/or mounted onto surfaces such as a wall and/or ceiling, more than one client device can be present within the room, more than one projector can be within the room, and/or more than one user can be in the room.
  • Client device 806 can determine the pose of user 802 , and can determine a distance from user 802 to projected image 804 . The client device identifies a base image depending on the distance from user 802 to projected image 804 . Projected image 804 can be projected as the identified base image and/or as a transformed image.
  • Image 850 is a second scene of the room captured at a second time which contains the same client device 806 (that includes an integrated projector and/or is in communication with a locally accessible separate projector), and table 808 .
  • the client device can determine the second pose of the user 852 before determining the distance from the second pose user 852 to projected image 854 .
  • the client device can then identify a second base image corresponding to the distance from second pose of the user 852 to projected image 854 .
  • Projected image 854 can be projected as the second identified base image and/or as a second transformed image. Generally, the closer a user is to a projected image, the more detailed UI elements the identified base image will contain. For example, the first pose of user 802 is further away from the projected image compared to the second pose user 852 . As such, projected image 804 contains less detailed weather information which includes weather information for today (for example, a graphical image of the sun and 72 degrees).
  • projected image 854 contains more detailed information which includes weather information for both today and tomorrow (for example, a graphical image of the sun and a temperature of 72 degrees indicated as today's weather, and a graphical image of a cloud with rain and a temperature of 54 degrees indicated as tomorrow's weather).
  • A process for generating a base image based on the distance of a user from a projected image in accordance with various implementations is illustrated in FIG. 9 .
  • the process 900 can be performed by one or more client devices, and/or any other apparatus capable of interacting with an automated assistant.
  • the process includes determining ( 902 ) the pose of a user by the client device. The pose of a user (including user location) can be determined in a manner similar to determining a pose of a first user as described in step ( 304 ) of FIG. 3 .
  • a distance from the user to a projection surface can be determined ( 904 ) by the client device.
  • the projection surface is the location where a projected image is displayed.
  • a client device can determine the distance from the user to the projection surface only using user pose.
  • additional information such as the distance from the client device to the projection surface may be necessary to determine the distance from the user to the projection surface.
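  • One simple way to derive the user-to-surface distance from the quantities mentioned above (the user's pose relative to the client device plus the device-to-surface distance) is sketched below; the coordinate convention, with the surface normal along one axis, is an assumption.

```python
# Sketch of deriving the user-to-surface distance from the user's pose relative
# to the client device and the device-to-surface distance; the coordinate
# convention (x axis pointing from the device toward the projection surface)
# is an assumption.
def user_to_surface_distance(user_offset_from_device, device_to_surface):
    """user_offset_from_device: (x, y) in meters, x toward the projection surface."""
    x_toward_surface, _ = user_offset_from_device
    return max(0.0, device_to_surface - x_toward_surface)

distance = user_to_surface_distance((1.5, 0.3), device_to_surface=4.0)  # -> 2.5 m
```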
  • a base image can be identified ( 906 ) using the distance from the user to the projection surface.
  • Base images with more detailed UI elements are generally selected for users closer to the projection surface while base images with less detailed UI elements are generally selected for users further away from the projection surface.
  • a base image with touch sensitive UI elements can be identified for a user close enough to make physical contact with the projection surface.
  • a client device can select a base image with a full day of calendar information for a user who can touch the projection surface, and additionally in many implementations the user can touch the projected image to scroll through the projected calendar image.
  • a transformed image can be generated ( 908 ) from the base image using the pose of the user by the client device.
  • generating a transformed image can be performed in a manner similar to steps ( 304 )-( 308 ) of FIG. 3 .
  • process 900 can omit generation of a transformed image and the generated base image can instead be projected.
  • the client device can cause ( 910 ) the projector to project the transformed image onto the surface. Projecting the transformed image can be performed in a manner similar to step ( 310 ) of FIG. 3 .
  • FIG. 10 illustrates a block diagram of an example method to generate interfaces in an audio-based, networked system.
  • the method 1100 can include receiving an input audio signal ( 1102 ).
  • the method 1100 can include parsing the input audio signal ( 1104 ).
  • the method 1100 can include selecting a first digital component and a second digital component ( 1106 ).
  • the method 1100 can include determining a distance ( 1108 ).
  • the method 1100 can determine transformation parameters ( 1110 ).
  • the method 1100 can include generating a first transformed digital component and a second transformed digital component ( 1112 ).
  • the method 1100 can include transmitting the first transformed image and the second transformed image ( 1114 ).
  • the method 1100 can include receiving an input audio signal ( 1102 ).
  • the method 1100 can include receiving, by the natural language processor, the input audio signal.
  • the input audio signal can be an input audio signal that is detected by a microphone or other sensor located at a client device.
  • the data processing system can receive the audio input in one or more portions or as a bulk or batch upload (e.g., multiple portions of the conversations uploaded in a single transmission to reduce the number of transmissions).
  • the method 1100 can include parsing the input signal ( 1104 ).
  • the natural language processor can parse the input signal to identify a request and one or more keywords in the input audio signal.
  • the request can be a request for a digital component.
  • the request can be for a digital component that includes images, video, text, audio files, or any combination thereof.
  • the keywords can include terms that are relevant, identified by, or associated with the requested digital component.
  • a keyword can include one or more terms or phrases. For example, for a request that a digital component including the current weather in San Francisco be displayed on a wall or other projection surface by the client device, the keyword can be “weather” or “San Francisco.”
  • the method 1100 can include selecting a first digital component and a second digital component ( 1106 ).
  • the first and second digital component can be base digital components, such as base images.
  • the base digital components can include one or more image or video files.
  • the content selector component can select the first base digital component based on the request parsed from the input audio signal.
  • the content selector component can select the second base digital component based on the keyword identified based on the input audio signal. For example, for the input audio signal “what is the weather in San Francisco,” the automated assistant can determine the request is for the current weather of San Francisco to be presented.
  • the first base digital component can be an image that includes graphics illustrating the current weather conditions and temperature in San Francisco.
  • the automated assistant can select the second base digital component based on a keyword associated with the request, such as “San Francisco.” For example, the second base digital component can be an image that includes information about a popular restaurant located in San Francisco.
  • the method 1100 can include determining a distance ( 1108 ).
  • the automated assistant can determine the distance between the automated assistant (or the projector associated with the automated assistant) and the projection surface (e.g., a wall) onto which the digital components are going to be projected.
  • the automated assistant can determine the distance using built in range finding sensors, such as ultrasonic or infrared sensors.
  • the end user, when configuring the automated assistant, can input the distance to the automated assistant.
  • the distance can be determined each time an input audio signal is transmitted to the automated assistant, at predetermined intervals (e.g., daily or weekly), or during a configuration phase of the automated assistant.
  • the method 1100 can include determining transformation parameters ( 1110 ).
  • the automated assistant can determine the transformation parameters based at least on the distance between the client device (or associated projector) and the projection surface.
  • the transformation parameters can be based on a pose or distance between the end user and the projection surface.
  • the transformation parameters can correct for a skew in the projection of digital components onto the projection surface based on the placement of the projector or the position of the user.
  • application of the transformation parameters can enable the automated assistant to perform a keystone correction on the digital component. Without the application of the transformation parameters, one or more of the edges of the digital component may be non-parallel to one another when projected onto the projection surface.
  • the transformation parameters can correct for the skew such that the edges of the digital component are parallel with one another when projected onto the projection surface.
  • the transformation parameters can include linear transformations.
  • the transformation parameters can be stored locally at the client device or projector.
  • the digital components can be transmitted to the client device, which can apply the transformation parameters to generate the transformed digital component.
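  • One common way to express such keystone-correction parameters is as a 3×3 projective transform (homography) fitted from corner correspondences. The sketch below is illustrative only, with assumed corner coordinates, and is not necessarily the parameterization used by the transformation component:

```python
# Illustrative sketch: computing keystone-correction parameters as a 3x3
# projective transform (homography) that maps base-image corners to corrected
# corners. The corner coordinates are assumed values for a projector that is
# offset from the projection surface.
import numpy as np

def homography(src, dst):
    """Solve H (3x3) such that dst ~ H @ src for four corner correspondences."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Base image corners (pixels) and where they must be sent so the projected
# quadrilateral appears rectangular with parallel edges (assumed values).
base_corners = [(0, 0), (639, 0), (639, 479), (0, 479)]
corrected_corners = [(40, 20), (600, 0), (639, 479), (0, 460)]
H = homography(base_corners, corrected_corners)
print(np.round(H, 4))
```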
  • the method 1100 can include generating a first transformed digital component and a second transformed digital component ( 1112 ).
  • the first and second transformed digital components can be transformed images.
  • the transformed version of the digital component can include the same content as the original digital component.
  • the images of the transformed digital components can be adjusted or scaled such that the edges of the transformed images appear parallel to one another when projected onto the projection surface.
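  • The sketch below illustrates, in simplified form, how a transformed image of the same dimensions can be generated from a base image and a 3×3 transform using inverse mapping with nearest-neighbor sampling; a deployed system would likely use an optimized library or GPU warp:

```python
# Illustrative sketch: generating a transformed image by applying a 3x3
# transform to a base image using inverse mapping and nearest-neighbor
# sampling. Output dimensions match the base image.
import numpy as np

def warp_image(base: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Warp `base` (rows x cols [x channels]) by homography H."""
    h, w = base.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    ones = np.ones_like(xs)
    # Map every output pixel back into the base image (inverse mapping).
    dst = np.stack([xs.ravel(), ys.ravel(), ones.ravel()])
    src = np.linalg.inv(H) @ dst
    sx = np.rint(src[0] / src[2]).astype(int)
    sy = np.rint(src[1] / src[2]).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(base)
    out.reshape(h * w, -1)[valid] = base.reshape(h * w, -1)[sy[valid] * w + sx[valid]]
    return out

# Example: a small horizontal shear applied to a synthetic 480x640 image.
base = np.tile(np.linspace(0, 255, 640, dtype=np.uint8), (480, 1))
H = np.array([[1.0, 0.2, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(warp_image(base, H).shape)  # (480, 640)
```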
  • the method 1100 can include transmitting the first transformed digital component and the second transformed digital component ( 1114 ).
  • the first and second transformed digital components can be transmitted to the client device to be projected onto the projection surface.
  • the transformation parameters and the base digital components can be transmitted to the client device.
  • the client device can apply the transformation parameters to the base digital components prior to projecting the base digital components onto the projection surface.
  • FIG. 11 is a block diagram of an example computing device 1010 that may optionally be utilized to perform one or more aspects of techniques described herein.
  • one or more of a client computing device, user-controlled resources module, and/or other component(s) may comprise one or more components of the example computing device 1010 .
  • Computing device 1010 typically includes at least one processor 1014 which communicates with a number of peripheral devices via bus subsystem 1012 .
  • peripheral devices may include a storage subsystem 1024 , including, for example, a memory subsystem 1025 and a file storage subsystem 1026 , user interface output devices 1020 , user interface input devices 1022 , and a network interface subsystem 1016 .
  • the input and output devices allow user interaction with computing device 1010 .
  • Network interface subsystem 1016 provides an interface to outside networks and is coupled to corresponding interface devices in other computing devices.
  • User interface input devices 1022 may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touchscreen incorporated into the display, audio input devices such as voice recognition systems, microphones, and/or other types of input devices.
  • use of the term “input device” is intended to include all possible types of devices and ways to input information into computing device 1010 or onto a communication network.
  • User interface output devices 1020 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices.
  • the display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image.
  • the display subsystem may also provide non-visual display such as via audio output devices.
  • use of the term “output device” is intended to include all possible types of devices and ways to output information from computing device 1010 to the user or to another machine or computing device.
  • Storage subsystem 1024 stores programming and data constructs that provide the functionality of some or all of the modules described herein.
  • the storage subsystem 1024 may include the logic to perform selected aspects of the process of FIG. 3 , as well as to implement various components depicted in FIGS. 1 and 2 .
  • Memory 1025 used in the storage subsystem 1024 can include a number of memories including a main random access memory (RAM) 1030 for storage of instructions and data during program execution and a read only memory (ROM) 1032 in which fixed instructions are stored.
  • a file storage subsystem 1026 can provide persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges.
  • the modules implementing the functionality of certain implementations may be stored by file storage subsystem 1026 in the storage subsystem 1024 , or in other machines accessible by the processor(s) 1014 .
  • Bus subsystem 1012 provides a mechanism for letting the various components and subsystems of computing device 1010 communicate with each other as intended. Although bus subsystem 1012 is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple busses.
  • Computing device 1010 can be of varying types including a workstation, server, computing cluster, blade server, server farm, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computing device 1010 depicted in FIG. 11 is intended only as a specific example for purposes of illustrating some implementations. Many other configurations of computing device 1010 are possible having more or fewer components than the computing device depicted in FIG. 11 .
  • the users may be provided with an opportunity to control whether programs or features collect user information (e.g., information about a user's social network, social actions or activities, profession, a user's preferences, or a user's current geographic location), or to control whether and/or how to receive content from the content server that may be more relevant to the user.
  • certain data may be treated in one or more ways before it is stored or used, so that personal identifiable information is removed.
  • a user's identity may be treated so that no personal identifiable information can be determined for the user, or a user's geographic location may be generalized where geographic location information is obtained (such as to a city, ZIP code, or state level), so that a particular geographic location of a user cannot be determined.
  • the user may have control over how information is collected about the user and/or used.
  • users may opt out of having automated assistant 112 attempt to estimate their age range and/or vocabulary level.

Abstract

Implementations relate to dynamic adaptation of images for projection by a projector, based on one or more properties of user(s) that are in an environment with the projector. The projector can be associated with an automated assistant client of a client device. In some versions of those implementations, a pose of a user in the environment is determined and, based on the pose, a base image for projecting onto a surface is warped to generate a transformed image. The transformed image, when projected onto a surface and viewed from the pose of the user, mitigates perceived differences relative to the base image.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present application claims the benefit of priority under 35 U.S.C. § 120 as a continuation of U.S. patent application Ser. No. 15/973,456, titled “DYNAMIC ADAPTATION OF DEVICE INTERFACES IN A VOICE-BASED SYSTEM,” filed May 7, 2018, which claims the benefit of priority under 35 U.S.C. § 120 as a continuation-in-part of U.S. patent application Ser. No. 15/955,297, titled “DYNAMIC ADAPTATION OF IMAGES FOR PROJECTION, AND/OR OF PROJECTION PARAMETERS, BASED ON USER(S) IN ENVIRONMENT,” filed Apr. 17, 2018, each of which is incorporated by reference herein in its entirety.
  • BACKGROUND
  • An automated assistant (also known as a “personal assistant”, “mobile assistant”, etc.) can be interacted with by a user via a variety of client devices such as projectors, smart phones, tablet computers, wearable devices, automobile systems, and/or standalone personal assistant devices. An automated assistant receives input from the user such as typed input, touch input, and/or spoken natural language input. The automated assistant can respond with responsive content such as visual and/or audible natural language output. An automated assistant that is interacted with via a client device can be implemented via the client device itself and/or one or more remote computing devices, such as (but not limited to) computing device(s) in “the cloud”, that are connected to the client device via a network.
  • SUMMARY
  • This disclosure relates to systems, methods, and apparatus for dynamic adaptation of images for projection by a projector, and/or of projection parameters, based on one or more properties of user(s) that are in an environment with the projector. Typically, as a user moves within a room, an image projected onto the same position of a wall will appear, to the user, to change due to the perspective change of the user based on the user's movement within the room. In other words, in such a typical situation the image projected onto the wall will remain the same as the user moves within the room—but the projected image, as perceived by the user, will appear to change as the user moves within the room. In implementations disclosed herein, a base image can be transformed, in dependence on a pose of a user, to generate a transformed image. As used herein, a “pose” references a position of a user, and optionally also an orientation of the user. The transformed image is different from the base image, but is generated such that when projected it appears to be substantially similar to the base image, when viewed from the pose of the user. Stated differently, if viewed from the same pose, the projected base image and the projected transformed image would be perceivable as different by the user. However, if the projected base image were viewed by the user from a first pose (e.g., “straight on”) and the projected transformed image were viewed by the user from a second pose (e.g., at a 70° angle relative to “straight on”), they would be perceived as the same.
  • Accordingly, various implementations disclosed herein can selectively and dynamically transform base images, in dependence on a pose of a user. Through the selective and dynamic transformation of base images, transformed images can selectively be projected in lieu of their base image counterparts, such that projected images, when viewed by the user, appear to be substantially similar to their base image counterparts. It will be understood that some user poses will require no dynamic transformations, and the base image itself can be projected to the user. As used herein, a “base image” references a single image frame and optionally also an image that is part of a sequence of images that form a video or other dynamic sequence of images.
  • In many implementations, an automated assistant can identify active user(s) of the automated assistant in determining how to dynamically transform an image. For example, where multiple users are present, the automated assistant can identify a subset of those users as active users, determine at least one pose of the subset, and transform an image in dependence on the at least one pose of the subset. Active users can be identified by an automated assistant in a number of ways including by movement, location, pose, facial identification, voice identification, and/or gaze. In some implementations, rooms can contain more than one person and various numbers of active users. As an illustrative example, a room can contain one person and no active users, one person and one active user, several people and one active user, and/or several people and several active users. The number of active users in a room can change over time, and a redetermination of active users by the automated assistant can be used to determine new image transformation parameters to use in transforming base images for projection.
  • In a variety of implementations, images can be dynamically transformed (or “warped”) by the automated assistant so the image appears the same as an active user moves within a room. Image warping can be a linear transformation, and can include a variety of processes including rotating the image, scaling the image, and skew adjusting the image. As an illustrative example of image warping, assume a base image that includes a pair of parallel lines. If the base image is projected, the lines would appear parallel to a user that is viewing the projection from a pose that is perpendicular to (e.g., directly in front of) a surface on which the projection is provided. However, if the user were instead viewing the projection of the base image from a non-perpendicular angle (e.g., from the side), the lines would appear non-parallel. Generating a transformed image based on warping the base image, and projecting the transformed image in lieu of the base image, can lead to the user still perceiving the lines as parallel even when the user is at a non-perpendicular angle (e.g., from the side). In other words, the user's perception of the projection of the transformed image can be more similar to the base image than would be the user's perception of a projection of the base image itself.
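  • For concreteness, the linear pieces of such a warp (rotation, scaling, and skew adjustment) can be composed into a single matrix in homogeneous coordinates, as in the illustrative sketch below (the angle and factors are arbitrary example values):

```python
# Illustrative sketch: composing rotation, scaling, and skew (shear) into one
# 3x3 image transformation matrix in homogeneous coordinates.
import numpy as np

theta = np.deg2rad(5.0)             # small rotation
rotate = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
scale = np.diag([0.9, 1.1, 1.0])    # non-uniform scaling
skew = np.array([[1.0, 0.15, 0],    # horizontal skew adjustment
                 [0.0, 1.0,  0],
                 [0, 0, 1]])

# A point is transformed by the composite matrix in one step.
M = skew @ scale @ rotate
point = np.array([100.0, 50.0, 1.0])
print(np.round(M @ point, 2))
```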
  • In some additional or alternative implementations, a base image can be generated and/or identified based on a distance of the user, where the distance of the user is indicative of the distance between the user and the surface upon which the image is projected (e.g., the distance can be based on a distance between the user and the projector, and optionally the distance from the projector to the surface). For example, interactive user interface (UI) elements can be included or excluded in a base image depending on the distance of the user from the projected image. For instance, when a user is relatively far away from a projection surface (e.g., more than 5 feet away or other “unreachable” distance), a base image can be identified or generated that lacks any interactive UI elements. In contrast, when a user is relatively close to a projection surface (e.g., within “reach” of the projection surface), a base image can be identified or generated that includes interactive UI elements. As another example, a first base image can be identified or generated when a user is within a first range of distances of the projection surface, and a second base image can be identified or generated when the user is instead within a second range of distances of the projection surface. For instance, in response to a user's request for “weather”, either the first base image or the second base image can be identified and/or generated for projection, in dependence on a distance of the user. For instance, the first range of distances can include farther distances and the first base image can include less information such as only today's weather report. In contrast, the second range of distances can include closer distances and the second base image can include more information such as today's weather report and the weather report for one or more additional days. In some implementations, the projected image can be touch sensitive, giving a user close enough to touch the projection a modality via which to interact with the automated assistant (e.g., in addition to voice and/or gesture modalities).
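  • A minimal sketch of this distance-dependent selection is shown below; the 5-foot “reach” threshold mirrors the example above, and the content and interface-element names are hypothetical:

```python
# Illustrative sketch: choosing or generating a base image in dependence on
# the user's distance from the projection surface.
REACH_THRESHOLD_FT = 5.0

def build_weather_base_image(distance_ft: float) -> dict:
    if distance_ft <= REACH_THRESHOLD_FT:
        # Close enough to touch: richer content plus interactive UI elements.
        return {"content": ["today", "next 3 days"],
                "interactive_elements": ["hourly_toggle", "city_picker"]}
    # Far away: a simpler base image without interactive UI elements.
    return {"content": ["today"], "interactive_elements": []}

print(build_weather_base_image(3.0))
print(build_weather_base_image(12.0))
```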
  • Implementations disclosed herein can enhance the usability of an automated assistant through dynamic adaptation of what content is projected and/or how it is projected. Such dynamic adaptations can enable more accurate and/or more comprehensible representations of projected content from a variety of viewing poses. Such dynamic adaptations can be of benefit to, for example, users with low dexterity who may be constrained with respect to the poses from which they can view projected automated assistant content. In some additional or alternative implementations, generating and projecting a transformed image can reduce the duration of time that it is necessary for a projector to project the transformed image, thereby conserving power resources that would otherwise be required to project the transformed image for a longer duration. For example, when a user views a projected transformed image, the user can comprehend the information presented in the image more quickly (i.e., relative to if a base image were instead projected) and/or is less likely to need to move to understand the information in the projected image. This enables the projector to cease projecting the transformed image more quickly. The projector can cease projecting the transformed image, for example, in response to a user command to dismiss the image (e.g., a command that requests additional content that will supplant the image) or as a time-out after determining the user is no longer viewing the image. In some additional or alternative implementations, a cloud-based automated assistant component can send a base image and the client device can generate transformation(s) of the base image locally, obviating the need for further client-cloud-based automated assistant component network communications to request and transmit transformation(s). This can efficiently lessen the amount of data exchanged between the cloud-based automated assistant component and the client device, since the cloud-based automated assistant component only needs to send a single base image instead of needing to send multiple image transformations along with each base image.
  • The above description is provided as an overview of some implementations disclosed herein. Additional description of these and other implementations is set forth in more detail herein.
  • In some implementations, a method is provided and includes identifying, by an automated assistant client of a computing device in an environment, a base image for projecting onto a surface via a projector accessible to the automated assistant client. The method further includes determining, using sensor data from at least one sensor, a first pose of a user in the environment. The sensor data is accessible to the automated assistant client. The method further includes determining, using the first pose of the user, first image transformation parameters for warping images. The method further includes generating a first transformed image that is a transformation of the base image, and causing the projector to project the first transformed image onto the surface. Generating the first transformed image includes using the first image transformation parameters to warp the base image. The first transformed image, when projected onto the surface and viewed from the first pose of the user, mitigates perceived differences relative to the base image. The method further includes determining, using additional sensor data from the at least one sensor, a second pose of the user in the environment, where the second pose of the user indicates the user has moved. The method further includes determining, using the second pose of the user, second image transformation parameters for warping images. The method further includes generating a second transformed image that is a transformation of the base image or of an additional base image, and causing the projector to project the second transformed image onto the surface. Generating the second transformed image includes using the second image transformation parameters to warp the base image or the additional base image. The second transformed image, when projected onto the surface and viewed from the second pose of the user, mitigates perceived differences relative to the base image or the additional base image.
  • These and other implementations of the technology disclosed herein can include one or more of the following features.
  • In some implementations, the base image is received, via a network interface of the computing device, from a remote automated assistant component that interfaces with the automated assistant client.
  • In some implementations, the base image is generated by the automated assistant client based on data received, via a network interface of the computing device, from a remote automated assistant component that interfaces with the automated assistant client.
  • In some implementations, the method further includes determining a distance of the surface to the projector using second sensor data from a second sensor. The second sensor data is accessible to the automated assistant client. In some versions of those implementations, determining the first image transformation parameters for warping images includes determining the first image transformation parameters using the first pose of the user and using the distance of the surface to the projector. In some additional or alternative versions of those implementations, generating the second transformed image includes using the second pose of the user and the distance from the surface to the projector to warp the base image.
  • In some implementations, generating the first transformed image that is the transformation of the base image includes performing at least one linear transformation on the base image. In some of those implementations, the at least one linear transformation is selected from a group consisting of rotation of the base image, scaling of the base image, and skew adjustment of the base image.
  • In some implementations, the base image has first dimensions and the first transformed image has the same first dimensions. In some of those implementations, the base image includes base image pixels each having corresponding values assigned thereto, and the transformed image includes transformed image pixels. The transformed image pixels have the same corresponding values as the base image pixels, but the assignment of the same corresponding values to the transformed image pixels differs from the assignment of the corresponding values to the base image pixels in the base image. For example, a given transformed image pixel, having a given X and Y position in the transformed image, can have the same values as a given base image pixel, of the base image, where the given base image pixel has a different X and Y position in the base image.
  • In some implementations, the method further includes determining, based on the first pose of the user, a desired size for the projection of the first transformed image. In some of those implementations, causing the projector to project the first transformed image onto the surface includes causing the projector to project the first transformed image to achieve the desired size for the projection.
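  • One simple way to derive such a desired size is to scale the projection with the user's distance so that the image subtends a roughly constant visual angle; the sketch below assumes a 20-degree target angle, which is an illustrative value not taken from this disclosure:

```python
# Illustrative sketch: scaling the desired projection width with the user's
# distance so the projected image subtends an approximately constant angle.
import math

def desired_projection_width(user_distance_m: float, target_angle_deg: float = 20.0) -> float:
    return 2.0 * user_distance_m * math.tan(math.radians(target_angle_deg) / 2.0)

print(round(desired_projection_width(2.0), 2))  # ~0.71 m wide at 2 m
print(round(desired_projection_width(4.0), 2))  # ~1.41 m wide at 4 m
```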
  • In some implementations, identifying the base image includes selecting the base image, from a plurality of candidate base images, based on the first pose of the user. In some of those implementations, selecting the base image based on the first pose of the user includes: determining a distance of the user based on the first pose of the user; and selecting the base image based on the distance corresponding to a distance measure assigned to the base image. The distance can be, for example, relative to the projector or relative to the surface.
  • In some implementations, the method further includes generating, by the automated assistant client, the base image based on the pose of the user. In some of those implementations, generating the base image based on the pose of the user includes: determining the pose of the user is within a threshold distance of the surface; and based on determining the pose of the user is within the threshold distance of the surface: generating the base image to include one or more interactive interface elements.
  • In some implementations, a method is provided and includes identifying, by an automated assistant client of a computing device, a base image for projecting onto a surface via a projector accessible to the automated assistant client. The method further includes identifying a plurality of users that are in an environment with the computing device, and determining, using sensor data from at least one sensor accessible to the computing device, that a subset of the users are active users for the automated assistant client. The method further includes determining at least one pose for the subset of the users that are determined to be active users. Determining the at least one pose is based on the sensor data or additional sensor data from at least one additional sensor accessible to the computing device. The method further includes, based on determining that the subset of the users are active users, using the at least one pose for the subset of the users in generating a transformed image of the base image. The method further includes causing the projector to project the transformed image onto the surface.
  • These and other implementations of the technology disclosed herein can include one or more of the following features.
  • In some implementations, the method further includes determining, using additional sensor data from the at least one sensor, that a second subset of the users are active users for the automated assistant client. The additional sensor data is generated at a time subsequent to the sensor data, and the second subset of the users varies from the first subset of the users. In those implementations, the method further includes: determining, based on the additional sensor data, at least one second pose for the second subset of the users that are determined to be active users; and based on determining that the second subset of the users are active users, using the at least one second pose for the second subset of the users in generating a second transformed image of the base image, or of an additional image. In those implementations, the method further includes causing the projector to project the second transformed image onto the surface.
  • In some implementations, the method further includes: determining, based on the sensor data or additional sensor data, a gaze for the subset of the users that are determined to be active users; and generating the transformed image using the gaze of the one or more active users.
  • In some implementations, the method further includes: determining, based on the pose of the subset of the users, a desired size for the projection of the transformed image. In some of those implementations, causing the projector to project the transformed image onto the surface includes causing the projector to project the transformed image to achieve the desired size for the projection.
  • In some implementations, a method is provided that includes identifying, by an automated assistant client of a computing device, a base image for projecting via a projector accessible to the automated assistant client. The method further includes identifying a plurality of users that are in an environment with the computing device. The method further includes determining, using sensor data from at least one sensor accessible to the computing device, that a subset of the users are active users for the automated assistant client. The method further includes determining at least one pose for the subset of the users that are determined to be active users. Determining the at least one pose is based on the sensor data or additional sensor data from at least one additional sensor accessible to the computing device. The method further includes, based on determining that the subset of the users are active users, using the at least one pose for the subset of the users in determining one or more projection parameters for a projection that includes the base image, or a transformed image that is a transform of the base image. The method further includes causing the projector to project the projection.
  • These and other implementations of the technology disclosed herein can include one or more of the following features.
  • In some implementations, the one or more projection parameters include one or multiple of: a size of the base image or the transformed image in the projection, a size of the projection, a location of the base image or the transformed image in the projection, and a location of the projection.
  • According to at least one aspect of the disclosure, a system to generate interfaces in an audio-based networked system can include a computing device that can include one or more processors and a memory. The one or more processors can be configured to execute a natural language processor, a content selector component, and a transformation component. The computing device can receive an input audio signal that is detected by a sensor at a client device. The computing device can parse the input audio signal to identify a first request in the input audio signal and a keyword associated with the first request. The computing device can select a first base digital component based on at least the first digital component request. The computing device can select a second base digital component based on at least the keyword associated with the first digital component request. The computing device can determine a distance between the client device and a projection surface. The computing device can determine, based on the distance between the client device and the projection surface, transformation parameters for the first base digital component and the second base digital component. The transformation parameters can be configured to correct a skew of images projected onto the projection surface. The computing device can generate a first transformed image based at least on the transformation parameters and the first base digital component and a second transformed image based at least on the transformation parameters and the second base digital component. The computing device can transmit the first transformed image and the second transformed image to the client device for projection onto the projection surface.
  • According to at least one aspect of the disclosure, a method to generate interfaces in an audio-based networked system can include receiving, by a natural language processor executed by one or more processors of a computing device, an input audio signal detected by a sensor at a client device. The method can include parsing, by the natural language processor, the input audio signal to identify a first request in the input audio signal and a keyword associated with the first request. The method can include selecting, by a content selector component of the computing device, a first base digital component based on at least the first digital component request. The method can include selecting, by the content selector component, a second base digital component based on at least the keyword associated with the first digital component request. The method can include determining, by a transformation component executed by the one or more processors of the computing device and based on sensor data from the client device, a distance between the client device and a projection surface. The method can include determining, by the transformation component, based on the distance between the client device and the projection surface, transformation parameters for the first base digital component and the second base digital component. The transformation parameters can be configured to correct a skew of images projected onto the projection surface. The method can include generating, by the transformation component, a first transformed image based at least on the transformation parameters and the first base digital component and a second transformed image based at least on the transformation parameters and the second base digital component. The method can include transmitting, by the transformation component, the first transformed image and the second transformed image to the client device for projection onto the projection surface.
  • Other implementations may include a non-transitory computer readable storage medium storing instructions executable by one or more processors (e.g., central processing unit(s) (CPU(s)), graphics processing unit(s) (GPU(s)), and/or tensor processing unit(s) (TPU(s))) to perform a method such as one or more of the methods described above and/or elsewhere herein. Yet other implementations may include a system of one or more computers and/or one or more robots that include one or more processors operable to execute stored instructions to perform a method such as one or more of the methods described above and/or elsewhere herein.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram illustrating an example environment in which various implementations can be implemented.
  • FIG. 2A is a diagram illustrating an example scene of a user in a first pose in a room looking at an image projected onto a wall by a projector using an automated assistant.
  • FIG. 2B is a diagram illustrating an example scene of a user in a second pose in a room looking at an image projected at the same location onto the wall by a projector using an automated assistant.
  • FIG. 2C is a diagram illustrating an example of a projected image, as perceived by a user from a perspective that is directly perpendicular to the projected image.
  • FIG. 2D illustrates an example of the projected image of FIG. 2C, but as perceived by a user from a perspective that is not directly perpendicular to the projected image.
  • FIG. 3 is a flowchart illustrating an example process according to implementations disclosed herein.
  • FIG. 4 is a flowchart illustrating an example process according to implementations disclosed herein.
  • FIGS. 5A, 5B, 5C, and 5D are diagrams illustrating examples of image processing adjustments.
  • FIG. 6 is a flowchart illustrating an example process according to implementations disclosed herein.
  • FIG. 7 is a flowchart illustrating an example process according to implementations disclosed herein.
  • FIG. 8A is a diagram illustrating an example scene of a user in a first pose in a room looking at an image projected onto a wall by a projector using an automated assistant.
  • FIG. 8B is a diagram illustrating an example scene of a user in a second pose in a room looking at an image projected at the same location onto the wall by a projector.
  • FIG. 9 is a flowchart illustrating an example process according to implementations disclosed herein.
  • FIG. 10 illustrates a block diagram of an example method to generate interfaces in an audio-based, networked system according to implementations disclosed herein.
  • FIG. 11 is a block diagram illustrating an example architecture of a computing device.
  • DETAILED DESCRIPTION
  • FIG. 1 illustrates an example environment 100 in which various implementations can be implemented. The example environment 100 includes one or more client devices 102. Each client device 102 may execute a respective instance of an automated assistant client 108. One or more cloud-based automated assistant components 116, such as natural language processor 122 and digital component selector 126, may be implemented on one or more computing systems (collectively referred to as a “cloud” computing system) that are communicatively coupled with client devices 102 via one or more local and/or wide area networks 114 (e.g., the internet). The system 100 can include one or more digital component providers 128 that can provide digital components to the client device 102 via the cloud-based automated assistant components 116 and networks 114.
  • The system 100 can include one or more digital component providers 128. The digital component providers 128 can provide audio, visual, or multimedia based digital components (which can also be referred to as content, images, or base images) for presentation by the client device 102 or the projector 106 as an audio and visual based output digital component. The digital component can be or include other digital components. The digital component can be or include a digital object. The digital component can be configured for a parametrically driven text to speech technique. The digital component can be configured for text-to-speech (TTS) implementations that convert normal language text into speech. For example, the digital component can include an image that is displayed on a projection surface as, via TTS, text related to the displayed image is presented to the user. The digital component can be input to an application programming interface that utilizes a speech-synthesis capability to synthesize text into natural-sounding speech in a variety of languages, accents, and voices. The digital component can be coded as plain text or a speech synthesis markup language (SSML). SSML can include parameters that can be set to control aspects of speech, such as pronunciation, volume, pitch, or rate that can form an acoustic fingerprint or native voice.
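  • For illustration, the speech portion of a digital component could be coded as SSML with prosody parameters such as rate, pitch, and volume, as in the hypothetical snippet below (the exact tags accepted can vary by TTS service):

```python
# Illustrative sketch: a digital component's audio portion coded as SSML with
# prosody parameters (rate, pitch, volume), mirroring the W3C SSML prosody
# element. The text content is a hypothetical example.
ssml_digital_component = """
<speak>
  Here is the current weather in San Francisco.
  <prosody rate="slow" pitch="+2st" volume="loud">
    Sixty two degrees and partly cloudy.
  </prosody>
</speak>
""".strip()

print(ssml_digital_component)
```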
  • The digital component provider 128 can provide selection criteria for the digital component, such as a value, keyword, concept, or other metadata or information to facilitate a content selection process. The digital component provider 128 can provide video based digital components (or other digital components) to the content selector component 126 where they can be stored in a data repository. The content selector component 126 can select the digital components from the data repository and provide the selected digital components to the client device 102.
  • The digital component provider 128 can provide the digital component to the content selector component 126 for storage in the data repository in a content data structure. The content selector component 126 can retrieve the digital component responsive to a request for content from the client device 102 or otherwise determining to provide the digital component.
  • The digital component provider 128 can establish a digital component campaign (or electronic content campaign). A digital component campaign can refer to one or more content groups that correspond to a common theme. A content campaign can include a hierarchical data structure that includes content groups, digital component data objects (e.g., digital components or digital objects), and content selection criteria. To create a digital component campaign, digital component provider 128 can specify values for campaign level parameters of the digital component campaign. The campaign level parameters can include, for example, a campaign name, a preferred content network for placing digital component objects, a value of resources to be used for the digital component campaign, start and end dates for the content campaign, a duration for the digital component campaign, a schedule for digital component object placements, language, geographical locations, and the type of computing devices on which to provide digital component objects. In some cases, an impression can refer to when a digital component object is fetched from its source and is countable. Due to the possibility of click fraud, robotic activity can be filtered and excluded from being counted as an impression. Thus, an impression can refer to a measurement of responses from a Web server to a page request from a browser, which is filtered from robotic activity and error codes, and is recorded at a point as close as possible to the opportunity to render the digital component object for display on the client device 102. In some cases, an impression can refer to a viewable or audible impression; e.g., the digital component object or digital component is at least partially (e.g., 20%, 30%, 40%, 50%, 60%, 70%, or more) viewable on a display device of the client device 102, or audible via a speaker of the client device 102. A click or selection can refer to a user interaction with the digital component object, such as a voice response to an audible impression, a mouse-click, touch interaction, gesture, shake, audio interaction, or keyboard click. A conversion can refer to a user taking a desired action with respect to the digital component object; e.g., purchasing a product or service, completing a survey, visiting a physical store corresponding to the digital component, or completing an electronic transaction.
  • The digital component provider 128 can establish one or more content groups for a digital component campaign. A content group includes one or more digital component objects and corresponding content selection criteria, such as keywords, words, terms, phrases, geographic locations, type of computing device, time of day, interest, topic, or vertical. Content groups under the same content campaign can share the same campaign level parameters, but may have tailored specifications for content group level parameters, such as keywords, negative keywords (e.g., that block placement of the digital component in the presence of the negative keyword on main content), or parameters associated with the content campaign.
  • To create a new content group, the digital component provider 128 can provide values for the content group level parameters of the content group. The content group level parameters include, for example, a content group name or content group theme, and bids for different content placement opportunities (e.g., automatic placement or managed placement) or outcomes (e.g., clicks, impressions, or conversions). A content group name or content group theme can be one or more terms that the digital component provider 128 can use to capture a topic or subject matter for which digital component objects of the content group are to be selected for display. For example, a food and beverage company can create a different content group for each brand of food or beverage it carries, and may further create a different content group for each product line it carries. Examples of the content group themes that the food and beverage company can use include, for example, “Brand A cola”, “Brand B ginger ale,” “Brand C orange juice,” “Brand D sports drink,” or “Brand E purified water.” An example content campaign theme can be “soda” and include content groups for both “Brand A cola” and “Brand B ginger ale”, for example. The digital component (or digital component object) can include “Brand A”, “Brand B”, “Brand C”, “Brand D” or “Brand E”.
  • The digital component provider 128 can provide one or more keywords and digital component objects to each content group. The keywords can include terms that are relevant to the products or services associated with or identified by the digital component objects. A keyword can include one or more terms or phrases. For example, the food and beverage company can include “soda,” “cola,” or “soft drink,” as keywords for a content group or content campaign that can be descriptive of the goods or services the brand provides. In some cases, negative keywords can be specified by the content provider to avoid, prevent, block, or disable content placement on certain terms or keywords. The content provider can specify a type of matching, such as exact match, phrase match, or broad match, used to select digital component objects.
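  • The match types mentioned above can be sketched, in greatly simplified form, as follows; real content-selection systems implement matching with far more nuance:

```python
# Illustrative sketch of exact, phrase, and broad keyword matching.
def matches(keyword: str, query: str, match_type: str) -> bool:
    kw, q = keyword.lower(), query.lower()
    if match_type == "exact":
        return q == kw
    if match_type == "phrase":
        return kw in q
    if match_type == "broad":
        return any(term in q.split() for term in kw.split())
    raise ValueError(match_type)

print(matches("soft drink", "soft drink", "exact"))             # True
print(matches("soft drink", "buy a soft drink", "phrase"))      # True
print(matches("soft drink", "drink recommendations", "broad"))  # True
```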
  • The digital component provider 128 can provide the one or more keywords to be used by the content selector component 126 to select a digital component object provided by the digital component provider 128. The digital component provider 128 can provide additional content selection criteria to be used by the content selector component 126 to select digital component objects. The content selector component 126 can run a content selection process involving multiple content providers 128 responsive to receiving an indication of a keyword of an electronic message.
  • The digital component provider 128 can provide one or more digital component objects for selection by the content selector component 126. The digital component objects can be a digital component or a collection of digital components. The content selector component 126 can select the digital component objects when a content placement opportunity becomes available that matches the resource allocation, content schedule, maximum bids, keywords, and other selection criteria specified for the content group. Different types of digital component objects can be included in a content group, such as a voice digital component, audio digital component, a text digital component, an image digital component, video digital component, multimedia digital component, or digital component link. Upon selecting a digital component, the content selector component 126 can transmit the digital component object for presentation or rendering on a client device 102 or display device of the client device 102. Presenting or rendering can include displaying the digital component on a display device or playing the digital component via a speaker of the client device 102. The content selector component 126 can instruct the client device 102 to present or render the digital component object. The content selector component 126 can instruct the client device 102 to generate audio signals, acoustic waves, or visual output. For example, the automated assistant client 108 can present the selected digital component via an audio output.
  • The instance of an automated assistant client 108, by way of its interactions with one or more cloud-based automated assistant components 116, may form what appears to be, from the user's perspective, a logical instance of an automated assistant 112 with which the user may engage in a dialogue. One instance of such an automated assistant 112 is depicted in FIG. 1 by a dashed line. It thus should be understood that each user that engages with an automated assistant client 108 executing on a client device 102 may, in effect, engage with his or her own logical instance of an automated assistant 112. For the sake of brevity and simplicity, the term “automated assistant” as used herein as “serving” a particular user may often refer to the combination of an automated assistant client 108 operated by the user and one or more cloud-based automated assistant components 116 (which may be shared amongst multiple automated assistant clients 108). It should also be understood that in some implementations, automated assistant 112 may respond to a request from any user regardless of whether the user is actually “served” by that particular instance of automated assistant 112.
  • Client device 102 may include, for example, one or more of: a desktop computing device, a laptop computing device, a tablet computing device, a touch sensitive computing device (e.g., a computing device which can receive input via touch from a user), a mobile phone computing device, a computing device of a vehicle of the user (e.g., an in-vehicle communications system, an in-vehicle entertainment system, an in-vehicle navigation system), a standalone interactive speaker, a smart appliance such as a smart television, a projector, and/or a wearable apparatus of the user that includes a computing device (e.g., a watch of the user having a computing device, glasses of the user having a computing device, a virtual or augmented reality computing device). Additional and/or alternative client computing devices may be provided.
  • The client device 102 can interface with a projector 106 or can include the projector 106. In some implementations, the projector 106 can be a “smart” projector (e.g., the “smart” projector can either simply display images it receives from client device 102 and/or receive relevant data to generate image transformations at the projector before projecting a transformed image). Furthermore, the projector 106 may include, for example, liquid crystal display (LCD) projectors, digital light processing (DLP) projectors, light emitting diode (LED) projectors, hybrid LED and laser diode projectors, and/or laser diode projectors. The projector 106 can be a short throw or ultra-short throw projector. A projected image can be touch sensitive and include a touch interface which can similarly receive touch inputs and/or gestures for allowing a user to control the automated assistant via the touch interface of a projected image. Projectors displaying touch sensitive images can include a variety of infrared sensors, cameras, and/or other sensor(s) to detect a user's gestures and taps to determine how a user is interacting with the projected image.
  • The automated assistant client 108 can utilize either the projector integrated within client device 102 or a stand-alone projector 106. In many implementations, automated assistant client 108 can utilize both projectors, for example using a different projector for a different situation. For example, automated assistant client 108 can utilize the projector integrated within client device 102 to project still images and stand-alone projector 106 to project a video sequence. The automated assistant client 108 can use different projectors in different lighting conditions depending on the specifications of the specific projectors, for example stand-alone projector 106 might project better in lower lighting conditions.
  • The client device 102 may include one or more presence sensors 104 that are configured to provide signals indicative of detected presence, particularly human presence. Presence sensors may come in various forms and can collect a variety of types of input to the automated assistant 112 such as verbal, textual, graphical, physical (e.g., a touch on a display device including a touch sensitive projector and/or a touch sensitive screen of a computing device), and/or visual (e.g., a gesture) based input. Some client devices 102 may be equipped with one or more digital cameras that are configured to capture and provide signal(s) indicative of movement detected in the fields of view. The client devices 102 may be equipped with presence sensors 104 that detect acoustic (or pressure) waves, such as one or more microphones.
  • The presence sensors 104 may be configured to detect indications associated with human presence. For example, in some implementations, a client device 102 may be equipped with a presence sensor 104 that detects various types of waves (e.g., radio, ultrasonic, electromagnetic, etc.) emitted by, for instance, a mobile client device 102 carried/operated by a particular user. For example, some client devices 102 may be configured to emit waves that are imperceptible to humans, such as ultrasonic waves or infrared waves, that may be detected by other client devices 102 (e.g., via ultrasonic/infrared receivers such as ultrasonic-capable microphones).
  • The various client devices 102 may emit other types of human-imperceptible waves, such as radio waves (e.g., Wi-Fi, Bluetooth, cellular, etc.) that may be detected by one or more client devices 102 and used to determine an operating user's particular position. In some implementations, Wi-Fi triangulation may be used to detect a user's position, e.g., based on Wi-Fi signals to/from a client device 102, for example utilizing any of a variety of Wi-Fi SLAM methods. In other implementations, other wireless signal characteristics, such as signal strength, may be used by various client devices 102, alone or collectively, to determine a particular person's pose based on signals emitted by a client device 102 they carry. Time-of-flight cameras can be used independently as presence sensors 104 to locate the pose of user(s) in an environment.
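  • As a rough illustration of how a wireless signal characteristic such as signal strength can be mapped to an approximate distance, the log-distance path-loss model is sketched below; the reference power and path-loss exponent are assumed calibration values, and this is not the specific Wi-Fi triangulation or Wi-Fi SLAM technique referenced above:

```python
# Illustrative sketch: estimating an approximate distance from a received
# signal strength (RSSI) using the log-distance path-loss model. The reference
# RSSI at 1 m and the path-loss exponent are assumed calibration values.
def rssi_to_distance_m(rssi_dbm: float, rssi_at_1m_dbm: float = -40.0,
                       path_loss_exponent: float = 2.5) -> float:
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

print(round(rssi_to_distance_m(-55.0), 2))  # ~3.98 m under these assumptions
```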
  • The automated assistant 112 may engage in dialog sessions with one or more users via user interface input and output devices of one or more client devices 102. The dialog sessions can be audio-based, image-based, or a combination of audio and images. In response to the input dialog from the user, the one or more client devices 102 can present selected digital components, such as images, videos, text, or audio to the user. In some implementations, automated assistant 112 may engage in dialog sessions with a user in response to user interface input provided by the user via one or more user interface input devices of one of the client devices 102. In some of those implementations, the user interface input is explicitly directed to automated assistant 112. For example, a user may speak a predetermined invocation phrase, such as “OK, Assistant,” or “Hey, Assistant,” to cause automated assistant 112 to enter a state where the automated assistant 112 can receive inputs, such as input audio signals, text-based inputs, or touch-based inputs. The inputs can include content requests.
  • The automated assistant 112 may engage in a dialog session in response to user interface input, even when that user interface input is not explicitly directed to automated assistant 112. For example, automated assistant 112 may examine the contents of user interface input and engage in a dialog session in response to certain terms being present in the user interface input and/or based on other cues. In many implementations, automated assistant 112 may utilize speech recognition to convert utterances from users into text, and respond to the text accordingly, e.g., by providing visual information in the form of a base image and/or a transformed image, by providing search results, general information, and/or taking one or more response actions (e.g., playing media, launching a game, ordering food, etc.). In some implementations, the automated assistant 112 can additionally or alternatively respond to utterances without converting the utterances into text. For example, the automated assistant 112 can convert voice input into an embedding, into entity representation(s) (that indicate entity/entities present in the voice input), and/or other “non-textual” representations and operate on such non-textual representations. Accordingly, implementations described herein as operating based on text converted from voice input may additionally and/or alternatively operate on the voice input directly and/or other non-textual representations of the voice input.
  • Each of the client computing devices 102 and computing device(s) operating cloud-based automated assistant components 116 may include one or more memories for storage of data and software applications, one or more processors for accessing data and executing applications, and other components that facilitate communication over a network. The operations performed by one or more computing devices 102 and/or automated assistant 112 may be distributed across multiple computer systems. Automated assistant 112 may be implemented as, for example, computer programs running on one or more computers in one or more locations that are coupled to each other through a network.
  • The client computing device 102 may operate an automated assistant client 108. In various implementations, each automated assistant client 108 may include a corresponding speech capture/text-to-speech (“TTS”)/speech-to-text (“STT”) module 110. In other implementations, one or more aspects of speech capture/TTS/STT module 110 may be implemented separately from the automated assistant client 108.
  • Each speech capture/TTS/STT module 110 may be configured to perform one or more functions: capture a user's speech, e.g., via a microphone (which in some cases may include presence sensor 104); convert that captured audio to text (and/or to other representations or embeddings); and/or convert text to speech. For example, in some implementations, because a client device 102 may be relatively constrained in terms of computing resources (e.g., processor cycles, memory, battery, etc.), the speech capture/TTS/STT module 110 that is local to each client device 102 may be configured to convert a finite number of different spoken phrases—particularly phrases that invoke automated assistant 112—to text (or other forms, such as lower dimensionality embeddings). Other speech input may be sent to cloud-based automated assistant components 116, which may include cloud-based TTS module 118 and/or cloud-based STT module 120.
  • Cloud-based STT module 120 may be configured to leverage the resources of the cloud to convert audio data captured by speech capture/TTS/STT module 110 into text (which may then be provided to natural language processor 122). Cloud-based TTS module 118 may be configured to leverage the virtually limitless resources of the cloud to convert textual data (e.g., natural language responses formulated by automated assistant 112) into computer-generated speech output. The TTS module 118 may provide the computer-generated speech output to client device 102 to be output directly, e.g., using one or more speakers. In other implementations, textual data (e.g., natural language responses) generated by automated assistant 112 may be provided to speech capture/TTS/STT module 110, which may then convert the textual data into computer-generated speech that is output locally.
  • Automated assistant 112 (e.g., cloud-based assistant components 116) may include a natural language processor 122, the TTS module 118, the STT module 120, transformation parameters 124, the digital component selector 126, and other components. In some implementations, one or more of the engines and/or modules of automated assistant 112 may be omitted, combined, and/or implemented in a component that is separate from automated assistant 112. In some implementations, to protect privacy, one or more of the components of automated assistant 112, such as natural language processor 122, speech capture/TTS/STT module 110, etc., may be implemented at least in part on client device 102 (e.g., to the exclusion of the cloud).
  • The automated assistant 112 can generate or select responsive content (e.g., digital components) in response to various inputs generated by a user of client device 102 during a human-to-computer dialog session with automated assistant 112. Automated assistant 112 may provide the responsive content (e.g., over one or more networks 114 when separate from a client device of a user) for presentation to the user as part of the dialog session. For example, automated assistant 112 may generate responsive content in response to free-form natural language input provided via client device 102. As used herein, free-form input is input that is formulated by the user and is not constrained to a group of options presented for selection by the user.
  • Natural language processor 122 of automated assistant 112 processes natural language input generated by users via client device 102 and may generate annotated output for use by one or more components of automated assistant 112. For example, the natural language processor 122 may process natural language free-form input that is generated by a user via one or more user interface input devices of client device 102. The generated annotated output includes one or more annotations of the natural language input and optionally one or more (e.g., all) of the terms of the natural language input. Natural language processor 122 can parse the input to identify the content request and one or more keywords in the input.
  • The natural language processor 122 can identify and annotate various types of grammatical information in natural language input. For example, the natural language processor 122 may include a part of speech tagger configured to annotate terms with their grammatical roles. Also, for example, in some implementations the natural language processor 122 may additionally and/or alternatively include a dependency parser configured to determine syntactic relationships between terms in natural language input.
  • The natural language processor 122 can include an entity tagger configured to annotate entity references in one or more segments such as references to people (including, for instance, literary characters, celebrities, public figures, etc.), organizations, locations (real and imaginary), and so forth. The entity tagger of the natural language processor 122 may annotate references to an entity at a high level of granularity (e.g., to enable identification of all references to an entity class such as people) and/or a lower level of granularity (e.g., to enable identification of all references to a particular entity such as a particular person). The entity tagger may rely on content of the natural language input to resolve a particular entity and/or may optionally communicate with a knowledge graph or other entity database to resolve a particular entity.
  • In some implementations, the natural language processor 122 may additionally and/or alternatively include a coreference resolver configured to group, or “cluster”, references to the same entity based on one or more contextual cues. For example, the coreference resolver may be utilized to resolve the term “there” to “Hypothetical Café” in the natural language input “I liked Hypothetical Café last time we ate there.”
  • One or more components of the natural language processor 122 can use annotations from one or more other components of the natural language processor 122. For example, in some implementations, the named entity tagger may rely on annotations from the coreference resolver and/or dependency parsers in annotating all mentions to a particular entity. Also, for example, in some implementations the coreference resolver may rely on annotations from the dependency parser in clustering references to the same entity. In many implementations, in processing a particular natural language input, one or more components of the natural language processor 122 may use related prior input and/or other related data outside of the particular natural language input to determine one or more annotations.
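  • One way to produce comparable annotations is with an off-the-shelf NLP library; the short sketch below uses spaCy purely as an illustration (the description above does not specify any particular library, and the model name is an assumption that must be installed separately).

```python
# Illustrative only: spaCy stands in for the part-of-speech tagger, dependency
# parser, and entity tagger described above; it is not natural language processor 122.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed English model
doc = nlp("I liked Hypothetical Café last time we ate there.")

for token in doc:
    # grammatical role and syntactic relationship of each term
    print(token.text, token.pos_, token.dep_, "->", token.head.text)

for ent in doc.ents:
    # entity references annotated with an entity class label
    print(ent.text, ent.label_)
```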
  • The natural language processor 122 can determine a request, such as a content request, within an audio input request received from the client device 102. The digital component selector 126 can be a part of the cloud-based automated assistant component 116 or separate from the cloud-based automated assistant component 116. The digital component selector 126 can receive the content request or an indication thereof. The content selector component 126 can receive prior audio inputs (or a packaged data object) for the selection of a digital component based on the content request. The content selector component 126 can execute a real-time digital component selection process to select the digital component. The content selector component 126 can select additional or supplemental digital components based on the input request.
  • The real-time digital component selection process can refer to, or include, selecting digital component objects (which may include sponsored digital component objects) provided by third party content providers 128. The real-time content selection process can include a service in which digital components provided by multiple content providers are parsed, processed, weighted, or matched based on the packaged data object in order to select one or more digital components to provide to the client device 102. For example, a plurality of content provider devices can provide a digital component with an associated bid to the digital component selector 126. Based on a ranking of the bids from each of the content provider devices, the digital component selector 126 can select one of the provided digital components. The digital component selector 126 can perform the content selection process in real-time. Performing the content selection process in real-time can refer to performing the content selection process responsive to the request for content received via the client device 102. The real-time content selection process can be performed (e.g., initiated or completed) within a time interval of receiving the request (e.g., 5 seconds, 10 seconds, 20 seconds, 30 seconds, 1 minute, 2 minutes, 3 minutes, 5 minutes, 10 minutes, or 20 minutes). The real-time content selection process can be performed during a communication session with the client device 102, or within a time interval after the communication session is terminated.
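  • A minimal sketch of ranking provider bids within a real-time window is shown below; the data shapes, field names, and the five second deadline are assumptions drawn from the examples above, not the actual digital component selector 126.

```python
# Hypothetical real-time selection: keep only candidates that match the content
# request, then pick the highest bid, abandoning selection if the deadline passes.
import time
from typing import Dict, List, Optional


def select_digital_component(candidates: List[Dict], started_at: float,
                             deadline_s: float = 5.0) -> Optional[Dict]:
    if time.monotonic() - started_at > deadline_s:
        return None  # selection window expired
    matching = [c for c in candidates if c.get("matches_request")]
    return max(matching, key=lambda c: c["bid"]) if matching else None


if __name__ == "__main__":
    providers = [
        {"provider": "A", "bid": 0.40, "matches_request": True},
        {"provider": "B", "bid": 0.55, "matches_request": True},
        {"provider": "C", "bid": 0.90, "matches_request": False},
    ]
    print(select_digital_component(providers, started_at=time.monotonic()))  # provider B
```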
  • For example, the digital component selector 126 can be designed, constructed, configured or operational to select digital component objects based on the content request in the input audio signal. The digital component selector 126 can identify, analyze, or recognize voice, audio, terms, characters, text, symbols, or images of the candidate digital components using an image processing technique, character recognition technique, natural language processing technique, or database lookup. The candidate digital components can include metadata indicative of the subject matter of the candidate digital components, in which case digital component selector 126 can process the metadata to determine whether the subject matter of the candidate digital component corresponds to the content request.
  • Responsive to the request identified in the input audio (or other) signal, the content selector component 126 can select a digital component object from a database associated with the digital component provider 128 and provide the digital component for presentation via the client device 102. The digital component object can be provided by a digital component provider 128. The content selector component 126 can select multiple digital components. The multiple digital components can be provided by different digital component providers 128. For example, a first digital component provider 128 can provide a primary digital component responsive to the request and a second digital component provider 128 can provide a supplemental digital component that is associated with or relates to the primary digital component. The client device 102 or a user thereof can interact with the digital component object. The client device 102 can receive an audio, touch, or other input response to the digital component. The client device 102 can receive an indication to select a hyperlink or other button associated with the digital component object that causes or allows the client device 102 to identify digital component provider 128, request a service from the digital component provider 128, instruct the digital component provider 128 to perform a service, transmit information to the digital component provider 128, or otherwise identify a good or service associated with digital component provider 128.
  • The digital component selector 126 can select a digital component that includes text, strings, or characters that can be processed by a text to speech system or presentable via a display. The digital component selector 126 can select a digital component that is in a parameterized format configured for a parametrically driven text to speech technique. The digital component selector 126 can select a digital component that is in a format configured for display via client device 102 or the projector 106. The digital component selector 126 can select a digital component that can be re-formatted to match a native output format of the client device 102, application, or projector 106 to which the digital component is transmitted. The digital component selector 126 can provide the selected digital component to the client device 102 or automated assistant client 108 or application executing on the client device 102 for presentation by the client device 102 or the projector 106.
  • The automated assistant 112 can generate dynamic image transformations to display a base image and/or transformed version of the base image, such as the digital components selected by the content selector component 126. The automated assistant 112 can identify one or more active users, generate image transformations to display for active user(s), and/or generate a base image to project based on the distance of a user from the projected image.
  • The cloud-based automated assistant components 116 may generate transformation parameters 124. In other implementations, transformation parameters 124 may be generated separately from cloud-based automated assistant components 116, e.g., on client device 102, by automated assistant client 108 and/or on another computer system (e.g., in the so-called “cloud”).
  • The transformation parameters 124 can be used by automated assistant client 108 or cloud-based automated assistant components 116 to generate a transformed digital component from a base digital component. For example, the transformation parameters can be used to generate a transformed image from a base image. For example, transformation parameters 124 can include identification information for a user's position within an environment. Warping parameters can be a specific type of transformation parameters which can be used by automated assistant client 108 and/or cloud-based automated assistant components 116 to warp a base image into a transformed image. Warping parameters may include, for example, one or more of: the pose of a user, the gaze of a user, the facial identification of a user (with approval of the user), the voice identification of a user (with approval of the user), the distance from a projector to the surface an image is projected onto, the shape of the surface the image is projected onto, or any combination thereof. In some implementations, automated assistant 112 can perform image warping (a linear transformation similar to image rectification, described in more detail below) using transformation parameters 124 to generate a transformed image from a base image and/or an additional base image.
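  • A minimal sketch of such a warp using a planar homography is shown below; OpenCV is used purely as an illustration, and the corner coordinates stand in for values that would be derived from the warping parameters (user pose, gaze, projector-to-surface distance), which are not specified numerically in the description above.

```python
# Illustrative warp of a base image into a transformed image with a planar homography.
import cv2
import numpy as np

base_image = np.full((480, 640, 3), 255, dtype=np.uint8)  # stand-in base image

# Corners of the base image and where those corners should land so that the
# projection appears undistorted from the user's non-perpendicular viewpoint.
src = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]])
dst = np.float32([[40, 20], [600, 0], [640, 480], [0, 460]])  # assumed example values

H = cv2.getPerspectiveTransform(src, dst)               # 3x3 warping matrix
transformed_image = cv2.warpPerspective(base_image, H, (640, 480))
```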
  • A user's pose can be determined via presence sensors 104, and the automated assistant 112 can use the distance from the user (which can be generated from the user's pose) to the projection surface (e.g., the location where the image is projected) to identify a base image to use for that particular user. The information contained in the base image can be dependent on the distance from the user to the projection surface. For example, a base image identified for a user located far from a surface can contain limited information from the user's calendar such as only the next item on the user's calendar. In contrast, a base image identified for a user located near the projection surface can contain more detailed information from the user's calendar such as the user's schedule for the entire day. If the user is close enough to the projection surface to touch the projection, in many implementations, the base image can also contain touch sensitive elements, for example, the ability for the user to scroll through calendar events for the entire week.
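  • The distance-dependent choice of base image content could be expressed as a simple lookup like the sketch below; the distance thresholds and content labels are assumptions used only to mirror the calendar example above.

```python
# Hypothetical mapping from user-to-surface distance to the detail level of the base image.
def choose_base_image(distance_m: float) -> str:
    if distance_m > 3.0:
        return "next_calendar_item_only"     # limited information when far away
    if distance_m > 0.75:
        return "full_day_schedule"           # more detail when nearby
    return "touch_scrollable_week_view"      # touch-sensitive elements within arm's reach


if __name__ == "__main__":
    print(choose_base_image(4.2))   # next_calendar_item_only
    print(choose_base_image(1.5))   # full_day_schedule
    print(choose_base_image(0.5))   # touch_scrollable_week_view
```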
  • Client device 102 and/or cloud-based automated assistant components 116 can be in communication with one or more third party agents hosted by remote device(s) (e.g., another cloud-based component). For instance, a user voice command “order a large pepperoni pizza from Agent X” can cause the automated assistant client 108 (or cloud-based automated assistant component(s) 116) to send an agent command to a third party agent “Agent X”. The third party agent can be a digital component provider 128. The agent command can include, for example, a request that can include an intent value that indicates an “ordering” intent determined from the voice command, as well as optional slot values such as “type=pizza”, “toppings=pepperoni”, and “size=large.” In response, the third party agent can provide, to the automated assistant 112, responsive content that includes (or enables generation of) base digital components relevant to the pizza order. For example, the base digital component can be a base image that can include graphical representations of the order being confirmed, as well as of a status of the pizza order. The content selector component 126 can also select supplemental or additional digital components to display in association with the base digital component. For example, the content selector component 126 can select an additional digital component that can include a video sequence of real time tracking of the pizza delivery driver on a map as the pizza is being delivered. Once these base image(s) are received from the third party agent, the automated assistant client 108 and/or the cloud-based automated assistant components 116 can generate a transformation of the base image(s) and a transformed image can be projected onto the wall for the user.
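  • The agent command in the pizza example could be represented roughly as below; the field names mirror the intent and slot values in the example, but the structure itself is an assumption rather than a defined message format.

```python
# Sketch of an agent command carrying an intent value and optional slot values.
import json

agent_command = {
    "agent": "Agent X",
    "intent": "ordering",
    "slots": {"type": "pizza", "toppings": "pepperoni", "size": "large"},
}

# The serialized command would be sent to the third party agent, which responds
# with base digital components (e.g., an order-confirmation base image).
print(json.dumps(agent_command, indent=2))
```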
  • FIGS. 2A and 2B illustrate an example of a user viewing an image projected onto a wall with respect to different user locations. Image 200 contains a scene of a room at a first time and is illustrated in FIG. 2A. Image 200 contains user 202, projected image 204, client device 206 (that includes an integrated projector and/or is in communication with a locally accessible separate projector), and table 208. It will be understood that image 200 is merely illustrative and, for example, the client device and/or projector can be separate devices, the client device and/or projector can be on a surface other than a table such as a desk or a dresser, and/or mounted onto surfaces such as a wall and/or ceiling, more than one client device can be present within the room, more than one projector can be within the room, and/or more than one user can be in the room. Client device 206 can determine the pose of user 202. If client device 206 determines the user 202 is an active user, the client device can project a variety of images 204 for the user including a base image and/or a transformed image that is a transformation of the base image.
  • The same room, captured at a second time, is illustrated in FIG. 2B as image 225. Image 225 contains the same client device 206 (that includes an integrated projector and/or is in communication with a locally accessible separate projector) and table 208, but the pose of the user has changed to a second pose 226. As an illustrative example, the user has moved to the right, from previous pose 228, to the second pose 226. Client device 206 can detect the user in the second pose 226 and make a determination of whether the user is an active user in the second pose. If the user is an active user in the second pose, projected image 230 can be projected onto the wall for the user in the second pose 226. Projected image 230 can be projected onto the same position of the wall as projected image 204 despite any change of content within the images themselves through image transformation processes. In a variety of implementations, contents of projected image 230 can change as the user moves to the second pose 226. For example, projected image 204 can be a first transformed image and projected image 230 can be a second transformed image, both of the same base image. In other words, client device 206 can generate image transformations such that the user in the first pose 202 and the user in the second pose 226 will perceive the same projected image even though the perspective of the user relative to the position of the image on the wall has changed. In many implementations, client device 206 can use a base image and can perform image transformations to project transformed images for both user poses such that projected image 204 for user pose 202 is a first transformed image and projected image 230 for the user in the second pose 226 is a second transformed image. In contrast, if user 202 has a viewpoint perpendicular to (e.g., directly in front of) image 204, client device 206 can project the untransformed base image as projected image 204 (and similarly client device 206 can project a transformed image as projected image 230 for the user in the second pose 226).
  • The client device 206 can project additional digital components for viewing by the users near the client device 206. For example, the client device 206 can project an additional image 232 onto the wall. The subject matter of the additional image 232 can be related to subject matter of the projected image 230. For example, the additional image 232 can provide additional information regarding the content of the projected image 230. The additional image 232 can be a duplicate of the projected image 230 that is projected onto a different location of the wall. For example, the client device 206 can project the additional image 232 as a duplicate of the projected image 230 to the different location to improve the ability of a second user to view the projected content.
  • The client device 206 can generate the additional image 232 and project the additional image 232 to a location selected to be viewed by a user pose other than the user pose 226 for which the location of the projected image 230 is selected. The client device 206 can select the location of the additional image 232 such that the additional image is displayed in a non-prominent location. For example, and continuing the above pizza ordering example, the pizza tracker information can be included in the additional image 232. In this example, the pizza tracker information can be displayed on a non-prominent location of the wall, such as near a corner, an intersection with the floor, an intersection with another wall, or near an object positioned near the wall (such as near a lamp positioned in front of the wall).
  • The content selector component 126 can select digital components for the client device 206 to project onto the wall that are not related to a request parsed from an input signal. The content selector component 126 can select digital components to be projected on the wall between times that the client device 206 receives input signals. For example, content selector component 126 can select supplemental digital components based on the location, context, or action of the client device, or based on the preferences of a user associated with the client device 206. For example, the client device can project supplemental digital components onto the wall between the times that the client device 206 receives input signals, such as weather information, time information, or other digital components selected or configured by the user of the client device 206.
  • Example images further illustrating a base image and the same base image without dynamic image transformation, viewed by a user from a non-perpendicular angle (e.g., from the side), are illustrated in FIGS. 2C and 2D. A base image can be directly projected onto a wall for a user who has a perspective of the image directly perpendicular to the wall. Additionally, a base image can be transformed using a variety of dynamic image transformations such that a client device can project a transformed image that will appear substantially similar (if not identical) to the base image as the user's perspective of the projected image's location within the room changes.
  • FIG. 2C contains image 250 which contains dashed line 252 and dashed line 254. Dashed line 252 and dashed line 254 are parallel lines (i.e., lines that are equidistant and will never meet). In many implementations, image 250 can be an example of a base image used by a client device which can be processed using image transformations such that dashed line 252 and dashed line 254 can appear parallel to a user when projected onto a wall regardless of the pose of a user in the room. For example, when a user is viewing the projected image from a non-perpendicular angle (e.g., the side), image transformations can make the lines in the transformed base image still appear parallel. Additionally, when image 250 is projected onto a surface by a client device and viewed by a user from a direction perpendicular to (e.g., directly in front of) the surface as a base image, dashed line 252 and dashed line 254 appear parallel.
  • In contrast, FIG. 2D contains image 275 made of dashed line 276 and dashed line 278. Dashed line 276 and dashed line 278 are non-parallel and slant towards each other on the left hand side of the image. In a variety of implementations, lines 276 and 278 are an example of how a base image of parallel lines (for example, if the image 250 containing dashed parallel lines was utilized as a base image) could be viewed by a user from a perspective in the room other than perpendicular to the image if no image transformation was used on the image projection. As an example, a pair of lines closer on the right side and further apart on the left side can indicate a user is standing to the right of a base image of parallel lines without image transformation. This user pose can be similar to the user in the second pose 226 after the user has moved to the right side of the room in FIG. 2B.
  • Client device 206 can generate an image transformation on a base image similar to the parallel lines of image 250 in FIG. 2C such that a user in second pose 226 will see a transformed image generated by client device 206 with parallel lines which looks like image 250, instead of the non-parallel lines in image 275 the user would otherwise see from a non-perpendicular viewing position as illustrated in FIG. 2D. Additionally, the user in the second pose 226 will see an image projected that is the same (or substantially similar) size as the base image and that is in the same (or substantially similar) location as the base image.
  • A process for dynamic image transformation using image warping in accordance with various implementations is illustrated in FIG. 3. The process 300 can be performed by one or more client devices, and/or any other apparatus capable of interacting with an automated assistant. The process includes identifying (302) a base image. The base image can be used by an automated assistant (at the client device and/or at remote device(s)) to generate a transformed image in dependence on a pose of a user. For example, a base image can include any of a variety of information that a client device can cause a projector to project for presentation to a user, such as weather information for a particular day. In many implementations, a user can request information that can be included in a base image through interaction with the client device through verbal, textual, graphical, and/or visual input to the client device. For example, a user can ask the client device “OK Assistant—what is the weather tomorrow” and a base image can be identified containing weather information for the following day. Moreover, base images can be provided to a client device by a third party agent (optionally provided via the cloud-based automated assistant component(s) 116). For example, if a user orders a delivery from a restaurant, the third party agent associated with the restaurant can send the automated assistant a base image which can include status updates relating to the delivery. Yet further, in various implementations the automated assistant can generate base images and provide them for projection independent of explicit user input. For example, an image that contains a weather forecast can be projected in response to detecting presence of a user, but without explicit input from the user.
  • A first pose of a user can be determined (304) within a room. A variety of sensors in the client device, such as presence sensors 104 in FIG. 1, can be used to determine the first pose of the user, such as a microphone, a camera, an infrared camera, a time of flight camera, and/or a touch screen display on the client device. For example, a client device can use a camera to detect a user and generate the pose of the user within the room. In several implementations, a client device with many sensors can determine which sensors to use individually and/or in combination based on previously known information regarding the pose of a user. For example, when little information is known about a user's pose, a sensor which gathers information about the entire room can be used, such as a time of flight camera. Alternatively, if a user is interacting with the client device at the touch screen, it can safely be assumed the user is close to the client device, and a shorter range sensor could be used to determine the first pose of the user. Additionally or alternatively, sensors integrated into third party agents can be used to determine a pose of a user. It will be understood that these examples are merely illustrative and any of a variety of ways to detect user pose in a room can be utilized as appropriate in accordance with various implementations.
  • First image transformation parameters can be generated (306) using the first pose of the user to warp the base image. Image warping can include a linear transformation process which can use image warping parameters such as a user's pose, position, gaze, facial identification (with approval of the user), voice identification (with approval of the user), and/or distance from the projector to the surface the image is projected onto to generate image transformation parameters. A process for determining a transformation to warp an image in accordance with many implementations is discussed below with reference to FIG. 4. In many implementations, first image transformation parameters can include, for example (but are not limited to), base image data, precalculated image transformation data for a base image, first user pose data, and/or any other information relevant to generating a transformed image. In several implementations, first image transformation parameters can include image warping parameters.
  • A first transformed image can be generated (308) using the first image transformation parameters, where the first transformed image is a transformation of the base image. In general, a transformed image can differ from the base image in content, but is generated in such a way that it appears to be substantially similar in size and location to the base image when projected onto the surface. In many implementations, an image transformation is not necessary when the first pose of the user is viewing the image perpendicular to (e.g., directly in front of) the surface the image is projected onto. In this specific example, first image transformation parameters can indicate to the client device that the generation of a first transformed image is unnecessary and the base image will take the place of the first transformed image. Furthermore, the base image, being identical to itself, will take up an identical size and position on the wall. In many implementations, while a transformed image will differ from its base image counterpart, it can be of the same size as the base image and/or can, when projected, be projected in the same location and at the same size as would the base image. For example, when a base image and a transformed image are projected, they can be of an identical size on a projection surface and in identical locations on the projection surface. In some implementations, the transformed image is projected in a “substantially similar” position as the base image. For example, the first transformed image and the base image can be identical in size, but the two images are not projected onto exactly the same position of the wall, and the transformed image can take up 95% of the same position on the wall as the base image. Alternatively, the transformed image can be slightly smaller than the base image, and while the transformed image is taking up the same position on the wall, because the transformed image is slightly smaller it is only taking up 90% of the wall space of the base image (but is not taking up any wall space outside of the area of the original base image). Additionally, a transformed image can be slightly larger than a base image and can take up 105% of the wall space of the base image and still take up a substantially similar position on the wall.
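  • The “substantially similar” position test above could be approximated with an overlap ratio, as in the sketch below; the rectangle representation and the 90% threshold are assumptions drawn from the percentages discussed above.

```python
# Hypothetical check of whether a transformed image occupies a substantially
# similar position on the projection surface as the base image.
def overlap_ratio(base_rect, transformed_rect):
    """Rectangles given as (x, y, width, height); returns overlap area / base area."""
    bx, by, bw, bh = base_rect
    tx, ty, tw, th = transformed_rect
    ox = max(0, min(bx + bw, tx + tw) - max(bx, tx))
    oy = max(0, min(by + bh, ty + th) - max(by, ty))
    return (ox * oy) / float(bw * bh)


base = (100, 100, 400, 300)
transformed = (110, 105, 400, 300)
print(overlap_ratio(base, transformed) >= 0.90)  # True: substantially similar position
```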
  • The projector can be caused (310) by the client device to project the first transformed image onto a surface. The projector can be integrated within a client device similar to client device 102 and/or a separate projector similar to projector 106. The surface can include various areas within a room including a wall, the ceiling, and/or the floor. For example, if the first pose of a user is determined to be reclining on a couch, projecting information onto the ceiling (instead of onto the wall) might be a more useful location for the user to view the projected transformed image.
  • A second pose of the user can be determined (312) after the user has moved. User movement can be determined using many of the same sensors used to detect the first pose of the user in step (304) including a microphone, a camera, an infrared camera, a time of flight camera, and/or a touch screen display on the client device. For example, a microphone that detected a user speaking in one pose in the room can detect when the user's voice is coming from a different direction, and thus the user has moved within the room. Once a client device has determined the user has moved, the second pose of the user can be determined in a manner similar to determining the first pose of the user in step (304) including using a sensor and/or a combination of sensors.
  • Second image transformation parameters can be generated (314) using the second pose of the user to warp the image. As previously described, image warping can be a linear transformation process which can utilize image warping parameters to generate image transformation parameters. A process for determining a transformation to warp an image in accordance with many implementations is discussed below with reference to FIG. 4. In some implementations, second image transformation parameters can include, for example (but are not limited to), base image data, precalculated image transformation data for the base image, first user pose data, second user pose data, and/or any other information about a user relevant to generating a transformed image. In many implementations, second image transformation parameters can include image warping parameters.
  • A second transformed image can be generated (316) using the second image transformation parameters, where the second transformed image is a transformation of the base image or an additional base image. Generating a second transformed image can be performed in a manner similar to step (308) described above. Additionally or alternatively, an additional base image can be used in place of the base image to generate the second transformed image when a base image changes over time. For example, a base image of a projected calendar event image can change once the event has ended and an additional base image for the next calendar event image can be projected. Additionally, individual frames of a video sequence can make up a corresponding sequence of base images. In this case the base image can change to an additional base image independent of user movement, and additional image transformations can be calculated for a first pose of the user for additional base images.
  • The projector can be caused (318) by the client device to project the second transformed image onto the surface. The second image can be projected in a manner similar to the projection of the first transformed image in step (310) above. However, if the client device detects the viewpoint of the user has substantially changed and a different surface would be preferable, the client device can instruct the projector to project the second transformed image onto a different surface. For example, if the first pose of a user is determined to be reclining on a couch and the first transformed image is projected onto the ceiling, and the client device detects the user gets up from the couch and has moved within the room as the second pose, the second transformed image can be projected onto a wall instead of the ceiling. Similarly, for example, if a user is facing north for the first pose, the first transformed image can be projected onto the north wall. In some implementations, if the user moves to face south for the second pose, the second transformed image can be projected onto the south wall.
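  • Taken together, steps (302)-(318) could be organized as a simple sense, transform, and project loop; every helper name, the polling interval, and the movement threshold in the sketch below are placeholders for the operations described above, not an actual API.

```python
# Pseudocode-style sketch of process 300 as a loop over user pose changes.
import time


def _moved(pose_a, pose_b):
    """Euclidean distance between two (x, y) pose positions."""
    return ((pose_a[0] - pose_b[0]) ** 2 + (pose_a[1] - pose_b[1]) ** 2) ** 0.5


def run_dynamic_projection(identify_base_image, determine_pose, generate_params,
                           warp, project, movement_threshold_m=0.05, poll_s=0.1):
    base_image = identify_base_image()                        # step 302
    pose = determine_pose()                                   # step 304
    params = generate_params(base_image, pose)                # step 306
    project(warp(base_image, params))                         # steps 308-310
    while True:
        new_pose = determine_pose()                           # step 312
        if _moved(pose, new_pose) >= movement_threshold_m:
            pose = new_pose
            params = generate_params(base_image, pose)        # step 314
            project(warp(base_image, params))                 # steps 316-318
        time.sleep(poll_s)
```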
  • A process for determining a transformation to warp an image in accordance with various implementations is illustrated in FIG. 4. The process 400 can be performed by one or more client devices, and/or any other apparatus capable of interacting with an automated assistant for generating image transformation parameters by image warping. The process 400 can include identifying (402) image warping parameters. Image warping parameters can include (but are not limited to) the pose of a user, the gaze of a user, the facial identification of a user (with approval of the user), the voice identification of a user (with approval of the user), the distance from a projector to the surface an image is projected onto, and/or any other of a variety of user and/or hardware related parameters which can be used as image warping parameters. In many implementations, a client device can use a single image warping parameter, such as the gaze of the user, to determine a transformation to warp an image. Alternatively, in several implementations, a client device can use a combination of image warping parameters.
  • Individual sensors available to a client device can impact which warping parameters are available to the client device. For example, if a client device has sensors which can determine the gaze of a user, the client device can use gaze as an image warping parameter. Additionally, in some implementations, a client device can receive data to use as warping parameters from sensors in third party agents such as a “smart” thermostat and/or other “smart” devices located within the room.
  • Image warping parameters can be identified by a client device individually and/or in combination. In various implementations a combination of warping parameters can be identified by particular sensors available to the client device, the client device itself can determine the combination, and/or the user can identify image warping parameters by predetermining which combination of warping parameters the client device should use. For example, a client device can use the gaze of a user as a warping parameter. However, the client device can choose not to use the gaze of the user as a warping parameter at night when the lights are off in the room and the client device can have a harder time determining the gaze of a user; instead, the client device can identify other warping parameters such as determining a position of a user using voice identification (which is often less sensitive to lighting conditions in the room).
  • Determining transformation parameters to warp a base image can depend on (but is not limited to) which warping parameters are used by the client device and the values of the data contained in those warping parameters. In many implementations, different adjustments and/or combinations of adjustments can be made using image warping parameters to generate image transformation parameters which can warp a base image as an image transformation. For example, a base image with a particular set of image warping parameters can require only one adjustment to determine transformation parameters, such as only requiring a vertical rotation of the base image. In contrast, in several implementations, when a user is in a different position, a base image can have a different set of image warping parameters and can require a combination of adjustments such as a horizontal rotation of the base image and a scaling of the base image. It will be understood that these examples are merely illustrative and any number of image adjustment combinations can be made by a client device as appropriate to determine transformation parameters in accordance with various implementations.
  • A rotation of the base image can optionally be determined (404) by the client device. Image rotations can spin an image on the same plane as the wall the image is projected onto (i.e., on a plane parallel with the projection surface). For example, a base image can be a rectangle that, when viewed by the user perpendicular to (e.g., directly in front of) a square wall, has lines parallel with the ceiling, floor, and walls. Image warping parameters can determine an appropriate rotation of the base image to determine transformation parameters to generate a transformed image that is also a rectangle with lines parallel to the ceiling, floor, and walls when viewed from a non-perpendicular pose of the user (e.g., to the side of the square wall).
  • A scaling of the base image can optionally be determined (406) by the client device. Image scaling adjusts the size of an image. Additionally or alternatively, image warping parameters can determine an appropriate scaling of a base image to generate image transformation parameters which can increase and/or decrease the size of the base image when the image transformation parameters are utilized to generate a transformed image, such that the transformed image takes up the same and/or a substantially similar amount of space on the wall when projected as the base image when viewed from a non-perpendicular pose of the user.
  • A skew adjustment of the base image can optionally be determined (408) by the client device. In general, a skewed image is an image at an oblique angle (i.e., at a slant). Image warping parameters can determine how to skew an image to generate image transformation parameters to use in generating a transformed image which can change the angles within the base image (i.e., slant portions of the image) such that the determined transformation parameters can generate a transformed image that appears non-skewed when viewed from a non-perpendicular pose of the user.
  • Transformation parameters to warp the base image as an image transformation can be determined (410) by the client device. As previously described, the number of image adjustments necessary to determine transformation parameters to warp a particular base image as an image transformation with a particular set of image warping parameters can vary based on a number of factors including the base image and/or one or more of the warping parameters.
  • In many implementations, image warping can be viewed in some ways as similar to image rectification with various differences. Image rectification generally projects two images, each image having a different optical viewpoint, onto a common image plane. A matching pair of transformations, H and H′, can be used to rectify the pair of images. In contrast, while image warping similarly involves two images (a base image and a transformed image), only one image in the pair (the transformed image) is transformed. The base image never changes, and a single base image can be associated with many image transformations (and thus many transformed images) as the “optical viewpoint” of the user changes, such as when the pose of the user within the room changes. Image warping generates a single image transformation using a known “optical viewpoint” (which in this context can be viewed as determined image warping parameters) of a user to match a transformed image to a known base image. In many implementations, similar mathematical techniques used in image rectification (sometimes with slight modifications) can be utilized in image warping including planar rectification, cylindrical rectification, and/or polar rectification. For example, a base image can contain a calendar event for a user. Image warping can generate a single image transformation (in contrast to the matching pair of image transformations generated with image rectification) using a transformation parameter such as the pose of the user within a room in place of the “optical viewpoint” to generate a transformed image to correspond with the known base image for the user. Additionally or alternatively, other transformation parameters can be utilized individually and/or in combination; for example, the pose and the gaze of a user can be utilized as the “optical viewpoint” of the user when generating the single image transformation to generate a transformed image to correspond with the known base image.
  • To further illustrate image adjustments that can be utilized in determining transformation parameters to warp a base image, examples of a variety of image adjustments are illustrated in FIGS. 5A-5D. Image 500 contains an example of image rotation as illustrated in FIG. 5A. Image 500 contains a square 502. A rotation to the left of square 502 is illustrated as rotated square 504. It will be understood that this is merely an illustrative example, and rotated square 504 could be located in any of a variety of positions where rotated square 504 turns around an axis within image 500.
  • Image 525 contains an example of image scaling as illustrated in FIG. 5B. Image 525 similarly contains square 502. However, square 502 is scaled to be larger and is illustrated as scaled square 526. Scaled square 526 is merely an illustrative example, as scaling can increase or decrease the size of an object.
  • Image 550 contains an example of image skewing as illustrated in FIG. 5C. Square 502 is similarly contained in image 550. A skew transformation is performed on square 502 where square 502 is skewed to the right as skewed square 552. This example of image skewing is merely illustrative and skewing can occur in any direction including to the right, to the left, up, and/or down.
  • Image 575 contains an example of image translation as illustrated in FIG. 5D. Square 502 as illustrated in image 575 is translated up and to the right to generate translated square 576. This example is merely illustrative and image translations can occur in any direction. Additionally, image transformations discussed in FIGS. 5A-5D can be performed on an image individually and/or in any of a variety of combinations.
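  • The adjustments illustrated in FIGS. 5A-5D can be written as elementary 2D transforms; the sketch below applies example rotation, scaling, skew, and translation values (the specific angle, scale factor, skew factor, and offset are arbitrary illustrations) to the corners of a unit square.

```python
# Illustrative matrix forms of rotation, scaling, skew, and translation applied
# to the corners of a square, individually and in combination.
import numpy as np

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float).T  # 2x4 corner matrix

theta = np.radians(15)
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])   # cf. FIG. 5A
scaling = np.array([[1.5, 0.0], [0.0, 1.5]])             # cf. FIG. 5B
skew = np.array([[1.0, 0.3], [0.0, 1.0]])                # cf. FIG. 5C
translation = np.array([[0.4], [0.4]])                   # cf. FIG. 5D

print(rotation @ square)                                  # rotated square
print(scaling @ square)                                   # scaled square
print(skew @ square)                                      # skewed square
print(square + translation)                               # translated square
print(skew @ scaling @ rotation @ square + translation)   # combined adjustments
```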
  • A process for dynamic image transformation including detecting an active user in accordance with various implementations is illustrated in FIG. 6. The process 600 can be performed by one or more client devices, and/or any other apparatus capable of interacting with an automated assistant. The process includes identifying (602) a base image. As described above, a base image can be transformed in dependence on a pose of a user to generate a transformed image. In several implementations, identification of a base image can be performed in a manner similar to step (302) in FIG. 3.
  • An active user can be detected (604) by the client device. In some implementations, an active user is a user who is actively engaged with the automated assistant, and can be detected in a number of ways including by movement, location, pose, facial identification (with approval of the user), voice identification (with approval of the user), and/or gaze. Active engagement can include viewing a projected image, listening to rendered audible content provided by the automated assistant, and/or providing input to the automated assistant (e.g., voice input, touch input, gestures, etc.). Sensors such as any of a variety of sensors included in presence sensors 104 and/or sensors included in third party agents such as “smart” devices can collect sensor data to detect an active user. For example, a user detected by a microphone giving the client device a command “OK Assistant—show me my calendar for tomorrow” could be identified as an active user. Additionally, a user detected by a camera looking at an image projected onto a surface by the client device, based on the user's gaze, can be identified as an active user. In many implementations, a combination of techniques can be combined to detect an active user such as identifying pose and facial identification (with approval of the user) of someone in a room to detect the person is an active user. Additionally, multiple active users can be detected by a client device in the same room. Detecting an active user can map the detected active user to a user profile or some other sort of identification of the user. Additionally or alternatively, detecting an active user can indicate only that an active user is engaged with the automated assistant.
  • A first transformed image can be generated (606) for the active user by the client device. Any of a variety of dynamic image transformations can be used to generate the first transformed image, including image warping. In many implementations, generating a first transformed image can be performed in a manner similar to steps (304)-(308) of FIG. 3.
  • The client device can cause (608) the projector to project the first transformed image onto a surface. In a variety of implementations, the first transformed image is a transformation of the base image. Projecting the first transformed image can be performed in a manner similar to step (310) of FIG. 3 including using a projector integrated with the client device and/or using a separate projector.
  • User movement can be detected (610) by the client device. In some implementations, the detection of movement can be performed in a manner similar to determining user movement in step (312) of FIG. 3. Additionally, in a variety of implementations, a threshold of movement can be met before a client device determines the second pose of the user. For example, if a user moves less than a millimeter, it is unlikely this small amount of movement will generate a new transformed image that is different enough from the first transformed image to be worth generating, and it can save system resources to wait until a user moves a greater distance before making a determination of the second pose of the user. On the other hand, if a user moves 3 meters to the right, similar to how the user moved to the second pose 226 in FIG. 2B, a second transformed image can be substantially different from the first transformed image (depending on the content of the image). This can be especially true in implementations where gaze is being utilized to determine and/or as part of determining user pose, as a user's eyes frequently make very small shifts. Computational resources can be saved if a larger shift in gaze is required before a second transformed image for the active user is generated. Alternatively, in many implementations when user gaze is being utilized, any projected images can be stabilized using a variety of image processing techniques as a way to compensate for these small shifts a user's eyes are making.
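  • A movement or gaze threshold like the one described above could be gated with a check such as the sketch below; the one millimeter movement threshold comes from the example above, while the gaze threshold value and the 2D pose representation are assumptions.

```python
# Hypothetical gate deciding whether a second transformed image is worth generating.
def should_regenerate(prev_pose_m, new_pose_m, prev_gaze_deg=None, new_gaze_deg=None,
                      move_threshold_m=0.001, gaze_threshold_deg=5.0):
    dx = new_pose_m[0] - prev_pose_m[0]
    dy = new_pose_m[1] - prev_pose_m[1]
    moved = (dx * dx + dy * dy) ** 0.5 >= move_threshold_m
    gaze_shifted = (prev_gaze_deg is not None and new_gaze_deg is not None
                    and abs(new_gaze_deg - prev_gaze_deg) >= gaze_threshold_deg)
    return moved or gaze_shifted


print(should_regenerate((0.0, 0.0), (0.0005, 0.0)))  # False: sub-millimeter shift
print(should_regenerate((0.0, 0.0), (3.0, 0.0)))     # True: the user moved 3 meters
```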
  • A second transformed image can be generated (612) for the moved active user by the client device. Any of a variety of dynamic image transformations can be used to generate the second transformed image including image warping. In some implementations, generating a second transformed image can be performed in a manner similar to steps (312)-(316) of FIG. 3.
  • The client device can cause (614) the projector to project the second transformed image onto the surface. In many implementations, the second transformed image is a transformation of the base image or is a transformation of an additional image. Projecting the second transformed image can be performed in a manner similar to step (608) and/or step (318) of FIG. 3.
  • A process for dynamic image transformation for multiple active users in accordance with various implementations is illustrated in FIG. 7. The process 700 can be performed by one or more client devices, and/or any other apparatus capable of interacting with an automated assistant. The process includes identifying (702) a base image. As described above, a base image can be transformed in dependence on a pose of a user to generate a transformed image. In several implementations, identification of a base image can be performed in a manner similar to step (302) in FIG. 3.
  • Multiple active users in a group of users can be detected (704) by a client device. In various implementations, many people can be in a room, but not all of them may be actively engaged with the client device. For example, a room can have two users where neither user is engaged with the client device (and thus the room has no active users), one of the two users can be interacting with the client device and be an active user, and/or both of the two users can be engaged with the client device as active users. In several implementations, multiple active users can be detected in a manner similar to how individual active users can be detected in step (604) of FIG. 6. Alternatively, multiple active users can be detected by sensors in groups, and/or all active users can be detected in a room simultaneously. For example, sensors integrated into a client device could detect a cluster of active users in the same portion of the room at the same time. Moreover, some types of sensors could process an entire room at the same time and could detect all active users in a room simultaneously. For example, a camera with a 360 degree view can detect which users are looking at a projected image in an entire room and thus detect active users simultaneously. Additionally, a combination of sensors can be used to detect a group of active users. For example, furniture in a room can block a camera from detecting one particular active user, but a microphone could be used to detect voice commands to the automated assistant from the user behind the furniture to determine that user is in the group of active users. It should be readily appreciated that these examples are merely illustrative and any of a number of ways to detect active users can be utilized in accordance with various implementations.
  • A first transformed image can be generated (706) for the multiple active users by the client device. Any of a variety of dynamic image transformations can be used to generate the first transformed image, including image warping. In some implementations, generating a first transformed image can be performed in a manner similar to generating an image for a single active user, such as steps (304)-(308) of FIG. 3. Additionally, image warping parameters can take multiple user poses into account when generating image warping to generate a first transformed image. For example, if all the active users are clustered in a single area of a room, a client device can decide to treat the multiple active users in a way similar to a single active user when generating a transformed image. Alternatively, if most active users are clustered in a single area of a room, and a single active user is in a second area of the room, the client device might largely ignore the active user in the second area of the room and still generate a transformed image in a way similar to a single active user. In some implementations, if active users are more equally spread out throughout the room, the client device can make decisions during the image warping process to generate the first transformed image. For example, the client device could combine the poses of multiple active users in a meaningful way to generate a pose which can take the place of a single active user pose for use in generating a first transformed image. For example, the poses of multiple active users in a room can be averaged into a single pose. While this might not generate the best first image transformation for any individual active user, it can generate the best first image transformation for the group of users as a whole. Additionally, in some implementations, a client device can perform a weighted averaging of multiple active users, giving more weight to the pose of predetermined users who can be identified using facial identification (with approval of the user) and/or voice identification (with approval of the user). For example, if the first transformed image is an event from a group calendar, the client device can determine which active users have access to that group calendar, and give greater weight to their pose when generating a weighted averaging of multiple active users. In many implementations, there can be no “best” first transformed image to display for the entire group of active users and a client device can simply display the base image in place of a transformed image.
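  • Combining multiple active user poses into a single pose for warping could look like the weighted average sketched below; the 2D pose representation and the particular weights (e.g., extra weight for users with access to the group calendar) are assumptions that mirror the example above.

```python
# Hypothetical combination of multiple active user poses into one pose for image warping.
def combined_pose(poses, weights=None):
    """poses: list of (x, y) positions; weights: optional per-user weights."""
    if weights is None:
        weights = [1.0] * len(poses)
    total = sum(weights)
    x = sum(w * p[0] for w, p in zip(weights, poses)) / total
    y = sum(w * p[1] for w, p in zip(weights, poses)) / total
    return (x, y)


group = [(1.0, 2.0), (1.2, 2.1), (4.0, 0.5)]
print(combined_pose(group))                            # simple average of the group
print(combined_pose(group, weights=[2.0, 2.0, 1.0]))   # weight group-calendar members more
```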
  • The client device can cause (708) the projector to project the first transformed image onto a surface. In some implementations, the first transformed image is a transformation of the base image. Projecting the first transformed image can be performed in a manner similar to step (310) of FIG. 3 including using a projector integrated with the client device and/or using a separate projector.
  • Movement can be detected (710) in the group of users by the client device. In some implementations, the detection of movement in a group of users can be performed in a manner similar to determining user movement in step (312) of FIG. 3. Additionally, in many implementations, a threshold of movement can be met before movement is determined to have occurred in the group of users, in a manner similar to determining a threshold of movement for a user in step (610) of FIG. 6.
  • A second group of multiple active users can be detected (712) by the client device. In many implementations, detecting a second group of multiple active users can be performed in a manner similar to detecting multiple active users in step (704). Active users in the second group of active users can be different than the active users in the first group. However, there can be overlap between active users in the two groups of active users (but it should be noted it is not a requirement to have any overlap between active users in the first group and active users in the second group).
  • A second transformed image for the second group of multiple active users can be generated (714) using the client device. In various implementations, generating the second transformed image can be performed in a manner similar to generating the first transformed image for the multiple active users described in step (706).
  • The client device can cause (716) the projector to project the second transformed image onto the surface. In many implementations, the second transformed image is a transformation of the base image or of an additional image. Projecting the second transformed image can be performed in a manner similar to step (614) of FIG. 6 and/or step (318) of FIG. 3.
  • Generating the base image can be based on the distance of the user from the projected image (i.e., the distance from the user to the surface the image is projected on). An example of a user viewing different base images projected onto a wall, where the projected base image is determined based on the user's distance from the projected image, is illustrated in FIGS. 8A and 8B. Image 800, illustrated in FIG. 8A, contains a first scene of a room at a first time. Image 800 contains user 802, projected image 804, client device 806 (that includes an integrated projector and/or is in communication with a locally accessible separate projector), and table 808. The contents of the room in image 800 are merely illustrative. For example, the client device and/or projector can be separate devices; the client device and/or projector can be on a surface other than a table, such as a desk or a dresser, and/or mounted onto surfaces such as a wall and/or ceiling; more than one client device can be present within the room; more than one projector can be within the room; and/or more than one user can be in the room. Client device 806 can determine the pose of user 802, and can determine a distance from user 802 to projected image 804. The client device can identify a base image depending on the distance from user 802 to projected image 804. Projected image 804 can be projected as the identified base image and/or as a transformed image.
  • The same room is illustrated in FIG. 8B. Image 850 is a second image of the scene of the room, captured at a second time, which contains the same client device 806 (that includes an integrated projector and/or is in communication with a locally accessible separate projector) and table 808. As an illustrative example, the user has moved to a second pose 852 much closer to projected image 854. Similarly to FIG. 8A, the client device can determine the second pose 852 of the user before determining the distance from the second pose 852 of the user to projected image 854. The client device can then identify a second base image corresponding to the distance from the second pose 852 of the user to projected image 854. Projected image 854 can be projected as the second identified base image and/or as a second transformed image. Generally, the closer a user is to a projected image, the more detailed the UI elements the identified base image will contain. For example, the first pose of user 802 is further away from the projected image than the second pose 852 of the user. As such, projected image 804 contains less detailed weather information, which includes weather information for today only (for example, a graphical image of the sun and a temperature of 72 degrees). In contrast, the second pose 852 of the user is much closer to the projected image, so projected image 854 contains more detailed information, which includes weather information for both today and tomorrow (for example, a graphical image of the sun and a temperature of 72 degrees indicated as today's weather, and a graphical image of a cloud with rain and a temperature of 54 degrees indicated as tomorrow's weather).
  • A process for generating a base image based on the distance of a user from a projected image in accordance with various implementations is illustrated in FIG. 9. The process 900 can be performed by one or more client devices and/or any other apparatus capable of interacting with an automated assistant. The process includes determining (902) the pose of a user by the client device. Determining the pose of a user (including user location) can be performed in a manner similar to determining a pose of a first user as described in step (304) of FIG. 3.
  • A distance from the user to a projection surface can be determined (904) by the client device. The projection surface is the location where a projected image is displayed. In several implementations, a client device can determine the distance from the user to the projection surface using only the user pose. In many implementations, additional information, such as the distance from the client device to the projection surface, may be necessary to determine the distance from the user to the projection surface.
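  • As an illustration only, if the user's pose is expressed in the client device's coordinate frame with one axis pointing toward the projection surface, the user-to-surface distance can be derived from the device-to-surface distance, as in the following Python sketch. The coordinate-frame assumption and the clamping to zero are illustrative, not required by the specification.

      def user_to_surface_distance(user_depth_m: float, device_to_surface_m: float) -> float:
          """Distance from the user to the projection surface along the projection axis.

          user_depth_m: how far the user stands from the client device toward the surface.
          device_to_surface_m: distance from the client device (or projector) to the surface.
          """
          return max(device_to_surface_m - user_depth_m, 0.0)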
  • A base image can be identified (906) using the distance from the user to the projection surface. Base images with more detailed UI elements are generally selected for users closer to the projection surface, while base images with less detailed UI elements are generally selected for users further away from the projection surface. In several implementations, a base image with touch-sensitive UI elements can be identified for a user close enough to make physical contact with the projection surface. For example, a client device can select a base image with a full day of calendar information for a user who can touch the projection surface, and, additionally, in many implementations the user can touch the projected image to scroll through the projected calendar image.
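  • For illustration only, the following Python sketch selects among base images of increasing detail based on the user-to-surface distance; the distance tiers, the 0.8 metre "touch" range, and the image identifiers are hypothetical values, not values given in the specification.

      def identify_base_image(user_to_surface_m: float) -> str:
          """Map the user-to-surface distance to a base image of appropriate detail."""
          if user_to_surface_m <= 0.8:
              # Close enough to touch the surface: include touch-sensitive UI elements.
              return "calendar_full_day_touch_scrollable"
          if user_to_surface_m <= 3.0:
              # Nearby: more detailed UI elements (e.g. today's and tomorrow's weather).
              return "weather_today_and_tomorrow"
          # Far away: less detailed UI elements (e.g. today's weather only).
          return "weather_today_summary"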
  • A transformed image can be generated (908) from the base image using the pose of the user by the client device. In many implementations, generating a transformed image can be performed in a manner similar to steps (304)-(308) of FIG. 3. In some implementations, process 900 can omit generation of a transformed image and the identified base image can instead be projected.
  • The client device can cause (910) the projector to project the transformed image onto the surface. Projecting the transformed image can be performed in a manner similar to step (310) of FIG. 3.
  • FIG. 10 illustrates a block diagram of an example method to generate interfaces in an audio-based, networked system. The method 1100 can include receiving an input audio signal (1102). The method 1100 can include parsing the input audio signal (1104). The method 1100 can include selecting a first digital component and a second digital component (1106). The method 1100 can include determining a distance (1108). The method 1100 can include determining transformation parameters (1110). The method 1100 can include generating a first transformed digital component and a second transformed digital component (1112). The method 1100 can include transmitting the first transformed digital component and the second transformed digital component (1114).
  • The method 1100 can include receiving an input audio signal (1102). The method 1100 can include receiving, by the natural language processor, the input audio signal. The input audio signal can be an input audio signal that is detected by a microphone or other sensor located at a client device. The data processing system can receive the audio input in one or more portions or as a bulk or batch upload (e.g., multiple portions of the conversations uploaded in a single transmission to reduce the number of transmissions).
  • The method 1100 can include parsing the input audio signal (1104). The natural language processor can parse the input audio signal to identify a request and one or more keywords in the input audio signal. The request can be a request for a digital component. For example, the request can be for a digital component that includes images, video, text, audio files, or any combination thereof. The keywords can include terms that are relevant to, identified by, or associated with the requested digital component. A keyword can include one or more terms or phrases. For example, for a request that a digital component including the current weather in San Francisco be displayed on a wall or other projection surface by the client device, the keyword can be “weather” or “San Francisco.”
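  • As a toy stand-in for the natural language processor (for illustration only; the vocabulary and matching rules below are assumptions, not the parser described in the specification), identifying a request and keywords from a transcription of the input audio signal might look like this in Python.

      def parse_input(transcript: str):
          """Identify a request and keywords from a transcription of the input audio signal."""
          text = transcript.lower()
          request = "weather" if "weather" in text else "unknown"
          known_keywords = ("weather", "san francisco")
          keywords = [kw for kw in known_keywords if kw in text]
          return request, keywords

      # parse_input("what is the weather in San Francisco")
      # -> ("weather", ["weather", "san francisco"])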
  • The method 1100 can include selecting a first digital component and a second digital component (1106). The first and second digital components can be base digital components, such as base images. The base digital components can include one or more image or video files. The content selector component can select the first base digital component based on the request parsed from the input audio signal. The content selector component can select the second base digital component based on the keyword identified from the input audio signal. For example, for the input audio signal “what is the weather in San Francisco,” the automated assistant can determine the request is for the current weather of San Francisco to be presented. The first base digital component can be an image that includes graphics illustrating the current weather conditions and temperature in San Francisco. The automated assistant can select the second base digital component based on a keyword associated with the request, such as “San Francisco.” For example, the second base digital component can be an image that includes information about a popular restaurant located in San Francisco.
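  • For illustration only, the following Python sketch pairs the parsed request with a first base digital component and an associated keyword with a second base digital component; the catalog contents and component identifiers are hypothetical.

      CATALOG_BY_REQUEST = {"weather": "sf_current_weather_image"}
      CATALOG_BY_KEYWORD = {"san francisco": "sf_popular_restaurant_image"}

      def select_components(request, keywords):
          """Select a first component for the request and a second for a matching keyword."""
          first = CATALOG_BY_REQUEST.get(request)
          second = next(
              (CATALOG_BY_KEYWORD[kw] for kw in keywords if kw in CATALOG_BY_KEYWORD),
              None,
          )
          return first, second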
  • The method 1100 can include determining a distance (1108). The automated assistant can determine the distance between the automated assistant (or the projector associated with the automated assistant) and the projection surface (e.g., a wall) onto which the digital components are going to be projected. The automated assistant can determine the distance using built-in range-finding sensors, such as ultrasonic or infrared sensors. The end user, when configuring the automated assistant, can input the distance to the automated assistant. The distance can be determined each time an input audio signal is transmitted to the automated assistant, at predetermined intervals (e.g., daily or weekly), or during a configuration phase of the automated assistant.
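  • By way of illustration of one such range-finding approach (the specification does not prescribe this computation), an ultrasonic time-of-flight measurement converts an echo's round-trip time into a distance; the speed-of-sound constant and example timing below are assumptions.

      SPEED_OF_SOUND_M_PER_S = 343.0  # in air at roughly 20 degrees Celsius

      def ultrasonic_distance_m(echo_round_trip_s: float) -> float:
          """Distance is half the round-trip time multiplied by the speed of sound."""
          return SPEED_OF_SOUND_M_PER_S * echo_round_trip_s / 2.0

      # ultrasonic_distance_m(0.0116) -> about 1.99 metres to the projection surface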
  • The method 1100 can include determining transformation parameters (1110). The automated assistant can determine the transformation parameters based at least on the distance between the client device (or associated projector) and the projection surface. The transformation parameters can also be based on a pose of the end user or a distance between the end user and the projection surface. The transformation parameters can correct for a skew in the projection of digital components onto the projection surface based on the placement of the projector or the position of the user. For example, application of the transformation parameters can enable the automated assistant to perform a keystone correction on the digital component. Without the application of the transformation parameters, one or more of the edges of the digital component may be non-parallel to one another when projected onto the projection surface. The transformation parameters can correct for the skew such that the edges of the digital component are parallel with one another when projected onto the projection surface. The transformation parameters can include linear transformations. The transformation parameters can be stored locally at the client device or projector. For example, the digital components can be transmitted to the client device, which can apply the transformation parameters to generate the transformed digital component.
  • The method 1100 can include generating a first transformed digital component and a second transformed digital component (1112). The first and second transformed digital components can be transformed images. The transformed version of the digital component can include the same content as the original digital component. The images of the transformed digital components can be adjusted or scaled such that the edges of the transformed images appear parallel to one another when projected onto the projection surface.
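  • As a sketch only, the following Python example uses OpenCV (an assumption; the specification does not name a particular library) to build and apply a perspective transform that maps skewed corner positions to a rectangle with parallel edges, i.e., a keystone-style correction of a base image. The corner coordinates are made-up examples rather than values derived from any particular projector placement or measured distance.

      import cv2
      import numpy as np

      def keystone_correct(base_image: np.ndarray) -> np.ndarray:
          """Warp a base image so its edges appear parallel on the projection surface."""
          h, w = base_image.shape[:2]
          # Example of where the image's corners land without correction
          # (top edge narrower than the bottom edge, i.e. a trapezoidal skew).
          observed = np.float32([[40, 0], [w - 40, 0], [0, h], [w, h]])
          # Where the corners should land: a rectangle with parallel edges.
          desired = np.float32([[0, 0], [w, 0], [0, h], [w, h]])
          matrix = cv2.getPerspectiveTransform(observed, desired)
          return cv2.warpPerspective(base_image, matrix, (w, h))

      # transformed_first = keystone_correct(first_base_component)
      # transformed_second = keystone_correct(second_base_component)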
  • The method 1100 can include transmitting the first transformed digital component and the second transformed digital component (1114). The first and second transformed digital components can be transmitted to the client device to be projected onto the projection surface. In some cases, the transformation parameters and the base digital components can be transmitted to the client device. The client device can apply the transformation parameters to the base digital components prior to projecting the base digital components onto the projection surface.
  • FIG. 11 is a block diagram of an example computing device 1010 that may optionally be utilized to perform one or more aspects of techniques described herein. In some implementations, one or more of a client computing device, user-controlled resources module, and/or other component(s) may comprise one or more components of the example computing device 1010.
  • Computing device 1010 typically includes at least one processor 1014 which communicates with a number of peripheral devices via bus subsystem 1012. These peripheral devices may include a storage subsystem 1024, including, for example, a memory subsystem 1025 and a file storage subsystem 1026, user interface output devices 1020, user interface input devices 1022, and a network interface subsystem 1016. The input and output devices allow user interaction with computing device 1010. Network interface subsystem 1016 provides an interface to outside networks and is coupled to corresponding interface devices in other computing devices.
  • User interface input devices 1022 may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touchscreen incorporated into the display, audio input devices such as voice recognition systems, microphones, and/or other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and ways to input information into computing device 1010 or onto a communication network.
  • User interface output devices 1020 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem may also provide non-visual display such as via audio output devices. In general, use of the term “output device” is intended to include all possible types of devices and ways to output information from computing device 1010 to the user or to another machine or computing device.
  • Storage subsystem 1024 stores programming and data constructs that provide the functionality of some or all of the modules described herein. For example, the storage subsystem 1024 may include the logic to perform selected aspects of the process of FIG. 3, as well as to implement various components depicted in FIGS. 1 and 2.
  • These software modules are generally executed by processor 1014 alone or in combination with other processors. Memory 1025 used in the storage subsystem 1024 can include a number of memories including a main random access memory (RAM) 1030 for storage of instructions and data during program execution and a read only memory (ROM) 1032 in which fixed instructions are stored. A file storage subsystem 1026 can provide persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations may be stored by file storage subsystem 1026 in the storage subsystem 1024, or in other machines accessible by the processor(s) 1014.
  • Bus subsystem 1012 provides a mechanism for letting the various components and subsystems of computing device 1010 communicate with each other as intended. Although bus subsystem 1012 is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple busses.
  • Computing device 1010 can be of varying types including a workstation, server, computing cluster, blade server, server farm, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computing device 1010 depicted in FIG. 11 is intended only as a specific example for purposes of illustrating some implementations. Many other configurations of computing device 1010 are possible having more or fewer components than the computing device depicted in FIG. 11.
  • In situations in which the systems described herein collect or otherwise monitor personal information about users, or may make use of personal and/or monitored information, the users may be provided with an opportunity to control whether programs or features collect user information (e.g., information about a user's social network, social actions or activities, profession, a user's preferences, or a user's current geographic location), or to control whether and/or how to receive content from the content server that may be more relevant to the user. Also, certain data may be treated in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, a user's identity may be treated so that no personally identifiable information can be determined for the user, or a user's geographic location may be generalized where geographic location information is obtained (such as to a city, ZIP code, or state level), so that a particular geographic location of a user cannot be determined. Thus, the user may have control over how information is collected about the user and/or used. For example, in some implementations, users may opt out of having automated assistant 112 attempt to estimate their age range and/or vocabulary level.
  • While several implementations have been described and illustrated herein, a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of such variations and/or modifications is deemed to be within the scope of the implementations described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary, and the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific implementations described herein. It is, therefore, to be understood that the foregoing implementations are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, implementations may be practiced otherwise than as specifically described and claimed. Implementations of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.

Claims (21)

1-20. (canceled)
21. A system to generate interfaces in audio-based environments, comprising:
a data processing system having one or more components coupled with memory, the data processing system to:
receive an input audio signal of a user acquired via a sensor of a client device, the client device communicatively coupled with a projector;
parse the input audio signal to identify a request for content to present via the projector;
select, from a plurality of base digital components, a base digital component using the request identified from the input audio signal, the base digital component having a first set of image frames;
identify a distance between the user and a projection surface upon which the projector is to project;
determine, based on the distance between the user and the projection surface, a transformation parameter to set at least one of a rotation, a scaling, a size, or a skew of the first set of image frames of the base digital component for projection onto the projection surface;
generate a second set of image frames based on the first set of image frames of the base digital component and the transformation parameter; and
provide the second set of image frames to the projector to project onto the projection surface.
22. The system of claim 21, comprising the data processing system to:
determine, concurrent to the projection of at least one of the second set of image frames, that a change in the distance between the user and the projection surface is greater than a threshold distance; and
update, responsive to the determination, the transformation parameter based on the change in the distance.
23. The system of claim 21, comprising the data processing system to:
identify a plurality of distances between a corresponding plurality of users and the projector; and
determine, based on at least a subset of the plurality of distances, the transformation parameter for the first set of image frames of the base digital component.
24. The system of claim 21, comprising the data processing system to:
identify a pose of the user relative to the projection surface, the pose including at least one of a position or an orientation of the user; and
determine the transformation parameter based on the pose of the user relative to the projection surface.
25. The system of claim 21, comprising the data processing system to:
determine that a pose of the user relative to the projection surface satisfies a threshold condition; and
select, responsive to the determination that the pose of the user satisfies the threshold condition, the base digital component including one or more interactive interface elements.
26. The system of claim 21, comprising the data processing system to select, from the plurality of base digital components, the base digital component based on the distance identified between the user and the projection surface.
27. The system of claim 21, comprising the data processing system to:
parse the input audio signal to identify one or more keywords;
select, from the plurality of base digital components, a second base digital component using the one or more keywords, the second base digital component including a third set of image frames;
generate a fourth set of image frames based on the third set of image frames of the second base digital component and the transformation parameter; and
provide the fourth set of image frames to the projector to project onto the projection surface with the second set of image frames.
28. A system to generate interfaces in audio-based environments, comprising:
a client device having one or more processors coupled with memory, the client device communicatively coupled with a projector, the client device to:
receive, via a sensor, an input audio signal of a user;
transmit, to a data processing system, the input audio signal to cause the data processing system to identify a request for content from parsing the input audio signal and select a base digital component from a plurality of base digital components based on the request;
receive, from the data processing system, the base digital component including a first set of image frames;
identify a distance between the user and a projection surface upon which the projector is to project;
identify a transformation parameter to set at least one of a rotation, a scaling, a size, or a skew of the first set of image frames of the base digital component for projection onto the projection surface in accordance with the distance between the user and the projection surface;
generate a second set of image frames based on the first set of image frames of the base digital component and the transformation parameter; and
provide the second set of image frames to the projector to project onto the projection surface.
29. The system of claim 28, comprising the client device to:
determine, concurrent to the projection of at least one of the second set of image frames, that a change in the distance between the user and the projection surface is greater than a threshold distance; and
update, responsive to the determination, the transformation parameter based on the change in the distance.
30. The system of claim 28, comprising the client device to:
identify a plurality of distances between a corresponding plurality of users and the projector; and
identify, in accordance with at least a subset of the plurality of distances, the transformation parameter for the first set of image frames of the base digital component.
31. The system of claim 28, comprising the client device to:
identify a pose of the user relative to the projection surface, the pose including at least one of a position or an orientation of the user; and
identify the transformation parameter based on the pose of the user relative to the projection surface.
32. The system of claim 28, comprising the client device to:
determine that a pose of the user relative to the projection surface satisfies a threshold condition; and
cause the data processing system to select, responsive to the determination that the pose of the user satisfies the threshold condition, the base digital component including one or more interactive interface elements.
33. The system of claim 28, comprising the client device to:
determine that a pose of the user relative to the projection surface satisfies a threshold condition; and
cause the data processing system to select, responsive to the determination that the pose of the user satisfies the threshold condition, the base digital component including one or more interactive interface elements.
34. The system of claim 28, comprising the client device to:
transmit, to the data processing system, the input audio signal to cause the data processing system to identify one or more keywords from parsing the input audio signal and select a second base digital component from the plurality of base digital components using the one or more keywords, the second base digital component including a third set of image frames;
generate a fourth set of image frames based on the third set of image frames of the second base digital component and the transformation parameter; and
provide the fourth set of image frames to the projector to project onto the projection surface with the second set of image frames.
35. A method of generating interfaces in audio-based environments, comprising:
receiving, by a data processing system, an input audio signal of a user acquired via a sensor of a client device, the client device communicatively coupled with a projector;
parsing, by the data processing system, the input audio signal to identify a request for content to present via the projector;
selecting, by the data processing system, from a plurality of base digital components, a base digital component using the request identified from the input audio signal, the base digital component having a first set of image frames;
identifying, by the data processing system, a distance between the user and a projection surface upon which the projector is to project;
determining, by the data processing system, based on the distance between the user and the projection surface, a transformation parameter to adjust at least one of a rotation, a scaling, a size, or a skew of the first set of image frames of the base digital component projected onto the projection surface;
generating, by the data processing system, a second set of image frames based on the first set of image frames of the base digital component and the transformation parameter; and
providing, by the data processing system, the second set of image frames to the projector to project onto the projection surface.
36. The method of claim 35, comprising:
determining, by the data processing system, concurrent to the projection of at least one of the second set of image frames, that a change in the distance between the user and the projection surface is greater than a threshold distance; and
updating, by the data processing system, responsive to the determination, the transformation parameter based on the change in the distance.
37. The method of claim 35, comprising:
identifying, by the data processing system, a plurality of distances between a corresponding plurality of users and the projector; and
determining, by the data processing system, based on at least a subset of the plurality of distances, the transformation parameter for the first set of image frames of the base digital component.
38. The method of claim 35, comprising:
identifying, by the data processing system, a pose of the user relative to the projection surface, the pose including at least one of a position or an orientation of the user; and
determining, by the data processing system, the transformation parameter based on the pose of the user relative to the projection surface.
39. The method of claim 35, comprising:
determining, by the data processing system, that a pose of the user relative to the projection surface satisfies a threshold condition; and
selecting, by the data processing system, responsive to the determination that the pose of the user satisfies the threshold condition, the base digital component including one or more interactive interface elements.
40. The method of claim 35, comprising:
parsing, by the data processing system, the input audio signal to identify one or more keywords; and
selecting, by the data processing system, from the plurality of base digital components, a second base digital component using the one or more keywords, the second base digital component including a third set of image frames;
generating, by the data processing system, a fourth set of image frames based on the third set of image frames of the second base digital component and the transformation parameter; and
providing, by the data processing system, the fourth set of image frames to the projector to project onto the projection surface with the second set of image frames.
US16/924,661 2018-04-17 2020-07-09 Dynamic adaptation of device interfaces in a voice-based system Abandoned US20200342569A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/924,661 US20200342569A1 (en) 2018-04-17 2020-07-09 Dynamic adaptation of device interfaces in a voice-based system

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US15/955,297 US10853911B2 (en) 2018-04-17 2018-04-17 Dynamic adaptation of images for projection, and/or of projection parameters, based on user(s) in environment
US15/973,456 US10726521B2 (en) 2018-04-17 2018-05-07 Dynamic adaptation of device interfaces in a voice-based system
US16/924,661 US20200342569A1 (en) 2018-04-17 2020-07-09 Dynamic adaptation of device interfaces in a voice-based system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/973,456 Continuation US10726521B2 (en) 2018-04-17 2018-05-07 Dynamic adaptation of device interfaces in a voice-based system

Publications (1)

Publication Number Publication Date
US20200342569A1 true US20200342569A1 (en) 2020-10-29

Family

ID=68162010

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/973,456 Active 2038-10-21 US10726521B2 (en) 2018-04-17 2018-05-07 Dynamic adaptation of device interfaces in a voice-based system
US16/924,661 Abandoned US20200342569A1 (en) 2018-04-17 2020-07-09 Dynamic adaptation of device interfaces in a voice-based system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/973,456 Active 2038-10-21 US10726521B2 (en) 2018-04-17 2018-05-07 Dynamic adaptation of device interfaces in a voice-based system

Country Status (1)

Country Link
US (2) US10726521B2 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3596584B1 (en) * 2018-05-04 2021-03-24 Google LLC Adapting automated assistant based on detected mouth movement and/or gaze
KR102512446B1 (en) 2018-05-04 2023-03-22 구글 엘엘씨 Hot-word free adaptation of automated assistant function(s)
EP3982236B1 (en) 2018-05-04 2023-10-11 Google LLC Invoking automated assistant function(s) based on detected gesture and gaze

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7134080B2 (en) * 2002-08-23 2006-11-07 International Business Machines Corporation Method and system for a user-following interface
US7542210B2 (en) * 2006-06-29 2009-06-02 Chirieleison Sr Anthony Eye tracking head mounted display
BR112013031367A2 (en) * 2011-07-08 2017-03-01 Thomson Licensing system and process for encoding and decoding a bit stream for a 3d model having repetitive structure.
US9104367B2 (en) 2012-03-02 2015-08-11 Realtek Semiconductor Corp. Multimedia interaction system and related computer program product capable of avoiding unexpected interaction behavior
CN104853125B (en) 2014-02-19 2018-08-31 联想(北京)有限公司 A kind of intelligence projecting method and electronic equipment
US9489724B2 (en) * 2014-03-31 2016-11-08 The Boeing Company Three-dimensional stereoscopic projection on complex surfaces
US9588408B1 (en) * 2014-05-15 2017-03-07 Autofuss Methods and systems for projecting a target portion of an image at a higher resolution
US10462421B2 (en) * 2015-07-20 2019-10-29 Microsoft Technology Licensing, Llc Projection unit
US20170092278A1 (en) 2015-09-30 2017-03-30 Apple Inc. Speaker recognition
US9928840B2 (en) 2015-10-16 2018-03-27 Google Llc Hotword recognition
US9747926B2 (en) 2015-10-16 2017-08-29 Google Inc. Hotword recognition
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
CA3016337A1 (en) * 2016-03-15 2017-09-21 Magic Leap, Inc. Wide baseline stereo for low-latency rendering
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10395428B2 (en) * 2016-06-13 2019-08-27 Sony Interactive Entertainment Inc. HMD transitions for focusing on specific content in virtual-reality environments
US10779011B2 (en) * 2017-07-31 2020-09-15 Qualcomm Incorporated Error concealment in virtual reality system
US10475454B2 (en) * 2017-09-18 2019-11-12 Motorola Mobility Llc Directional display and audio broadcast
US10681318B2 (en) * 2017-11-14 2020-06-09 Texas Instruments Incorporated Camera-assisted arbitrary surface characterization and slope-based correction

Also Published As

Publication number Publication date
US20190318450A1 (en) 2019-10-17
US10726521B2 (en) 2020-07-28

Similar Documents

Publication Publication Date Title
US20210082083A1 (en) Dynamic adaptation of images for projection, and/or of projection parameters, based on user(s) in environment
US11493992B2 (en) Invoking automated assistant function(s) based on detected gesture and gaze
US20200342569A1 (en) Dynamic adaptation of device interfaces in a voice-based system
US11347801B2 (en) Multi-modal interaction between users, automated assistants, and other computing services
JP7461405B2 (en) Systems, methods, and apparatus for image-responsive automated assistants
KR102574277B1 (en) Creation and/or application of automated assistant content according to the distance between the user and the automated assistant interface.
KR20220115824A (en) Matching content to a spatial 3d environment
KR20230121930A (en) Adapting automated assistant based on detected mouth movement and/or gaze
KR20160071732A (en) Method and apparatus for processing voice input
KR102508338B1 (en) Determining whether to automatically resume the first automated assistant session when the second session interrupts
US20220051675A1 (en) Multimodal responses
US11164576B2 (en) Multimodal responses
KR102661487B1 (en) Invoke automated assistant functions based on detected gestures and gaze
US20240103893A1 (en) Generating content endorsements using machine learning nominator(s)

Legal Events

Date Code Title Description
AS Assignment

Owner name: GOOGLE LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LEONG, JIAN WEI;REEL/FRAME:053164/0600

Effective date: 20180822

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION