WO2013181633A1 - Providing a conversational video experience - Google Patents

Providing a conversational video experience

Info

Publication number
WO2013181633A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
response
video
concept
conversation
Prior art date
Application number
PCT/US2013/043773
Other languages
French (fr)
Inventor
Ronald A. CROEN
Mark T. Anikst
Vidur Apparao
Bernt HABERMEIER
Todd A. MENDELOFF
Original Assignee
Volio, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Volio, Inc.
Publication of WO2013181633A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/42 Systems providing special services or facilities to subscribers
    • H04M3/487 Arrangements for providing information services, e.g. recorded voice services or time announcements
    • H04M3/493 Interactive information services, e.g. directory enquiries; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/215 Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/54 Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/141 Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/147 Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1081 Input via voice recognition
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/6063 Methods for processing data by generating or executing the game program for sound processing
    • A63F2300/6072 Methods for processing data by generating or executing the game program for sound processing of an input signal, e.g. pitch and rhythm extraction, voice recognition
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6607 Methods for processing data by generating or executing the game program for rendering three dimensional images for animating game characters, e.g. skeleton kinematics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/01 Social networking
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M11/00 Telephonic communication systems specially adapted for combination with other electrical systems
    • H04M11/10 Telephonic communication systems specially adapted for combination with other electrical systems with dictation recording and playback systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2201/00 Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M2201/40 Electronic components, circuits, software, systems or apparatus used in telephone systems using speech recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2201/00 Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M2201/60 Medium conversion
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2203/00 Aspects of automatic or semi-automatic exchanges
    • H04M2203/25 Aspects of automatic or semi-automatic exchanges related to user interface aspects of the telephonic communication service
    • H04M2203/251 Aspects of automatic or semi-automatic exchanges related to user interface aspects of the telephonic communication service where a voice mode or a visual mode can be used interchangeably
    • H04M2203/252 Aspects of automatic or semi-automatic exchanges related to user interface aspects of the telephonic communication service where a voice mode or a visual mode can be used interchangeably where a voice mode is enhanced with visual information

Definitions

  • Speech recognition technology is used to convert human speech (audio input) to text or data representing text (text-based output).
  • Applications of speech recognition technology to date have included voice-operated user interfaces, such as voice dialing of mobile or other phones, voice-based search, interactive voice response (IVR) interfaces, and other interfaces.
  • a user must select from a constrained menu of valid responses, e.g., to navigate hierarchical sets of menu options.
  • Figure 1 is a block diagram illustrating an embodiment of a system to provide a conversational video experience.
  • Figure 2 is a block diagram illustrating an embodiment of a system to provide a conversational video experience.
  • Figure 3 is a block diagram illustrating an embodiment of a conversational video runtime engine.
  • Figure 4A is a block diagram illustrating an embodiment of a conversational video experience display and interface.
  • Figure 4B is a block diagram illustrating an embodiment of a conversational video experience display and interface.
  • Figure 4C is a block diagram illustrating an embodiment of a conversational video experience display and interface.
  • Figure 5 is a block diagram illustrating an embodiment of a conversational video experience.
  • Figure 6 is a block diagram illustrating an embodiment of a conversational video experience.
  • Figure 7 is a block diagram illustrating an embodiment of a conversational video experience segment.
  • Figure 8A is a flow chart illustrating an embodiment of a process to provide a conversational video experience.
  • Figure 8B is a flow chart illustrating an embodiment of a process to provide a conversational video experience.
  • Figure 9 is a flow chart illustrating an embodiment of a process to receive and interpret user responses.
  • Figure 10 is a block diagram illustrating an embodiment of elements of a conversational video experience system.
  • Figure 11 is a block diagram illustrating an embodiment of a conversational video experience next segment decision engine.
  • Figure 12 is a flow chart illustrating an embodiment of a process to provide and update a response understanding model.
  • Figure 13 is a flow chart illustrating an embodiment of a process to integrate a transition video into a conversation video experience.
  • Figure 14 is a flow chart illustrating an embodiment of a process to provide a dynamic active listening experience.
  • Figure 15 is a flow chart illustrating an embodiment of a process to record a user's side of a conversational video experience.
  • the invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor.
  • these implementations, or any other form that the invention may take, may be referred to as techniques.
  • the order of the steps of disclosed processes may be altered within the scope of the invention.
  • a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task.
  • the term 'processor' refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions.
  • a conversational video runtime system emulates a virtual participant in a conversation with a real participant (a user). It presents the virtual participant as a video persona, created in various embodiments based on recording or capturing aspects of a real person or other persona participating in the persona's end of the conversation.
  • the video persona in various embodiments may be one or more of an actor or other human subject; a puppet, animal, or other animate or inanimate object; and/or pre-rendered video, for example of a computer generated and/or other participant.
  • the "conversation" may comprise one or more of spoken words, nonverbal gestures, and/or other verbal and/or non-verbal modes of communication capable of being recorded and/or otherwise captured via pre-rendered video and/or video recording.
  • a script or set of scripts may be used to record discrete segments in which the subject affirms a user response to a previously-played segment, imparts information, prompts the user to provide input, and/or actively listens as one might do while listening live to another participant in the conversation.
  • the system provides the video persona's side of the conversation by playing video segments on its own initiative and in response to what it heard and understood from the user side.
  • the video "persona" may include one or more participants, e.g., a conversation with members of a rock band, and/or more than one real world user may interact with the conversational experience at the same time.
  • Active Listening: Techniques for simulating the natural cadence of conversation, including visual and aural listening cues and interruptions by either party.
  • Video Transitions and Transformations: Methods for smoothing a virtual persona's transition between video segments, and other video transformation techniques to simulate a natural conversation.
  • Multiple response modes: Allowing the user to provide a response using speech, touch, or other input modalities.
  • the selection of the available input modes may be made dynamically by the system.
  • Conversational Transitions: In some applications, data in the cloud or other aspects of the application context may require some time to retrieve or analyze. Techniques to make the conversation seem continuous through such transitions are disclosed.
  • Audio-only content (as opposed to audio that is part of video) can augment video content with more flexibility and less storage demands. A method of seamlessly incorporating it within a video interaction is described.
  • FIG. 1 is a block diagram illustrating an embodiment of a system to provide a conversational video experience.
  • each of a plurality of clients represented in Figure 1 by clients 102, 104, 106, has access, e.g., via a wireless or wired connection to a network 108, to one or more conversational video experience servers, represented in Figure 1 by server 110.
  • Servers such as server 110 use conversational video experience assets stored in an associated data store, such as data store 112, to provide assets to respective ones of the clients, e.g., on request by a client side application or other code running on the respective clients, to enable a conversational video experience to be provided.
  • Examples of client devices include, without limitation, desktop, laptop, and/or other portable computers; iPad® and/or other tablet computers or devices; mobile phones and/or other mobile computing devices; and any other device capable of providing a video display and capturing user input provided by a user, e.g., a response spoken by the user.
  • although a single server 110 is shown in Figure 1, in various embodiments a plurality of servers may be used, e.g., to make the same conversational video experience available from a plurality of sources and/or to deploy different conversational video experiences.
  • conversational video experience assets that may be stored in data stores such as data store 112 include, without limitation, video segments to be played in a prescribed order and/or manner to provide the video persona's side of a conversation and meta-information to be used to determine an order and/or timing in which such segments should be played.
  • a conversational video runtime system or runtime engine may be used in various embodiments to provide a conversational experience to a user in multiple different scenarios. For example:
  • Standalone application: A conversation with a single virtual persona, or multiple conversations with different virtual personae, could be packaged as a standalone application (delivered, for example, on a mobile device or through a desktop browser). In such a scenario, the user may have obtained the application primarily for the purpose of conducting conversations with virtual personae.
  • One or more conversations with one or more virtual personae may be embedded within a separate application or web site with a broader purview.
  • an application or web site representing a clothing store could embed a conversational video with a spokesperson with the goal of helping a user make clothing selections.
  • Production tool: The runtime engine may be contained within a tool used for production of conversational videos, where it could be used for testing the current state of the conversational video in production.
  • the runtime engine is incorporated and used by a container application.
  • the container application may provide services and experiences to the user that complement or supplement those provided by the conversational video runtime engine, including discovery of new conversations; presentation of the conversation at the appropriate time in a broader user experience; presentation of related material alongside or in addition to the conversation; etc.
  • FIG. 2 is a block diagram illustrating an embodiment of a system to provide a conversational video experience.
  • a device 202 such as tablet or other client device, includes a communication interface 204, which provides network connectivity.
  • a conversational video experience runtime engine 206 communicates to the network via communication interface 204, for example to download video, meta-information, and/or other assets to be used to provide a conversational video experience.
  • Downloaded assets (e.g., meta-information, video segments of the video persona) and locally-generated assets (e.g., audio comprising responses spoken by the user, video of the user interacting with the experience) may be stored locally on the client device.
  • Video segments and/or other content are displayed to the user via output devices 210, such as a video display device and/or speakers.
  • Input devices 212 are used by the user to provide input to the runtime engine 206. Examples of input devices 212 include without limitation a microphone, which may be used, for example, to capture a response spoken by the user in response to a question or other prompt by the video persona, and a touch screen or other haptic device, which may be used to receive as input user selection from among a displayed set of responses, as described more fully below.
  • a user-facing camera 214 in various embodiments provides (or optionally provides) video of the user interacting with the conversational video experience, for example to be used to evaluate and fine-tune the experience for the benefit of future users, to provide picture-in-picture (PIP) capability, and/or to capture the user's side of the conversation, e.g., to enable the user to save and/or share video representing both sides of the conversation.
  • FIG. 3 is a block diagram illustrating an embodiment of a conversational video runtime engine.
  • a conversational video runtime engine may contain some or all of the components shown in Figure 3.
  • the conversational video runtime engine 206 includes an asset management service 302.
  • asset management service 302 manages the retrieval and caching of all required assets (e.g. video segments, language models, etc.) and makes them available to other system components, such as and without limitation media playback service 304.
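The retrieve-and-cache behavior described above can be sketched roughly as follows. This is an illustrative assumption, not part of the disclosure: the `AssetCache` class, its `fetch` callback, and the asset identifiers are invented for the example.

```python
class AssetCache:
    """Hypothetical sketch: retrieve conversation assets (video segments,
    language models, etc.) once, then serve them from a local cache so that
    other components (e.g., media playback) can access them quickly."""

    def __init__(self, fetch):
        self.fetch = fetch   # e.g., an HTTP download function (assumed)
        self.cache = {}

    def get(self, asset_id):
        # Download only on first request; afterwards serve the cached copy.
        if asset_id not in self.cache:
            self.cache[asset_id] = self.fetch(asset_id)
        return self.cache[asset_id]
```

In this sketch the second request for the same segment never touches the network, which matches the role the asset management service plays for the media playback service.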
  • Media playback service 304 in various embodiments plays video segments representing the persona's verbal and physical activity.
  • the video segments in various embodiments are primarily pre-recorded, but in other embodiments may be synthesized on-the-fly.
  • a user audio/video recording service 306 captures and records audio and video of the user during a conversational video experience, e.g., for later sharing and analysis.
  • a response concept service 308 determines which video segments to play at which time and in which order, based for example on one or more of user input (e.g., spoken and/or other response); user profile and/or other information; and/or meta-information associated with the conversational video experience.
  • an input recognition service 310 includes in various embodiments a speech recognition system (SR) and other input recognition, such as speech prosody recognition, recognition of the user's facial expressions, recognition/extraction of location, time of day, and other environmental factors/features, as well as the user's touch gestures (utilizing the provided graphical user interface).
  • the input recognition service 310 in various embodiments accesses user profile information retrieved, captured, and/or generated by the personal profiling service 314, e.g., to utilize personal characteristics of the user to adapt results to the user. For example, if the user is understood from their profiling data to be male, in some embodiments video segments including any questions regarding the user's gender may be skipped, because the answer is already known from the profile. Another example is modulating foul language based on user preference:
  • user profile data may be used to choose which version of the conversation is played, based on whether or not the user has sworn during their own statements in the same or previous conversations, making the conversation more enjoyable overall, or at least better suited to the user's comfort with such language.
  • the speech recognizer, as well as the natural language processor, can be made more effective by tuning based on end-user behavior. Current state-of-the-art speech recognizers allow a per-user profile to be built to improve overall speech recognition accuracy.
  • the output of the input recognition service 310 in various embodiments may include a collection of one or more feature values, including without limitation speech recognition values (hypotheses, such as a ranked and/or scored set of "n-best" hypotheses as to which words were spoken), speech prosody values, facial feature values, etc.
  • Personal profiling service 314 in various embodiments maintains personalized information about a user and retrieves and/or provides that information on demand by other components, such as the response concept service 308 and the input recognition service 310.
  • user profile information is retrieved, provided, and/or updated at the start of the conversation, as well as prior to each turn of the conversation.
  • the personal profiling service 314 updates the user's profile information at the end of each turn of the conversation using new information extracted from the user response and interpreted by the response concept service 308. For example, if a response is mapped to a concept that indicates the marital status of the user, the profile data may be updated to reflect what the system has understood the user's marital status to be.
  • a confirmation or other prompt may be provided to the user, to confirm information prior to updating their profile.
  • a user may clear from their profile information that has been added to their profile based on their responses in the course of a conversational video experience, e.g., due to privacy concerns and/or to avoid incorrect assumptions in situations in which multiple different users use a shared device.
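The profile behavior described above (turn-by-turn updates, optional confirmation, and user-initiated clearing of inferred data) might be sketched as follows. The class, field names, and confirmation callback are assumptions for illustration only.

```python
class PersonalProfile:
    """Hypothetical sketch of a per-user profile updated at the end of
    each conversation turn from interpreted response concepts."""

    def __init__(self):
        self.data = {}

    def update_from_concept(self, field, value, confirm=None):
        # Optionally confirm with the user before persisting, e.g. for
        # privacy-sensitive attributes such as marital status.
        if confirm is None or confirm(field, value):
            self.data[field] = value

    def clear_inferred(self):
        # Let the user remove information inferred from their responses,
        # e.g. on a shared device or due to privacy concerns.
        self.data.clear()
```

A usage example: after a response maps to a "MARRIED" concept, `update_from_concept("marital_status", "married")` would record it; `clear_inferred()` later removes everything the system inferred.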
  • response concept service 308 interprets output of the input recognition service 310 augmented with the information retrieved by the personal profiling service 314.
  • Response concept service 308 performs interpretation in the domain of natural language (NL), speech prosody and stress, environmental data, etc.
  • Response concept service 308 utilizes one or more response understanding models 312 to map the input feature values into a "response concept" determined to be the concept the user intended to communicate via the words they uttered and other input (facial expression, etc.) they provided in response to a question or other prompt (e.g. "Yup”, “Yeah”, “Sure” or nodding may all map to an "Affirmative" response concept).
  • the response concept service 308 uses the response concept to determine the next video segment to play. For example, the determined response concept in some embodiments may map deterministically or probabilistically to a next segment.
  • the output of the response concept service 308 in various embodiments includes an identifier indicating which video segment to play next and when to switch to the next segment.
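The mapping from recognized input to a response concept and next segment might be sketched as below. The keyword sets, concept labels, and segment identifiers are invented for illustration; a real system would use the response understanding models described elsewhere in this document rather than simple keyword matching.

```python
# Assumed example vocabulary: "Yup", "Yeah", "Sure" all map to AFFIRMATIVE,
# per the example in the text above.
CONCEPT_KEYWORDS = {
    "AFFIRMATIVE": {"yup", "yeah", "sure", "yes", "ok"},
    "NEGATIVE": {"no", "nope", "nah"},
}

# Hypothetical mapping from response concept to the next video segment.
NEXT_SEGMENT = {
    "AFFIRMATIVE": "segment_yes_path",
    "NEGATIVE": "segment_no_path",
}

def map_to_concept(nbest):
    """nbest: list of (transcript, score) hypotheses, best first, as might
    be returned by a speech recognizer."""
    for transcript, _score in nbest:
        words = set(transcript.lower().split())
        for concept, keywords in CONCEPT_KEYWORDS.items():
            if words & keywords:
                return concept, NEXT_SEGMENT[concept]
    return "UNKNOWN", None

concept, segment = map_to_concept([("yeah sure", 0.91), ("year shore", 0.42)])
```

Here the top hypothesis "yeah sure" maps to the "AFFIRMATIVE" concept, which in turn selects the next segment to play.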
  • Sharing/social networking service 316 enables a user to post aspects of conversations, for example video recordings or unique responses, to sharing services such as social networking applications.
  • Metrics and logging service 318 records and maintains detailed and aggregate metrics and logs of the user's interactions with the system.
  • any service or asset required for a conversation may be implemented as a split resource, where the decision about how much of the service or asset resides on the client and how much on the server is made dynamically based on resource availability on the client (e.g. processing power, memory, storage, etc.) and across the network (e.g. bandwidth, latency, etc.). This decision may be based on factors such as conversational-speed response and cost.
  • input recognition service 310 may invoke a cloud-based speech recognition service, such as those provided by Google® and others, to obtain a set of hypotheses of which word(s) the user has spoken in response to a question or other prompt by a video persona.
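The split-resource decision described above, i.e. deciding dynamically how much of a service runs on the client versus across the network, could look roughly like the sketch below. The threshold values and input names are assumptions for illustration, not values from the disclosure.

```python
def choose_recognizer(client_cpu_ghz, free_memory_mb, bandwidth_mbps, latency_ms):
    """Hypothetical sketch: decide whether speech recognition runs on the
    client or in the cloud, based on client resources (processing power,
    memory) and network conditions (bandwidth, latency)."""
    # Prefer the cloud when the network supports conversational-speed
    # responses (thresholds are illustrative assumptions).
    if bandwidth_mbps >= 1.0 and latency_ms <= 200:
        return "cloud"
    # Fall back to on-device recognition if the client has the resources.
    if client_cpu_ghz >= 1.5 and free_memory_mb >= 512:
        return "client"
    # Otherwise accept slower cloud recognition rather than failing outright.
    return "cloud"
```

The same pattern could apply to any split resource, e.g., language models or video assets, with the decision revisited as conditions change.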
  • FIG. 4A is a block diagram illustrating an embodiment of a conversational video experience display and interface.
  • a display 400 includes a control panel region 402 and a conversational video experience display region 404.
  • control panel region 402 is displayed and/or active only at certain times and/or conditions, e.g., once a video segment has finished playing, upon mouse-over or other preselection, etc.
  • control panel region 402 includes three user selectable controls: at left a "rewind” control to iterate back through previously-played segments and/or responses; in the center a "play” control to indicate a readiness and desire to have a current/immediately next segment played; and at right a "forward" control, e.g., to cause a list of user-selectable responses or other user-selectable options available to advance the conversational video experience to be displayed.
  • a video segment of a video persona engaged in her current "turn” of the conversation is displayed.
  • a title/representative frame, or a current/next frame, of the current/next video segment is displayed, and selection of the "play” control would cause the video segment to begin (and/or resume) to play.
  • FIG. 4B is a block diagram illustrating an embodiment of a conversational video experience display and interface.
  • the "play" control in the center of control panel region 402 has been replaced by a "pause” control, indicating in some embodiments that the current video segment of the video persona is currently playing.
  • a picture-in-picture (PIP) frame 406 has been added to conversational video experience display region 404.
  • video of the user of a client device comprising display 400 is displayed in PIP frame 406.
  • a front-facing camera of the client device may be used in various embodiments to capture and display in PIP frame 406 video of a user of the client device while he/she engages with the conversational video experience.
  • Display of user video may enable the user to ensure that the lighting and other conditions are suitable to capture user video of desired quality, for example to be saved and/or shared with others.
  • a speech balloon or bubble may be displayed adjacent to the PIP frame 406, e.g., with a partially greyed out text prompt informing the user that it is the user's turn to speak.
  • the system in various embodiments provides dynamic hints to a user of which input modalities are made available to them at the start of a conversation, as well as in the course of it.
  • the input modalities can include speech, touch or click gestures, or even facial gestures/head movements.
  • the system decides in various embodiments which one should be hinted to the user, and how strong a hint should be.
  • the selection of the hints may be based on environmental factors (e.g. ambient noise), quality of the user experience (e.g. recognition failure/retry rate), resource availability (e.g., network connectivity) and user preference.
  • the user may disregard the hints and continue using a preferred modality.
  • the system keeps track of user preferences for the input modalities and adapts hinting strategy accordingly.
  • the system can use VUI, touch-/click-based GUI and camera-based face image tracking to capture user input.
  • the GUI is also used to display hints of what modality is preferred by the system.
  • the system displays a "listening for speech" indicator every time the speech input modality becomes available. If speech input becomes degraded (e.g. due to a low signal to noise ratio, loss of an access to a remote SR engine) or the user experiences a high recognition failure rate, the user will be hinted at / reminded of the touch based input modality as an alternative to speech.
  • the system hints (indicates) to the user that the touch based input is preferred at this point in the interactions by showing an appropriate touch-enabled on-screen indicator.
  • the strength of a hint is expressed as the brightness and / or the frequency of pulsation of the indicator image.
  • the user may ignore the hint and continue using the speech input modality.
  • the GUI touch interface becomes enabled and visible to the user.
  • the speech input modality remains enabled concurrently with the touch input modality.
  • the user can dismiss the touch interface if they prefer. Conversely, the user can bring up the touch interface at any point in the conversation (by tapping an image or clicking a button).
  • the user input preferences are updated as part of the user profile by the PP system.
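The hinting strategy described above, in which the system selects a modality to hint and a hint strength based on environmental factors, recognition failure rate, resource availability, and user preference, might be sketched as follows. The thresholds, input names, and the 0.0-1.0 strength scale are illustrative assumptions.

```python
def pick_hint(ambient_noise_db, retry_rate, network_ok, prefers_touch):
    """Hypothetical sketch: return (modality_to_hint, strength).

    Strength maps to how brightly / how frequently the on-screen
    indicator pulses, per the description above."""
    strength = 0.2  # a weak, unobtrusive hint by default
    if ambient_noise_db > 70 or retry_rate > 0.3 or not network_ok:
        # Speech input is degraded (noisy environment, high recognition
        # failure rate, or no access to a remote SR engine): hint the
        # touch interface, more strongly the worse the experience.
        strength = min(1.0, 0.5 + retry_rate)
        return "touch", strength
    if prefers_touch:
        # Adapt to the user's tracked modality preference.
        return "touch", strength
    return "speech", strength
```

The user remains free to ignore the hint; both modalities stay enabled concurrently, as the text notes.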
  • the system maintains a list of pre-defined responses the user can select from.
  • the list items are response concepts, e.g., "YES”, “NO”, “MAYBE” (in a text or graphical form). These response concepts are linked one-to-one with the subsequent prompts for the next turn of the conversation. (The response concepts match the prompt affirmations of the linked prompts.)
  • each response concept is expanded into a (limited) list of written natural responses matching that response concept.
  • a response concept "NO GIRLFRIEND" may be expanded into a list of natural responses "I don't have a girlfriend", "I don't need a girlfriend in my life", "I am not dating anyone", etc.
  • a response concept "MARRIED" may be expanded into a list of natural responses "I'm married", "I am a married man", "Yes, and I am married to her", etc.
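The one-to-many expansion described above can be sketched as a simple lookup (a hypothetical illustration; the table contents come from the examples above, but the names `RESPONSE_CONCEPTS` and `expand_concept` are assumptions, not part of the patent):

```python
# Hypothetical sketch: each response concept expands into a (limited)
# list of written natural responses matching that concept.
RESPONSE_CONCEPTS = {
    "NO GIRLFRIEND": [
        "I don't have a girlfriend",
        "I don't need a girlfriend in my life",
        "I am not dating anyone",
    ],
    "MARRIED": [
        "I'm married",
        "I am a married man",
        "Yes, and I am married to her",
    ],
}

def expand_concept(concept: str) -> list[str]:
    """Return the natural responses linked to a response concept."""
    return RESPONSE_CONCEPTS.get(concept, [])
```

An unknown concept simply expands to an empty list, which a GUI could treat as "no expansion available."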
  • FIG. 4C is a block diagram illustrating an embodiment of a conversational video experience display and interface.
  • a list of response concepts is presented via a touch-enabled GUI popup window 408.
  • the user can apply a touch gesture (e.g., tap) to a response concept item on the list, and in response the system will start playing a corresponding video segment of the video persona.
  • the user can apply another touch gesture (e.g., double-tap) to the response concept item to make the item expand into a list of natural responses (in text format).
  • this new list will replace the response concept list in the popup window.
  • the list of natural responses is shown in another popup window.
  • the user can use touch gestures (e.g., slide, pinch) to change the position and/or the size of the popup window(s).
  • the user can apply a touch gesture (e.g., tap) to a natural response item to start playing the corresponding video segment of the video persona.
  • the user can use other touch gestures (or click on a GUI button).
  • FIG. 5 is a block diagram illustrating an embodiment of a conversational video experience.
  • a conversational video experience is represented as a tree 500.
  • An initial node 502 represents an opening video segment, such as a welcoming statement and initial prompt by a video persona.
  • after the initial segment 502, the conversation may follow one of two next paths.
  • the system listens to the human user's spoken (or other) response and maps the response to either a first response concept 504 associated with a next segment 506 or to a second response concept 508 associated with a next segment 510.
  • a conversational experience may be represented as a directed graph, and one or more possible paths through the graph may converge at some node in the graph, for example, if respective response concepts from each of two different nodes map/link to a common next segment/node in the graph.
  • a primary function within the runtime engine is a decision-making process to drive conversation. This process is based on recognizing and interpreting signals from the user and selecting an appropriate video segment to play in response. The challenge faced by the system is guiding the user through a conversation while keeping within the domain of the response understanding model(s) and video segments available.
  • the system may play an initial video segment representing a question posed by the virtual persona.
  • the system may then record the user listening / responding to the question.
  • a user response is captured, for example by an input response service, which produces recognition results and passes them to a response concept service.
  • the response concept service uses one or more response understanding models to interpret the recognition results, augmented in various embodiments with user profile information.
  • the result of this process is a "response concept." For example, recognized spoken responses like "Sure", "Yes" or "Yup" may all result in a response concept of "AFFIRMATIVE".
  • each response concept is deterministically associated with a single video segment.
  • each node in the tree, such as tree 500 of Figure 5, has associated therewith a node-specific response understanding model that may be used to map words determined to have been uttered, and/or other input provided in response to that segment, to one or more response concepts, which in turn may be used to select a next video segment to be presented to the user.
  • the video segment and the timing of the start of a response are passed in various embodiments to a media playback service, which initiates video playback of the response by the virtual persona at an indicated and/or otherwise determined time.
  • the video conversational experience includes a sequence of conversation turns such as those described above in connection with Figure 5.
  • all possible conversation turns are represented in the form of a pre-defined decision tree/graph, as in the example shown in Figure 5, where each node in the tree/graph represents a video segment to play, a response understanding model to map recognized and interpreted user responses to a set of response concepts, and the next node for each response concept.
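The node structure just described, in which each node carries a video segment, a response understanding model, and a next node per response concept, can be sketched as follows (a hypothetical illustration; the class and field names are assumptions, and the phrase-to-concept dictionary stands in for the full response understanding model):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a decision-tree/graph node: a video segment to
# play, a node-specific response understanding model (here a simple
# phrase -> concept map), and the next node for each response concept.
@dataclass
class ConversationNode:
    segment: str                      # video segment to play at this node
    understanding: dict = field(default_factory=dict)  # phrase -> concept
    next_nodes: dict = field(default_factory=dict)     # concept -> node

    def step(self, user_words: str):
        """Map user input to a concept and return the linked next node."""
        concept = self.understanding.get(user_words.lower())
        return self.next_nodes.get(concept)

# Example: the two-branch opening of the tree of Figure 5.
node_506 = ConversationNode("segment_506")
node_510 = ConversationNode("segment_510")
node_502 = ConversationNode(
    "opening_segment",
    understanding={"yes": "AFFIRMATIVE", "no": "NEGATIVE"},
    next_nodes={"AFFIRMATIVE": node_506, "NEGATIVE": node_510},
)
```

Because `next_nodes` maps concepts to arbitrary nodes, the same structure also covers the directed-graph case, where response concepts from two different nodes link to a common next node.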
  • FIG. 6 is a block diagram illustrating an embodiment of a conversational video experience.
  • a less hierarchical representation of a conversation is shown.
  • a user response that is mapped to a first response concept 603 results in a transition to node/segment 604
  • a second response concept 605 would cause the conversation to leap ahead to node/segment 606, bypassing node 604.
  • the conversation may transition via one response concept to node 608, but via a different response concept directly to node 610.
  • a transition "back" from node 606 to node 604 is possible, e.g., in the case of a user response that is mapped to response concept 612.
  • each conversational turn does not have to be pre-defined.
  • the system in various embodiments has access to one or more of: A corpus of video segments representing a large set of possible prompts and responses by the virtual persona in the subject domain of the conversation.
  • a domain- wide response understanding model in the subject domain of the conversation is conditioned at each conversational turn based on prompts and responses adjacent to that point in the conversation.
  • the response understanding model is used, as described above, to interpret user responses (deriving one or more response concepts based on user input). It is also used to select the best video segment for the next dialog turn, based on highest probability interpreted meaning.
  • An example process flow in such a scenario includes the following steps:
  • a pre-selected opening prompt is played.
  • the response understanding model may be updated (conditioned) based on the user response.
  • the conditioned response understanding model is used to select the best possible available video segment as the prompt to play next, representing the virtual persona's response to the user response described immediately above.
  • each available prompt is passed to the conditioned response understanding model, which generates a list of possible interpretations of that prompt, each with a probability of expressing the meaning of the prompt.
  • the highest-probability interpretation defines the best meaning for the underlying prompt and serves as its best-meaning score. In principle, an attempt may be made to interpret every prompt recorded for a given video persona in the domain of the conversation, and select the prompt yielding the highest best-meaning score.
  • This selection of the next prompt represents the start of the next conversational turn. It starts by playing a video segment representing the selected prompt.
  • the response understanding model can be reset to the domain-wide response understanding model and the steps described above are repeated. This process continues until the user ends the conversation, the system selects a video segment that is tagged as a conversation termination point, or the currently conditioned response understanding model determines that the conversation has ended.
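The prompt-selection step above (scoring each available prompt with the conditioned model and keeping the highest best-meaning score) can be sketched as follows (hypothetical; the function names and the model's call signature are assumptions, with the model standing in for the conditioned response understanding model):

```python
# Hypothetical sketch: the conditioned model maps a prompt to a list of
# (meaning, probability) interpretations; the highest probability is
# that prompt's best-meaning score, and the prompt with the highest
# best-meaning score is selected to play next.
def best_meaning_score(prompt, conditioned_model):
    interpretations = conditioned_model(prompt)  # [(meaning, probability), ...]
    return max(p for _, p in interpretations) if interpretations else 0.0

def select_next_prompt(available_prompts, conditioned_model):
    return max(available_prompts,
               key=lambda pr: best_meaning_score(pr, conditioned_model))
```

With a toy model that rates prompt "B" more probable than "A", `select_next_prompt(["A", "B"], model)` would return `"B"`.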
  • a further embodiment of the runtime system utilizes speech and video synthesis techniques to remove the constraint of responding using a limited set of prerecorded video segments.
  • a response understanding model can generate the best possible next prompt by the virtual persona within the entire conversation domain. The next step of the conversation will be rendered or presented to the user by the runtime system based on dynamic speech and video synthesis of the virtual persona delivering the prompt.
  • the video persona maintains its virtual presence and responsiveness, and provides feedback to the user, through the course of a conversation, including when the user is speaking.
  • appropriate video segments are played when the user is speaking and responding, giving the illusion that the persona is listening to the user's utterance.
  • FIG. 7 is a block diagram illustrating an embodiment of a conversational video experience segment.
  • a video segment 702 includes three distinct portions.
  • the video persona provides feedback to indicate that the user's previous response has been heard and understood.
  • during the affirmation portion of the segment that is selected to play next, the video persona might say something like, "That's great, I'm feeling pretty good today too."
  • the video persona may communicate information, such as to inform the user, provide information the user may be understood to have expressed interest in, etc., and either explicitly (e.g., by asking a question) or implicitly or otherwise prompt the user to provide a response.
  • an "active listening" portion of the video segment 702 is played. In the example shown, if an end of the active listening portion is reached before the user has completed providing a response, the system loops through the active listening portion again, and if necessary through successive iterations, until the user has completed providing a response.
  • active listening is simulated by playing a video segment
  • the video segment could depict the virtual persona leaning towards the user, nodding, smiling, or making a verbal acknowledgment.
  • the system selects an appropriate active listening video segment based on the best current understanding of the user's response, as discussed more fully below in connection with Figure 14.
  • the system can allow a real user to interrupt a virtual persona, and will simulate an "ad hoc" transition to an active listening mode shortly after detection of such interruption, after selecting an appropriate "post-interrupted" active listening video segment (done within the response concept service system).
  • Figure 8A is a flow chart illustrating an embodiment of a process to provide a conversational video experience.
  • affirmation and statement/prompt portions of a segment are played (802).
  • An active listening portion of the segment is looped, until end-of-speech by the user is detected (804).
  • end of user speech is detected (806)
  • the user's response is determined (e.g., speech recognition) and mapped to a response concept, enabling transition to a next segment (808), e.g., one associated with the response concept to which the user's spoken response has been mapped.
  • the system can allow a real user to interrupt a virtual persona, and will simulate an "ad hoc" transition to an active listening mode shortly after detection of such interruption after selecting an appropriate "post-interrupted" active listening video segment.
  • FIG. 8B is a flow chart illustrating an embodiment of a process to provide a conversational video experience.
  • affirmation and statement/prompt portions of a segment are played (822). If the affirmation and statement/prompt portions play to the end of those portions (824), an active listening portion of the segment is looped (826), until end-of-speech by the user is detected (830). If, instead, the user begins to speak during playback of the affirmation and statement/prompt portions of the segment (828), then playback of the affirmation and statement/prompt portions of the segment ends without completing playback of those portions, and an immediate transition to looping the active listening portion of the segment is made (826). Once the end of the user's speech is reached (830), the system transitions to the next segment (832).
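The flow of Figure 8B can be sketched as a small state machine (a hypothetical illustration; the state and event names are assumptions chosen to mirror steps 822-832):

```python
# Hypothetical sketch of the Figure 8B playback flow: play the
# affirmation/prompt portions; if the user starts speaking mid-prompt
# (barge-in), cut immediately to the active-listening loop; loop there
# until end-of-speech, then transition to the next segment.
def play_turn(events):
    """events: sequence of 'prompt_done', 'user_speech_start', 'user_speech_end'."""
    state = "prompt"
    trace = []
    for ev in events:
        if state == "prompt":
            if ev == "user_speech_start":   # barge-in: end prompt early (828)
                state = "listening"
            elif ev == "prompt_done":       # prompt played to the end (824)
                state = "listening"         # loop active listening (826)
        elif state == "listening":
            if ev == "user_speech_end":     # end of user speech (830)
                state = "next_segment"      # transition (832)
        trace.append(state)
    return trace
```

Both the barge-in path and the normal path converge on the same active-listening loop, matching the two branches of the flow chart.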
  • FIG. 9 is a flow chart illustrating an embodiment of a process to receive and interpret user responses.
  • audio data is received (902).
  • the client device's microphone is activated, and audio captured by the microphone is streamed to a cloud-based or other speech recognition service (904).
  • Natural language processing is performed on results of the speech recognition processing, e.g., an n-best or other set of hypotheses, to map the results to one or more "response concepts" (906).
  • FIG. 10 is a block diagram illustrating an embodiment of elements of a conversational video experience system.
  • audio data comprising user speech 1002 is provided to a speech recognition local and/or remote module and/or service to obtain a speech recognition output 1004 comprising a set of n-best hypotheses determined by the speech recognition service as the most likely words that were uttered.
  • each member of the set has an associated score, but in other embodiments a confidence or other score for the entire set is provided, with members of the set being presented in ranked order.
  • the speech recognition output 1004 is provided as input to a natural language processing module and/or service, which in the example shown attempts to match the speech recognition output 1004 to a response understanding model 1006 to determine a response concept 1008 that the user is determined to have intended and/or in some embodiments which is the response concept most likely to be associated with the user's spoken and/or other responsive input.
  • the spoken utterance has been determined by speech recognition processing to most likely have been the word "yes”, which in turn has been mapped to the response concept "affirmative”.
  • words represented as text are included in the response understanding model 1006 as shown in Figure 10, in various embodiments other input, such as selection by touch or otherwise of a displayed response option, nodding of the head, etc., may also and/or instead be included.
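The mapping of Figure 10, from an n-best recognition output to a response concept, can be sketched as follows (hypothetical; the phrase table is a stand-in for the response understanding model 1006, and the function name is an assumption):

```python
# Hypothetical sketch: walk the n-best hypotheses in descending score
# order and return the first one that the response understanding model
# maps to a response concept.
UNDERSTANDING_MODEL = {          # assumed phrase -> concept table
    "yes": "AFFIRMATIVE", "sure": "AFFIRMATIVE", "yup": "AFFIRMATIVE",
    "no": "NEGATIVE", "nope": "NEGATIVE",
}

def map_to_concept(n_best):
    """n_best: list of (hypothesis, score) pairs from speech recognition."""
    for hypothesis, _score in sorted(n_best, key=lambda h: -h[1]):
        concept = UNDERSTANDING_MODEL.get(hypothesis.lower())
        if concept:
            return concept
    return None                  # no hypothesis matched the model
```

So an n-best list whose top hypothesis is "yes" yields the concept "AFFIRMATIVE", as in the example shown.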
  • FIG 11 is a block diagram illustrating an embodiment of a conversational video experience next segment decision engine.
  • conversation context data 1104 (e.g., where the most recently played video segment is located within the set of nodes comprising the conversational video experience; what the user has said or otherwise provided as input in the course of the experience; etc.) and user profile data 1106 are provided to a decision engine 1108 configured to identify and provide as output a next segment 1110 based at least in part on the inputs 1102, 1104, and/or 1106.
  • the input (e.g., speech) recognition service accesses, and the response concept service integrates, all relevant information sources to support decision-making necessary for selection of a meaningful, informed and entertaining response as a video segment (e.g., from a collection of pre-recorded video segments representing the virtual persona asking questions and/or affirming responses by the user).
  • the system can be more responsive, more accurate in its assessment of the user's intent, and can more accurately anticipate future requests.
  • Examples of information gathered through various sources may include, without limitation, one or more of the following:
  • Examples of such knowledge include information about the user's interests, contacts, and recent posts from the user's social network; gender or demographic information from a customer information database; and name and address information from a prior registration process.
  • Extrinsic inputs collected by sensors available to the system: these could include time-of-day, current location, or even the current orientation of the client device.
  • the above information may be used in isolation or in combination to provide a better conversational experience, for example, by:
  • the virtual persona may start a conversation with "Good morning!” or "Good evening!” based on the time that the conversation is started by the user.
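The time-of-day example above can be sketched as a simple selection of the opening prompt (hypothetical; the function name and hour boundaries are illustrative assumptions):

```python
from datetime import datetime

# Hypothetical sketch: choose the persona's opening greeting from the
# time at which the user starts the conversation.
def opening_greeting(now: datetime) -> str:
    hour = now.hour
    if hour < 12:
        return "Good morning!"
    if hour < 18:
        return "Good afternoon!"
    return "Good evening!"
```

In practice the returned string would select among pre-recorded opening video segments rather than synthesized text.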
  • Figure 12 is a flow chart illustrating an embodiment of a process to provide and update a response understanding model.
  • an initial understanding model is built and deployed (1202).
  • a designer of the conversational video experience may decide, in the course of initial design, which concepts may be expressed by a user in response to a question or other prompt by the video persona.
  • the designer may attempt to anticipate words and phrases a user might utter, in response to a question or other prompt by the video persona via a video segment associated with the node, and for each a concept to which that word or phrase is to be mapped.
  • the understanding model is deployed (1202)
  • user interactions with the system are monitored and analyzed to determine whether any updates to the model are required and/or would be beneficial to the system (1204).
  • the interaction of test users and/or real users with the conversational video experience may be observed, and words or phrases used by such users but not yet mapped to a concept may be added and mapped to corresponding concepts determined to have been intended by such users. Additional words or phrases may be mapped to existing concepts conceived of by the designer, or in some embodiments, new concepts may be determined and added, and associated paths through the conversational video experience defined, in response to such observation of user interactions with the experience.
  • model refinements may be identified, initiated, and/or submitted by a participating user; by one or more users acting collectively; by one or more remote workers/users assigned tasks structured to potentially yield model refinements (e.g., crowdsourcing); and/or by a designer or other human operator associated with production, maintenance, and/or improvement of the conversational video experience. If an update to the model is determined to be required (1206), the model is updated and the updated model is deployed (1208). The process of Figure 12 executes continuously unless terminated (1210).
  • mechanisms are provided to handle expected and longer-than-expected transitions, e.g., to handle delays in deciding what the virtual persona should say next without destroying the conversational feel of the application.
  • Such delays can come from a number of sources, such as the need to retrieve assets from the cloud, the computational time taken for analysis, and other sources.
  • the detection of a delay may be determined immediately prior to requiring the asset or analysis result that is the source of the delay, or it may instead be determined well in advance of requiring the asset (for example, if assets were being progressively downloaded in anticipation of their use and there were a network disruption).
  • transitional conversational segments are played; they are transitional in that they delay the need for the asset being retrieved or the result which is the subject of the analysis causing the delay.
  • transitions can be of several types. Neutral: Simple delays in meaningful content that would apply in any context, e.g., "Let me see...," "That's a good question...," "That's one I'd like to think about... Hold on, I'm thinking," or something intended to be humorous. The length of the segment could be in part determined by the expected delay.
  • Application contextual: The transition can be particular to the application context, e.g., "Making financial decisions isn't easy. There are a lot of things to consider."
  • Conversation contextual: The transition could be particular to the specific point in the conversation, e.g., "The show got excellent reviews," prior to indicating ticket availability for a particular event.
  • Directional: The system could direct the conversation down a specific path where video assets are available without delay. This decision to go down this conversation path would not have been taken but for the aforementioned detection of delay.
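The four transition types can be sketched as a simple selection policy (hypothetical; the function name, parameters, and priority ordering are illustrative assumptions, not specified by the patent):

```python
# Hypothetical sketch: pick a transition type given the expected delay
# and whatever context is available. A delay-free alternate path wins
# outright; otherwise prefer the most specific context; fall back to a
# neutral filler whose length reflects the expected delay.
def choose_transition(expected_delay_s, app_context=None,
                      convo_context=None, delay_free_path_available=False):
    if delay_free_path_available:
        return ("directional", 0.0)           # steer to a no-delay path
    if convo_context:
        return ("conversation_contextual", expected_delay_s)
    if app_context:
        return ("application_contextual", expected_delay_s)
    # neutral filler, e.g., "Let me see..."; length tracks the delay
    return ("neutral", max(expected_delay_s, 1.0))
```

The returned pair could then drive selection of a pre-recorded transition segment of roughly the indicated length.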
  • Figure 13 is a flow chart illustrating an embodiment of a process to integrate a transition video into a conversation video experience.
  • a transition may be generated and inserted dynamically, e.g., in response to determining that a delay will prevent timely display of a next segment, and/or in embodiments in which a next segment may be obtained and/or synthesized dynamically, in real time, based for example on the user's response to the previously-played segment.
  • the process of Figure 13 may be used at production time to create transition clips or portions thereof.
  • a next segment to which a transition from a previous/current segment is required is received (1302).
  • a transition is generated and/or obtained (1304) and inserted into the video stream to be rendered to the user (1306).
  • one or more techniques may be used to enable a smooth transition of a virtual persona's face/head image between video segments for an uninterrupted user experience.
  • the ideal case is if the video persona moves smoothly. In various embodiments, it is a "talking head." There is no problem if a whole segment of the video persona speaking is recorded continuously, but there may be transitions between segments where that continuity is not guaranteed. Thus, there is a general need for an approach to smoothly blending two segments of video with a talking head as the subject.
  • One approach is to record segments where the end of the segment ends in a pose that is the same as the pose at the beginning of a segment that might be appended to the first segment. (Each segment might be recorded multiple times with the "pose” varied to avoid the transition being overly “staged.")
  • standard video processing techniques can be used to make the transition appear seamless, even though there are some differences in the ending frame of one segment and the beginning of the next.
  • transition images are generated to move each identified point on the face to the corresponding point so that the paths are as similar as possible (minimizing distortion). Other points are moved consistently based on their relationship to the points specifically modeled. The result is a smooth transition that focuses on what the person watching will be focusing on, resulting in reduced processing requirements.
  • transition treatment as described herein is applied in the context of conversational video.
  • in conversational video, there is a need for many transitions, relative to some other areas where videos are used for, e.g., instructional purposes, with little if any real variation in content. While some of these applications are characterized as "interactive," they are little more than allowing branching between complete videos, e.g., to explain a point in more detail if requested.
  • in conversational video, a key component is much greater flexibility, to allow elements such as personalization and the use of context, which will be discussed later in this document. Thus, it is not feasible to create long videos incorporating all the possible variations and simply choose among them; it will be necessary to fuse shorter segments.
  • a further demand of interactive conversation with a video persona on portable devices in some embodiments is the limitation on storage. It would not be feasible to store all segments on the device, even if there were not the issue of updates reflecting change in content.
  • if the system is configured to anticipate segments that will be needed and begin downloading them while one is playing, this encourages the use of shorter segments, further increasing the likelihood that concatenation of segments will be necessary.
  • FIG 14 is a flow chart illustrating an embodiment of a process to provide a dynamic active listening experience.
  • user input in the form of a spoken response is received and processed (1402). If an end of speech is reached (1404), the system transitions to a next video segment (1406), e.g., based on the system's best understanding of the response concept with which the user's input is associated. If prior to the end of the user's speech the word or words that have been uttered thus far are determined to be sufficient to map the user's input to a response concept, at least provisionally (1408), the system transitions to a response concept-appropriate active listening loop (1410) and continues to receive and process the user's ongoing speech (1402).
  • a transition from a neutral active listening loop to one that expresses a sentiment and/or understanding such as would be appropriate and/or expected in a conversation between two live humans as a listener begins to understand the speaker's response may be made.
  • Examples include without limitation active listening loops in which the video persona expresses heightened concern, keen interest, piqued curiosity, growing dismay, unease, confusion, satisfaction, agreement or other affirmation (e.g., by nodding while smiling), etc.
  • the system selects, switches and plays the most appropriate video segment based on (a) an extracted meaning of the user statement so far into their utterance (and an extrapolated meaning of the whole utterance); and (b) an appropriate reaction to it by a would-be human listener.
  • to select an appropriate reaction, the system uses information streamed to it from the speech or other input recognition and response concept services.
  • an on-going partially spoken user response is processed, and the progressively expanding results are used to make a selection of a video segment to play during the active listening phase.
  • the video segment selected and the time at which it is played can be used to support aspects of the cadence of a natural conversation. For example:
  • the video could be an active listening segment possibly containing verbal and facial expressions played during the time that the user is making the utterance.
  • the video could be an affirmation or question in response to the user's utterance, played immediately after the user has completed the utterance.
  • the system can decide to start playing back the next video segment while the user is still speaking, thus interrupting or "barging-in" to the user's utterance. If the user does not yield the turn and keeps speaking, this will be treated as a user barge-in.
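The dynamic active-listening selection of Figure 14, switching from a neutral loop to a sentiment-appropriate one as a provisional response concept emerges, can be sketched as follows (hypothetical; the concept names and loop labels are illustrative assumptions):

```python
# Hypothetical sketch: as partial recognition results are provisionally
# mapped to a response concept mid-utterance, switch the persona to a
# concept-appropriate active-listening loop; fall back to a neutral
# listening loop while no concept has yet emerged.
SENTIMENT_LOOPS = {              # assumed concept -> listening-loop segment
    "GOOD_NEWS": "nodding_and_smiling",
    "BAD_NEWS": "concerned_look",
}

def select_listening_loop(provisional_concept):
    return SENTIMENT_LOOPS.get(provisional_concept, "neutral_listening")
```

The selected loop would be swapped in without interrupting capture of the user's ongoing speech.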
  • a single speech recognition service and/or system may be required.
  • a local speech recognition service embedded in the user device.
  • at least one local speech recognition service and at least one remote speech recognition service are included.
  • Several cooperative schemes can be used to enable their co-processing of speech input and delegation of the authority for the final decision / formulation of the results. These schemes are implemented in some embodiments using a speech recognition controller system which coordinates operations of local and remote speech recognition services.
  • the schemes may include:
  • a local speech recognition service can do a more efficient start/stop analysis, and the results can be used to reduce the amount of data sent to the remote speech recognition service.
  • a local speech recognition service is authorized to track audio input and detect the start of a speech utterance.
  • the detected events with an estimated confidence level are passed as hints to the speech recognition service controller which makes a final decision to engage (to send a "start listening” command to) the remote speech recognition service and to start streaming the input audio to it (covering a backdated audio content to capture the start of the utterance).
  • a local speech recognition service is authorized to track audio input and detect the end of a speech utterance.
  • the detected events with an estimated confidence level are passed as hints to the speech recognition service controller which makes a final decision to send a "stop listening" command to the remote speech recognition service and to stop streaming the input audio to it (after sending some additional audio content to capture the end of the utterance as may be required by the remote speech recognition service).
  • the speech recognition service controller may decide to rely on a remote speech recognition service for the end of speech detection.
  • a "stop listening" decision can be based on higher-authority feedback from the response concept service system, which may decide that sufficient information has been accumulated for its decision-making.
  • the speech recognition service controller sets a maximum utterance duration which will limit the utterance processing to the local speech recognition service only. If the end of utterance is detected by the local speech recognition service before the maximum duration is exceeded, the local speech recognition service completes recognition of the utterance and the remote speech recognition service is not invoked. Otherwise, the speech audio will be streamed to the remote speech recognition service (starting with the audio buffered from the sufficiently padded start of speech).
  • the speech recognition service controller can decide to start using the remote speech recognition service. If the utterance is rejected by the local speech recognition service, the speech recognition service controller will start using the remote speech recognition service.
  • the speech recognition service controller sends "start listening" to the local speech recognition service.
  • the local speech recognition service detects the start of speech utterance, notifies the speech recognition service controller of this event and initiates streaming of speech recognition results to the speech recognition service controller which directs them to the response concept service system.
  • when the local speech recognition service detects the subsequent end of utterance, it notifies the speech recognition service controller of this event.
  • the local speech recognition service returns the final recognition hypotheses with their scores to the speech recognition service controller.
  • upon receipt of the "start of speech utterance" notification from the local speech recognition service, the speech recognition service controller sets the pre-defined maximum utterance duration. If the end of utterance is detected before the maximum duration is exceeded, the local speech recognition service completes recognition of the utterance. The remote speech recognition service is not invoked.
  • the speech recognition service controller sends "start listening" and starts streaming the utterance audio data (including a buffered audio from the start of the utterance) to, and receiving streamed recognition results from, the remote speech recognition service.
  • the streams of partial recognition results from the local and remote speech recognition services are merged by the speech recognition service controller and used as input into the response concept service system.
  • the end of recognition notification is sent to the speech recognition service controller by the two speech recognition service engines when these events occur.
  • the speech recognition service controller will start using the remote speech recognition service if it has not done that already.
  • the speech recognition service controller will start using the remote speech recognition service (if it has not done that already).
  • a video segment of a "speed equalizer" is played while streaming the audio to a remote speech recognition service and processing the recognition results.
  • Auxiliary expert: a local speech recognition service specialized in recognizing speech characteristics such as prosody, stress, rate of speech, etc.
  • if loss or degradation of network connectivity is detected, the speech recognition service controller is notified of this event and stops communicating with the remote speech recognition service (i.e., sending start/stop listening commands and receiving streamed partial results). The speech recognition service controller resumes communicating with the remote speech recognition service when it is notified that network connectivity has been restored.
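The controller's routing between local and remote recognizers, per the schemes above, can be sketched as follows (hypothetical; the function name, parameters, and the 3-second threshold are illustrative assumptions):

```python
# Hypothetical sketch of the speech recognition service controller's
# routing: the local recognizer handles short utterances; the remote
# service is engaged when the utterance runs past the maximum local
# duration or the local engine rejects it, unless network connectivity
# to the remote service has been lost.
def route_utterance(duration_s, max_local_duration_s=3.0,
                    local_rejected=False, network_up=True):
    if not network_up:
        return "local"    # remote unreachable; stay with the local engine
    if local_rejected:
        return "remote"   # local engine rejected the utterance
    if duration_s <= max_local_duration_s:
        return "local"    # local engine completes recognition in time
    return "remote"       # stream buffered + live audio to the remote service
```

When the remote path is chosen, the stream would start with audio buffered from the sufficiently padded start of speech, as described above.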
  • This section describes audio/video recording and transcription of the user side of a conversation as a means of capturing user-generated content. It also presents innovative ways of sharing the recorded content, in whole or in part, on social media channels.
  • FIG. 15 is a flow chart illustrating an embodiment of a process to record a user's side of a conversational video experience.
  • a time-synchronized video recording is made of the user while the user is interacting with the conversational video experience (1502).
  • the conversational video runtime system in various embodiments performs video capture of the user while they listen to, and respond to, the persona's video prompts.
  • the audio/video capture may utilize a microphone and a front-facing camera of the host device.
  • the captured audio/video data is recorded to a local storage as a set of video files.
  • the capture and recording are performed concurrently with the video playback of the persona's prompts (both speaking and active listening segments).
  • the timing of the video capture is synchronized with that of the playback of the video prompts, so that it is possible to time-align both video streams.
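The time alignment described above can be illustrated with a small sketch: if persona playback and user capture are stamped against a shared clock, any moment in the user's recording can be mapped to the persona segment that was playing at that time. The data layout and function names below are hypothetical.

```python
# Illustrative sketch of time-aligning the user capture with persona playback.
import bisect

def build_timeline(segments):
    """segments: list of (segment_id, duration_s) in playback order."""
    starts, ids, t = [], [], 0.0
    for seg_id, dur in segments:
        starts.append(t)
        ids.append(seg_id)
        t += dur
    return starts, ids

def segment_at(starts, ids, capture_time_s):
    """Return the persona segment playing at a given capture timestamp."""
    i = bisect.bisect_right(starts, capture_time_s) - 1
    return ids[i] if i >= 0 else None
```

With this alignment, a user video segment recorded during an active listening segment can later be interleaved with the persona's segments to reconstruct both sides of the exchange.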
  • Audio/video recordings and transcriptions of the user's responses can be used for multiple types of social media interactions:
  • a user's video data segments captured during a persona's speaking and active listening modes can be sequenced with the persona-speaking and active listening segments to reconstruct an interactive exchange between both sides of the conversation. These segments can be sequenced in multiple ways including alternating video playback between segments or showing multiple segments adjacent to each other. These recorded video conversations can be posted in part or in their entirety to a social network on behalf of the user. These published video segments are available for standalone viewing, but can also serve as a means of discovery of the availability of a conversational video interaction with the virtual persona.
  • Selected user recorded video segments on their own or sequenced with recorded video segments of other users engaging in a conversation with the same virtual persona can be posted to social networks on behalf of the virtual (or corresponding real) persona.
  • the transcribed actual responses, or the response concepts, from multiple users engaging in a conversation with the same virtual persona can be measured for similarity.
  • Data elements or features collected for multiple users by the personal profiling service can also be measured for similarity.
  • Users with a similarity score above a defined threshold can be connected over social networks. For example, fans / admirers / followers of a celebrity can be introduced to each other by the celebrity persona and view their collections of video segments of their conversations with that celebrity.
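As a purely illustrative sketch of the similarity matching described above, one could compute Jaccard similarity over each user's set of response concepts and connect users whose overlap exceeds a threshold. The threshold value and all names here are assumptions.

```python
# Hypothetical sketch: connect users whose response-concept overlap is high.

def jaccard(a, b):
    """Jaccard similarity of two concept sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def match_users(profiles, threshold=0.5):
    """profiles: {user_id: iterable of response concepts}.
    Returns pairs of users whose similarity meets the threshold."""
    users = sorted(profiles)
    pairs = []
    for i, u in enumerate(users):
        for v in users[i + 1:]:
            if jaccard(profiles[u], profiles[v]) >= threshold:
                pairs.append((u, v))
    return pairs
```

The same scoring could be applied to features collected by the personal profiling service rather than to transcribed concepts.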
  • This human agent - a real person connected via a video and/or audio channel to the user - could be an additional participant in the conversation or a substitute for the virtual persona.
  • the new participant could be selected from a pool of available human agents or could even be the real person on whom the virtual persona is based.
  • the decision to include a human agent may only be taken if a human agent is available, e.g., as determined by integration with a presence system.
  • Examples of scenarios in which human agents would be integrated may include cases where the conversation path goes outside the immediate conversation domain. In such a case, the virtual persona could indicate he/she needs to ask their assistant.
  • the integration could also be subtler, with "hidden" agents. That is, when the system cannot decipher what the user is saying, perhaps after a few tries, an agent could be tapped to listen and type a response. In some cases, the agent could simply choose a prewritten response to the question. A text-to-speech system customized to the virtual persona's voice could then speak the typed response.
  • the video system could be designed to simulate arbitrary speech; however, it may be easier to just have the virtual persona pretend to look up the answer on a document, hiding the virtual persona's lips, or a similar device for hiding the virtual persona's mouth.
  • part of the advantage of a hidden agent is that agents with strong analytic skills, but with accents or other characteristics that might be disadvantages when communicating by speech, could be used.
  • the conversation could transfer to a real person, either a new participant or even the real person on whom the virtual participant is based.
  • the conversational video could ask a series of screening questions and, for a specific pattern of responses, the real person represented by the virtual persona could be brought in if he/she is available.
  • a user may be given the opportunity to "win" a chance to speak to the real celebrity represented by the virtual persona based on a random drawing or other triggering criteria.
  • Audio-only content within the video solution.
  • the virtual persona could "phone" an assistant, listen to a radio, or ask someone "off-camera" and hear audio-only information, either pre-recorded or delivered by text-to-speech synthesis.
  • the new content could originate as text, e.g., text from a news site.
  • Audio-only content could also be environmental or musical, for example if the virtual persona played examples of music for the user for possible purchase.
  • the creation process includes an authoring phase, in which the content, purpose, and structure of a conversation are conceived; data representing the conversation and its structure is generated; and a script is written to be used to create video content for the segments that will be required to provide the conversational experience.
  • In a production phase, video segments are recorded and associated with the conversation, e.g., by creating and storing appropriate meta-information.
  • the relationships between segments comprising the conversational video experience are defined. For example, a conceptual understanding model is created which defines transitions to be made based on a best understanding by the system of a user's response to a segment that has just played.
  • Once the required video content, understanding model(s), and associated meta-information have been created, they are packaged into sets of one or more files and deployed in a deployment phase, e.g., by distributing them to users via download and/or other channels.
  • Once the assets required to provide and/or consume the conversational video experience have been deployed, user interaction with the conversational video experience is observed in a refinement phase.
  • user interaction may indicate that one or more users provided a response to a segment that could not be mapped to a "response concept" recognized within the domain of the conversational video experience, e.g., to one of the response concepts associated with the segment and/or one or more other segments in an understanding model of the conversational video experience.
  • a speech recognition system/service and the understanding model of the conversational video experience may be tuned or otherwise refined or updated based on the observed interactions.
  • video and/or audio of one or more users interacting with the system, and the corresponding speech recognition and/or understanding service results may be reviewed by a human operator (e.g., a designer, user, crowd-source worker) and/or programmatically to better train the speech recognition service to recognize an uttered response that was not recognized correctly and/or to update the model to ensure that a correctly recognized response can be mapped to an existing or if necessary a newly-defined response concept for the segment to which the user(s) was/were responding.
  • the refined or otherwise updated assets (e.g., the conceptual understanding model) are then deployed and/or re-deployed.
  • further observation and refinement may occur.
  • a definition of an initial node, e.g., a root or other starting node, of a conversation is received, for example via a user interface. Iteratively, as successive indications are received that further nodes are desired to be added and defined, an interface is displayed and an associated node definition is received and stored, including, as applicable, information defining any relationship(s) between the defined node and other nodes. For example, a location of the node within a hierarchy of nodes comprising the conversation may be received and stored, and/or a definition of one or more response concepts associated with the node and, for each, the consequence of a user response to the prompt provided via a video segment associated with the node being mapped to that response concept. Once the authoring user indicates that (at least for now) no further nodes are desired to be defined, the process ends.
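One plausible in-memory representation of such a node definition is sketched below; the field names are assumptions for illustration, not the patent's schema. Each node carries its script/video reference and a map from response concepts to the next node.

```python
# Hypothetical sketch of a conversation node with concept-keyed transitions.
from dataclasses import dataclass, field

@dataclass
class ConversationNode:
    node_id: str
    script_text: str                                  # words the persona speaks
    video_ref: str = ""                               # link to the recorded segment
    transitions: dict = field(default_factory=dict)   # response concept -> node_id

# Example: a root node with two response concepts defined by the author.
root = ConversationNode("root", "Do you prefer casual or formal wear?")
root.transitions["casual"] = "casual_node"
root.transitions["formal"] = "formal_node"
```

A hierarchy of such nodes, with concept-labeled transitions between them, constitutes the conversation structure the authoring interface builds up iteratively.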
  • a user interface of a tool to create content for a conversational video experience includes a palette region and a workspace region.
  • the palette region may include two or more element types, e.g., one to add a node and another to add and define a transition between nodes. An instance of an element may be added by clicking on the representation of the element type in the palette region and dragging and dropping it at a desired location in the workspace region of the user interface.
  • a structured user interface is provided to enable an authoring user to define one or more nodes of a conversational video experience and for each its associated video content, understanding model (or elements thereof), etc.
  • the user interface may be pre-populated with known values (e.g., inherited and/or otherwise previously received values) and/or generated values, if any.
  • one or more nodes from another conversation may be re-used, or used as a template, in which case associated node values that have not (yet) been changed by the authoring user may be displayed.
  • Authoring user input is received via the interface. Once an indication is received that the authoring user is done defining a particular node, the node definition and content data are stored.
  • a user interface of a tool to create content for a conversational video experience includes a first script entry region, in which the authoring user is prompted to enter the words the actor or other person who will perform the segment should say during a first portion of a video content to be associated with the node.
  • the authoring user is prompted to enter a statement that affirms a response the system would have understood the user to have given to a previous segment, resulting in their being transitioned to the current node.
  • the authoring user is prompted to enter words the video persona should say next, in this example to include a question or other prompt to which the user may be expected to respond.
  • script definition information entered in appropriate regions of the user interface may be used by a backend process or service to generate automatically and provide as output a script in a form usable by video production talent and production personnel to record a video segment to be associated with the node.
  • the user interface further includes a response concept entry and definition section.
  • initially fields to define up to three response concepts are provided, and an "add" control is included to enable more to be defined.
  • Text entry boxes are provided to enable response concept names or labels to be entered.
  • a "define" button may be selected to enable key words, key phrases, Boolean and/or other combinations of words and phrases, etc. to be added as natural responses that are associated with and should be mapped by the system to the corresponding response concept.
  • natural language and/or other processing may be performed on script text entered in appropriate regions of the user interface, for example, to generate automatically and pre-populate one or more response concept candidates to be reviewed by and/or further defined by the authoring user.
  • other response concepts that have already been defined for other nodes within the same conversation and/or associated with nodes in other conversations in a same subject matter and/or other common domain may be selected, based on the script text entered in corresponding regions of the user interface, to be a candidate response concept to the prompt entered in a prompt or other script definition region of the user interface, for example.
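The key word / key phrase mapping described above might look like the following sketch, which maps a recognized utterance to the first concept whose phrase appears in it on word boundaries. The concept names and phrases are illustrative only, and real systems would use richer matching than substring tests.

```python
# Hypothetical sketch: map a recognized utterance to a response concept
# using author-defined natural-response phrases.
import re

def map_to_concept(utterance, concept_phrases):
    """concept_phrases: {concept: [phrases]}. Returns the matched concept,
    or None if the response cannot be mapped (a candidate for refinement)."""
    # Strip punctuation and pad with spaces so phrases match on word boundaries.
    text = " " + re.sub(r"[^\w\s]", "", utterance.lower()) + " "
    for concept, phrases in concept_phrases.items():
        if any(f" {p} " in text for p in phrases):
            return concept
    return None

concepts = {
    "affirm": ["yes", "sure", "sounds good"],
    "deny": ["no", "not really", "nah"],
}
```

Responses returning `None` would be logged for the refinement phase, where a reviewer can attach them to an existing or newly-defined response concept.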
  • the user interface in some embodiments includes a media linking section.
  • a URL or other identifier may be entered in a text entry field to link previously recorded video to the node.
  • a "browse" control opens an interface to browse a local or network folder structure to find desired media.
  • an "auto" button enables a larger video file to be identified. In response to selection of the "auto" button, the system performs speech recognition to generate a time-synchronized transcript for the video file, which is then used to find programmatically the portion(s) of the video file that are associated with the node, for example those portions that match the script text entered in appropriate regions of the user interface and, for the active listening portions, a subsequent portion of the video until the earlier of the end of the file or when the subject resumes speaking.
  • a script is received, e.g., a text or other file or portion thereof including text comprising all or a portion of a conversational video experience script.
  • Natural language processing is performed to generate one or more candidate response concepts.
  • the text for a prompt portion of a segment may be processed using a conversation domain-specific language understanding model to extract therefrom one or more concepts expressed and/or otherwise associated with the prompt.
  • the same model may then be used to determine one or more responsive concepts that a user may be expected to express in response to the prompt, e.g., the same, similar, and/or otherwise related concepts.
  • Response concept candidates are displayed to an authoring user for review and/or editing, e.g., via a user interface such as described above.
  • a set of related conversations may include two or more conversations, each comprising a hierarchically related set of conversation nodes, each having associated therewith one or more video segments, with "response concept" associated transitions between them.
  • conversations in a set may overlap, e.g., by sharing one or more conversation nodes with another conversation.
  • an authoring system and/or tool provides an ability to discover existing conversation nodes to be incorporated (e.g., reused) in a new conversation that is being authored.
  • an authoring user may have been provided with a tool to discover and/or access and reuse the nodes of another, related conversation.
  • the node definition may be reused, including associated resources, such as one or more of an associated script, node definition, video segment, and/or understanding model or portion thereof.
  • a reused node may be modified by an authoring user, for example by splitting off a "clone" or other modifiable copy specific to the new conversation that is being authored. Once a modification is made, a new copy of the cloned node is made and stored separately from the original node, and changes are stored in the new copy, which from that point on is associated only with the new conversation.
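The clone-on-modify behavior described above resembles copy-on-write sharing; a minimal sketch under assumed names follows. Nodes here are plain dicts purely for illustration.

```python
# Hypothetical sketch: reused nodes stay shared until first modification,
# at which point a private clone is split off for the new conversation.
import copy

class NodeLibrary:
    def __init__(self):
        self.nodes = {}          # node_id -> node dict

    def add(self, node_id, node):
        self.nodes[node_id] = node

    def reuse(self, node_id):
        """Reusing returns the shared original (no copy is made yet)."""
        return self.nodes[node_id]

    def modify(self, node_id, new_conversation, **changes):
        """First modification splits off a clone owned by the new conversation."""
        clone = copy.deepcopy(self.nodes[node_id])
        clone.update(changes)
        clone["conversation"] = new_conversation
        clone_id = f"{node_id}@{new_conversation}"
        self.nodes[clone_id] = clone
        return clone_id
```

The original node remains untouched for the conversations that still share it, while the clone evolves independently.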
  • a set of conversations may be associated with an environment or other shared logical construct.
  • configuration settings and/or other variables may be defined once for the environment, which will result in those settings and variables being inherited and/or otherwise used automatically across conversations associated with that environment.
  • environment variables and/or definitions provide and/or ensure a common "look and feel" across conversations associated with that environment.
  • a conversation that has been defined as described herein is analyzed programmatically to determine an optimized set of paths to traverse and explore fully the conversation space.
  • Output to be used to produce in an efficient way video assets required to provide a video conversational experience based on the definition are generated programmatically based at least in part on the determined optimized set of paths. Examples include, without limitation, scripts to be used to perform a read-through and/or shoot video segments in an efficient order; a plan to shoot the required video; and/or a testing plan to ensure that test users explore the conversation space fully and efficiently.
  • printed scripts, files capable of being rendered by a teleprompter, and/or other assets in any appropriate format may be generated.
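One simple way to realize the path analysis described above (offered only as a sketch; the graph encoding is an assumption) is to enumerate every root-to-leaf path through the conversation graph, so that read-through scripts, shoot plans, or test plans can cover the full conversation space.

```python
# Hypothetical sketch: enumerate all full paths through a conversation graph.

def all_paths(graph, node, path=None):
    """graph: {node: {response_concept: next_node}}.
    Returns every path from `node` to a leaf (a node with no transitions)."""
    path = (path or []) + [node]
    children = graph.get(node, {})
    if not children:
        return [path]
    paths = []
    for nxt in children.values():
        paths.extend(all_paths(graph, nxt, path))
    return paths

conversation = {
    "root": {"casual": "casual", "formal": "formal"},
    "casual": {"yes": "checkout"},
    "formal": {},
    "checkout": {},
}
```

A production tool could then order these paths to minimize wardrobe or set changes, or hand them to testers as a coverage checklist.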
  • automated processing is performed to identify within a larger video file and/or set of files the portion(s) of video content that are related to a given node.
  • One or more video files are received.
  • Speech recognition processing is performed on the audio portion of the video data to generate a synchronized transcription, e.g., a text transcript with timestamps, offsets, and/or other meta-information indicating for each part of the transcript a corresponding portion of the video file(s).
  • the transcription is used to map portions of the video file(s) to corresponding conversation nodes.
  • a match within the transcription may be found for a node-specific script.
  • the corresponding portion of the video file(s) may then be tagged or otherwise mapped to the conversation node with which that portion of the script and the corresponding portion of the transcription are associated.
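The transcript-based mapping described above can be sketched as follows: given a time-stamped transcript of the raw footage and a node's script text, find the start/end times of the matching span so the video can be cut and tagged for that node. Word-level timestamps and all function names are assumptions for this illustration.

```python
# Hypothetical sketch: locate a node's script within a time-stamped transcript.

def normalize(text):
    """Lowercase and strip simple punctuation, returning a word list."""
    return [w.strip(".,!?").lower() for w in text.split()]

def locate_script(transcript, script_text):
    """transcript: list of (word, start_s, end_s) tuples.
    Returns (start_s, end_s) of the first matching span, or None."""
    norm = [w.strip(".,!?").lower() for w, _, _ in transcript]
    target = normalize(script_text)
    for i in range(len(norm) - len(target) + 1):
        if norm[i:i + len(target)] == target:
            return transcript[i][1], transcript[i + len(target) - 1][2]
    return None
```

For active listening portions, the span could be extended past the match until the subject resumes speaking, as described above.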
  • user interaction with a conversational video experience is observed, and feedback based on the observation is used to update an associated understanding model.
  • a natural language processing-based understanding model is deployed, e.g., in connection with deployment of a conversational video experience.
  • User interactions with one or more conversational video experiences with which the understanding model is associated are observed.
  • observed instances in which the system is unable to determine, based on user utterances, a response concept to which the user's response should be mapped may be analyzed programmatically and/or by a human operator to train associated speech recognition services and/or to update the understanding model to ensure that going forward the system will recognize previously misunderstood or not-understood responses as being associated with an existing and/or newly-defined response concept.
  • the understanding model and/or other resource, once updated, is deployed and/or re-deployed, as described above.
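The refinement loop described above can be condensed into a sketch: unmapped responses are queued for review, and a reviewed phrase is folded back into the model so the same response maps correctly going forward. All names, and the simple substring matching, are assumptions for illustration.

```python
# Hypothetical sketch of the observe-review-refine loop for an understanding model.

class UnderstandingModel:
    def __init__(self, concept_phrases):
        self.concept_phrases = {c: set(p) for c, p in concept_phrases.items()}
        self.unmapped = []                  # utterances queued for review

    def map_response(self, utterance):
        text = utterance.lower()
        for concept, phrases in self.concept_phrases.items():
            if any(p in text for p in phrases):
                return concept
        self.unmapped.append(utterance)     # candidate for refinement
        return None

    def refine(self, concept, phrase):
        """Review step: attach a reviewed phrase to an existing or new concept."""
        self.concept_phrases.setdefault(concept, set()).add(phrase.lower())
        # Clear queued utterances the refinement now covers.
        self.unmapped = [u for u in self.unmapped if phrase.lower() not in u.lower()]
```

After `refine` is applied, the updated model would be re-deployed as described above, and observation would continue.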

Abstract

Providing a conversational video experience is disclosed. In various embodiments, user response data provided by a user in response to a first video segment, at least a portion of which has been rendered to the user, is received. The user response data is processed to generate a text-based representation of a user response indicated by the user response data. A response concept with which the user response is associated is determined based at least in part on the text-based representation. A next video segment to be rendered to the user is selected based at least in part on the response concept.

Description

PROVIDING A CONVERSATIONAL VIDEO EXPERIENCE
CROSS REFERENCE TO OTHER APPLICATIONS
[0001] This application claims priority to U.S. Provisional Patent Application No.
61/653,923 (Attorney Docket No. NUMEP002+) entitled PROVIDING A
CONVERSATIONAL VIDEO EXPERIENCE, filed May 31, 2012, which is incorporated herein by reference for all purposes.
BACKGROUND OF THE INVENTION
[0002] Speech recognition technology is used to convert human speech (audio input) to text or data representing text (text-based output). Applications of speech recognition technology to date have included voice-operated user interfaces, such as voice dialing of mobile or other phones, voice-based search, interactive voice response (IVR) interfaces, and other interfaces. Typically, a user must select from a constrained menu of valid responses, e.g., to navigate hierarchical sets of menu options.
[0003] Attempts have been made to provide interactive video experiences, but typically such attempts have lacked key elements of the experience human users expect when they participate in a conversation.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] Various embodiments of the invention are disclosed in the following detailed description and the accompanying drawings.
[0005] Figure 1 is a block diagram illustrating an embodiment of a system to provide a conversational video experience.
[0006] Figure 2 is a block diagram illustrating an embodiment of a system to provide a conversational video experience.
[0007] Figure 3 is a block diagram illustrating an embodiment of a conversational video runtime engine.
[0008] Figure 4A is a block diagram illustrating an embodiment of a conversational video experience display and interface.
[0009] Figure 4B is a block diagram illustrating an embodiment of a conversational video experience display and interface.
[0010] Figure 4C is a block diagram illustrating an embodiment of a conversational video experience display and interface.
[0011] Figure 5 is a block diagram illustrating an embodiment of a conversational video experience.
[0012] Figure 6 is a block diagram illustrating an embodiment of a conversational video experience.
[0013] Figure 7 is a block diagram illustrating an embodiment of a conversational video experience segment.
[0014] Figure 8A is a flow chart illustrating an embodiment of a process to provide a conversational video experience.
[0015] Figure 8B is a flow chart illustrating an embodiment of a process to provide a conversational video experience.
[0016] Figure 9 is a flow chart illustrating an embodiment of a process to receive and interpret user responses.
[0017] Figure 10 is a block diagram illustrating an embodiment of elements of a conversational video experience system.
[0018] Figure 11 is a block diagram illustrating an embodiment of a conversational video experience next segment decision engine.
[0019] Figure 12 is a flow chart illustrating an embodiment of a process to provide and update a response understanding model.
[0020] Figure 13 is a flow chart illustrating an embodiment of a process to integrate a transition video into a conversational video experience.
[0021] Figure 14 is a flow chart illustrating an embodiment of a process to provide a dynamic active listening experience.
[0022] Figure 15 is a flow chart illustrating an embodiment of a process to record a user's side of a conversational video experience.
DETAILED DESCRIPTION
[0023] The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term 'processor' refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions.
[0024] A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.
[0025] A conversational video runtime system is disclosed. In various embodiments, the system emulates a virtual participant in a conversation with a real participant (a user). It presents the virtual participant as a video persona, created in various embodiments based on recording or capturing aspects of a real person or other persona participating in the persona's end of the conversation. The video persona in various embodiments may be one or more of an actor or other human subject; a puppet, animal, or other animate or inanimate object; and/or pre-rendered video, for example of a computer generated and/or other participant. In various embodiments, the "conversation" may comprise one or more of spoken words, nonverbal gestures, and/or other verbal and/or non-verbal modes of communication capable of being recorded and/or otherwise captured via pre-rendered video and/or video recording. A script or set of scripts may be used to record discrete segments in which the subject affirms a user response to a previously-played segment, imparts information, prompts the user to provide input, and/or actively listens as one might do while listening live to another participant in the conversation. The system provides the video persona's side of the conversation by playing video segments on its own initiative and in response to what it heard and understood from the user side. It listens, recognizes, and understands/interprets user responses, selects an appropriate response as a video segment, and delivers it in turn by playing the selected video segment. The goal of the system in various embodiments is to make the virtual participant in the form of a video persona as indistinguishable as possible from a real person participating in a conversation across a video channel. 
In various embodiments, the video "persona" may include one or more participants, e.g., a conversation with members of a rock band, and/or more than one real world user may interact with the conversational experience at the same time.
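The runtime loop described above (play a segment, listen, map the recognized response to a response concept, select the next segment) can be condensed into the following sketch. The graph encoding, function names, and re-prompt behavior for unrecognized concepts are assumptions, not the claimed implementation.

```python
# Hypothetical sketch of the conversational video runtime loop.

def run_conversation(graph, recognize, start="root", max_turns=10):
    """graph: {node: {"prompt": str, "next": {response_concept: node}}}.
    recognize: callable returning a response concept for the user's utterance.
    Returns the sequence of persona prompts "played" during the conversation."""
    node, transcript = start, []
    for _ in range(max_turns):
        transcript.append(graph[node]["prompt"])   # play the persona's segment
        nxt = graph[node]["next"]
        if not nxt:
            break                                  # leaf node: conversation ends
        concept = recognize()                      # listen + understand the user
        node = nxt.get(concept, node)              # unknown concept: re-prompt
    return transcript
```

For example, with a two-branch conversation and a user who answers "casual", the loop plays the root prompt and then the casual branch's closing segment.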
[0026] In a natural human conversation, participants acknowledge their understanding of the meaning or idea being conveyed by another and express their attitude to the understood content, with verbal and facial expressions or other cues. In general, the participants are allowed to interrupt each other and start responding to the other side if they choose to do so. These traits of a natural conversation are emulated in various embodiments by a conversing virtual participant to maintain a suspension of disbelief on the part of the user.
[0027] Architectural components and approaches taken in various embodiments to conduct such a conversation in a manner that is convincing and compelling are disclosed. In various embodiments, one or more of the following may be included:
• Architecture: An exemplary architecture, including some of the primary components included in various embodiments, is disclosed.
• Hierarchical language understanding: Statistical methods in various embodiments exploit the specific context of a particular application to determine from examples the intent of the user and the likely direction of a conversation.
• Using context: Make responses more relevant and the conversation more efficient by using what is known about the user or previous conversations or interactions with the user.
• Active Listening: Techniques for simulating the natural cadence of conversation, including visual and aural listening cues and interruptions by either party.
• Video Transitions and Transformations: Methods for smoothing a virtual persona's transition between video segments and other video transformation techniques to simulate a natural conversation.
• Multiple Recognizers: Improving performance and optimizing cost by using multiple speech recognizers.
• Multiple response modes: Allowing the user to provide a response using speech, touch or other input modalities. The selection of the available input modes may be made dynamically by the system.
• Social Sharing: Recording of all or part of the conversation for sharing via social networks or other channels.
• Conversational Transitions: In some applications, data in the cloud or other aspects of the application context may require some time to retrieve or analyze. Techniques to make the conversation seem continuous through such transitions are disclosed.
• Integrating audio-only content: Audio-only content (as opposed to audio that is part of video) can augment video content with more flexibility and less storage demands. A method of seamlessly incorporating it within a video interaction is described.
[0028] Figure 1 is a block diagram illustrating an embodiment of a system to provide a conversational video experience. In the example shown, each of a plurality of clients, represented in Figure 1 by clients 102, 104, 106, has access, e.g., via a wireless or wired connection to a network 108, to one or more conversational video experience servers, represented in Figure 1 by server 110. Servers such as server 110 use conversational video experience assets stored in an associated data store, such as data store 112, to provide assets to respective ones of the clients, e.g., on request by a client side application or other code running on the respective clients, to enable a conversational video experience to be provided. Examples of client devices, such as clients 102, 104, and 106, include, without limitation, desktop, laptop, and/or other portable computers; iPad® and/or other tablet computers or devices; mobile phones and/or other mobile computing devices; and any other device capable of providing a video display and capturing user input provided by a user, e.g., a response spoken by the user. While a single server 110 is shown in Figure 1, in various embodiments a plurality of servers may be used, e.g., to make the same conversational video experience available from a plurality of sources, and/or to deploy different conversational video experiences from different servers. Examples of conversational video experience assets that may be stored in data stores such as data store 112 include, without limitation, video segments to be played in a prescribed order and/or manner to provide the video persona's side of a conversation and meta-information to be used to determine an order and/or timing in which such segments should be played.
[0029] A conversational video runtime system or runtime engine may be used in various embodiments to provide a conversational experience to a user in multiple different scenarios. For example:
• Standalone application - A conversation with a single virtual persona or multiple conversations with different virtual personae could be packaged as a standalone application (delivered, for example, on a mobile device or through a desktop browser). In such a scenario, the user may have obtained the application primarily for the purpose of conducting conversations with virtual personae.
• Embedded - One or more conversations with one or more virtual personae may be embedded within a separate application or web site with a broader purview. For example, an application or web site representing a clothing store could embed a conversational video with a spokesperson with the goal of helping a user make clothing selections.
• Production tool - The runtime engine may be contained within a tool used for production of conversational videos. The runtime engine could be used for testing the current state of the conversational video in production.
[0030] In various implementations of the above, the runtime engine is incorporated into and used by a container application. The container application may provide services and experiences to the user that complement or supplement those provided by the conversational video runtime engine, including discovery of new conversations; presentation of the conversation at the appropriate time in a broader user experience; presentation of related material alongside or in addition to the conversation; etc.
[0031] Figure 2 is a block diagram illustrating an embodiment of a system to provide a conversational video experience. In the example shown, a device 202, such as a tablet or other client device, includes a communication interface 204, which provides network connectivity. A conversational video experience runtime engine 206 communicates to the network via communication interface 204, for example to download video, meta-information, and/or other assets to be used to provide a conversational video experience. Downloaded assets (e.g., meta-information, video segments of a video persona) and/or locally-generated assets (e.g., audio comprising responses spoken by the user, video of the user interacting with the experience) are stored locally in a local asset store 208. Video segments and/or other content are displayed to the user via output devices 210, such as a video display device and/or speakers. Input devices 212 are used by the user to provide input to the runtime engine 206. Examples of input devices 212 include, without limitation, a microphone, which may be used, for example, to capture a response spoken by the user in response to a question or other prompt by the video persona, and a touch screen or other haptic device, which may be used to receive as input user selection from among a displayed set of responses, as described more fully below. A user-facing camera 214 in various embodiments provides (or optionally provides) video of the user interacting with the conversational video experience, for example to be used to evaluate and fine-tune the experience for the benefit of future users, to provide picture-in-picture (PIP) capability, and/or to capture the user's side of the conversation, e.g., to enable the user to save and/or share video representing both sides of the conversation.
[0032] Figure 3 is a block diagram illustrating an embodiment of a conversational video runtime engine. In various embodiments, a conversational video runtime engine may contain some or all of the components shown in Figure 3. In the example shown, the conversational video runtime engine 206 includes an asset management service 302. In various embodiments, asset management service 302 manages the retrieval and caching of all required assets (e.g., video segments, language models, etc.) and makes them available to other system components, such as and without limitation media playback service 304. Media playback service 304 in various embodiments plays video segments representing the persona's verbal and physical activity. The video segments in various embodiments are primarily pre-recorded, but in other embodiments may be synthesized on-the-fly. A user audio/video recording service 306 captures and records audio and video of the user during a conversational video experience, e.g., for later sharing and analysis.
[0033] In the example shown in Figure 3, a response concept service 308 determines which video segments to play at which time and in which order, based for example on one or more of user input (e.g., spoken and/or other response); user profile and/or other information; and/or meta-information associated with the conversational video experience.
[0034] An input recognition service 310 includes in various embodiments a speech recognition system (SR) and other input recognition such as speech prosody recognition, recognition of the user's facial expressions, recognition/extraction of location, time of day, and other environmental factors/features, as well as the user's touch gestures (utilizing the provided graphical user interface). The input recognition service 310 in various embodiments accesses user profile information retrieved, captured, and/or generated by the personal profiling service 314, e.g., to utilize personal characteristics of the user in order to adapt the results to the user. For example, if the user's gender is known from their personal profiling data, in some embodiments video segments including any questions regarding the user's gender may be skipped. Another example is modulating foul language based on user preference:
If two versions of a conversation exist, one that makes use of swear words and one that does not, in some embodiments user profile data may be used to choose which version is played, based on the user's history of swearing (or not) in the user's own statements during the same or previous conversations, making the conversation more enjoyable, or at least better suited to the user's comfort with such language. As a third example, the speech recognizer as well as the natural language processor can be made more effective by tuning based on end-user behavior. Current state-of-the-art speech recognizers allow a per-user profile to be built to improve overall speech recognition accuracy on a per-user basis. The output of the input recognition service 310 in various embodiments may include a collection of one or more feature values, including without limitation speech recognition values (hypotheses, such as a ranked and/or scored set of "n-best" hypotheses as to which words were spoken), speech prosody values, facial feature values, etc.
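As an illustrative sketch of this kind of profile-based adaptation, the following chooses between two pre-recorded versions of a conversation; the function name, profile field names, and the 10% threshold are assumptions for illustration, not values from the disclosure:

```python
def choose_conversation_version(profile: dict) -> str:
    """Pick the explicit or clean variant of a conversation based on the
    user's own observed language in this and previous conversations.
    Field names ("swear_count", "turn_count") and the threshold are
    illustrative assumptions."""
    swears = profile.get("swear_count", 0)
    turns = max(profile.get("turn_count", 1), 1)
    # Users who swear in more than 10% of their turns get the explicit version.
    return "explicit" if swears / turns > 0.10 else "clean"
```

A brand-new user with no history would default to the clean version under this sketch.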
[0035] Personal profiling service 314 in various embodiments maintains personalized information about a user and retrieves and/or provides that information on demand by other components, such as the response concept service 308 and the input recognition service 310. In various embodiments, user profile information is retrieved, provided, and/or updated at the start of the conversation, as well as prior to each turn of the conversation. In various embodiments, the personal profiling service 314 updates the user's profile information at the end of each turn of the conversation using new information extracted from the user response and interpreted by the response concept service 308. For example, if a response is mapped to a concept that indicates the marital status of the user, the profile data may be updated to reflect what the system has understood the user's marital status to be. In some embodiments, a confirmation or other prompt may be provided to the user, to confirm information prior to updating their profile. In some embodiments, a user may clear from their profile information that has been added to their profile based on their responses in the course of a conversational video experience, e.g., due to privacy concerns and/or to avoid incorrect assumptions in situations in which multiple different users use a shared device.
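The per-turn profile update and the privacy-motivated clearing described above might be sketched as follows; the concept names, profile field names, and confirmation flag are assumptions for illustration:

```python
# Illustrative mapping from response concepts to profile fields; the
# concept names and field names are assumptions for this sketch.
CONCEPT_TO_PROFILE = {
    "MARRIED": ("marital_status", "married"),
    "SINGLE": ("marital_status", "single"),
}

def update_profile(profile: dict, response_concept: str,
                   confirmed: bool = True) -> dict:
    """Update profile data at the end of a turn, optionally only after the
    user has confirmed the inferred value."""
    if confirmed and response_concept in CONCEPT_TO_PROFILE:
        key, value = CONCEPT_TO_PROFILE[response_concept]
        profile[key] = value
    return profile

def clear_inferred(profile: dict, keys: list) -> dict:
    """Let the user clear inferred fields, e.g., for privacy concerns or
    on a shared device."""
    for key in keys:
        profile.pop(key, None)
    return profile
```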
[0036] In various embodiments, response concept service 308 interprets output of the input recognition service 310 augmented with the information retrieved by the personal profiling service 314. Response concept service 308 performs interpretation in the domain of natural language (NL), speech prosody and stress, environmental data, etc. Response concept service 308 utilizes one or more response understanding models 312 to map the input feature values into a "response concept" determined to be the concept the user intended to communicate via the words they uttered and other input (facial expression, etc.) they provided in response to a question or other prompt (e.g. "Yup", "Yeah", "Sure" or nodding may all map to an "Affirmative" response concept). The response concept service 308 uses the response concept to determine the next video segment to play. For example, the determined response concept in some embodiments may map deterministically or
stochastically to a next video segment to play. The output of the response concept service 308 in various embodiments includes an identifier indicating which video segment to play next and when to switch to the next segment.
[0037] Sharing/social networking service 316 enables a user to post aspects of conversations, for example video recordings or unique responses, to sharing services such as social networking applications.
[0038] Metrics and logging service 318 records and maintains detailed and
summarized data about conversations, including specific responses, conversation paths taken, errors, etc. for reporting and analysis.
[0039] The services shown in Figure 3 and described above may in various embodiments reside in part or in their entirety either on the client device of the human participant (e.g. a mobile device, a personal computer) or on a cloud-based server. In addition, any service or asset required for a conversation may be implemented as a split resource, where the decision about how much of the service or asset resides on the client and how much on the server is made dynamically based on resource availability on the client (e.g. processing power, memory, storage, etc.) and across the network (e.g. bandwidth, latency, etc.). This decision may be based on factors such as conversational-speed response and cost. In some embodiments, for example, input recognition service 310 may invoke a cloud-based speech recognition service, such as those provided by Google® and others, to obtain a set of hypotheses of which word(s) the user has spoken in response to a question or other prompt by a video persona.
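The dynamic client/server placement decision described in [0039] might be sketched as below; all thresholds and resource measures are illustrative assumptions, not values from the disclosure:

```python
def place_recognition_service(cpu_cores: int, free_mem_mb: int,
                              latency_ms: float, bandwidth_mbps: float) -> str:
    """Decide where a service such as speech recognition should run, based
    on client resources and network conditions. All thresholds here are
    illustrative assumptions."""
    network_ok = latency_ms < 300 and bandwidth_mbps >= 1.0
    client_ok = cpu_cores >= 2 and free_mem_mb >= 512
    if network_ok and client_ok:
        return "split"          # divide the work between client and server
    if network_ok:
        return "server"         # weak client, usable network
    if client_ok:
        return "client"         # capable client, poor network
    return "client-degraded"    # fall back to reduced local models
```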
[0040] Figure 4A is a block diagram illustrating an embodiment of a conversational video experience display and interface. In the example shown, a display 400 includes a control panel region 402 and a conversational video experience display region 404. In some embodiments, control panel region 402 is displayed and/or active only at certain times and/or conditions, e.g., once a video segment has finished playing, upon mouse-over or other preselection, etc. In the example shown, control panel region 402 includes three user selectable controls: at left a "rewind" control to iterate back through previously-played segments and/or responses; in the center a "play" control to indicate a readiness and desire to have a current/immediately next segment played; and at right a "forward" control, e.g., to cause a list of user-selectable responses or other user-selectable options available to advance the conversational video experience to be displayed. In the video experience display region 404, a video segment of a video persona engaged in her current "turn" of the conversation is displayed. In the example shown, a title/representative frame, or a current/next frame, of the current/next video segment is displayed, and selection of the "play" control would cause the video segment to begin (and/or resume) to play.
[0042] Figure 4B is a block diagram illustrating an embodiment of a conversational video experience display and interface. In the example shown in Figure 4B, the "play" control in the center of control panel region 402 has been replaced by a "pause" control, indicating in some embodiments that the current video segment of the video persona is currently playing. In addition, a picture-in-picture (PIP) frame 406 has been added to conversational video experience display region 404. In this example, video of the user of a client device comprising display 400 is displayed in PIP frame 406. A front-facing camera of the client device may be used in various embodiments to capture and display in PIP frame 406 video of a user of the client device while he/she engages with the conversational video experience. Display of user video may enable the user to ensure that the lighting and other conditions are suitable to capture user video of desired quality, for example to be saved and/or shared with others. In some embodiments, upon expiration of a prescribed time after the video persona finishes expressing a question or other prompt, a speech balloon or bubble may be displayed adjacent to the PIP frame 406, e.g., with a partially greyed out text prompt informing the user that it is the user's turn to speak.
[0042] The system in various embodiments provides dynamic hints to a user of which input modalities are made available to them at the start of a conversation, as well as in the course of it. The input modalities can include speech, touch or click gestures, or even facial gestures/head movements. The system decides in various embodiments which one should be hinted to the user, and how strong a hint should be. The selection of the hints may be based on environmental factors (e.g. ambient noise), quality of the user experience (e.g. recognition failure/retry rate), resource availability (e.g., network connectivity) and user preference. The user may disregard the hints and continue using a preferred modality. The system keeps track of user preferences for the input modalities and adapts hinting strategy accordingly.
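A minimal sketch of the hint selection described above, under assumed thresholds for ambient noise and recognition retry rate:

```python
def select_hint(noise_db: float, retry_rate: float, network_ok: bool,
                preferred: str = "speech") -> tuple:
    """Return a (modality, strength) hint for the user. The noise and
    retry-rate thresholds are illustrative assumptions."""
    if not network_ok or noise_db > 70 or retry_rate > 0.3:
        return ("touch", "strong")   # speech degraded: push touch firmly
    if preferred == "touch":
        return ("touch", "weak")     # respect the user's tracked preference
    return ("speech", "weak")
```

The user may disregard the returned hint; the runtime would simply keep both modalities enabled, as the text describes.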
[0043] The system can use VUI, touch-/click-based GUI and camera-based face image tracking to capture user input. The GUI is also used to display hints of what modality is preferred by the system. For speech input, the system displays a "listening for speech" indicator every time the speech input modality becomes available. If speech input becomes degraded (e.g. due to a low signal to noise ratio, loss of an access to a remote SR engine) or the user experiences a high recognition failure rate, the user will be hinted at / reminded of the touch based input modality as an alternative to speech.
[0044] The system hints (indicates) to the user that the touch based input is preferred at this point in the interactions by showing an appropriate touch-enabled on-screen indicator. The strength of a hint is expressed as the brightness and / or the frequency of pulsation of the indicator image. The user may ignore the hint and continue using the speech input modality. Once the user touches that indicator, or if the speech input failure persists, the GUI touch interface becomes enabled and visible to the user. The speech input modality remains enabled concurrently with the touch input modality. The user can dismiss the touch interface if they prefer. Conversely, the user can bring up the touch interface at any point in the conversation (by tapping an image or clicking a button). The user input preferences are updated as part of the user profile by the PP system.
[0045] For touch input, the system maintains a list of pre-defined responses the user can select from. The list items are response concepts, e.g., "YES", "NO", "MAYBE" (in a text or graphical form). These response concepts are linked one-to-one with the subsequent prompts for the next turn of the conversation. (The response concepts match the prompt affirmations of the linked prompts.) In addition, each response concept is expanded into a (limited) list of written natural responses matching that response concept. As an example, for a prompt "Do you have a girlfriend?" a response concept "NO GIRLFRIEND" may be expanded into a list of natural responses "I don't have a girlfriend", "I don't need a girlfriend in my life", "I am not dating anyone", etc. A response concept "MARRIED" may be expanded into a list of natural responses "I'm married", "I am a married man", "Yes, and I am married to her", etc.
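The concept-to-natural-response expansion can be represented as a simple mapping; the data below reuses the examples from this paragraph, and the structure is a sketch rather than the disclosed implementation:

```python
# Illustrative data for the "Do you have a girlfriend?" prompt example.
RESPONSE_CONCEPTS = {
    "NO GIRLFRIEND": [
        "I don't have a girlfriend",
        "I don't need a girlfriend in my life",
        "I am not dating anyone",
    ],
    "MARRIED": [
        "I'm married",
        "I am a married man",
        "Yes, and I am married to her",
    ],
}

def expand_concept(concept: str) -> list:
    """Expand a response concept into its list of written natural
    responses, e.g., for display after a double-tap gesture."""
    return RESPONSE_CONCEPTS.get(concept, [])
```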
[0046] Figure 4C is a block diagram illustrating an embodiment of a conversational video experience display and interface. In the example shown, a list of response concepts is presented via a touch-enabled GUI popup window 408. The user can apply a touch gesture (e.g., tap) to a response concept item on the list, and in response the system will start playing a corresponding video segment of the video persona. In some embodiments, the user can apply another touch gesture (e.g., double-tap) to the response concept item to make the item expand into a list of natural responses (in text format). In some embodiments, this new list will replace the response concept list in the popup window. In another implementation, the list of natural responses is shown in another popup window. The user can use touch gestures (e.g., slide, pinch) to change the position and /or the size of the popup window(s). The user can apply a touch gesture (e.g., tap) to a natural response item to start playing the
corresponding video prompt. To go back to the list of response concepts or dismiss a popup window, the user can use other touch gestures (or click on a GUI button).
[0047] Figure 5 is a block diagram illustrating an embodiment of a conversational video experience. In the example shown, a conversational video experience is represented as a tree 500. An initial node 502 represents an opening video segment, such as a welcoming statement and initial prompt by a video persona. In the example shown, the conversation may after the initial segment 502 follow one of two next paths. The system listens to the human user's spoken (or other) response and maps the response to either a first response concept 504 associated with a next segment 506 or to a second response concept 508 associated with a next segment 510. Depending on which response concept the user's input is mapped to, in this example either response concept 504 or response concept 508, a different next video segment (i.e., segment 506 or segment 510) will be played next. In various embodiments, a conversational experience may be represented as a directed graph, and one or more possible paths through the graph may converge at some node in the graph, for example, if respective response concepts from each of two different nodes map/link to a common next segment/node in the graph.
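A tree of this shape can be sketched as a small graph structure; the segment names and concept labels below are illustrative, not taken from the figure:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node of the conversation tree/graph: a video segment plus
    transitions keyed by response concept. Names are illustrative."""
    segment_id: str
    transitions: dict = field(default_factory=dict)  # concept -> next Node

# A two-branch tree in the shape of Figure 5.
seg_506 = Node("glad_to_hear")
seg_510 = Node("sorry_to_hear")
opening = Node("welcome", {"FEELING_GOOD": seg_506, "FEELING_BAD": seg_510})

def next_node(node: Node, concept: str) -> Node:
    """Follow the edge labeled by the user's mapped response concept."""
    return node.transitions.get(concept)
```

A directed graph with converging paths would simply let two different nodes hold transitions pointing at the same `Node` instance.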
[0048] In various embodiments, a primary function within the runtime engine is a decision-making process to drive conversation. This process is based on recognizing and interpreting signals from the user and selecting an appropriate video segment to play in response. The challenge faced by the system is guiding the user through a conversation while keeping within the domain of the response understanding model(s) and video segments available.
[0049] For example, in some embodiments, the system may play an initial video segment representing a question posed by the virtual persona. The system may then record the user listening / responding to the question. A user response is captured, for example by the input recognition service, which produces recognition results and passes them to a response concept service. The response concept service uses one or more response understanding models to interpret the recognition results, augmented in various embodiments with user profile information. The result of this process is a "response concept." For example, recognized spoken responses like "Sure", "Yes" or "Yup" may all result in a response concept of "AFFIRMATIVE".
[0050] The response concept is used to select the next video segment to play. In the example shown in Figure 5, each response concept is deterministically associated with a single video segment. In some embodiments, each node in the tree, such as tree 500 of Figure 5, has associated therewith a node-specific response understanding model that may be used to map words determined to have been uttered and/or other input provided in response to that segment to one or more response concepts, which in turn may be used to select a next video segment to be presented to the user.
[0051] The video segment and the timing of the start of a response are passed in various embodiments to a media playback service, which initiates video playback of the response by the virtual persona at an indicated and/or otherwise determined time.
[0052] In various embodiments, the video conversational experience includes a sequence of conversation turns such as those described above in connection with Figure 5. In one embodiment of this type of conversation, all possible conversation turns are represented in the form of a pre-defined decision tree/graph, as in the example shown in Figure 5, where each node in the tree/graph represents a video segment to play, a response understanding model to map recognized and interpreted user responses to a set of response concepts, and the next node for each response concept.
[0053] Figure 6 is a block diagram illustrating an embodiment of a conversational video experience. In Figure 6, a less hierarchical representation of a conversation is shown. In the example shown, from an initial node/segment 602, a user response that is mapped to a first response concept 603 results in a transition to node/segment 604, whereas a second response concept 605 would cause the conversation to leap ahead to node/segment 606, bypassing node 604. Likewise, from node 606 the conversation may transition via one response concept to node 608, but via a different response concept directly to node 610. In the example shown, a transition "back" from node 606 to node 604 is possible, e.g., in the case of a user response that is mapped to response concept 612.
[0054] In some embodiments, to enable a more natural and dynamic conversation, each conversational turn does not have to be pre-defined. To make this possible, the system in various embodiments has access to one or more of:
• A corpus of video segments representing a large set of possible prompts and responses by the virtual persona in the subject domain of the conversation.
• A domain-wide response understanding model in the subject domain of the conversation. In various embodiments, the domain-wide response understanding model is conditioned at each conversational turn based on prompts and responses adjacent to that point in the conversation. The response understanding model is used, as described above, to interpret user responses (deriving one or more response concepts based on user input). It is also used to select the best video segment for the next dialog turn, based on highest probability interpreted meaning.
[0055] An example process flow in such a scenario includes the following steps:
• At the start of the conversation, a pre-selected opening prompt is played.
• After playing the selected prompt, the user response is captured, speech recognition is performed, and the result is used to determine a response concept. The response understanding model may be updated (conditioned) based on the user response.
• The conditioned response understanding model is used to select the best possible available video segment as the prompt to play next, representing the virtual persona's response to the user response described immediately above. To make that selection, each available prompt is passed to the conditioned response understanding model, which generates a list of possible interpretations of that prompt, each with a probability of expressing the meaning of the prompt. The highest-probability interpretation defines the best meaning for the underlying prompt and serves as its best-meaning score. In principle, an attempt may be made to interpret every prompt recorded for a given video persona in the domain of the conversation, and the prompt yielding the highest best-meaning score may be selected. This selection of the next prompt represents the start of the next conversational turn, which begins by playing a video segment representing the selected prompt.
• For each conversational turn, the response understanding model can be reset to the domain-wide response understanding model and the steps described above repeated. This process continues until the user ends the conversation, the system selects a video segment that is tagged as a conversation termination point, or the currently conditioned response understanding model determines that the conversation has ended.
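The prompt-selection step above can be sketched as a scoring loop; `conditioned_model` is an assumed callable standing in for the conditioned response understanding model, returning (interpretation, probability) pairs for a candidate prompt:

```python
def select_next_prompt(prompts, conditioned_model):
    """Score every candidate prompt with the conditioned response
    understanding model and return the one with the highest
    best-meaning score."""
    def best_meaning_score(prompt):
        # The highest-probability interpretation defines the prompt's
        # best meaning and serves as its score.
        return max(prob for _interp, prob in conditioned_model(prompt))
    return max(prompts, key=best_meaning_score)
```

In principle this loop would run over every prompt recorded for the persona in the conversation domain, which is why the corpus and model described in [0054] bound what the system can express.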
[0056] The above embodiments exemplify different methods through which the runtime system can guide the conversation within the constraints of a finite and limited set of available understanding models and video segments.
[0057] A further embodiment of the runtime system utilizes speech and video synthesis techniques to remove the constraint of responding using a limited set of prerecorded video segments. In this embodiment, a response understanding model can generate the best possible next prompt by the virtual persona within the entire conversation domain. The next step of the conversation will be rendered or presented to the user by the runtime system based on dynamic speech and video synthesis of the virtual persona delivering the prompt.
Active Listening
[0058] To maintain a user experience of a natural conversation, in various embodiments the video persona maintains its virtual presence and responsiveness, and provides feedback to the user, through the course of a conversation, including when the user is speaking. To accomplish that, in various embodiments appropriate video segments are played when the user is speaking and responding, giving the illusion that the persona is listening to the user's utterance.
[0059] Figure 7 is a block diagram illustrating an embodiment of a conversational video experience segment. In the example shown, a video segment 702 includes three distinct portions. In a first portion, labeled "affirmation" in Figure 7, the video persona provides feedback to indicate that the user's previous response has been heard and understood. For example, if the user has uttered a response that has been mapped to the response concept "feeling good", the video persona might, during the affirmation portion of the segment selected to play next, say something like, "That's great, I'm feeling pretty good today too." In the next portion, labeled "statement and/or prompt" in Figure 7, the video persona may communicate information, such as to inform the user, provide information the user may be understood to have expressed interest in, etc., and either explicitly (e.g., by asking a question) or implicitly or otherwise prompt the user to provide a response. While the video persona waits for the user to respond, an "active listening" portion of the video segment 702 is played. In the example shown, if an end of the active listening portion is reached before the user has completed providing a response, the system loops through the active listening portion again, and if necessary through successive iterations, until the user has completed providing a response.
[0060] In one embodiment, active listening is simulated by playing a video segment
(or portion thereof) that is non-specific. For example, the video segment could depict the virtual persona leaning towards the user, nodding, smiling or making a verbal
acknowledgement ("OK"), irrespective of the user response. Of course, this approach risks the possibility that the virtual persona's reaction is not appropriate for the user response.
[0061] In another embodiment of the process, the system selects an appropriate active listening video segment based on the best current understanding of the user's response, as discussed more fully below in connection with Figure 14.
[0062] The system can allow a real user to interrupt a virtual persona, and will simulate an "ad hoc" transition to an active listening mode shortly after detection of such interruption after selecting an appropriate "post-interrupted" active listening video segment (done within the response concept service system).
[0063] Figure 8A is a flow chart illustrating an embodiment of a process to provide a conversational video experience. In the example shown, affirmation and statement/prompt portions of a segment are played (802). An active listening portion of the segment is looped, until end-of-speech by the user is detected (804). Once end of user speech is detected (806), the user's response is determined (e.g., speech recognition) and mapped to a response concept, enabling transition to a next segment (808), e.g., one associated with the response concept to which the user's spoken response has been mapped.
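The turn structure of Figure 8A can be sketched as a simple loop; the segment field names and the `end_of_speech` callable are assumptions for illustration:

```python
def play_turn(segment: dict, end_of_speech) -> list:
    """Simulate one turn per Figure 8A: play the affirmation and
    statement/prompt portions, then loop the active-listening portion
    until end-of-speech is detected. Returns the playback log.
    `end_of_speech` is a callable returning True once the user stops."""
    played = [segment["affirmation"], segment["prompt"]]
    while not end_of_speech():
        played.append(segment["active_listening"])  # loop as needed
    return played

# Example: the user keeps speaking through two active-listening loops.
checks = iter([False, False, True])
log = play_turn(
    {"affirmation": "aff", "prompt": "ask", "active_listening": "listen"},
    lambda: next(checks),
)
# log == ["aff", "ask", "listen", "listen"]
```

Supporting the barge-in behavior of Figure 8B would additionally require the playback of the first two portions to be interruptible, cutting straight to the active-listening loop.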
[0064] In some embodiments, the system can allow a real user to interrupt a virtual persona, and will simulate an "ad hoc" transition to an active listening mode shortly after detection of such interruption after selecting an appropriate "post-interrupted" active listening video segment.
[0065] Figure 8B is a flow chart illustrating an embodiment of a process to provide a conversational video experience. In the example shown, affirmation and statement/prompt portions of a segment are played (822). If the affirmation and statement/prompt portions play to the end of those portions (824), an active listening portion of the segment is looped (826), until end-of-speech by the user is detected (830). If, instead, the user begins to speak during playback of the affirmation and statement/prompt portions of the segment (828), then playback of the affirmation and statement/prompt portions of the segment ends without completing playback of those portions, and an immediate transition to looping the active listening portion of the segment is made (826). Once the end of the user's speech is reached (830), the system transitions to the next segment (832).
Receiving and Interpreting User Responses
[0066] Figure 9 is a flow chart illustrating an embodiment of a process to receive and interpret user responses. In the example shown, audio data is received (902). For example, in some embodiments, when the beginning of user speech is detected and/or once the video playback has entered into active listening mode, the client device's microphone is activated, and audio captured by the microphone is streamed to a cloud-based or other speech recognition service (904). Natural language processing is performed on results of the speech recognition processing, e.g., an n-best or other set of hypotheses, to map the results to one or more "response concepts" (906).
[0067] Figure 10 is a block diagram illustrating an embodiment of elements of a conversational video experience system. In the example shown, audio data comprising user speech 1002 is provided to a speech recognition local and/or remote module and/or service to obtain a speech recognition output 1004 comprising a set of n-best hypotheses determined by the speech recognition service as the most likely words that were uttered. In the example shown each member of the set has an associated score, but in other embodiments a confidence or other score for the entire set is provided, with members of the set being presented in ranked order. The speech recognition output 1004 is provided as input to a natural language processing module and/or service, which in the example shown attempts to match the speech recognition output 1004 to a response understanding model 1006 to determine a response concept 1008 that the user is determined to have intended and/or in some embodiments which is the response concept most likely to be associated with the user's spoken and/or other responsive input. In the example shown, the spoken utterance has been determined by speech recognition processing to most likely have been the word "yes", which in turn has been mapped to the response concept "affirmative". While words represented as text are included in the response understanding model 1006 as shown in Figure 10, in various embodiments other input, such as selection by touch or otherwise of a displayed response option, nodding of the head, etc., may also and/or instead be included.
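The n-best-to-concept mapping can be sketched as follows, reusing the "yes" → "AFFIRMATIVE" example of Figure 10; the model fragment and the first-match strategy are illustrative assumptions:

```python
# Illustrative response understanding model fragment.
UNDERSTANDING_MODEL = {
    "yes": "AFFIRMATIVE", "yup": "AFFIRMATIVE", "sure": "AFFIRMATIVE",
    "no": "NEGATIVE", "nope": "NEGATIVE",
}

def map_to_concept(n_best, model=UNDERSTANDING_MODEL, default="UNKNOWN"):
    """Walk the n-best (hypothesis, score) pairs in descending score
    order and return the first hypothesis the understanding model can
    map to a response concept."""
    for hypothesis, _score in sorted(n_best, key=lambda h: h[1], reverse=True):
        concept = model.get(hypothesis.lower())
        if concept is not None:
            return concept
    return default
```

A fuller implementation would also fold in non-speech features (facial expression, touch selection) as the paragraph notes; this sketch covers only the speech path.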
[0068] Figure 11 is a block diagram illustrating an embodiment of a conversational video experience next segment decision engine. In the example shown, one or more of user responsive input that has been mapped to an associated response concept 1102, conversation context data 1104 (e.g., where the most recently played video segment is located within a set of nodes comprising the conversational video experience; what the user has said or otherwise provided as input in the course of the experience; etc.), and user profile data 1106 are provided to a decision engine 1108 configured to identify and provide as output a next segment 1110 based at least in part on the inputs 1102, 1104, and/or 1106.
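A decision engine of the kind shown in Figure 11 might be sketched as an ordered rule table over the three inputs. This is a hedged sketch: the rule contents, segment ids, and field names are assumptions for illustration, not the actual engine.

```python
def choose_next_segment(response_concept, context, profile, rules):
    """Return the next segment id (1110) from the inputs (1102, 1104, 1106).

    rules: list of (predicate, segment_id) pairs; the first matching rule wins.
    """
    for predicate, segment_id in rules:
        if predicate(response_concept, context, profile):
            return segment_id
    return context.get("default_segment")

rules = [
    # Skip the marriage question if marital status is already in the profile.
    (lambda c, ctx, p: "married" in p, "segment_after_marriage_question"),
    (lambda c, ctx, p: c == "affirmative", "segment_affirm_followup"),
]
```

The rule table makes conversation context and user profile data first-class inputs, so a turn can be skipped or redirected without changing the recognition layer.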
[0069] In various embodiments, the input (e.g., speech) recognition service accesses, and the response concept service integrates, all relevant information sources to support decision-making necessary for selection of a meaningful, informed and entertaining response as a video segment (e.g., from a collection of pre-recorded video segments representing the virtual persona asking questions and/or affirming responses by the user). By using context beyond a single utterance of the user, in various embodiments the system can be more responsive, more accurate in its assessment of the user's intent, and can more accurately anticipate future requests.
[0070] Examples of information gathered through various sources may include, without limitation, one or more of the following:
• A priori knowledge of the user based on his or her identity. Examples of such knowledge include information about the user's interests, contacts and recent posts from the user's social network; gender or demographic information from a customer information database; and name and address information from a prior registration process.
• Information gathered by the runtime system based on prior experience within the system, even across conversations with different virtual personae. Examples of such information could include responses to prior questions such as "Are you married?", "What kind of pet do you have?", etc., or others that suggest interest and intent.
• Information provided by the container application of the runtime engine. This can provide the context of application domain and activity prior to entering a conversation.
• Extrinsic inputs collected by sensors available to the system. This could include time-of-day, current location, or even current orientation of the client device.
• Facial recognition and expressions collected by capturing and interpreting video or still images of the user through a camera on the client device.
[0071] In various embodiments, the above information may be used in isolation or in combination to provide a better conversational experience, for example, by:
• Providing a greater number of input features to the input recognition and/or response concept services to carry out more accurate recognition, understanding and decision-making. For example, recognition that the user is nodding would help the system interpret the utterance "uh-huh" as an affirmative response. Or the statement "I went boarding yesterday" could be disambiguated based on a recent post on a social network made by the same user describing a skateboarding activity.
• Allowing the runtime engine to skip entire sections of the conversation that were originally designed to collect information that is already known. For example, if a conversation turn was originally built to determine whether the user is married, this turn could be skipped if the marital status of the user was determined in a previous conversation or from an existing user profile database.
• Allowing the runtime engine to select video segments solely based on extrinsic inputs. For example, the virtual persona may start a conversation with "Good morning!" or "Good evening!" based on the time that the conversation is started by the user.
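The last point can be illustrated with a minimal example of selecting an opening segment solely from an extrinsic input, here the device clock. The segment ids and hour boundaries are illustrative assumptions.

```python
from datetime import datetime

def greeting_segment(now=None):
    """Pick an opening video segment based only on time of day."""
    hour = (now or datetime.now()).hour
    if hour < 12:
        return "segment_good_morning"
    if hour < 18:
        return "segment_good_afternoon"
    return "segment_good_evening"
```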
[0072] Figure 12 is a flow chart illustrating an embodiment of a process to provide and update a response understanding model. In the example shown, an initial understanding model is built and deployed (1202). For example, in some embodiments, a designer of the conversational video experience may decide in the course of initial design which concepts may be expressed by a user in response to a question or other prompt by the video persona. For each concept, the designer may attempt to anticipate words and phrases a user might utter in response to a question or other prompt delivered by the video persona via a video segment associated with the node, and, for each such word or phrase, specify the concept to which it is to be mapped. Once the understanding model is deployed (1202), user interactions with the system are monitored and analyzed to determine whether any updates to the model are required and/or would be beneficial to the system (1204). For example, in various embodiments the interaction of test users and/or real users with the conversational video experience may be observed, and words or phrases used by such users but not yet mapped to a concept may be added and mapped to corresponding concepts determined to have been intended by such users. Additional words or phrases may be mapped to existing concepts conceived of by the designer, or in some embodiments, new concepts may be determined and added, and associated paths through the conversational video experience defined, in response to such observation of user interactions with the experience.
In various embodiments, model refinements may be identified, initiated, and/or submitted by a participating user; by one or more users acting collectively; by one or more remote workers/users assigned tasks structured to potentially yield model refinements (e.g., crowdsourcing); and/or by a designer or other human operator associated with production, maintenance, and/or improvement of the conversational video experience. If an update to the model is determined to be required (1206), the model is updated and the updated model is deployed (1208). The process of Figure 12 executes continuously unless terminated (1210).
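The monitor-and-update loop of Figure 12 (1204-1208) can be sketched as follows. The log format, the review step, and the frequency threshold are all assumptions: phrases that repeatedly fail to map to any concept are routed to a (human or crowdsourced) reviewer, and confirmed mappings are added to the model.

```python
from collections import Counter

def refine_model(model, interaction_log, review_phrase, threshold=5):
    """interaction_log: dicts with 'text' and 'concept' (None if unmapped).
    review_phrase: callable returning the intended concept for a phrase, or None.
    Returns True if the model changed and should be redeployed (1208)."""
    unmapped = Counter(
        u["text"] for u in interaction_log if u["concept"] is None
    )
    updated = False
    for phrase, count in unmapped.items():
        if count >= threshold:
            concept = review_phrase(phrase)  # hypothetical review step
            if concept:
                model.setdefault(concept, set()).add(phrase)
                updated = True
    return updated
```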
Conversational Transitions
[0073] In various embodiments, mechanisms are provided to handle expected and longer-than-expected transitions, e.g., to handle delays in deciding what the virtual persona should say next without destroying the conversational feel of the application. Such delays can come from a number of sources, such as the need to retrieve assets from the cloud, the computational time taken for analysis, and other sources. Depending on the circumstances, a delay may be detected immediately prior to requiring the asset or analysis result that is its source, or well in advance of requiring the asset (for example, if assets were being progressively downloaded in anticipation of their use and there were a network disruption).
[0074] One approach is to use transitional conversational segments, transitional in that they delay the need for the asset being retrieved or the result which is the subject of the analysis causing the delay. These transitions can be of several types:

[0075] Neutral: Simple delays in meaningful content that would apply in any context, e.g., "Let me see...," "That's a good question...," "That's one I'd like to think about...Hold on, I'm thinking," or something intended to be humorous. The length of the segment could be in part determined by the expected delay.
[0076] Application contextual: The transition can be particular to the application context, e.g., "Making financial decisions isn't easy. There are a lot of things to consider."
[0077] Conversation contextual: The transition could be particular to the specific point in the conversation, e.g., "The show got excellent reviews," prior to indicating ticket availability for a particular event.
[0078] Additional information: The transition could take advantage of the delay to provide potentially valuable additional content in context, without seeming disruptive to the conversation flow, e.g., "The stock market closed lower today," prior to a specific stock quote.
[0079] Directional: The system could direct the conversation down a specific path where video assets are available without delay. This decision to go down this conversation path would not have been taken but for the aforementioned detection of delay.
[0080] Figure 13 is a flow chart illustrating an embodiment of a process to integrate a transition video into a conversation video experience. In the example shown, a transition may be generated and inserted dynamically, e.g., in response to determining programmatically that a delay will prevent timely display of a next segment and/or in embodiments in which a next segment may be obtained and/or synthesized dynamically, in real time, based for example on the user's response to the previously-played segment. In some embodiments, the process of Figure 13 may be used at production time to create transition clips or portions thereof. In the example shown, a next segment to which a transition from a previous/current segment is required is received (1302). A transition is generated and/or obtained (1304) and inserted into the video stream to be rendered to the user (1306).
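The Figure 13 flow can be sketched under the assumption that playback is driven by a queue of segment ids and that the download delay for the next segment can be estimated; transition clips carry their durations in seconds. All names here are illustrative.

```python
def insert_transition(queue, next_segment, estimated_delay, transitions):
    """Pick the shortest transition clip that covers the expected delay
    (1304) and enqueue it ahead of the next segment (1306)."""
    if estimated_delay > 0:
        covering = [t for t in transitions if t["duration"] >= estimated_delay]
        clip = (min(covering, key=lambda t: t["duration"]) if covering
                else max(transitions, key=lambda t: t["duration"]))
        queue.append(clip["id"])
    queue.append(next_segment)
    return queue
```

A fuller implementation might also choose among neutral, application-contextual, and conversation-contextual clips as described above, rather than by duration alone.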
[0081] In various embodiments, one or more techniques may be used to enable a smooth transition of a virtual persona's face/head image between video segments for an uninterrupted user experience. Ideally, the video persona moves smoothly. In various embodiments, it is a "talking head." There is no problem if a whole segment of the video persona speaking is recorded continuously, but there may be transitions between segments where that continuity is not guaranteed. Thus, there is a general need for an approach to smoothly blending two segments of video, with a talking head as the implementation we will use to explain the issue and its solution.
[0082] One approach is to record segments where the end of the segment ends in a pose that is the same as the pose at the beginning of a segment that might be appended to the first segment. (Each segment might be recorded multiple times with the "pose" varied to avoid the transition being overly "staged.") When the videos are combined as part of creating a single video for a particular segment of the interaction (as opposed to being concatenated in real time), standard video processing techniques can be used to make the transition appear seamless, even though there are some differences in the ending frame of one segment and the beginning of the next.
[0083] Depending on the processor of the device on which the video is appearing, those same techniques (or variations thereof) could be used to smooth the transition when the next segment is dynamically determined. However, methodology that makes the transition smoothing computationally efficient is desirable to minimize the burden on the processor. One approach is the use of "dynamic programming" techniques normally employed in applications such as finding the shortest route between two points on a map, as in navigation systems, combined with facial recognition technology. The process proceeds roughly as follows:
• Using facial recognition technology, identify the key points on the face that we focus on when recognizing and watching faces, e.g., the corners of the mouth, the center of the eyes, the eyebrows, etc. Find those points on both the last frame of the preceding video segment and the first frame of the following video segment.
• Depending on how far apart corresponding points are in pixels between the two images, determine the number of video frames required to create a smooth transition.
• Use dynamic programming or similar techniques to find the path through the transition images to move each identified point on the face to the corresponding point so that the paths are as similar as possible (minimize distortion).
• Other points are moved consistently based on their relationship to the points specifically modeled. The result is a smooth transition that focuses on what the person watching will be focusing on, resulting in reduced processing requirements.
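The per-point path computation in the steps above can be sketched as follows. This is a minimal sketch using linear interpolation: given matched facial landmarks in the last frame of one segment and the first frame of the next, derive a frame count from the largest displacement, then interpolate each point. A production system would use dynamic programming or warping as described; `pixels_per_frame` is an assumed smoothness budget.

```python
import math

def transition_paths(points_a, points_b, pixels_per_frame=4.0):
    """points_a, points_b: dicts of landmark name -> (x, y) pixel coords.
    Returns a list of interpolated landmark dicts, one per transition frame."""
    max_dist = max(math.dist(points_a[k], points_b[k]) for k in points_a)
    n_frames = max(1, math.ceil(max_dist / pixels_per_frame))
    frames = []
    for i in range(1, n_frames + 1):
        t = i / n_frames
        frames.append({
            k: (points_a[k][0] + t * (points_b[k][0] - points_a[k][0]),
                points_a[k][1] + t * (points_b[k][1] - points_a[k][1]))
            for k in points_a
        })
    return frames
```

Because only a handful of landmarks drive the transformation, the per-frame cost stays small, which is the point made in the following paragraph.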
[0084] In various embodiments techniques described herein are applied with respect to faces. Only a few points on the face need be used to make a transformation, yet it will generate a perceived smooth transition. Because the number of points used to create the transformation is few, the computation is small, similar to that required to compute several alternative traffic routes in a navigation system, which we know can be done on portable devices. Facial recognition has similarly been used on portable devices, and Microsoft's Kinect™ game controller recognizes and models the whole human body on a small device.
[0085] In various embodiments, transition treatment as described herein is applied in the context of conversational video. Conversational video requires many more transitions than some other areas in which video is used, e.g., for instructional purposes with little if any real variation in content. While some of these applications are characterized as "interactive," they are little more than allowing branching between complete videos, e.g., to explain a point in more detail if requested. In conversational video, a key component is much more flexibility to allow elements such as personalization and the use of context, which will be discussed later in this document. Thus, it is not feasible to create long videos incorporating all the variations possible and simply choose among them; it will be necessary to fuse shorter segments.
[0086] A further demand of interactive conversation with a video persona on portable devices in some embodiments is the limitation on storage. It would not be feasible to store all segments on the device, even if there were not the issue of updates reflecting change in content. In addition, since in some embodiments the system is configured to anticipate segments that will be needed and begin downloading them while one is playing, this encourages the use of shorter segments, further increasing the likelihood that concatenation of segments will be necessary.
[0087] Figure 14 is a flow chart illustrating an embodiment of a process to provide a dynamic active listening experience. In the example shown, user input in the form of a spoken response is received and processed (1402). If an end of speech is reached (1404), the system transitions to a next video segment (1406), e.g., based on the system's best understanding of the response concept with which the user's input is associated. If prior to the end of the user's speech the word or words that have been uttered thus far are determined to be sufficient to map the user's input to a response concept, at least provisionally (1408), the system transitions to a response concept-appropriate active listening loop (1410) and continues to receive and process the user's ongoing speech (1402). For example, a transition from a neutral active listening loop to one that expresses a sentiment and/or understanding such as would be appropriate and/or expected in a conversation between two live humans as a listener begins to understand the speaker's response may be made. Examples include without limitation active listening loops in which the video persona expresses heightened concern, keen interest, piqued curiosity, growing dismay, unease, confusion, satisfaction, agreement or other affirmation (e.g., by nodding while smiling), etc.
[0088] To be able to make a decision while the user is speaking, the recognition and understanding of an on-going partially completed response have to be performed and the results made available while the user is in the process of speaking (or providing other, nonverbal input). The response time of such processing should allow a timely selection of a video segment to simulate active listening with appropriate verbal and facial cues.
[0089] In various embodiments, the system selects, switches and plays the most appropriate video segment based on (a) an extracted meaning of the user statement so far into their utterance (and an extrapolated meaning of the whole utterance); and (b) an appropriate reaction to it by a would-be human listener. To make the timely selection and switch, it uses information streamed to it from the speech or other input recognition and response concept services. In some embodiments, an on-going partially spoken user response is processed, and the progressively expanding results are used to make a selection of a video segment to play during the active listening phase.
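The branch at (1408)-(1410) of Figure 14 can be sketched as a selection over provisional concepts derived from the partial transcript. The loop ids and concept labels are illustrative assumptions.

```python
# Hypothetical mapping from provisional response concept to listening loop.
LISTENING_LOOPS = {
    "affirmative": "loop_nod_and_smile",
    "negative": "loop_concern",
    None: "loop_neutral",
}

def listening_loop_for(partial_transcript, map_to_concept):
    """map_to_concept: callable returning a provisional concept or None
    from the words uttered so far (streamed partial recognition results)."""
    concept = map_to_concept(partial_transcript)
    return LISTENING_LOOPS.get(concept, LISTENING_LOOPS[None])
```

As more of the utterance is recognized, the same call can be repeated on the growing transcript, switching the persona from the neutral loop to a concept-appropriate one mid-utterance.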
[0090] The video segment selected and the time at which it is played can be used to support aspects of the cadence of a natural conversation. For example:
• The video could be an active listening segment possibly containing verbal and facial expressions played during the time that the user is making the utterance.
• The video could be an affirmation or question in response to the user's utterance, played immediately after the user has completed the utterance.
• The system can decide to start playing back the next video segment while the user is still speaking, thus interrupting or "barging in" on the user's utterance. If the user does not yield the turn and keeps speaking, this will be treated as a user barge-in.
Multiple Recognizers
[0091] To achieve the best speech recognition performance (minimum error rate, acceptable response time and resource utilization), in some embodiments more than a single speech recognition service and/or system may be required. Also, to reduce the cost incurred by interacting with a fee-based remote speech recognition service, it may be desirable to balance its use with a local speech recognition service (embedded in the user device). In various embodiments, at least one local speech recognition service and at least one remote speech recognition service (network based) are included. Several cooperative schemes can be used to enable their co-processing of speech input and delegation of the authority for the final decision / formulation of the results. These schemes are implemented in some embodiments using a speech recognition controller system which coordinates operations of local and remote speech recognition services.
[0092] The schemes may include:
[0093] (1) Chaining
[0094] A local speech recognition service can do a more efficient start/stop analysis, and the results can be used to reduce the amount of data sent to the remote speech recognition service.
[0095] (1.1) A local speech recognition service is authorized to track audio input and detect the start of a speech utterance. The detected events with an estimated confidence level are passed as hints to the speech recognition service controller, which makes a final decision to engage (to send a "start listening" command to) the remote speech recognition service and to start streaming the input audio to it (covering backdated audio content to capture the start of the utterance).
[0096] (1.2) In addition, a local speech recognition service is authorized to track audio input and detect the end of a speech utterance. The detected events with an estimated confidence level are passed as hints to the speech recognition service controller, which makes a final decision to send a "stop listening" command to the remote speech recognition service and to stop streaming the input audio to it (after sending some additional audio content to capture the end of the utterance, as may be required by the remote speech recognition service). Alternatively, the speech recognition service controller may decide to rely on a remote speech recognition service for the end-of-speech detection. Also, a "stop listening" decision can be based on higher-authority feedback from the response concept service system, which may decide that sufficient information has been accumulated for its decision-making.
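The chaining scheme (1.1/1.2) can be sketched as a controller that receives start/end-of-speech hints with confidence scores from the local recognizer and decides when to engage or release the remote service. The remote interface (`start_listening`/`stop_listening`) and the threshold are hypothetical stand-ins.

```python
class RecognitionController:
    """Coordinates a local recognizer's hints with a remote recognizer."""

    def __init__(self, remote, confidence_threshold=0.6):
        self.remote = remote
        self.threshold = confidence_threshold
        self.streaming = False

    def on_start_of_speech_hint(self, confidence):
        # Engage the remote service only on a sufficiently confident hint.
        if confidence >= self.threshold and not self.streaming:
            self.remote.start_listening()  # then stream backdated audio
            self.streaming = True

    def on_end_of_speech_hint(self, confidence):
        if confidence >= self.threshold and self.streaming:
            self.remote.stop_listening()  # after trailing padding is sent
            self.streaming = False
```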
[0097] (2) Local and remote speech recognition services in parallel / overlapping recognition
[0098] (2.1) Local speech recognition service for short utterances, both local and remote speech recognition services for longer utterances.
[0099] To optimize recognition accuracy and reduce the response time for short utterances, only a local speech recognition service can be used. This also reduces the usage of the remote speech recognition service and related usage fees.
[00100] The speech recognition service controller sets a maximum utterance duration which will limit the utterance processing to the local speech recognition service only. If the end of utterance is detected by the local speech recognition service before the maximum duration is exceeded, the local speech recognition service completes recognition of the utterance and the remote speech recognition service is not invoked. Otherwise, the speech audio will be streamed to the remote speech recognition service (starting with the audio buffered from the sufficiently padded start of speech).
[00101] Depending on the recognition confidence level for partial results streamed by the local speech recognition service, the speech recognition service controller can decide to start using the remote speech recognition service. If the utterance is rejected by the local speech recognition service, the speech recognition service controller will start using the remote speech recognition service.
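The escalation conditions of scheme (2.1) described above can be summarized in a small policy function. The parameter names and the idea of a single boolean gate are assumptions for illustration.

```python
def should_engage_remote(elapsed_s, max_duration_s,
                         partial_confidence, min_confidence, rejected):
    """Stay local for short, confidently recognized utterances; engage the
    remote service when the utterance runs past the maximum duration, the
    partial confidence drops below threshold, or the local service rejects."""
    return (elapsed_s > max_duration_s
            or partial_confidence < min_confidence
            or rejected)
```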
[00102] The speech recognition service controller sends "start listening" to the local speech recognition service. The local speech recognition service detects the start of speech utterance, notifies the speech recognition service controller of this event and initiates streaming of speech recognition results to the speech recognition service controller which directs them to the response concept service system. When the local speech recognition service detects the subsequent end of utterance, it notifies the speech recognition service controller of this event. The local speech recognition service returns the final recognition hypotheses with their scores to the speech recognition service controller.
[00103] Upon receipt of the "start of speech utterance" notification from the local speech recognition service, the speech recognition service controller sets the pre-defined maximum utterance duration. If the end of utterance is detected before the maximum duration is exceeded, the local speech recognition service completes recognition of the utterance. The remote speech recognition service is not invoked.
[00104] If the utterance duration exceeds the specified maximum while the local speech recognition service continues recognizing the utterance and streaming partial results, the speech recognition service controller sends "start listening" and starts streaming the utterance audio data (including a buffered audio from the start of the utterance) to, and receiving streamed recognition results from, the remote speech recognition service. The streams of partial recognition results from the local and remote speech recognition services are merged by the speech recognition service controller and used as input into the response concept service system. The end of recognition notification is sent to the speech recognition service controller by the two speech recognition service engines when these events occur.
[00105] However, if the confidence scores of the partial recognition results from the local speech recognition service are considered low according to some criterion (e.g., below a set threshold), the speech recognition service controller will start using the remote speech recognition service if it has not done so already.
[00106] If the utterance is rejected by the local speech recognition service, the speech recognition service controller will start using the remote speech recognition service (if it has not done that already). A video segment of a "speed equalizer" is played while streaming the audio to a remote speech recognition service and processing the recognition results.
[00107] (3) Auxiliary expert - a local speech recognition service is specialized on recognizing speech characteristics such as prosody, stress, rate of speech, etc.
[00108] This recognizer runs alongside other local recognizers and shares the audio input channel with them.

[00109] (4) A fail-over backup to tolerate resource constraints (e.g., no network resources)
[00110] If the loss/degradation of the network connectivity is detected, the speech recognition service controller is notified of this event and stops communicating with the remote speech recognition service (i.e. sending start/stop listening commands and receiving streamed partial results). The speech recognition service controller resumes communicating with the remote speech recognition service when it is notified that network connectivity has been restored.
Social Media Sharing
[00111] This section describes audio/video recording and transcription of the user side of a conversation as a means of capturing user-generated content. It also presents innovative ways of sharing the recorded content data, in whole or in part, on social media channels.
[00112] Figure 15 is a flow chart illustrating an embodiment of a process to record a user's side of a conversational video experience. In the example shown, a time-synchronized video recording is made of the user while the user is interacting with the conversational video experience (1502). For example, during a conversation between a virtual persona and a real user, the conversational video runtime system in various embodiments performs video capture of the user while they listen to, and respond to, the persona's video prompts. The audio/video capture may utilize a microphone and a front-facing camera of the host device. The captured audio/video data is recorded to local storage as a set of video files. The capture and recording are performed concurrently with the video playback of the persona's prompts (both speaking and active listening segments). The timing of the video capture is synchronized with that of the playback of the video prompts, so that it is possible to time-align both video streams.
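The time alignment can be sketched under the assumption that persona playback events and the user's capture are stamped against a shared conversation clock. The event and field names are illustrative.

```python
def align_events(persona_events, capture_start_s, frame_rate=30.0):
    """Map persona playback events (seconds since conversation start) to
    frame indices in the user's recorded video, for later interleaving."""
    return {
        e["id"]: round((e["t"] - capture_start_s) * frame_rate)
        for e in persona_events
        if e["t"] >= capture_start_s
    }
```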
[00113] Furthermore, transcriptions of the user's actual responses and the corresponding response concepts are logged and stored (1504). The user's responses are automatically transcribed into text and interpreted/summarized as response concepts by the input recognition and response concept services, respectively. The automatically transcribed and logged responses may be hand-corrected/edited by the user.

[00114] Audio/video recordings and transcriptions of the user's responses can be used for multiple types of social media interactions:
[00115] A user's video data segments captured during a persona's speaking and active listening modes can be sequenced with the persona-speaking and active listening segments to reconstruct an interactive exchange between both sides of the conversation. These segments can be sequenced in multiple ways including alternating video playback between segments or showing multiple segments adjacent to each other. These recorded video conversations can be posted in part or in their entirety to a social network on behalf of the user. These published video segments are available for standalone viewing, but can also serve as a means of discovery of the availability of a conversational video interaction with the virtual persona.
[00116] Selected user-recorded video segments, on their own or sequenced with recorded video segments of other users engaging in a conversation with the same virtual persona, can be posted to social networks on behalf of the virtual (or corresponding real) persona.
[00117] The transcribed actual responses or response concepts from multiple users engaging in a conversation with the same virtual persona can be measured for similarity. Data elements or features collected for multiple users by the personal profiling service can also be measured for similarity. Users with a similarity score above a defined threshold can be connected over social networks. For example, fans / admirers / followers of a celebrity can be introduced to each other by the celebrity persona and view their collections of video segments of their conversations with that celebrity.
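The similarity measure is not specified above; one plausible sketch compares the sets of response concepts from two users' conversations with the same persona using Jaccard similarity. Both the measure and the threshold value are assumptions.

```python
def jaccard(concepts_a, concepts_b):
    """Similarity of two users' response-concept sets, in [0, 1]."""
    a, b = set(concepts_a), set(concepts_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def should_connect(concepts_a, concepts_b, threshold=0.5):
    """Connect two users over social networks if similarity clears threshold."""
    return jaccard(concepts_a, concepts_b) >= threshold
```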
Integrating human agents
[00118] There may be occasions for some applications to involve a human agent. This human agent - a real person connected via a video and/or audio channel to the user - could be an additional participant in the conversation or a substitute for the virtual persona. The new participant could be selected from a pool of available human agents or could even be the real person on whom the virtual persona is based. The decision to include a human agent may only be taken if there is a human agent available, as determined by integration with a presence system.

[00119] Examples of scenarios in which human agents would be integrated may include cases where the conversation path goes outside the immediate conversation domain. In such a case, the virtual persona could indicate he/she needs to ask their assistant. They would then "call out" on a speakerphone, and a real agent's voice could say hello. The virtual persona would then say, "Please tell my assistant what you want" or the equivalent. At that point, the user would interact with the agent by simulated speakerphone, with the virtual persona nodding occasionally or showing other appropriate behavior. At the end of the agent interaction, a signal would indicate to the app on the mobile device or computer that the virtual persona should begin interacting with the user. This use allows conventional agents serving, for example, call centers, to treat the interaction as a phone call, but for the user to continue to feel engaged with the video.
[00120] The integration could also be subtler, with "hidden" agents. That is, when the system can't decipher what the user is saying, perhaps after a few tries, an agent could be tapped to listen and type a response. In some cases, the agent could simply choose a prewritten response to the question. A text-to-speech system customized to the virtual persona's voice could then speak the typed response. The video system could be designed to simulate arbitrary speech; however, it may be easier to have the virtual persona pretend to look up the answer in a document, hiding the persona's lips, or to use a similar device for hiding the persona's mouth. The advantage of a hidden agent is in part that agents with analytic skills who might have accents or other disadvantages when communicating by speech could be used.
[00121] For a certain pattern of responses, the conversation could transfer to a real person, either a new participant or even the real person on whom the virtual participant is based. For example, in a dating scenario the conversational video could ask a series of screening questions and for a specific pattern of responses, the real person represented by the virtual persona could be brought in if he/she is available. In another scenario, a user may be given the opportunity to "win" a chance to speak to the real celebrity represented by the virtual persona based on a random drawing or other triggering criteria.
Integrating audio-only content
[00122] Having video coverage for new developments or content seldom used may not be cost-effective or even feasible. This can be addressed in part by using audio-only content within the video solution. As with human agents, the virtual persona could "phone" an assistant, listen to a radio, or ask someone "off-camera" and hear audio-only information, either pre-recorded or delivered by text-to-speech synthesis. In that latter case, the new content could originate as text, e.g., text from a news site, for example. Audio-only content could also be environmental or musical, for example if the virtual persona played examples of music for the user for possible purchase.
Producing content to provide a conversational video experience
[00123] An embodiment of a process to create content to provide a conversational video experience is disclosed. In various embodiments, the creation process includes an authoring phase, in which the content, purpose, and structure of a conversation are conceived; data representing the conversation and its structure is generated; and a script is written to be used to create video content for the segments that will be required to provide the conversational experience. In a production phase, video segments are recorded and associated with the conversation, e.g., by creating and storing appropriate meta-information. The relationships between segments comprising the conversational video experience are defined. For example, a conceptual understanding model is created which defines transitions to be made based on a best understanding by the system of a user's response to a segment that has just played. Once the required video content, understanding model(s), and associated meta-information have been created, they are packaged into sets of one or more files and deployed in a deployment phase, e.g., by distributing them to users via download and/or other channels. In the example shown, once the assets required to provide and/or consume the conversational video experience have been deployed, user interaction with the conversational video experience is observed in a refinement phase. For example, user interaction may indicate that one or more users provided a response to a segment that could not be mapped to a "response concept" recognized within the domain of the conversational video experience, e.g., to one of the response concepts associated with the segment and/or one or more other segments in an understanding model of the conversational video experience. In such a case, in the refinement phase one or both of a speech recognition system/service and the understanding model of the conversational video experience may be tuned or otherwise refined or updated based on the observed interactions.
For example, video and/or audio of one or more users interacting with the system, and the corresponding speech recognition and/or understanding service results, may be reviewed by a human operator (e.g., a designer, user, crowd-source worker) and/or programmatically to better train the speech recognition service to recognize an uttered response that was not recognized correctly and/or to update the model to ensure that a correctly recognized response can be mapped to an existing or if necessary a newly-defined response concept for the segment to which the user(s) was/were responding. The refined or otherwise updated assets (e.g., conceptual understanding model, etc.) are then deployed and further observation and refinement may occur.
[00124] In various embodiments, a definition of an initial node, e.g., a root or other starting node, of a conversation is received, for example via a user interface. Iteratively, as successive indications are received that further nodes are desired to be added and defined, an interface is displayed and an associated node definition is received and stored, including, as applicable, information defining any relationship(s) between the defined node and other nodes. For example, a location of the node within a hierarchy of nodes comprising the conversation, and/or a definition of one or more response concepts associated with the node and, for each, a consequence of a user response to a prompt provided via a video segment associated with the node being mapped to that response concept, may be received and stored. Once the authoring user indicates that (at least for now) no further nodes are desired to be defined, the process ends.
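One minimal way to represent such node definitions is sketched below in Python; the class names and fields (e.g., `ResponseConcept`, `next_node_id`) are illustrative assumptions, not structures taken from the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class ResponseConcept:
    name: str                                    # label, e.g. "RED"
    phrases: list = field(default_factory=list)  # natural responses mapped to it
    next_node_id: str = None                     # consequence: node to play next

@dataclass
class ConversationNode:
    node_id: str
    script_text: str               # prompt the video persona speaks
    parent_id: str = None          # location within the node hierarchy
    response_concepts: list = field(default_factory=list)

# A tiny two-node conversation:
root = ConversationNode("root", "Do you prefer red or white wine?")
root.response_concepts.append(
    ResponseConcept("RED", ["red", "cabernet"], next_node_id="red_node"))
red = ConversationNode("red_node", "Great; here are some reds.", parent_id="root")
nodes = {n.node_id: n for n in (root, red)}

# Following the RED response concept from the root leads to its child node:
assert nodes[root.response_concepts[0].next_node_id].parent_id == "root"
```

A real authoring tool would persist these definitions along with the associated video meta-information; this sketch only shows the hierarchy and the response-concept transitions.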
[00125] A user interface of a tool to create content for a conversational video experience is provided in various embodiments. The user interface (or a portion thereof) includes a palette region and a workspace region. In some embodiments, the palette region may include two or more element types, e.g., one to add a node and another to add and define a transition between nodes. An instance of an element may be added by clicking on the representation of the element type in the palette region and dragging and dropping it at a desired location in the workspace region of the user interface.
[00126] In some embodiments, a structured user interface is provided to enable an authoring user to define one or more nodes of a conversational video experience and, for each, its associated video content, understanding model (or elements thereof), etc. The user interface may be pre-populated with known values (e.g., inherited and/or otherwise previously received values) and/or generated values, if any. For example, in some embodiments, as described more fully below, one or more nodes from another conversation may be re-used, or used as a template, in which case associated node values that have not (yet) been changed by the authoring user may be displayed. Authoring user input is received via the interface. Once an indication is received that the authoring user is done defining a particular node, the node definition and content data are stored.
[00127] In some embodiments, a user interface of a tool to create content for a conversational video experience includes a first script entry region, in which the authoring user is prompted to enter the words the actor or other person who will perform the segment should say during a first portion of a video content to be associated with the node. In some embodiments, the authoring user is prompted to enter a statement that affirms a response the system would have understood the user to have given to a previous segment, resulting in their being transitioned to the current node. In a second script entry region, the authoring user is prompted to enter words the video persona should say next, in this example to include a question or other prompt to which the user may be expected to respond. In an active listening direction entry section, the authoring user is invited to provide instructions as to how the person who performs the video persona part should behave during an active listening portion of the video segment associated with the node, e.g., as described above. In various embodiments, script definition information entered in appropriate regions of the user interface may be used by a backend process or service to generate automatically and provide as output a script in a form usable by video production talent and production personnel to record a video segment to be associated with the node.
[00128] In some embodiments, the user interface further includes a response concept entry and definition section. In some embodiments, fields to define up to three response concepts are provided initially, and an "add" control is included to enable more to be defined. Text entry boxes are provided to enable response concept names or labels to be entered. For each, a "define" button may be selected to enable key words, key phrases, Boolean and/or other combinations of words and phrases, etc. to be added as natural responses that are associated with, and should be mapped by the system to, the corresponding response concept. In some embodiments, as described more fully below, natural language and/or other processing may be performed on script text entered in appropriate regions of the user interface, for example, to generate automatically and pre-populate one or more response concept candidates to be reviewed and/or further defined by the authoring user. For example, in some embodiments, response concepts that have already been defined for other nodes within the same conversation, and/or associated with nodes in other conversations in a same subject matter and/or other common domain, may be selected, based on the script text entered in corresponding regions of the user interface, to be a candidate response concept for the prompt entered in a prompt or other script definition region of the user interface.
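The key word/phrase mapping described above can be sketched as a simple matcher. The function name and the naive substring strategy are illustrative assumptions; a deployed system would use a richer understanding model:

```python
def map_to_concept(utterance, concept_definitions):
    """Return the first response concept whose key phrases appear in the utterance.

    concept_definitions: dict mapping concept label -> list of key words/phrases.
    Returns None when no concept matches (a candidate for the refinement phase).
    """
    text = utterance.lower()
    for label, phrases in concept_definitions.items():
        if any(phrase.lower() in text for phrase in phrases):
            return label
    return None

concepts = {
    "YES": ["yes", "sure", "of course"],
    "NO": ["no", "nope", "not really"],
}
print(map_to_concept("Sure, why not", concepts))   # YES
print(map_to_concept("Nope, thanks", concepts))    # NO
```

An utterance that maps to no concept (e.g., "purple elephants") returns None, which is exactly the case the refinement phase described elsewhere in this disclosure is meant to catch.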
[00129] Finally, the user interface in some embodiments includes a media linking section. In some embodiments, a URL or other identifier may be entered in a text entry field to link previously recorded video to the node. A "browse" control opens an interface to browse a local or network folder structure to find desired media. In the example shown, an "auto" button enables a larger video file to be identified; in response to selection of the "auto" button, the system will perform speech recognition to generate a time-synchronized transcript for the video file, which will then be used to find programmatically the portion(s) of the video file that are associated with the node, for example those portions that match the script text entered in appropriate regions of the user interface and, for the active listening portions, a subsequent portion of the video extending until the earlier of the end of the file or the point at which the subject resumes speaking.
[00130] An embodiment of a process to create content for a conversational video experience is disclosed. In some embodiments, a script is received, e.g., a text or other file or portion thereof including text comprising all or a portion of a conversational video experience script. Natural language processing is performed to generate one or more candidate response concepts. For example, the text for a prompt portion of a segment may be processed using a conversation domain-specific language understanding model to extract therefrom one or more concepts expressed and/or otherwise associated with the prompt. The same model may then be used to determine one or more responsive concepts that a user may be expected to express in response to the prompt, e.g., the same, similar, and/or otherwise related concepts.
Response concept candidates are displayed to an authoring user for review and/or editing, e.g., via a user interface such as described above.
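The candidate-generation step above can be sketched as finding, in a domain model, the concepts whose trigger terms appear in the prompt text. The term-overlap approach and all names here are illustrative assumptions standing in for the domain-specific language understanding model described:

```python
def candidate_response_concepts(prompt_text, domain_model):
    """Propose response concepts for a prompt by selecting domain concepts
    whose trigger terms appear in the prompt text.

    domain_model: dict mapping concept label -> list of trigger terms.
    """
    words = {w.strip("?,.!").lower() for w in prompt_text.split()}
    return sorted(label for label, terms in domain_model.items()
                  if words & {t.lower() for t in terms})

domain = {
    "RED_WINE": ["red", "cabernet"],
    "WHITE_WINE": ["white", "chardonnay"],
    "BEER": ["beer", "lager"],
}
print(candidate_response_concepts("Do you prefer red or white wine?", domain))
# ['RED_WINE', 'WHITE_WINE']
```

The returned candidates would then be displayed for the authoring user to review, edit, or discard, as described above.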
[00131] In various embodiments, related sets of content for related conversational video experiences may be created. For example, a set of related conversations may include two or more conversations, each comprising a hierarchically related set of conversation nodes, each having associated therewith one or more video segments, with "response concept" associated transitions between them. In some embodiments, conversations in a set may overlap, e.g., by sharing one or more conversation nodes with another conversation. In various embodiments, an authoring system and/or tool provides an ability to discover existing conversation nodes to be incorporated (e.g., reused) in a new conversation that is being authored. For example, in the course of authoring a conversation, an authoring user may be provided with a tool to discover and/or access and reuse the nodes of another, related conversation. In some embodiments, the node definition may be reused, including associated resources, such as one or more of an associated script, node definition, video segment, and/or understanding model or portion thereof. In some embodiments, a reused node may be modified by an authoring user, for example by splitting off a "clone" or other modifiable copy specific to the new conversation that is being authored. Once a modification is made, a new copy of the cloned node is made and stored separately from the original node, and changes are stored in the new copy, which from that point on is associated only with the new conversation.
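The clone-on-modify reuse described above resembles copy-on-write. A minimal Python sketch follows; the dictionary-based node representation and the `node_id` suffixing scheme are illustrative assumptions:

```python
import copy

def clone_node_on_modify(shared_nodes, node_id, new_conversation_id, **changes):
    """Copy-on-write reuse: duplicate a shared node the first time a new
    conversation modifies it, leaving the original untouched for other
    conversations that still reference it."""
    clone = copy.deepcopy(shared_nodes[node_id])
    clone.update(changes)
    clone["node_id"] = f"{node_id}@{new_conversation_id}"  # new, separate identity
    clone["conversation"] = new_conversation_id
    shared_nodes[clone["node_id"]] = clone  # stored separately from the original
    return clone

nodes = {"greet": {"node_id": "greet", "script": "Hello!", "conversation": "wine"}}
clone = clone_node_on_modify(nodes, "greet", "travel", script="Welcome aboard!")
assert nodes["greet"]["script"] == "Hello!"   # original node is unchanged
assert clone["script"] == "Welcome aboard!"   # changes live only in the clone
```

Subsequent edits in the new conversation would target the clone's identifier, so the original node continues to serve the conversation(s) it was authored for.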
[00132] In some embodiments, a set of conversations may be associated with an "environment" or other shared logical construct. In various embodiments, configuration settings and/or other variables may be defined once for the environment, which will result in those settings and variables being inherited and/or otherwise used automatically across conversations associated with that environment. In some embodiments, environment variables and/or definitions provide and/or ensure a common "look and feel" across conversations associated with that environment.
[00133] In some embodiments, a conversation that has been defined as described herein is analyzed programmatically to determine an optimized set of paths to traverse and explore fully the conversation space. Output to be used to produce efficiently the video assets required to provide a conversational video experience based on the definition is generated programmatically, based at least in part on the determined optimized set of paths. Examples include, without limitation, scripts to be used to perform a read-through and/or shoot video segments in an efficient order; a plan to shoot the required video; and/or a testing plan to ensure that test users explore the conversation space fully and efficiently. In various embodiments, printed scripts, files capable of being rendered by a teleprompter, and/or other assets in any appropriate format may be generated.
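A simple (hypothetical) starting point for such analysis is enumerating every root-to-leaf path of the conversation tree by depth-first traversal, so that each path can become one read-through, shoot list, or test session; real optimization of shooting order would go beyond this sketch:

```python
def coverage_paths(tree, node="root", prefix=None):
    """Enumerate every root-to-leaf path through a conversation tree.

    tree: dict mapping node id -> list of child node ids.
    Together the returned paths visit every node, fully covering the space.
    """
    prefix = (prefix or []) + [node]
    children = tree.get(node, [])
    if not children:                 # leaf: one complete path ends here
        return [prefix]
    paths = []
    for child in children:
        paths.extend(coverage_paths(tree, child, prefix))
    return paths

tree = {"root": ["yes", "no"], "yes": ["detail"], "no": []}
for path in coverage_paths(tree):
    print(" -> ".join(path))
# root -> yes -> detail
# root -> no
```

Each printed line could be rendered as a script or teleprompter file for one traversal of the conversation.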
[00134] In some embodiments, once the video for a conversation has been generated, automated processing is performed to identify, within a larger video file and/or set of files, the portion(s) of video content that are related to a given node. One or more video files are received. Speech recognition processing is performed on the audio portion of the video data to generate a synchronized transcription, e.g., a text transcript with timestamps, offsets, and/or other meta-information indicating for each part of the transcript a corresponding portion of the video file(s). The transcription is used to map portions of the video file(s) to corresponding conversation nodes. For example, a match within the transcription may be found for a node-specific script. Information indicating a beginning and end of a corresponding portion of the video file(s) may be used to tag or otherwise map the corresponding video content to the conversation node with which that portion of the script and corresponding portion of the transcription are associated.
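The transcript-to-node mapping can be sketched as a word-level match against a timestamped transcript. The `(word, start, end)` tuple format and exact-match strategy are assumptions for illustration; a production system would tolerate recognition errors:

```python
def locate_node_segment(transcript_words, script_text):
    """Find the time span in a word-level transcript that matches a node's script.

    transcript_words: list of (word, start_sec, end_sec) from speech recognition.
    Returns (start, end) of the matching span, or None if the script isn't found.
    """
    target = script_text.lower().split()
    words = [w.lower().strip(".,?!") for w, _, _ in transcript_words]
    n = len(target)
    for i in range(len(words) - n + 1):
        if words[i:i + n] == target:
            return transcript_words[i][1], transcript_words[i + n - 1][2]
    return None

transcript = [("hello", 0.0, 0.4), ("do", 0.5, 0.6), ("you", 0.6, 0.7),
              ("like", 0.7, 0.9), ("wine", 0.9, 1.3)]
print(locate_node_segment(transcript, "do you like wine"))  # (0.5, 1.3)
```

The returned time span is what would be stored as meta-information tagging that portion of the video file to the corresponding conversation node.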
[00135] In some embodiments, user interaction with a conversational video experience is observed and feedback based on the observation is used to update an associated understanding and/or speech recognition model. In the example shown, a natural language processing-based understanding model is deployed, e.g., in connection with deployment of a conversational video experience. User interactions with one or more conversational video experiences with which the understanding model is associated are observed. For example, observed instances in which the system is unable to determine, based on user utterances, a response concept to which the user's response should be mapped may be analyzed programmatically and/or by a human operator to train associated speech recognition services and/or to update the understanding model to ensure that going forward the system will recognize previously misunderstood or not-understood responses as being associated with an existing and/or newly-defined response concept. The understanding model and/or other resource, once updated, is deployed and/or re-deployed, as described above.
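The observation step of this refinement loop can be sketched as partitioning logged utterances by whether the current understanding model maps them to any response concept. All names and the naive phrase match below are illustrative:

```python
def collect_for_refinement(interactions, understanding_model):
    """Return logged utterances the model could not map to any response concept.

    interactions: list of (node_id, utterance) observed from users.
    understanding_model: dict node_id -> {concept label: [key phrases]}.
    The returned pairs feed the refinement phase (human review, model updates,
    and/or recognizer training).
    """
    unmapped = []
    for node_id, utterance in interactions:
        phrases = understanding_model.get(node_id, {})
        hit = any(p in utterance.lower() for terms in phrases.values() for p in terms)
        if not hit:
            unmapped.append((node_id, utterance))
    return unmapped

model = {"q1": {"YES": ["yes", "yeah"], "NO": ["no"]}}
logs = [("q1", "Yeah definitely"), ("q1", "affirmative")]
print(collect_for_refinement(logs, model))  # [('q1', 'affirmative')]
```

A reviewer (or an automated process) could then add "affirmative" as a key phrase for the YES concept, after which the updated model would be re-deployed.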
[00136] Using techniques disclosed herein, a more natural, satisfying conversational video experience may be provided.
[00137] Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive.
[00138] WHAT IS CLAIMED IS:

Claims

1. A method of providing a conversational video experience, comprising:
receiving a user response data provided by a user in response to a first video segment at least a portion of which has been rendered to the user;
processing the user response data to generate a text-based representation of a user response indicated by the user response data;
determining based at least in part on the text-based representation a response concept with which the user response is associated; and
selecting based at least in part on the response concept a next video segment to be rendered to the user.
2. The method of claim 1, wherein the user response data comprises audio data associated with user speech.
3. The method of claim 2, wherein processing the user response data to generate a text-based representation of a user response indicated by the user response data includes performing speech recognition processing.
4. The method of claim 3, wherein the text-based representation of the user response indicated by the user response data includes an n-best or other set of hypotheses generated by said speech recognition processing.
5. The method of claim 1, wherein said response concept is determined at least in part by comparing said text-based representation to one or more entities comprising a response understanding model.
6. The method of claim 5, further comprising updating said understanding model based at least in part on said text-based representation of the user response.
7. The method of claim 1, wherein said response concept is included in a predetermined set of response concepts each of which is defined within a domain with which the conversational video experience is associated.
8. The method of claim 1, wherein said first video segment includes a prompt portion, in which a video persona prompts the user to provide a response.
9. The method of claim 8, wherein said first video segment includes an active listening portion, in which a video persona engages in one or both of verbal and non-verbal behaviors associated with listening attentively to another.
10. The method of claim 9, wherein a transition from playing the prompt portion to playing the active listening portion occurs dynamically, upon detecting that the user has begun to speak.
11. The method of claim 10, further comprising processing a partial response by the user to determine provisionally an associated response concept, and transitioning from the active listening portion of the first video segment to a second active listening video that is more specific to the provisionally determined response concept than the active listening portion of the first video segment.
12. The method of claim 11, further comprising generating dynamically and inserting dynamically in a video stream to be played back a transition between the active listening portion of the first video segment and the second active listening video.
13. The method of claim 1, wherein the response concept is determined based at least in part on a context data, including one or more of a conversation context, a conversation history, and a user profile.
14. The method of claim 1, wherein the user response may be provided via two or more input modalities.
15. The method of claim 1, further comprising displaying a set of user selectable response options available to be selected by the user to generate the user response data indicating the user's response.
16. The method of claim 15, wherein the set of user selectable response options is displayed in response to one or both of the user selecting a control associated with display of said user selectable response options and expiration of a prescribed time period without user speech input having been received since a prompt portion of the first video segment has finished playing.
17. The method of claim 1, further comprising integrating a live human agent into the conversational video experience.
18. The method of claim 1, further comprising integrating audio-only content into the conversational video experience.
19. A system to provide a conversational video experience, comprising:
a processor configured to:
receive a user response data provided by a user in response to a first video segment at least a portion of which has been rendered to the user;
process the user response data to generate a text-based representation of a user response indicated by the user response data;
determine based at least in part on the text-based representation a response concept with which the user response is associated; and
select based at least in part on the response concept a next video segment to be rendered to the user; and
a memory configured to provide the processor with instructions.
20. The system of claim 19, further comprising a communication interface coupled to the processor and wherein the processor is configured to process the user response data to generate the text-based representation of the user response indicated by the user response data at least in part by sending via the communication interface a request to an external, network-based input recognition service.
21. The system of claim 19, further comprising a display device coupled to the processor and wherein the first video segment and the next video segment are rendered to the user via the display device.
22. The system of claim 19, further comprising a user input device and wherein the user response data provided by the user in response to the first video segment is associated with input received via the user input device.
23. The system of claim 22, wherein the user input device comprises a microphone and the user input data comprises audio data representing an audible response uttered by the user.
24. The system of claim 22, wherein the user input device comprises a user-facing camera and the user input data comprises video data representing video images of the user responding to the first video segment.
25. A computer program product embodied in a non-transitory computer readable storage medium and comprising computer instructions for:
receiving a user response data provided by a user in response to a first video segment at least a portion of which has been rendered to the user;
processing the user response data to generate a text-based representation of a user response indicated by the user response data;
determining based at least in part on the text-based representation a response concept with which the user response is associated; and
selecting based at least in part on the response concept a next video segment to be rendered to the user.
PCT/US2013/043773 2012-05-31 2013-05-31 Providing a converstional video experience WO2013181633A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201261653921P 2012-05-31 2012-05-31
US201261653923P 2012-05-31 2012-05-31
US61/653,923 2012-05-31
US61/653,921 2012-05-31

Publications (1)

Publication Number Publication Date
WO2013181633A1 true WO2013181633A1 (en) 2013-12-05

Family

ID=49673944

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/043773 WO2013181633A1 (en) 2012-05-31 2013-05-31 Providing a converstional video experience

Country Status (1)

Country Link
WO (1) WO2013181633A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9720917B2 (en) 2015-02-17 2017-08-01 International Business Machines Corporation Electronic meeting question management
EP3568850A4 (en) * 2017-03-21 2020-05-27 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for speech information processing
EP4050887A4 (en) * 2020-07-17 2023-06-21 Beijing Bytedance Network Technology Co., Ltd. Video recording method and apparatus, electronic device, and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060203977A1 (en) * 2005-03-10 2006-09-14 Avaya Technology Corp. Dynamic video generation in interactive voice response systems
US20070036334A1 (en) * 2005-04-22 2007-02-15 Culbertson Robert F System and method for intelligent service agent using VoIP
US20080262910A1 (en) * 2007-04-20 2008-10-23 Utbk, Inc. Methods and Systems to Connect People via Virtual Reality for Real Time Communications
WO2010045335A1 (en) * 2008-10-16 2010-04-22 Gencia Corporation Transducible polypeptides for modifying mitochondrial metabolism
US20100191658A1 (en) * 2009-01-26 2010-07-29 Kannan Pallipuram V Predictive Engine for Interactive Voice Response System
US20110164106A1 (en) * 2010-01-05 2011-07-07 Kim Minhyoung Method for connecting video communication to other device, video communication apparatus and display apparatus thereof
US20120030149A1 (en) * 2006-12-21 2012-02-02 Support Machines Ltd. Method and a computer program product for providing a response to a statement of a user
US20120051526A1 (en) * 2010-08-24 2012-03-01 Avaya Inc. Contact Center Trend Analysis and Process Altering System and Method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GETABBYCHANNEL.: "Abby311 - iPhone App Demo.", YOUTUBE, 2011, Retrieved from the Internet <URL:http://www.youtube.com/watch?v=6zrH-Huljoc> [retrieved on 20131023] *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9720917B2 (en) 2015-02-17 2017-08-01 International Business Machines Corporation Electronic meeting question management
US9753927B2 (en) 2015-02-17 2017-09-05 International Business Machines Corporation Electronic meeting question management
US10599703B2 (en) 2015-02-17 2020-03-24 International Business Machines Corporation Electronic meeting question management
EP3568850A4 (en) * 2017-03-21 2020-05-27 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for speech information processing
EP4050887A4 (en) * 2020-07-17 2023-06-21 Beijing Bytedance Network Technology Co., Ltd. Video recording method and apparatus, electronic device, and storage medium


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13796428

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13796428

Country of ref document: EP

Kind code of ref document: A1