US20230027078A1 - Systems and methods for generation and deployment of a human-personified virtual agent using pre-trained machine learning-based language models and a video response corpus - Google Patents


Info

Publication number
US20230027078A1
US20230027078A1 (Application No. US17/849,589)
Authority
US
United States
Prior art keywords
embedding
response
imprintation
human
representation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US17/849,589
Other versions
US11550831B1 (en)
Inventor
Eldon Marks
Jason Mars
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TrueSelph Inc
Original Assignee
TrueSelph Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TrueSelph Inc filed Critical TrueSelph Inc
Priority to US17/849,589
Assigned to TrueSelph, Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MARKS, ELDON; MARS, JASON
Application granted granted Critical
Publication of US11550831B1
Publication of US20230027078A1
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/903Querying
    • G06F16/9032Query formulation
    • G06F16/90332Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2137Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on criteria of topology preservation, e.g. multidimensional scaling or self-organising maps
    • G06F18/21375Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on criteria of topology preservation, e.g. multidimensional scaling or self-organising maps involving differential geometry, e.g. embedding of pattern manifold
    • G06K9/6252
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/01Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • G06N5/022Knowledge engineering; Knowledge acquisition
    • G06N5/025Extracting rules from data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/01Probabilistic graphical models, e.g. probabilistic networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/02User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages

Definitions

  • the inventions herein relate generally to the virtual assistant field, and more specifically to a new and useful system and method for generating and deploying human-personified artificially intelligent virtual agents using machine learning.
  • Modern virtual assistants may typically be employed to perform various tasks or services based on an interaction with a user.
  • a user interacting with a virtual assistant may pose a question, a message, or otherwise submit an input to the virtual assistant, to which, the virtual assistant may provide a response or perform some action as a result of the user input.
  • Many of these virtual assistants may typically be portrayed as a user interface object, such as a chat window, or may include an animated computer object that lacks real human features and real human mannerisms.
  • a significant class of users fails to fully engage with the virtual assistant and may continue to prefer interactions that involve a real human agent.
  • the limitations of chatbot or virtual assistant systems in their ability to relate to human users may continue to act as a barrier to mass adoption of often helpful automated chat systems.
  • a virtual assistant or automated interaction system personified with real human responses may create a more personable and engaging conversation, which may increase the likelihood of a user's satisfaction with its responses and interactions and, therefore, reduce interaction loads on often limited real human agents.
  • a method of implementing a human video-personified machine learning-based virtual dialogue agent includes: computing an input embedding based on receiving a user input; computing, via a pre-trained machine learning language model, an embedding response inference based on the input embedding; searching, based on the embedding response inference, a response imprintation embedding space that includes a plurality of distinct embedding representations of potential text-based responses to the user input, wherein each of the plurality of distinct embedding representations is tethered to a distinct human-imprinted media response, and searching the response imprintation embedding space includes: (i) defining an embedding search query using the embedding response inference as a search parameter, (ii) searching the response imprintation embedding space based on the embedding search query, and (iii) returning a target embedding representation from the response imprintation embedding space based on the searching of the response imprintation embedding space; and executing, via a user interface of the human video-personified machine learning-based virtual dialogue agent, the human-imprinted media response tethered to the target embedding representation.
  • the pre-trained machine learning language model computes the embedding response inference based on an embedding space different from the response imprintation embedding space.
  • the method includes normalizing the embedding response inference to the response imprintation embedding space contemporaneously with defining the embedding search query.
  • searching the response imprintation embedding space includes: (1) computing a distance between the embedding response inference and each of the plurality of distinct embedding representations, and (2) determining which embedding representation of the plurality of distinct embedding representations is closest to the embedding response inference, and the target embedding representation returned from the response imprintation embedding space corresponds to the embedding representation closest to the embedding response inference.
  • searching the response imprintation embedding space includes: (1) computing a similarity metric between the embedding response inference and each of the plurality of distinct embedding representations, and (2) determining which embedding representation of the plurality of distinct embedding representations is most similar to the embedding response inference, and the target embedding representation returned from the response imprintation embedding space corresponds to the embedding representation most similar to the embedding response inference.
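  • By way of a non-limiting illustration (not recited in the application), the distance-based and similarity-based variants of searching the response imprintation embedding space may be sketched as a nearest-neighbor lookup; the function and variable names below (e.g., search_response_space, response_space, tethered_media) are hypothetical conveniences.

```python
# Minimal sketch, assuming the corpus embeddings are stacked row-wise in a matrix.
import numpy as np

def search_response_space(response_inference, response_space, metric="cosine"):
    """Return the index of the embedding representation closest / most similar
    to the embedding response inference."""
    inference = np.asarray(response_inference, dtype=float)
    space = np.asarray(response_space, dtype=float)        # shape: (n_responses, dim)

    if metric == "euclidean":
        # (1) compute a distance to every embedding representation,
        # (2) return the representation with the smallest distance
        distances = np.linalg.norm(space - inference, axis=1)
        return int(np.argmin(distances))

    # cosine similarity: (1) compute a similarity to every representation,
    # (2) return the most similar representation
    sims = space @ inference / (np.linalg.norm(space, axis=1) * np.linalg.norm(inference))
    return int(np.argmax(sims))

# Usage sketch: the returned index selects the target embedding representation,
# which is tethered to a distinct human-imprinted media response.
# target_idx = search_response_space(inference_vec, corpus_embeddings)
# play(tethered_media[target_idx])
```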
  • the response imprintation embedding space relates to an n-dimensional vector space.
  • the method includes contemporaneously with computing the embedding response inference via the pre-trained machine learning language model: computing, via one or more additional pre-trained machine learning language models, one or more additional embedding response inferences based on the input embedding, wherein: searching the response imprintation embedding space is based on the embedding response inference and the one or more additional embedding response inferences, searching the response imprintation embedding space further includes: defining one or more additional embedding search queries based on the one or more additional embedding response inferences; and searching the response imprintation embedding space based on the one or more additional embedding search queries, and the target embedding representation returned from the response imprintation embedding space is based on searching the response imprintation embedding space with the embedding search query and the one or more additional embedding search queries.
  • the one or more additional embedding response inferences include a first additional embedding response inference and a second additional embedding response inference
  • the one or more additional embedding search queries include a first additional embedding search query based on the first additional embedding response inference and a second additional embedding search query based on the second additional embedding response inference
  • searching the response imprintation embedding space based on the first additional embedding search query includes identifying, from the plurality of distinct embedding representations, an embedding representation most similar to the first additional embedding response inference
  • searching the response imprintation embedding space based on the second additional embedding search query includes identifying, from the plurality of distinct embedding representations, an embedding representation most similar to the second additional embedding response inference
  • searching the response imprintation embedding space based on the embedding search query includes identifying, from the plurality of distinct embedding representations, an embedding representation most similar to the embedding response inference.
  • searching the response imprintation embedding space further includes: identifying an embedding representation most frequently identified by the embedding search query and the one or more additional search queries; and determining which embedding representation of the plurality of distinct embedding representations is closest to the embedding representation most frequently identified by the embedding search query and the one or more additional search queries, and the target embedding representation returned from the response imprintation embedding space corresponds to the embedding representation determined to be closest to the embedding representation most frequently identified by the embedding search query and the one or more additional search queries.
  • searching the response imprintation embedding space further includes: computing an average embedding representation based on embedding representations identified by the embedding search query and the one or more additional search queries; and determining which embedding representation of the plurality of distinct embedding representations is closest to the average embedding representation, and the target embedding representation returned from the response imprintation embedding space corresponds to the embedding representation determined to be closest to the average embedding representation.
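  • The multi-model variants above may be illustrated with the following hedged sketch, which reuses the hypothetical search_response_space helper from the earlier snippet: one embedding search query is run per embedding response inference, and the target representation is chosen either as the most frequently identified representation or as the representation closest to the average of the identified representations.

```python
from collections import Counter
import numpy as np

def ensemble_target(inferences, response_space, strategy="vote"):
    """inferences: one embedding response inference per pre-trained language model."""
    # one nearest-neighbor search per embedding search query
    hits = [search_response_space(inf, response_space) for inf in inferences]

    if strategy == "vote":
        # target = the representation identified most frequently across the queries
        return Counter(hits).most_common(1)[0][0]

    # "average" strategy: average the representations identified by the individual
    # queries, then return the corpus representation closest to that average
    space = np.asarray(response_space, dtype=float)
    average_representation = space[hits].mean(axis=0)
    return search_response_space(average_representation, response_space, metric="euclidean")
```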
  • the human-imprinted media response tethered to the target embedding representation includes a video component
  • executing the human-imprinted media response includes: playing the video component of the human-imprinted media response at the user interface of the human video-personified machine learning-based virtual dialogue agent; and displaying, in association with the playing of the video component, a transcript of the human-imprinted media response.
  • the user input is received via the user interface of the human video-personified machine learning-based virtual dialogue agent, and the user input comprises textual input that relates to one or more dialogue intents.
  • a method of implementing a human-personified machine learning-based virtual dialogue agent includes: computing an input embedding based on a user input; computing, via a pre-trained machine learning language model, an embedding response inference based on the input embedding; searching, based on the embedding response inference, a response imprintation embedding space that includes a plurality of distinct embedding representations of potential text-based responses to the user input, wherein each of the plurality of distinct embedding representations is tethered to a distinct human-imprinted media response, and searching the response imprintation embedding space includes: (i) defining an embedding search query using the embedding response inference as a search parameter, and (ii) executing the embedding search query to search the response imprintation embedding space, wherein the embedding search query returns a target embedding representation from the response imprintation embedding space; and executing, via a user interface of the human-personified machine learning-based virtual dialogue agent, the human-imprinted media response tethered to the target embedding representation.
  • the embedding search query returns the target embedding representation because the target embedding representation is closer to the embedding response inference as compared to other embedding representations in the response imprintation embedding space.
  • the response imprintation embedding space relates to a multi-dimensional vector space.
  • the method further includes before searching the response imprintation embedding space, constructing the response imprintation embedding space, wherein constructing the response imprintation embedding space includes: identifying a human-imprinted media response corpus that includes a plurality of distinct human-imprinted media responses to likely user input; generating, via a transcriber, a text-based transcription for each of the plurality of distinct human-imprinted media responses; providing, as input to a pre-trained machine learning language model, the text-based transcription generated for each of the plurality of distinct human-imprinted media responses; computing, by the pre-trained machine learning language model, an embedding representation for each of the plurality of distinct human-imprinted media responses based on the text-based transcription generated for each of the plurality of distinct human-imprinted media responses; and mapping the embedding representation computed for each of the plurality of distinct human-imprinted media responses to the multi-dimensional vector space.
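  • A minimal, illustrative sketch of constructing the response imprintation embedding space follows; transcribe and embed_text are hypothetical stand-ins for a transcriber and a pre-trained embedding model, and the snippet is not a specific implementation recited by the application.

```python
# Illustrative sketch only: build the embedding space from a human-imprinted
# media response corpus, keeping each embedding tethered to its media response.
import numpy as np

def build_response_space(media_responses, transcribe, embed_text):
    """media_responses: list of paths/handles to human-imprinted AV responses."""
    transcripts, embeddings, tethered_media = [], [], []
    for media in media_responses:
        text = transcribe(media)        # text-based transcription of the AV response
        vector = embed_text(text)       # embedding representation of the transcription
        transcripts.append(text)
        embeddings.append(vector)
        tethered_media.append(media)    # tether the embedding to its media response
    # the stacked matrix is the multi-dimensional vector space searched at runtime
    return np.vstack(embeddings), transcripts, tethered_media
```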
  • each of the plurality of distinct human-imprinted media responses includes an audio/video (AV) component.
  • the user input is received via the user interface of the human-personified machine learning-based virtual dialogue agent, and executing the human-imprinted media response tethered to the target embedding representation includes playing the human-imprinted media response.
  • when an accessibility setting of the human-personified machine learning-based virtual dialogue agent is toggled on, the method includes: forgoing playing the human-imprinted media response; and displaying a text-based transcription of the human-imprinted media response at the user interface of the human-personified machine learning-based virtual dialogue agent.
  • a method of implementing a fast-generated virtual dialogue agent includes: receiving, via a web-enabled virtual dialogue agent interface, user stimuli; converting, by a computer implementing one or more pre-trained language machine learning models, the user stimuli to a stimuli embeddings inference; computing a response inference based on the stimuli embeddings inference, wherein computing the response inference includes: performing an embeddings search for a response embeddings of a plurality of distinct response embeddings based on the stimuli embeddings inference; and generating an automated response to the user stimuli, via the web-enabled virtual dialogue agent interface, based on the response embeddings.
  • the embeddings search searches a multi-dimensional space and identifies which of the plurality of distinct response embeddings is closest to the stimuli embeddings inference.
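  • Tying the pieces together, an end-to-end runtime flow for the virtual dialogue agent might look like the sketch below, under the same assumptions as the earlier snippets (embed_text, language_model, search_response_space, and tethered_media are hypothetical stand-ins, not components named by the application).

```python
def respond(user_stimuli, embed_text, language_model, response_space, tethered_media):
    stimuli_embedding = embed_text(user_stimuli)             # stimuli embeddings inference
    response_inference = language_model(stimuli_embedding)   # embedding response inference
    target_idx = search_response_space(response_inference, response_space)
    return tethered_media[target_idx]                        # human-imprinted media response
```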
  • FIG. 1 illustrates a schematic representation of a system in accordance with one or more embodiments of the present application
  • FIG. 2 illustrates an example method of configuring a human-personified virtual agent in accordance with one or more embodiments of the present application
  • FIG. 3 illustrates an example method of implementing a human-personified virtual agent in accordance with one or more embodiments of the present application
  • FIG. 4 illustrates a schematic representation for creating a single response video imprint in accordance with one or more embodiments of the present application
  • FIG. 5 illustrates a schematic representation for creating a multiple response video imprint in accordance with one or more embodiments of the present application
  • FIG. 6 illustrates a schematic representation of processing a response via a human-personified virtual agent in accordance with one or more embodiments of the present application.
  • FIG. 7 illustrates a schematic representation of interfacing with a human-personified virtual agent in accordance with one or more embodiments of the present application.
  • a system 100 that may configure and deploy a machine learning-based human-personified virtual agent may include a response development module 110 , an embedding service 120 , a model accessibility/development engine 130 , a machine learning-based virtual agent model 140 , a dialogue response collection module 150 , and an intelligent machine learning-based virtual agent 160 .
  • system 100 may function to configure and/or deploy the intelligent machine learning-based human-personified virtual agent 160 to enable an automated conversational experience between a user and a human-imprinted virtual agent of a subscriber.
  • a response development module 110 may be in digital communication with a response development interface (or client interface).
  • the response development module 110 may be configured to ingest (or identify) responses inputted by a subscriber, at the response development interface, to construct a corpus of responses (e.g., a response corpus).
  • the response development module 110 may interface with a subscriber that may provide a source of knowledge (or a source of responses) for a machine learning-based virtual agent 160 . Accordingly, in one or more embodiments, the response development interface may be configured to allow for manual and/or bulk upload of responses or a response corpus (e.g., a video response corpus) that may be identifiable to the response development module 110 .
  • the response development module 110 may include a response development interface that may be configured to allow a subscriber to manually input a string of text that may define an individual response associated with one or more video response imprints; however, in alternative embodiments, the response development interface may also be configured to accept, as input, documents, files, media files, or the like comprising a collection of responses in bulk.
  • the response development module 110 may include and/or be in operable communication with an image capturing device or video capturing device that enables a capturing of one or more video response imprints/imprints for building a video response corpus.
  • the response development module 110 may include and/or be in operable communication with one or more video response handling modules that may function to partition video response imprints according to any suitable known partitioning or segmentation techniques including the one or more video partitioning and segmentation techniques described herein.
  • the machine learning-based virtual agent 160 (which may also be referred to herein as a “machine learning-based virtual assistant” or a “machine learning-based human-personified virtual agent”) may communicate with an intermediary service that may store the text-based transcriptions of a video response corpus to rapidly identify one or more most likely or most probable responses to user stimulus based on one or more inferred responses of one or more pre-trained machine learning language models.
  • an embedding service 120 may preferably function to receive text-based transcriptions of a video response corpus as input and output an embedded response representation for each response (or response imprint) of the video response corpus.
  • the embedding service may be a sentence/word (or text) embeddings service that may be configured to compute embedded response representations.
  • the embedding service 120 may function to generate an embedded response space that may map each of the computed embedded response representations associated with a corresponding response (or response imprint) of the response corpus to the embedded response space.
  • the embedded response space may function to graphically associate (or cluster) semantically similar responses closer to one another than unrelated (or dissimilar) responses.
  • the model accessibility/development engine 130 may preferably function to store, and/or at least be capable of accessing, a plurality of pre-trained and/or pre-developed language processing models.
  • each of the plurality of language processing models may be pre-developed and/or pre-trained for reading, understanding, interpreting human language, and/or making predictions based on user inputs or user stimuli.
  • the model accessibility/development engine 130 may store and/or identify the baseline embedded response representations computed by the embedding service 120 to identify and/or select one or more applicable pre-trained language processing models based, in part, on the embedding values.
  • an algorithmic structure of the machine learning virtual agent model 140 underlying the virtual dialogue agent 160 may be the entirety of the plurality of accessed pre-trained language processing models and/or the stored language processing models outputted by the model accessibility/development engine 130 .
  • the machine learning virtual agent model 140 that may be accessed, generated, and/or outputted by the model accessibility/development engine 130 may be capable of predicting and/or inferring responses based on user input.
  • the model accessibility/development engine 130 may implement one or more ensembles of pre-trained or trained machine learning models.
  • the one or more ensembles of machine learning models may employ any suitable machine learning including one or more of: supervised learning (e.g., using logistic regression, using back propagation neural networks, using random forests, decision trees, etc.), unsupervised learning (e.g., using an Apriori algorithm, using K-means clustering), semi-supervised learning, reinforcement learning (e.g., using a Q-learning algorithm, using temporal difference learning), adversarial learning, and any other suitable learning style.
  • Each module of the plurality can implement any one or more of: a regression algorithm (e.g., ordinary least squares, logistic regression, stepwise regression, multivariate adaptive regression splines, locally estimated scatterplot smoothing, etc.), an instance-based method (e.g., k-nearest neighbor, learning vector quantization, self-organizing map, etc.), a regularization method (e.g., ridge regression, least absolute shrinkage and selection operator, elastic net, etc.), a decision tree learning method (e.g., classification and regression tree, iterative dichotomiser 3, C4.5, chi-squared automatic interaction detection, decision stump, random forest, multivariate adaptive regression splines, gradient boosting machines, etc.), a Bayesian method (e.g., naïve Bayes, averaged one-dependence estimators, Bayesian belief network, etc.), a kernel method (e.g., a support vector machine, a radial basis function, a linear discriminant analysis, etc.), and/or any other suitable machine learning method.
  • Each processing portion of the system 100 can additionally or alternatively leverage: a probabilistic module, heuristic module, deterministic module, or any other suitable module leveraging any other suitable computation method, machine learning method or combination thereof.
  • any suitable machine learning approach can otherwise be incorporated in the system 100 .
  • any suitable model (e.g., machine learning, non-machine learning, etc.) may additionally or alternatively be incorporated in the system 100 .
  • the dialogue response collection module 150 may preferably function as the response repository for the machine learning-based virtual agent 160 . Accordingly, in one or more preferred embodiments, the response collection module 150 may be configured to collect and/or store the constructed response corpus generated by the response development module 110 and the embedded response representations of the response corpus computed by the embedding service 120 .
  • the response collection module 150 may be combinable (or associated) with the selected or the identified machine learning virtual agent model(s) 140 (e.g., the pre-trained language processing models) outputted by the model accessibility/development engine 130 to form the underlying structure of the virtual dialogue agent 160 .
  • the method 200 for configuring a machine learning-based human-personified virtual agent includes creating a video response corpus S 210 , intelligently processing the video response corpus S 220 , and computing and storing embedding values based on the video response corpus S 230 .
  • the method 200 optionally includes creating a mapping of embedded response representations of the video response corpus S 235 .
  • S 210 which includes identifying and/or creating a video response corpus, may function to create a plurality of expected and/or desired video responses to user stimulus or user utterances to a virtual agent.
  • video responses may be created and/or obtained from any suitable source including, but not limited to, human agent responses, manuscripts, transcriptions, any video storage database, and/or the like.
  • An expected and/or desired video response may sometimes be referred to herein as a “video response item” or a “video response imprint”.
  • a video response imprint preferably includes a recording or imprintation of a human providing a response to a query, utterance, user communication, or the like.
  • a video response imprint may be created by an abridged recording, as shown by way of example in FIG. 4 .
  • a human agent or a subscriber may create and/or upload one or more video response imprints that may include one or more short or abridged videos of themselves and/or another person responding to an expected user stimulus and/or utterance.
  • an “abridged recording” or “single response recording” as referred to herein preferably relates to a video recording response imprint that includes a single expected or desired response to user stimulus.
  • video response imprints may be obtained by an extended recording, as shown by way of example in FIG. 5 .
  • subscribers may create and/or upload video response imprints of recordings from a longer or extended video of themselves and/or another person responding to one or more expected user stimuli and/or utterances.
  • longer video response imprints may be partitioned into smaller video response (sub-)imprints.
  • an “extended recording,” as referred to herein, preferably relates to a video recording response imprint that includes multiple expected or desired responses, preferably that each relate to a single dialogue category or dialogue domain.
  • an image capturing device (e.g., a video recording camera) may be used by a human agent to create a video response imprint that includes a video recording of themselves or of another human person or agent providing one or more responses for creating a response corpus of a target virtual agent.
  • S 210 may function to create a video response corpus which may include a plurality of distinct video response imprints or a multi- or single video response imprint that may be pre-recorded. That is, in some embodiments, rather than creating the video response corpus by making one or more human-based or human-imprinted recordings with expected or desired response to user stimuli, S 210 may function to search and/or source any available repository (e.g., the Internet, YouTube, Google, etc.) having human-based pre-recordings that may include a desired response or an expected response to a user stimulus.
  • the method 200 , the method 300 or the like may function to implement any suitable combination of the above-described configuration parameters to create a media file, video response imprint, and/or the like.
  • S 220 which includes intelligently processing the video response corpus, may function to intelligently process a video response corpus to enable a consumption of the content therein for a machine learning-based virtual agent.
  • pre-processing the video response corpus may include partitioning or chunking one or more video response imprints, extracting audio features, and/or transcribing audio features into a consumable or prescribed format (e.g., a textual format), such as an input format suitable for a machine learning virtual agent generation component of a service or system implementing one or more steps of the method 200 .
  • S 220 includes S 222 , which may function to partition or chunk a video response imprint of the video response corpus.
  • S 222 may function to identify that partitioning of a subject video response imprint may be required based on the subject video response imprint having therein multiple distinct responses satisfying a partitioning threshold (e.g., a minimum number of responses to one or more stimuli). Additionally, or alternatively, in one or more embodiments, S 222 may determine that a partitioning of a video response imprint may be required based on a length of the video response imprint, the file size of the video response imprint, and/or the like.
  • S 222 may function to partition a video response imprint using an intelligent partitioning scheme.
  • S 222 may determine a partition location of the video response imprint if there is a suitable pause within the audio of the response.
  • a suitable pause length for partitioning the video response imprint may be identified based on a discontinuance of a verbal communication by a human agent within the video and lapse of a predetermined amount of time (e.g., pause length). The pause length, pause threshold, and/or decibel level of the sound of the intelligent partitioning may be adjusted according to each subscriber, video response, etc.
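  • As a hedged illustration of pause-based intelligent partitioning (assuming the audio track has already been decoded to a mono sample array; the 50 ms frame size, energy threshold, and pause length below are illustrative defaults rather than values specified by the application):

```python
import numpy as np

def pause_partition_points(samples, sample_rate, pause_len=1.0, energy_thresh=0.01):
    """Return candidate partition timestamps (seconds) wherever the audio stays
    below an energy threshold for at least `pause_len` seconds."""
    frame = int(0.05 * sample_rate)                     # 50 ms analysis frames
    n_frames = len(samples) // frame
    rms = np.array([np.sqrt(np.mean(samples[i * frame:(i + 1) * frame] ** 2))
                    for i in range(n_frames)])
    quiet = rms < energy_thresh                         # frames below the pause threshold
    points, run = [], 0
    for i, q in enumerate(quiet):
        run = run + 1 if q else 0
        if run * 0.05 >= pause_len:                     # pause long enough -> partition here
            points.append(i * frame / sample_rate)      # timestamp in seconds
            run = 0
    return points
```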
  • S 222 may function to partition a video response imprint using identification partitioning.
  • S 222 may determine a partition location of the video response imprint if a user voices a specified keyword and/or phrase or performs an action, such as a predetermined gesture indicating that a partition or break in the video response imprint should be made.
  • the user may pronounce a predetermined keyword, an expected user utterance, and/or the like before articulating their expected response.
  • a human agent in the recording may make a “thumbs up” gesture or similar gesture indicating a point or section for partitioning the video response imprint.
  • S 222 may function to partition a video response imprint using interval partitioning.
  • S 222 may function to partition a video response imprint every ‘x’ number of seconds. For example, a subscriber may determine that every 10 seconds they'd like the video response to be partitioned. Additionally, or alternatively, the video response imprint may be evenly partitioned into halves, quarters, eighths, and/or the like.
  • S 222 may function to partition a video response imprint by demarcating multiple distinct time segments of the video response imprint.
  • S 222 may include pairs of digital markers within the video response imprint in which a first marker of the pair of digital markers may indicate a beginning of a time segment (i.e., sub-video response imprint) and a second marker of the pair may indicate a termination or ending of the time segment.
  • S 222 may function to partition a video response imprint by defining each of a plurality of time segments of the video response imprint, extracting each time segment, and storing the extracted time segments independently. That is, in this second implementation, S 222 may function to break a video response imprint having multiple responses into multiple distinct sub-videos.
  • the method 200 or the like may function to implement any suitable video partitioning scheme including a partitioning scheme based on a combination of the above-described techniques or schemes for partitioning a media file, and/or the like.
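  • For illustration only, the interval-partitioning and time-segment variants described above might be sketched as follows, assuming the ffmpeg command-line tool is available on the path; the flow is a stand-in rather than the application's own tooling.

```python
import subprocess

def partition_by_interval(video_path, interval_s=10, out_prefix="imprint"):
    """Cut a video response imprint into fixed-length sub-imprints every `interval_s` seconds."""
    subprocess.run([
        "ffmpeg", "-i", video_path,
        "-f", "segment", "-segment_time", str(interval_s),
        "-c", "copy", f"{out_prefix}_%03d.mp4",
    ], check=True)

def extract_segment(video_path, start_s, end_s, out_path):
    """Extract a single demarcated time segment (a sub-video response imprint)."""
    subprocess.run([
        "ffmpeg", "-i", video_path,
        "-ss", str(start_s), "-to", str(end_s),
        "-c", "copy", out_path,
    ], check=True)
```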
  • S 220 includes S 224 , which may function to generate and/or assign one or more unique identifiers to each video response corpus and to each of the video response items/imprints within the video response corpus.
  • S 224 may function to identify or assign a global identifier to each distinct video response corpus or to each collection of video responses that may be related in content (e.g., same domain or category). Additionally, or alternatively, S 224 may function to identify or assign a unique local identifier to each video response item/imprint within a given video response corpus.
  • the global identifier may be referred to herein as one of a “Selph ID” or a “corpus identifier (ID)”.
  • the global identifier may function to identify a subject video response corpus as distinctly including video response imprints within a dialogue category or dialogue domain. That is, the global identifier may indicate an overarching category or domain of the video response content within the group.
  • S 224 may function to assign a first global identifier to a first video response corpus in which substantially all or all of the video response imprints relate to a single, first primary category, such as “air travel,” and may function to assign a second global identifier to a second video response corpus in which substantially all or all of the video response imprints relate to a single, second primary category, such as “automotive repair.”
  • the local identifier may be referred to herein as one of an “imprint identifier (ID),” a “video response identifier (ID),” or a “video response imprint ID.”
  • the local identifier may function as a sub-domain or sub-category identifier that identifies an associated video response imprint as a specific sub-topic or sub-category within a video response corpus.
  • each partition of a video response imprint may be assigned a distinct imprint ID for ease of reference, lookup, and/or publishing as a response to a user stimulus.
  • a video response corpus and a plurality of distinct video response imprints within the video response corpus may be assigned identifiers (i.e., global and local identifiers) based on a hierarchical identification structure in which a video response corpus may be assigned a global identifier corresponding to a top-level category or top-level domain and each of the video response imprints within the video response corpus may be assigned a local identifier corresponding to one or more distinct sub-level categories or sub-level domains.
  • a video response corpus may be assigned a global identifier such as “air travel”, which may function as a top-level domain, and a first video response imprint within the video response corpus may be assigned a first local identifier of “scheduling flight” and a second video response imprint within the video response corpus may be assigned a second local identifier of “flight status”.
  • each of the first local identifier of “scheduling flight” and the second local identifier of “flight status” may be sub-level categories of the top-level category of “air travel.”
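  • A small data-structure sketch of this hierarchical identification scheme is shown below; the dataclass names and fields (e.g., VideoResponseCorpus, media_uri) are hypothetical conveniences rather than terms defined by the application.

```python
from dataclasses import dataclass, field

@dataclass
class VideoResponseImprint:
    imprint_id: str            # local identifier, e.g. "scheduling flight"
    media_uri: str             # location of the human-imprinted video response
    transcript: str = ""

@dataclass
class VideoResponseCorpus:
    corpus_id: str             # global identifier / "Selph ID", e.g. "air travel"
    imprints: dict[str, VideoResponseImprint] = field(default_factory=dict)

    def add(self, imprint: VideoResponseImprint) -> None:
        self.imprints[imprint.imprint_id] = imprint

# corpus = VideoResponseCorpus("air travel")
# corpus.add(VideoResponseImprint("scheduling flight", "https://example.com/v1.mp4"))
# corpus.add(VideoResponseImprint("flight status", "https://example.com/v2.mp4"))
```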
  • either the global and/or local identifiers associated with either the video response corpus or distinct video response imprints may be implemented in tracking operations and/or processes involving or being applied to either the video response corpus or a distinct video response item/imprint.
  • either the global and/or local identifiers may be implemented for sourcing or obtaining one or more public URLs of one or more target video response imprints preferably during an interaction between a user and a human-personified virtual dialogue agent.
  • either the global and/or local identifiers may be implemented for response linking and may be electronically associated with an answer object or the like of a chatbot or virtual agent generation service.
  • system 100 and/or the method 200 may function to convert the identified (or constructed) video response corpus of S 220 into embedding values (e.g., embedded response representations).
  • S 220 includes S 226 , which may function to create a transcription of each video response item/imprint of a video response corpus.
  • the transcription of a given video response imprint may include a textual representation of an audio component or verbal response component within the given video response imprint or any media file.
  • S 226 may function to interface with a transcriber or a transcription service for automatically generating a transcription of a subject video response imprint.
  • S 226 may function to transmit, via a network and/or via an API, the subject video response imprint together with local identifier data (e.g., an imprint ID) to the transcription service that may function to create a transcription of the subject video response imprint.
  • S 226 may function to implement a transcription module or the like that may include one or more language models that may function to automatically transcribe an audio component of a subject video response imprint to a text representation.
  • S 226 may additionally or alternatively function to partition the transcription into a plurality of distinct transcriptions corresponding to the plurality of distinct responses.
  • S 226 may function to partition the transcription based on identifying one or more points of silence (e.g., gaps of text between text representations of the transcription). It shall be noted that any suitable technique for partitioning a transcription of a video response imprint may be implemented.
  • a transcription for a given video response imprint may be stored in electronic association with the given video from which the transcription was created together with the local and/or global identifier of the given video response.
  • S 230 which includes computing and storing embedding values based on the video response corpus, may function to convert or generate vector representations or text representations for each response imprint (e.g., each anchor response) of the video response corpus.
  • S 230 may function to implement a sentence or text embedding service or language model that may function to convert transcriptions of response imprints of the video response corpus into numerical-based vector representations.
  • the sentence or text embedding service or model that may be implemented for converting each transcription of each video response imprint in the video response corpus may be the same or substantially similar to an embedding service or model implemented with a pre-trained machine learning (language) model, described herein.
  • S 230 may function to map the vector representations for each response imprint in the video response corpus to an n-dimensional vector space that may be familiar, known, and/or used by the pre-trained language model.
  • the method 200 may function to implement a plurality of distinct pre-trained language models that may each include an embedding layer (i.e., a hidden layer) or implement a distinct embedding service or language model.
  • S 230 may function to compute one or more distinct sets of embedding values for the video response imprints of the video response corpus using an embedding layer of a pre-trained language model or using one or more of a plurality of distinct embedding services or models that may be used by the pre-trained language models.
  • S 230 may function to generate a distinct text representation for each of the plurality of distinct response imprints of the video response corpus.
  • S 230 may function to sequentially or individually input each response imprint of the video response corpus through an embedding service or language model to create an associated baseline embedded response representation.
  • For example, a transcription of a first response of the video response corpus (e.g., “The delivery fee is $3”) may be converted into a first embedded response representation, a transcription of a second response of the video response corpus (e.g., “We have a wide selection of vegetarian pizzas”) may be converted into a second embedded response representation distinct from the first embedded response representation, and a transcription of a third response of the video response corpus (e.g., “Your order will arrive in 30 minutes”) may be converted into a third embedded response representation distinct from the first embedded response representation and the second embedded response representation.
  • each transcription of a response of the video response corpus may be an individual input into the sentence or text embedding service to compute a corresponding individual output of an embedded response representation.
  • At least one technical benefit of an individual or discrete approach for creating an embedding representation for each response imprint of a response corpus may include an ability to specifically track a correspondence between a respective response imprint and its computed embedded representation thereby enabling a capability to specifically tune or adjust the computed embedded representation within a given multi-dimensional space for embedding values.
  • S 230 may function to input a set of transcriptions of the video response corpus of S 210 through an embedding service to create a set of embedded response representations (e.g., M_r).
  • the input into the embedding service may be the entire response corpus and the output (e.g., after processing the video response corpus through the embedding service) may be a set of baseline embedded response representations.
  • S 230 may function to implement any suitable and/or combination of suitable sentence (or text) embeddings techniques or services to compute embedded response representations. Accordingly, in the case that the identified response corpus may span across a diverse set of text representations (or vector values), S 230 may function to identify or define the range of embedding values associated with the video response imprints of the video response corpus.
  • S 230 includes S 235 , which may function to associate or map each embedded response representation (or embedded vector representation) of the video response corpus into a multi-dimensional vector space (e.g., an n-dimensional embeddings space), that, in one or more embodiments, may be graphically illustrated.
  • each vector representation of each distinct string of text or word that may define a distinct response of the video response corpus may be used as input for creating a mapping item or a point that may be positioned onto the multi-dimensional embedding space.
  • the embedded response space (e.g., the n-dimensional embedded response space) may be constructed based on mapping the embedded response representations for each response imprint of the video response corpus.
  • S 230 may function to map each embedded response representation that may define a coordinate or vector onto the embedded response space.
  • each of the embedded representations may be linked, coupled, and/or associated with the anchor response and/or the system-displayed response.
  • responses of the video response corpus that may share one or more similar characteristics (e.g., response categories, semantically similar responses within a similarity threshold value) or that may have semantically similar meanings may be mapped (or clustered) proximate to one another, in the embedded response space, when compared to unrelated (e.g., dissimilar) responses.
  • each of the video response corpus, the embeddings values for the video response corpus, and/or the n-dimensional mapping of the embeddings values of the video response corpus, which may sometimes be referred to herein as the “dialogue agent response collection”, may be stored in association with one another when configured for a specific virtual dialogue agent.
  • the dialogue agent response collection may be stored by an instant-virtual agent generation service or the like for creating chatbots or virtual dialogue agents.
  • the method 300 for implementing a machine learning-based human-personified virtual dialogue agent includes identifying user stimulus data S 310 , generating a machine learning-based response inference S 320 , computing a video response imprint (ID) based on the machine learning-based response inference S 330 , selecting a video response imprint based on a video response imprint ID S 340 , and implementing a human-personified virtual agent based on the video response S 350 .
  • S 310 which includes identifying user stimulus data, may function to identify, collect, and/or receive user input data in the form of a user utterance or user stimulus towards one or more human-personified virtual dialogue agents deployed in a production environment of a subscriber. It shall be noted that one or more of the virtual agents deployed in a production environment of a subscriber may be associated with a distinct video response corpus previously configured with a chat agent generation service or the like.
  • S 310 may function to receive a user input or user stimulus via a user interface (e.g., an interface of the virtual dialogue agent) accessible by or provided to the user.
  • the interface of the virtual dialogue agent may be accessible by a plurality of channels, including but not limited to, a mobile computing device, a web browser (having a website displayed therein), a social network interface, or any other suitable channel or client interface/device for deploying the virtual dialogue agent.
  • the user utterance or user stimulus may include, but should not be limited to, speech or utterance input, textual input, gesture input, touch input, character input, numerical input, image input and/or any other suitable type of input.
  • the user utterance or user stimulus identified, collected, and/or received by S 310 may be of a single dialogue intent or a plurality of dialogue intents. Additionally, or alternatively, the identified, collected, and/or received user stimulus or user utterance may relate to a single dialogue domain or a plurality of dialogue domains.
  • S 310 may function to identify, receive, and/or collect the user stimulus or user utterance and transmit, via a computer network or the like, the user stimulus or user utterance to an embedding service that may convert or translate the user stimulus or user utterance into an embedded representation consumable by one or more pre-trained language processing models for producing one or more response inferences or response predictions (“input embedding”).
  • S 310 may function to directly pass the user input or user stimulus, in a raw state, to one or more of the pre-trained language processing models that may include an embedding layer used to generate embedding values for input into one or more inference layers of the models for producing one or more response inferences.
  • the embedded or vector representation associated with a user utterance or user stimulus may assist with providing the system 100 , the method 200 , and/or the method 300 the capability of understanding a relational strength between the embedded representation of the user stimulus or the user utterance and the embedded response representations of the video response corpus for intelligently aiding and/or improving a conversational experience.
  • S 320 which includes generating a response inference, may function to provide the user communication or user stimulus as input to one or more pre-trained language models.
  • a chatbot generation service or the like may function as an intermediary between a client interface implementing a virtual dialogue agent (e.g., a subscriber system) and one or more remote or cloud-based systems implementing the one or more pre-trained language models.
  • S 320 may function to provide or transmit the user stimulus from the client interface (i.e., the virtual dialogue interface) together with a global identifier or Selph ID of a subject virtual dialogue agent to the chatbot generation service and in response to a receipt of the user stimulus, the chatbot generation service may function to directly interface with the one or more pre-trained language models for generating at least one response inference based on the user stimulus.
  • S 320 may function to operably communicate with (e.g., access, retrieve, or the like) one or more of the plurality of pre-trained language processing models identified within the system 100 and/or by the method 200 or the like.
  • S 320 may digitally communicate or digitally interface with one or more of a plurality of language processing models via an application programming interface (API) that may programmatically integrate both the system 100 (implementing the method 200 and method 300 ) and foreign or third-party systems implementing the one or more pre-trained language processing models.
  • S 320 may function to access one or more of the plurality of pre-trained language processing models by requesting or generating one or more API calls that include user stimulus data to APIs of one or more of the pre-trained language processing models for producing one or more inferred responses to the user stimulus.
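  • Purely as an illustration of such an API call, the intermediary service might forward the user stimulus together with the Selph ID of the target virtual dialogue agent to a third-party pre-trained language model behind an HTTP endpoint; the endpoint, payload keys, and response shape below are all hypothetical.

```python
import requests

def infer_response_embedding(user_stimulus: str, selph_id: str, api_url: str, api_key: str):
    """Request an embedding response inference from a (hypothetical) remote model API."""
    resp = requests.post(
        api_url,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"selph_id": selph_id, "stimulus": user_stimulus},
        timeout=10,
    )
    resp.raise_for_status()
    # hypothetical payload: {"embedding_response_inference": [ ... floats ... ]}
    return resp.json()["embedding_response_inference"]
```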
  • the plurality of pre-trained language processing models may be pre-developed and/or pre-trained, whereby each of the plurality of pre-trained language processing models may have corresponding configured parameters (e.g., learned weights), and the parameters (or weights) of the plurality of language processing models may vary between the distinct pre-trained language models.
  • each of the distinct language processing models may process the user stimulus or the user utterance differently, may use distinct embedding models to generate a distinct embedded query representation, and may compute a predicted (e.g., embeddings) response inference that may vary from other language processing models.
  • S 320 may function to process the user stimulus or user utterance through a plurality of language processing models and each of the language processing models of the plurality of language processing models may be associated with a distinct embeddings model that generates a distinct user stimulus representation.
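  • A brief sketch of fanning the raw user stimulus out to several pre-trained language models, each paired with its own embedding model, is shown below; models is a hypothetical list of (embed_fn, infer_fn) pairs, and the per-model inferences can feed the ensemble selection sketched earlier.

```python
def collect_inferences(user_stimulus, models):
    """Return one embedding response inference per (embedding model, language model) pair."""
    inferences = []
    for embed_fn, infer_fn in models:
        stimulus_embedding = embed_fn(user_stimulus)      # distinct user stimulus representation
        inferences.append(infer_fn(stimulus_embedding))   # distinct embedding response inference
    return inferences
```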
  • S 330 which includes computing a video response imprint (ID) based on the machine learning-based response inference, may function to intelligently identify a response to the user stimulus or user utterance based on computationally (e.g., spatially, numerically, etc.) evaluating one or more predicted responses or response inferences of the one or more pre-trained language models against the embedded response representations of the video response corpus for a subject virtual dialogue agent, as shown by way of example in FIG. 6 .
  • S 330 may function to evaluate each embedded response representation or a subset of the embedded response representations of the video response corpus with reference to each response inference computed by the plurality of pre-trained language processing models based on the user stimulus.
  • the embedded response representations of the video response corpus and the embedded representations of the user stimulus may be normalized to one another.
  • S 330 may function to compute a similarity metric between an inferred response and one or more embedded response representations of the video response corpus.
  • the computation of the similarity metric may include mathematically computing a distance value or a distance similarity metric between each embedded response representation of the video response corpus and each inferred response representation produced based on the user stimulus to thereby intelligently identify an optimal or most probable response based on the quantitative analysis of the mathematically computed distance therebetween. It shall be noted that a shorter distance between an embedded response representation and an inferred response representation may express that the two embedded representations signify a higher degree of similarity, and vice versa.
  • each response imprint of the video response corpus may be associated with a corresponding embedded response representation, preferably mapped to an embedded response space, and the user stimulus may be associated with one or more inference responses or inference response vectors.
  • the inference response vector may sometimes be mapped to an n-dimensional space and compared or evaluated against a mapping of the embedding values of the video response corpus for a subject virtual dialogue agent.
  • the inference response vector may be mapped directly to the n-dimensional space of the embedding values of the video response corpus. Accordingly, S 330 may function to mathematically compute, for each inference response to the user stimulus produced by the one or more of the pre-trained language processing models, which embedded response representation of the video response corpus provides the most likely or most probable response to the user utterance or user stimulus through a similarity (or distance) analysis.
  • S 330 may perform a quantitative measurement that computes a distance between each embedded response representation of the video response corpus and each inferred response representation that may be computed based on an input of the user stimulus. Accordingly, S 330 may further function to identify or select which response from the plurality of responses of the video response corpus may be the most likely response to the user stimulus by identifying the embedded response representation with the smallest (or shortest) distance to the inferred response representation.
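  • One possible realization of the distance-based evaluation and smallest-distance selection described above, assuming the embedded representations are available as NumPy vectors in a shared or normalized embedding space, is sketched below; the choice of Euclidean distance versus cosine similarity is an implementation detail rather than a requirement of the method.

```python
import numpy as np

def nearest_response(inferred: np.ndarray, corpus_embeddings: np.ndarray) -> tuple[int, float]:
    """Return the index of the embedded response representation closest to
    the inferred response representation, together with that distance.

    inferred: shape (d,) embedding of the model's inferred response
    corpus_embeddings: shape (n, d) embeddings of the video response corpus
    """
    # Euclidean distance between the inferred response and every corpus entry;
    # a smaller distance signifies a higher degree of similarity.
    distances = np.linalg.norm(corpus_embeddings - inferred, axis=1)
    best_index = int(np.argmin(distances))
    return best_index, float(distances[best_index])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity is an equally common choice of similarity metric."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```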
  • the selected or optimal response (vector, item) to the user stimulus may be the embedded response representation that occurs with the highest frequency based on the results of the similarity analysis performed for multiple, distinct inferred responses of the plurality of distinct pre-trained language models.
  • S 330 may function to select the first response, R_ 1 , as the most likely or most probable response to the user stimulus since there is a higher frequency of similarity mapping between the inferred responses of the pre-trained language models and a given embedded representation of a response imprint of the video response corpus of a given virtual dialogue agent.
  • the inferred response representation generated by each of the plurality of pre-trained language processing models may be averaged together and the averaged inferred response representation may be used for the similarity analysis computations.
  • S 330 may function to source three (3) response inferences or response predictions from three distinct pre-trained machine learning models based on a given user stimulus.
  • S 330 may function to compute an average inference vector value based on the 3 response inferences and compute a similarity metric between the average vector value and one or more embedded values of the video response corpus of a subject virtual dialogue agent.
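  • Both aggregation strategies described above, selecting the corpus response identified with the highest frequency across the pre-trained models and averaging the inferred response representations before the similarity analysis, may be sketched as follows under the same assumption of NumPy embeddings; this is illustrative only and not a prescribed implementation.

```python
import numpy as np
from collections import Counter

def select_by_frequency(inference_vectors, corpus_embeddings):
    """Pick the corpus response most frequently returned as nearest
    neighbor across the inferences of several pre-trained models."""
    votes = [int(np.argmin(np.linalg.norm(corpus_embeddings - v, axis=1)))
             for v in inference_vectors]
    return Counter(votes).most_common(1)[0][0]

def select_by_average(inference_vectors, corpus_embeddings):
    """Average the inference vectors first, then find the closest
    embedded response representation to the averaged vector."""
    mean_vector = np.mean(np.stack(inference_vectors), axis=0)
    return int(np.argmin(np.linalg.norm(corpus_embeddings - mean_vector, axis=1)))
```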
  • S 330 may function to identify the most probable or most likely (e.g., best or optimal) video response imprint for responding to the user stimulus.
  • S 330 may function to identify a text-based transcript or the like associated with the most probable video response imprint together with an associated local identifier (e.g., imprint identifier) and return a copy of the text-based transcript of a video response imprint, the imprint ID, and/or the Selph ID to a source of the user stimulus (e.g., subscriber system, client interface, etc.).
  • S 340 which includes selecting a video response imprint based on a video response imprint ID, may function to generate a response via the virtual dialogue agent based on the identified text-based transcript and/or imprint ID of a video response imprint.
  • S 340, in response to receiving or identifying the text-based transcript of the video response imprint with a corresponding local identifier or imprint ID, may function to evaluate one or more of the text-based transcript and imprint ID against the one or more video response imprints of a video response corpus. In such an embodiment, based on the evaluation, S 340 may function to identify a video response imprint and/or associated time segment or partition of a video response imprint for responding to the user stimulus.
  • S 340 may function to evaluate and/or compare the identified imprint ID along the sequence of video response imprints or time segments and identify a video response imprint or time segment that matches the identified imprint ID.
  • S 340 may function to individually evaluate and/or compare the identified imprint ID to each video response imprint and an associated imprint ID to identify a video response imprint having an imprint ID that matches the identified imprint ID.
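  • A minimal data-structure sketch of this imprint ID matching step is shown below; the field names and types are illustrative assumptions rather than identifiers defined by this disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VideoResponseImprint:
    imprint_id: str          # local identifier
    selph_id: str            # global identifier of the parent corpus
    start_seconds: float     # beginning of the time segment, if partitioned
    end_seconds: float       # end of the time segment, if partitioned
    media_url: str           # location of the video response imprint
    transcript: str          # text-based transcript of the response

def find_imprint(imprint_id: str, corpus: list[VideoResponseImprint]) -> Optional[VideoResponseImprint]:
    """Walk the sequence of video response imprints and return the one
    whose imprint ID matches the identified imprint ID."""
    for imprint in corpus:
        if imprint.imprint_id == imprint_id:
            return imprint
    return None
```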
  • S 350, which includes implementing a human-personified virtual agent based on the video response, may function to generate and/or provide a response to the user stimulus/user utterance by loading and/or playing a video response imprint via the human-personified virtual dialogue agent that best (most likely) responds to the user stimulus/user utterance, as shown by way of example in FIG. 7 .
  • S 350 may function to identify and/or collect a target video response imprint together with an associated text-based transcript of the target video response imprint and load or transmit the video response imprint and transcript to a user system (client interface).
  • the user system may include a display, such as a computer or mobile device display panel, that includes a window or an interface object at which the video response imprint may be communicated and/or played.
  • S 350 may function to generate a response via the human-personified virtual dialogue agent that includes executing the video response imprint together with the text-based transcript.
  • executing the text-based transcript may include superimposing or overlaying the content of the text-based transcript onto the video response imprint. In this way, a user may optionally follow the video response imprint based on viewing the text-based transcript of the video response imprint.
  • S 350 may only function to display the text-based transcript of the video response imprint if an accessibility setting of the human-personified virtual dialogue agent is toggled on. Conversely, in some embodiments, S 350 may only function to display the video response imprint (and not the text-based transcript of the video response imprint) if the accessibility setting of the human-personified virtual dialogue agent is toggled off.
  • the identified imprint ID may be evaluated and/or matched against a proxy response item or a display response item which may be different from an original video response imprint (anchor answer) associated with the identified imprint ID. That is, in some embodiments, an original video response imprint may be tethered to or otherwise associated with a proxy response item that may be presented in lieu of the original video response imprint.
  • S 350 may function to play (via stream, download, and/or the like) any combinations of a video, audio, transcript, and/or the like of a video response imprint.
  • Embodiments of the system and/or method can include every combination and permutation of the various system components and the various method processes, wherein one or more instances of the method and/or processes described herein can be performed asynchronously (e.g., sequentially), concurrently (e.g., in parallel), or in any other suitable order by and/or using one or more instances of the systems, elements, and/or entities described herein.
  • the preferred embodiments may include every combination and permutation of the implementations of the systems and methods described herein.

Abstract

A system and method for implementing a machine learning-based virtual dialogue agent includes computing an input embedding based on receiving a user input; computing, via a pre-trained machine learning language model, an embedding response inference based on the input embedding; searching, based on the embedding response inference, a response imprintation embedding space that includes a plurality of distinct embedding representations of potential text-based responses to the user input, wherein each of the plurality of distinct embedding representations is tethered to a distinct human-imprinted media response, and searching the response imprintation embedding space includes: searching the response imprintation embedding space based on an embedding search query, and returning a target embedding representation from the response imprintation embedding space based on the searching of the response imprintation embedding space; and executing, via a user interface of the machine learning-based virtual dialogue agent, a human-imprinted media response tethered to the target embedding representation.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Application number 63/222,169, filed on 15 Jul. 2021, which is incorporated in its entirety by this reference.
  • TECHNICAL FIELD
  • The inventions herein relate generally to the virtual assistant field, and more specifically to a new and useful system and method for generating and deploying human-personified artificially intelligent virtual agents using machine learning.
  • BACKGROUND
  • Modern virtual assistants may typically be employed to perform various tasks or services based on an interaction with a user. Typically, a user interacting with a virtual assistant may pose a question, a message, or otherwise submit an input to the virtual assistant, to which the virtual assistant may provide a response or perform some action as a result of the user input. Many of these virtual assistants may typically be portrayed as a user interface object, such as a chat window, or may include an animated computer object that lacks real human features and real human mannerisms. As a result, a significant class of users fail to fully engage with the virtual assistant and may continue to prefer interactions that involve a real human agent.
  • Therefore, the inability of chatbot or virtual assistant systems to relate to human users may continue to act as a barrier to mass adoption of often helpful automated chat systems.
  • However, a virtual assistant or automated interaction system personified with real human responses may create a more personable and engaging conversation, which may increase the likelihood of a user's satisfaction with its responses and interactions and, therefore, reduce interaction loads on often limited real human agents.
  • Therefore, there is a need in the virtual assistant field to generate human-personified virtual assistants that deliver interactive automated human engagement with relatable personal features (e.g., a human, voice, human voice, human mannerisms, etc.). The embodiments of the present application described herein provide technical solutions that address, at least, the needs described above, as well as the deficiencies of the state of the art.
  • BRIEF SUMMARY OF THE INVENTION(S)
  • In some embodiments, a method of implementing a human video-personified machine learning-based virtual dialogue agent includes: computing an input embedding based on receiving a user input; computing, via a pre-trained machine learning language model, an embedding response inference based on the input embedding; searching, based on the embedding response inference, a response imprintation embedding space that includes a plurality of distinct embedding representations of potential text-based responses to the user input, wherein each of the plurality of distinct embedding representations is tethered to a distinct human-imprinted media response, and searching the response imprintation embedding space includes: (i) defining an embedding search query using the embedding response inference as a search parameter, (ii) searching the response imprintation embedding space based on the embedding search query, and (iii) returning a target embedding representation from the response imprintation embedding space based on the searching of the response imprintation embedding space; and executing, via a user interface of the human video-personified machine learning-based virtual dialogue agent, a human-imprinted media response tethered to the target embedding representation.
  • In some embodiments, the pre-trained machine learning language model computes the embedding response inference based on an embedding space different from the response imprintation embedding space. In some embodiments, the method includes normalizing the embedding response inference to the response imprintation embedding space contemporaneously with defining the embedding search query.
  • In some embodiments, searching the response imprintation embedding space includes: (1) computing a distance between the embedding response inference and each of the plurality of distinct embedding representations, and (2) determining which embedding representation of the plurality of distinct embedding representations is closest to the embedding response inference, and the target embedding representation returned from the response imprintation embedding space corresponds to the embedding representation closest to the embedding response inference.
  • In some embodiments, searching the response imprintation embedding space includes: (1) computing a similarity metric between the embedding response inference and each of the plurality of distinct embedding representations, and (2) determining which embedding representation of the plurality of distinct embedding representation is most similar to the embedding response inference, and the target embedding representation returned from the response imprintation embedding space corresponds to the embedding representation most similar to the embedding response inference.
  • In some embodiments, the response imprintation embedding space relates to an n-dimensional vector space.
  • In some embodiments, the method includes contemporaneously with computing the embedding response inference via the pre-trained machine learning language model: computing, via one or more additional pre-trained machine learning language models, one or more additional embedding response inferences based on the input embedding, wherein: searching the response imprintation embedding space is based on the embedding response inference and the one or more additional embedding response inferences, searching the response imprintation embedding space further includes: defining one or more additional embedding search queries based on the one or more additional embedding response inferences; and searching the response imprintation embedding space based on the one or more additional embedding search queries, and the target embedding representation returned from the response imprintation embedding space is based on searching the response imprintation embedding space with the embedding search query and the one or more additional embedding search queries.
  • In some embodiments, the one or more additional embedding response inferences include a first additional embedding response inference and a second additional embedding response inference, the one or more additional embedding search queries include a first additional embedding search query based on the first additional embedding response inference and a second additional embedding search query based on the second additional embedding response inference, searching the response imprintation embedding space based on the first additional embedding search query includes identifying, from the plurality of distinct embedding representations, an embedding representation most similar to the first additional embedding response inference, searching the response imprintation embedding space based on the second additional embedding search query includes identifying, from the plurality of distinct embedding representations, an embedding representation most similar to the second additional embedding response inference, and searching the response imprintation embedding space based on the embedding search query includes identifying, from the plurality of distinct embedding representations, an embedding representation most similar to the embedding response inference.
  • In some embodiments, searching the response imprintation embedding space further includes: identifying an embedding representation most frequently identified by the embedding search query and the one or more additional search queries; and determining which embedding representation of the plurality of distinct embedding representations is closest to the embedding representation most frequently identified by the embedding search query and the one or more additional search queries, and the target embedding representation returned from the response imprintation embedding space corresponds to the embedding representation determined to be closest to the embedding representation most frequently identified by the embedding search query and the one or more additional search queries.
  • In some embodiments, searching the response imprintation embedding space further includes: computing an average embedding representation based on embedding representations identified by the embedding search query and the one or more additional search queries; and determining which embedding representation of the plurality of distinct embedding representations is closest to the average embedding representation, and the target embedding representation returned from the response imprintation embedding space corresponds to the embedding representation determined to be closest to the average embedding representation.
  • In some embodiments, the human-imprinted media response tethered to the target embedding includes a video component, and executing the human-imprinted media response includes: playing the video component of the human-imprinted media response at the user interface of the human video-personified machine learning-based virtual dialogue agent; and displaying, in association with the playing of the video component, a transcript of the human-imprinted media response.
  • In some embodiments, the user input is received via the user interface of the human video-personified machine learning-based virtual dialogue agent, and the user input comprises textual input that relates to one or more dialogue intents.
  • In some embodiments, a method of implementing a human-personified machine learning-based virtual dialogue agent includes: computing an input embedding based on a user input; computing, via a pre-trained machine learning language model, an embedding response inference based on the input embedding; searching, based on the embedding response inference, a response imprintation embedding space that includes a plurality of distinct embedding representations of potential text-based responses to the user input, wherein each of the plurality of distinct embedding representations is tethered to a distinct human-imprinted media response, and searching the response imprintation embedding space includes: (i) defining an embedding search query using the embedding response inference as a search parameter, and (ii) executing the embedding search query to search the response imprintation embedding space, wherein the embedding search query returns a target embedding representation from the response imprintation embedding space; and executing, via a user interface of the human-personified machine learning-based virtual dialogue agent, a human-imprinted media response tethered to the target embedding representation.
  • In some embodiments, the embedding search query returns the target embedding representation because the target embedding representation is closer to the embedding response inference as compared to other embedding representations in the response imprintation embedding space.
  • In some embodiments, the response imprintation embedding space relates to a multi-dimensional vector space. In some embodiments, the method further includes before searching the response imprintation embedding space, constructing the response imprintation embedding space, wherein constructing the response imprintation embedding space includes: identifying a human-imprinted media response corpus that includes a plurality of distinct human-imprinted media responses to likely user input; generating, via a transcriber, a text-based transcription for each of the plurality of distinct human-imprinted media responses; providing, as input to a pre-trained machine learning language model, the text-based transcription generated for each of the plurality of distinct human-imprinted media responses; computing, by the pre-trained machine learning language model, an embedding representation for each of the plurality of distinct human-imprinted media responses based on the text-based transcription generated for each of the plurality of distinct human-imprinted media responses; mapping the embedding representation computed for each of the plurality of distinct human-imprinted media responses to the multi-dimensional vector space.
  • In some embodiments, each of the plurality of distinct human-imprinted media responses includes an audio/video (AV) component.
  • In some embodiments, the user input is received via the user interface of the human-personified machine learning-based virtual dialogue agent, and executing the human-imprinted media response tethered to the target embedding representation includes playing the human-imprinted media response.
  • In some embodiments, in accordance with a determination that an accessibility setting of the human-personified machine learning-based virtual dialogue agent is toggled on: forgoing playing the human-imprinted media response; and displaying a text-based transcription of the human-imprinted media response at the user interface of the human-personified machine learning-based virtual dialogue agent.
  • In some embodiments, a method of implementing a fast-generated virtual dialogue agent includes: receiving, via a web-enabled virtual dialogue agent interface, user stimuli; converting, by a computer implementing one or more pre-trained language machine learning models, the user stimuli to a stimuli embeddings inference; computing a response inference based on the stimuli embeddings inference, wherein computing the response inference includes: performing an embeddings search for a response embeddings of a plurality of distinct response embeddings based on the stimuli embeddings inference; and generating an automated response to the user stimuli, via the web-enabled virtual dialogue agent interface, based on the response embeddings.
  • In some embodiments, the embeddings search searches a multi-dimensional space and identifies which of the plurality of distinct response embeddings is closest to the stimuli embeddings inference.
  • BRIEF DESCRIPTION OF THE FIGURES
  • FIG. 1 illustrates a schematic representation of a system in accordance with one or more embodiments of the present application;
  • FIG. 2 illustrates an example method of configuring a human-personified virtual agent in accordance with one or more embodiments of the present application;
  • FIG. 3 illustrates an example method of implementing a human-personified virtual agent in accordance with one or more embodiments of the present application;
  • FIG. 4 illustrates a schematic representation for creating a single response video imprint in accordance with one or more embodiments of the present application;
  • FIG. 5 illustrates a schematic representation for creating a multiple response video imprint in accordance with one or more embodiments of the present application;
  • FIG. 6 illustrates a schematic representation of processing a response via a human-personified virtual agent in accordance with one or more embodiments of the present application; and
  • FIG. 7 illustrates a schematic representation of interfacing with a human-personified virtual agent in accordance with one or more embodiments of the present application.
  • DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • The following description of the preferred embodiments of the present application is not intended to limit the inventions to these preferred embodiments, but rather to enable any person skilled in the art to make and use these inventions.
  • 1. System for Generating and Deploying a Machine Learning-Based Virtual Agent
  • As shown in FIG. 1 , a system 100 that may configure and deploy a machine learning-based human-personified virtual agent may include a response development module 110, an embedding service 120, a model accessibility/development engine 130, a machine learning-based virtual agent model 140, a dialogue response collection module 150, and an intelligent machine learning-based virtual agent 160.
  • In one or more preferred embodiments, the system 100 may function to configure and/or deploy the intelligent machine learning-based human-personified virtual agent 160 to enable an automated conversational experience between a user and a human-imprinted virtual agent of a subscriber.
  • 1.1 Response Development Module
  • In one or more embodiments, a response development module 110 may be in digital communication with a response development interface (or client interface). For instance, the response development module 110 may be configured to ingest (or identify) responses inputted by a subscriber, at the response development interface, to construct a corpus of responses (e.g., a response corpus).
  • In one or more embodiments, the response development module 110 may interface with a subscriber that may provide a source of knowledge (or a source of responses) for a machine learning-based virtual agent 160. Accordingly, in one or more embodiments, the response development interface may be configured to allow for manual and/or bulk upload of responses or a response corpus (e.g., a video response corpus) that may be identifiable to the response development module 110. In some embodiments, the response development module 110 may include a response development interface that may be configured to allow a subscriber to manually input a string of text that may define an individual response associated with one or more video response imprints; however, in alternative embodiments, the response development interface may also be configured to accept, as input, documents, files, media files, or the like comprising a collection of responses in bulk.
  • Additionally, or alternatively, in one or more embodiments, the response development module 110 may include and/or be in operable communication with an image capturing device or video capturing device that enables a capturing of one or more video response imprints/imprints for building a video response corpus.
  • Additionally, or alternatively, in some embodiments, the response development module 110 may include and/or be in operable communication with one or more video response handling modules that may function to partition video response imprints according to any suitable known partitioning or segmentation techniques, including the one or more video partitioning and segmentation techniques described herein.
  • In one or more embodiments, the machine learning-based virtual agent 160 (which may also be referred to herein as a “machine learning-based virtual assistant” or “machine learning-based human-personified virtual agent”) may communicate with an intermediary service that may store the text-based transcriptions of a video response corpus to rapidly identify one or more most likely or most probable responses to user stimulus based on one or more inferred responses of one or more pre-trained machine learning language models.
  • 1.2 Embedding Service
  • In one or more embodiments, an embedding service 120 may preferably function to receive text-based transcriptions of a video response corpus as input and output an embedded response representation for each response (or response imprint) of the video response corpus. In some embodiments, the embedding service may be a sentence/word (or text) embeddings service that may be configured to compute embedded response representations.
  • Additionally, or alternatively, the embedding service 120 may function to generate an embedded response space that may map each of the computed embedded response representations associated with a corresponding response (or response imprint) of the response corpus to the embedded response space. In one or more embodiments, the embedded response space may function to graphically associate (or cluster) semantically similar responses closer to one another than unrelated (or dissimilar) responses.
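  • For concreteness, an embedding service of the kind described here could be approximated with an off-the-shelf sentence embedding model; the sketch below uses the open-source sentence-transformers library and a particular model name purely as assumed examples, with sample transcripts drawn from the response examples appearing later in this description.

```python
# A minimal sketch of an embedding service; the disclosure does not
# prescribe a particular embedding model or library.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

transcripts = [
    "The delivery fee is $3",
    "We have a wide selection of vegetarian pizzas",
    "Your order will arrive in 30 minutes",
]

# One embedded response representation per response imprint transcript,
# forming the basis of the embedded response space.
embedded_response_space = model.encode(transcripts, normalize_embeddings=True)
print(embedded_response_space.shape)  # e.g., (3, 384) for this assumed model
```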
  • 1.3 Model Accessibility/Development Engine
  • In one or more embodiments, the model accessibility/development engine 130 may preferably function to store and/or at least be capable of accessing a plurality of pre-trained and/or pre-developed language processing models. In some embodiments, each of the plurality of language processing models may be pre-developed and/or pre-trained for reading, understanding, and interpreting human language, and/or making predictions based on user inputs or user stimuli.
  • Additionally, in some embodiments, the model accessibility/development engine 130 may store and/or identify the baseline embedded response representations computed by the embedding service 120 to identify and/or select one or more applicable pre-trained language processing models based, in part, on the embedding values. In some embodiments, an algorithmic structure of the machine learning virtual agent model 140 underlying the virtual dialogue agent 160 may be the entirety of the plurality of accessed pre-trained language processing models and/or the stored language processing models outputted by the model accessibility/development engine 130.
  • In a preferred embodiment, the machine learning virtual agent model 140 that may be accessed, generated, and/or outputted by the model accessibility/development engine 130 may be capable of predicting and/or inferring responses based on user input.
  • Additionally, or alternatively, the model accessibility/development engine 130 may implement one or more ensembles of pre-trained or trained machine learning models. The one or more ensembles of machine learning models may employ any suitable machine learning including one or more of: supervised learning (e.g., using logistic regression, using back propagation neural networks, using random forests, decision trees, etc.), unsupervised learning (e.g., using an Apriori algorithm, using K-means clustering), semi-supervised learning, reinforcement learning (e.g., using a Q-learning algorithm, using temporal difference learning), adversarial learning, and any other suitable learning style. Each module of the plurality can implement any one or more of: a regression algorithm (e.g., ordinary least squares, logistic regression, stepwise regression, multivariate adaptive regression splines, locally estimated scatterplot smoothing, etc.), an instance-based method (e.g., k-nearest neighbor, learning vector quantization, self-organizing map, etc.), a regularization method (e.g., ridge regression, least absolute shrinkage and selection operator, elastic net, etc.), a decision tree learning method (e.g., classification and regression tree, iterative dichotomiser 3, C4.5, chi-squared automatic interaction detection, decision stump, random forest, multivariate adaptive regression splines, gradient boosting machines, etc.), a Bayesian method (e.g., naïve Bayes, averaged one-dependence estimators, Bayesian belief network, etc.), a kernel method (e.g., a support vector machine, a radial basis function, a linear discriminant analysis, etc.), a clustering method (e.g., k-means clustering, density-based spatial clustering of applications with noise (DBSCAN), expectation maximization, etc.), a bidirectional encoder representation from transformers (BERT) for masked language model tasks and next sentence prediction tasks and the like, variations of BERT (i.e., ULMFiT, XLM UDify, MT-DNN, SpanBERT, RoBERTa, XLNet, ERNIE, KnowBERT, VideoBERT, ERNIE BERT-wwm, MobileBERT, TinyBERT, GPT, GPT-2, GPT-3, GPT-4 (and all subsequent iterations), ELMo, content2Vec, and the like), an association rule learning algorithm (e.g., an Apriori algorithm, an Eclat algorithm, etc.), an artificial neural network model (e.g., a Perceptron method, a back-propagation method, a Hopfield network method, a self-organizing map method, a learning vector quantization method, etc.), a deep learning algorithm (e.g., a restricted Boltzmann machine, a deep belief network method, a convolution network method, a stacked auto-encoder method, etc.), a dimensionality reduction method (e.g., principal component analysis, partial least squares regression, Sammon mapping, multidimensional scaling, projection pursuit, etc.), an ensemble method (e.g., boosting, bootstrapped aggregation, AdaBoost, stacked generalization, gradient boosting machine method, random forest method, etc.), and any suitable form of machine learning algorithm. Each processing portion of the system 100 can additionally or alternatively leverage: a probabilistic module, heuristic module, deterministic module, or any other suitable module leveraging any other suitable computation method, machine learning method or combination thereof. However, any suitable machine learning approach can otherwise be incorporated in the system 100. Further, any suitable model (e.g., machine learning, non-machine learning, etc.) may be implemented in the various systems and/or methods described herein.
  • 1.4 Dialogue Response Collection Module
  • In one or more embodiments, the dialogue response collection module 150 may preferably function as the response repository for the machine learning-based virtual agent 160. Accordingly, in one or more preferred embodiments, the response collection module 150 may be configured to collect and/or store the constructed response corpus generated by the response development module 110 and the embedded response representations of the response corpus computed by the embedding service 120.
  • Additionally, in one or more preferred embodiments, the response collection module 150 may be combinable (or associated) with the selected or the identified machine learning virtual agent model(s) 140 (e.g., the pre-trained language processing models) outputted by the model accessibility/development engine 130 to form the underlying structure of the virtual dialogue agent 160.
  • 2. Method for Configuring a Human-Personified Machine Learning Virtual Agent
  • As shown in FIG. 2 , the method 200 for configuring a machine learning-based human-personified virtual agent includes creating a video response corpus S210, intelligently processing the video response corpus S220, and computing and storing embedding values based on the video response corpus S230. The method 200 optionally includes creating a mapping of embedded response representations of the video response corpus S235.
  • 2.10 Creating a Video Response Corpus|Media Corpus
  • S210, which includes identifying and/or creating a video response corpus, may function to create a plurality of expected and/or desired video responses to user stimulus or user utterances to a virtual agent. In a preferred embodiment, video responses may be created and/or obtained from any suitable source including, but not limited to, human agent responses, manuscripts, transcriptions, any video storage database, and/or the like. An expected and/or desired video response may sometimes be referred to herein as a “video response item” or a “video response imprint”. A video response imprint preferably includes a recording or imprintation of a human providing a response to a query, utterance, user communication, or the like.
  • In a first implementation, a video response imprint may be created by an abridged recording, as shown by way of example in FIG. 4 . In such implementation, a human agent or a subscriber may create and/or upload one or more video response imprints that may include one or more short or abridged videos of themselves and/or another person responding to an expected user stimulus and/or utterance. Further, in such implementation, an “abridged recording” or “single response recording” as referred to herein preferably relates to a video recording response imprint that includes a single expected or desired response to user stimulus.
  • In a second implementation, video response imprints may be obtained by an extended recording, as shown by way of example in FIG. 5 . In such implementation, subscribers may create and/or upload video response imprints of recordings from a longer or extended video of themselves and/or another person responding to one or more expected user stimuli and/or utterances. Additionally, or alternatively, as will be further described below (e.g., in S 222 ), longer video response imprints may be partitioned into smaller video response (sub-)imprints. Further, in such implementation, an “extended recording,” as referred to herein, preferably relates to a video recording response imprint that includes multiple expected or desired responses, preferably that each relate to a single dialogue category or dialogue domain.
  • In either of the implementations described above, an image capturing device (e.g., a video recording camera) may be used by a human agent to create a video response imprint that includes a video recording of themselves or of another human person or agent providing one or more responses for creating a response corpus of a target virtual agent.
  • Additionally, or alternatively, in one variant implementation, S210 may function to create a video response corpus which may include a plurality of distinct video response imprints or a multi- or single video response imprint that may be pre-recorded. That is, in some embodiments, rather than creating the video response corpus by making one or more human-based or human-imprinted recordings with expected or desired response to user stimuli, S210 may function to search and/or source any available repository (e.g., the Internet, YouTube, Google, etc.) having human-based pre-recordings that may include a desired response or an expected response to a user stimulus.
  • It shall be recognized that in some embodiments, the method 200, the method 300 or the like may function to implement any suitable combination of the above-described configuration parameters to create a media file, video response imprint, and/or the like.
  • 2.20 Video Response Corpus Processing
  • S220, which includes intelligently processing the video response corpus, may function to intelligently process a video response corpus to enable a consumption of the content therein for a machine learning-based virtual agent. In one or more embodiments, pre-processing the video response corpus may include partitioning or chunking one or more video response imprints, extracting audio features, and/or transcribing audio features into a consumable or prescribed format (e.g., a textual format), such as an input format suitable for a machine learning virtual agent generation component of a service or system implementing one or more steps of the method 200.
  • 2.22 Partitioning a Video Response Imprint
  • Optionally, or additionally, S220 includes S222, which may function to partition or chunk a video response imprint of the video response corpus. In one or more embodiments, S222 may function to identify that partitioning of a subject video response imprint may be required based on the subject video response imprint having therein multiple distinct responses satisfying a partitioning threshold (e.g., a minimum number of responses to one or more stimuli). Additionally, or alternatively, in one or more embodiments, S222 may determine that a partitioning of a video response imprint may be required based on a length of the video response imprint, the file size of the video response imprint, and/or the like.
  • In a first implementation, S222 may function to partition a video response imprint using an intelligent partitioning scheme. In such implementation, S222 may determine a partition location of the video response imprint if there is a suitable pause within the audio of the response. In some embodiments, a suitable pause length for partitioning the video response imprint may be identified based on a discontinuance of a verbal communication by a human agent within the video and lapse of a predetermined amount of time (e.g., pause length). The pause length, pause threshold, and/or decibel level of the sound of the intelligent partitioning may be adjusted according to each subscriber, video response, etc.
  • In a second implementation, S222 may function to partition a video response imprint using identification partitioning. In such implementation, S222 may determine a partition location of the video response imprint if a user voices a specified keyword and/or phrase or performs an action, such as a predetermined gesture indicating that a partition or break in the video response imprint should be made. For example, the user may pronounce a predetermined keyword, an expected user utterance, and/or the like before articulating their expected response. In another example, a human agent in the recording may make a “thumbs up” gesture or similar gesture indicating a point or section for partitioning the video response imprint.
  • In a third implementation, S222 may function to partition a video response imprint using interval partitioning. In such implementation, S222 may function to partition a video response imprint every ‘x’ number of seconds. For example, a subscriber may determine that every 10 seconds they'd like the video response to be partitioned. Additionally, or alternatively, the video response imprint may be evenly partitioned into halves, quarters, eighths, and/or the like.
  • Additionally, or alternatively, the partitioning may be implemented in any suitable manner. In one implementation, S222 may function to partition a video response imprint by demarcating multiple distinct time segments of the video response imprint. In such implementation, S222 may include pairs of digital markers within the video response imprint in which a first marker of the pair of digital markers may indicate a beginning of a time segment (i.e., sub-video response imprint) and a second marker of the pair may indicate a termination or ending of the time segment. In a second implementation, S222 may function to partition a video response imprint by defining each of a plurality of time segments of the video response imprint, extracting each time segment, and storing the extracted time segments independently. That is, in this second implementation, S222 may function to break a video response imprint having multiple responses into multiple distinct sub-videos.
  • It shall be recognized that in some embodiments, the method 200 or the like may function to implement any suitable video partitioning scheme, including a partitioning scheme based on a combination of the above-described techniques or schemes for partitioning a media file, and/or the like.
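  • As an illustration of the interval and pause-based (intelligent) partitioning schemes described above, the following sketch uses the pydub library for silence detection; the library choice, pause length, and decibel threshold are assumptions that, as noted above, may be adjusted per subscriber or per video response.

```python
# Sketch of two of the partitioning schemes above: interval partitioning
# and pause-based (intelligent) partitioning.
from pydub import AudioSegment
from pydub.silence import detect_silence

def interval_partition(duration_seconds: float, interval_seconds: float = 10.0):
    """Demarcate (start, end) time segments every `interval_seconds`."""
    segments, start = [], 0.0
    while start < duration_seconds:
        end = min(start + interval_seconds, duration_seconds)
        segments.append((start, end))
        start = end
    return segments

def pause_partition(audio_path: str, pause_len_ms: int = 1500, silence_thresh_db: int = -40):
    """Place partition boundaries at sufficiently long pauses in the audio."""
    audio = AudioSegment.from_file(audio_path)
    pauses = detect_silence(audio, min_silence_len=pause_len_ms, silence_thresh=silence_thresh_db)
    # Use the midpoint of each detected pause as a partition location (in ms).
    return [(start + end) // 2 for start, end in pauses]
```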
  • 2.24 Video Response Identifiers|Selph ID|Imprint ID
  • Additionally, or alternatively, S220 includes S224, which may function to generate and/or assign one or more unique identifiers to each video response corpus and to each of the video response items/imprints within the video response corpus. In a preferred embodiment, S224 may function to identify or assign a global identifier to each distinct video response corpus or to each collection of video responses that maybe related in content (e.g., same domain or category). Additionally, or alternatively, S224 may function to identify or assign a unique local identifier to video response item/imprint within a given video response corpus.
  • In one or more embodiments, the global identifier may be referred to herein as one of a “Selph ID” or a “corpus identifier (ID)”. In such embodiments, the global identifier may function to identify a subject video response corpus as distinctly including video response imprints within a dialogue category or dialogue domain. That is, the global identifier may indicate an overarching category or domain of the video response content within the group. For example, S 224 may function to assign a first global identifier to a first video response corpus in which substantially all or all of the video response imprints relate to a single, first primary category, such as “air travel,” and may function to assign a second global identifier to a second video response corpus in which substantially all or all of the video response imprints relate to a single, second primary category, such as “automotive repair.”
  • In one or more embodiments, the local identifier may be referred to herein as one of an “imprint identifier (ID),” a “video response identifier (ID),” or a “video response imprint ID.” In such embodiments, the local identifier may function as a sub-domain or sub-category identifier that identifies an associated video response imprint as a specific sub-topic or sub-category within a video response corpus. Additionally, or alternatively, each partition of a video response imprint may be assigned a distinct imprint ID for ease of reference, lookup, and/or publishing as a response to a user stimulus.
  • It shall be recognized that, in one or more embodiments, a video response corpus and a plurality of distinct video response imprints within the video response corpus may be assigned identifiers (i.e., global and local identifiers) based on a hierarchical identification structure in which a video response corpus may be assigned a global identifier corresponding to a top-level category or top-level domain and each of the video response imprints within the video response corpus may be assigned a local identifier corresponding to one or more distinct sub-level categories or sub-level domains. In one example, a video response corpus may be assigned a global identifier such as “air travel”, which may function as a top-level domain, and a first video response imprint within the video response corpus may be assigned a first local identifier of “scheduling flight” and a second video response imprint within the video response corpus may be assigned a second local identifier of “flight status”. In this example, each of the first local identifier of “scheduling flight” and the second local identifier of “flight status” may be sub-level categories of the top-level category of “air travel.”
  • In a first implementation, either the global and/or local identifiers associated with either the video response corpus or distinct video response imprints may be implemented in tracking operations and/or processes involving or being applied to either the video response corpus or distinct video response items/imprints. In a second implementation, either the global and/or local identifiers may be implemented for sourcing or obtaining one or more public URLs of one or more target video response imprints, preferably during an interaction between a user and a human-personified virtual dialogue agent. In a third implementation, either the global and/or local identifiers may be implemented for response linking and may be electronically associated with an answer object or the like of a chatbot or virtual agent generation service.
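  • An illustrative hierarchical identifier structure following the “air travel” example above is sketched below; the dictionary layout and lookup helper are assumptions for illustration only.

```python
# Illustrative hierarchical identifier structure: a global Selph ID for a
# video response corpus and local imprint IDs for each response imprint.
# The category and sub-category names come from the example above; the
# layout itself is an assumption for illustration.
video_response_corpus = {
    "selph_id": "air-travel",            # global identifier / top-level domain
    "imprints": {
        "scheduling-flight": {"media": "scheduling_flight.mp4"},   # local identifier
        "flight-status":     {"media": "flight_status.mp4"},       # local identifier
    },
}

def lookup(selph_id: str, imprint_id: str, corpora: list[dict]):
    """Resolve a (global, local) identifier pair to a stored imprint."""
    for corpus in corpora:
        if corpus["selph_id"] == selph_id:
            return corpus["imprints"].get(imprint_id)
    return None
```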
  • As will be further discussed below, the system 100 and/or the method 200 may function to convert the identified (or constructed) video response corpus of S220 into embedding values (e.g., embedded response representations).
  • 2.26 Transcribing a Video Response Imprint
  • Optionally, or additionally, S220 includes S226, which may function to create a transcription of each video response item/imprint of a video response corpus. In one or more embodiments, the transcription of a given video response imprint may include a textual representation of an audio component or verbal response component within the given video response imprint or any media file.
  • In one embodiment, S226 may function to interface with a transcriber or a transcription service for automatically generating a transcription of a subject video response imprint. In such embodiment, S226 may function to transmit, via a network and/or via an API, the subject video response imprint together with local identifier data (e.g., an imprint ID) to the transcription service that may function to create a transcription of the subject video response imprint.
  • In another embodiment, S226 may function to implement a transcription module or the like that may include one or more language models that may function to automatically transcribe an audio component of a subject video response imprint to a text representation.
  • In the circumstance that a transcription of a video response imprint includes a plurality of distinct responses, S226 may additionally or alternatively function to partition the transcription into a plurality of distinct transcriptions corresponding to the plurality of distinct responses. In one embodiment, S226 may function to partition the transcription based on identifying one or more points of silence (e.g., gaps of text between text representations of the transcription). It shall be noted that any suitable technique for partitioning a transcription of a video response imprint may be implemented.
  • In a preferred embodiment, a transcription for a given video response imprint may be stored in electronic association with the given video from which the transcription was created together with the local and/or global identifier of the given video response.
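  • A minimal transcription sketch using the open-source Whisper model as a stand-in for the transcriber or transcription service described above is shown below; the disclosure does not mandate any particular speech-to-text system, so the model choice and output handling are assumptions.

```python
# Transcribe the audio component of a video response imprint and store the
# text in association with its local identifier (imprint ID).
import whisper

model = whisper.load_model("base")  # assumed model size

def transcribe_imprint(video_path: str, imprint_id: str) -> dict:
    """Generate a text-based transcription for a video response imprint."""
    result = model.transcribe(video_path)
    return {"imprint_id": imprint_id, "transcript": result["text"]}
```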
  • 2.30 Computing Embedding Values|Video Response Corpus Vectors
  • S230, which includes computing and storing embedding values based on the video response corpus, may function to convert or generate vector representations or text representations for each response imprint (e.g., each anchor response) of the video response corpus. In one or more preferred embodiments, S230 may function to implement a sentence or text embedding service or language model that may function to convert transcriptions of response imprints of the video response corpus into numerical-based vector representations.
  • In one implementation, the sentence or text embedding service or model that may be implemented for converting each transcription of each video response imprint in the video response corpus may be the same as or substantially similar to an embedding service or model implemented with a pre-trained machine learning (language) model, described herein. In such implementation, because the embedding service or model may be the same for converting the video response corpus and the training samples or other model inputs of the pre-trained language model, S 230 may function to map the vector representations for each response imprint in the video response corpus to an n-dimensional vector space that may be familiar, known, and/or used by the pre-trained language model.
  • Additionally, or alternatively, in some embodiments, the method 200 may function to implement a plurality of distinct pre-trained language models that may each include an embedding layer (i.e., a hidden layer) or implement a distinct embedding service or language model. In such embodiments, S230 may function to compute one or more distinct sets of embedding values for the video response imprints of the video response corpus using an embedding layer of a pre-trained language model or using one or more of a plurality of distinct embedding services or models that may be used by the pre-trained language models.
  • For instance, in one or more embodiments that may include using a sentence or text embedding service to generate text representations based on an input of the video response corpus, S230 may function to generate a distinct text representation for each of the plurality of distinct response imprints of the video response corpus.
  • In a first implementation, S230 may function to sequentially or individually input each response imprint of the video response corpus through an embedding service or language model to create an associated baseline embedded response representation. In such example, a transcription of a first response of the video response corpus (e.g., The delivery fee is $3) may be converted to a first embedded response representation, a transcription of a second response of the video response corpus (e.g., We have a wide selection of vegetarian pizzas) may be converted to a second embedded response representation distinct from the first embedded response representation, and a transcription of a third response of the video response corpus (e.g., Your order will arrive in 30 minutes) may be converted to a third embedded response representation distinct from the first embedded response representation and the second embedded response representation. Stated another way, each transcription of a response of the video response corpus may be an individual input into the sentence or text embedding service to compute a corresponding individual output of an embedded response representation. At least one technical benefit of an individual or discrete approach for creating an embedding representation for each response imprint of a response corpus may include an ability to specifically track a correspondence between a respective response imprint and its computed embedded representation thereby enabling a capability to specifically tune or adjust the computed embedded representation within a given multi-dimensional space for embedding values.
  • Alternatively, in a second implementation, S230 may function to input a set of transcriptions of the video response corpus of S210 through an embedding service to create a set of embedded response representations (e.g., Mr). In other words, the input into the embedding service may be the entire response corpus and the output (e.g., after processing the video response corpus through the embedding service) may be a set of baseline embedded response representations.
  • In operation, S230 may function to implement any suitable and/or combination of suitable sentence (or text) embeddings techniques or services to compute embedded response representations. Accordingly, in the case that the identified response corpus may span across a diverse set of text representations (or vector values), S230 may function to identify or define the range of embedding values associated with the video response imprints of the video response corpus.
  • 2.35 N-Dimensional Response Embedding Space
  • Optionally, S230 includes S235, which may function to associate or map each embedded response representation (or embedded vector representation) of the video response corpus into a multi-dimensional vector space (e.g., an n-dimensional embeddings space), that, in one or more embodiments, may be graphically illustrated. In other words, each vector representation of each distinct string of text or word that may define a distinct response of the video response corpus may be used as input for creating a mapping item or a point that may be positioned onto the multi-dimensional embedding space.
  • Accordingly, in such a preferred embodiment, the embedded response space (e.g., the n-dimensional embedded response space) may be constructed based on mapping the embedded response representations for each response imprint of the video response corpus. For example, S230 may function to map each embedded response representation that may define a coordinate or vector onto the embedded response space.
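One simple way to realize such an embedded response space, building on the hypothetical baseline_embeddings sketched earlier, is to stack the per-imprint vectors into a matrix whose row order preserves the mapping back to each imprint ID:

```python
import numpy as np

# Construct the n-dimensional embedded response space as a matrix in which
# each row is the embedded response representation of one response imprint.
imprint_ids = list(baseline_embeddings.keys())
response_space = np.vstack([baseline_embeddings[i] for i in imprint_ids])

# response_space.shape == (number of response imprints, embedding dimension n)
```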
  • Additionally, or optionally, each of the embedded representations may be linked, coupled, and/or associated with the anchor response and/or the system-displayed response.
  • It shall be noted that responses of the video response corpus that may share one or more similar characteristics (e.g., response categories, semantically similar responses within a similarity threshold value) or that may have semantically similar meanings may be mapped (or clustered) proximate to one another, in the embedded response space, when compared to unrelated (e.g., dissimilar) responses.
  • It shall be noted that the video response corpus, the embeddings values for the video response corpus, and/or the n-dimensional mapping of the embeddings values of the video response corpus, which may sometimes be referred to herein as the "dialogue agent response collection", may be stored in association with one another when configured for a specific virtual dialogue agent. Preferably, the dialogue agent response collection may be stored by an instant-virtual agent generation service or the like for creating chatbots or virtual dialogue agents.
  • 3. Method for Implementing a Machine Learning Human-Personified Virtual Agent
  • As shown in FIG. 3 , the method 300 for implementing a machine learning-based human-personified virtual dialogue agent includes identifying user stimulus data S310, generating a machine learning-based response inference S320, computing a video response imprint (ID) based on the machine learning-based response inference S330, selecting a video response imprint based on a video response imprint ID S340, and implementing a human-personified virtual agent based on the video response S350.
  • 3.10 Receiving a User Utterance/User Stimulus
  • S310, which includes identifying user stimulus data, may function to identify, collect, and/or receive user input data in the form of a user utterance or user stimulus towards one or more human-personified virtual dialogue agents deployed in a production environment of a subscriber. It shall be noted that one or more of the virtual agents deployed in a production environment of a subscriber may be associated with a distinct video response corpus previously configured with a chat agent generation service or the like.
  • In a preferred embodiment, S310 may function to receive a user input or user stimulus via a user interface (e.g., an interface of the virtual dialogue agent) accessible by or provided to the user. It shall be noted that, in one or more embodiments, the interface of the virtual dialogue agent may be accessible by a plurality of channels, including but not limited to, a mobile computing device, a web browser (having a website displayed therein), a social network interface, or any other suitable channel or client interface/device for deploying the virtual dialogue agent.
  • In one or more embodiments, the user utterance or user stimulus may include, but should not be limited to, speech or utterance input, textual input, gesture input, touch input, character input, numerical input, image input, and/or any other suitable type of input. It shall be noted that, in one or more embodiments, the user utterance or user stimulus identified, collected, and/or received by S310 may be of a single dialogue intent or a plurality of dialogue intents. Additionally, or alternatively, the identified, collected, and/or received user stimulus or user utterance may relate to a single dialogue domain or a plurality of dialogue domains.
  • It shall be noted that, in one or more embodiments, S310 may function to identify, receive, and/or collect the user stimulus or user utterance and transmit, via a computer network or the like, the user stimulus or user utterance to an embedding service that may convert or translate the user stimulus or user utterance into an embedded representation consumable by one or more pre-trained language processing models for producing one or more response inferences or response predictions (“input embedding”).
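For example, converting a user utterance into an input embedding might look like the following sketch, again assuming the hypothetical encoder introduced earlier stands in for the embedding service:

```python
# Convert the raw user utterance into an embedded ("input embedding")
# representation consumable by the downstream pre-trained language models.
user_utterance = "How much is delivery?"
input_embedding = encoder.encode(user_utterance)
```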
  • In one or more alternative embodiments, S310 may function to directly pass the user input or user stimulus, in a raw state, to one or more of the pre-trained language processing models that may include an embedding layer used to generate embedding values for input into one or more inference layers of the models for producing one or more response inferences.
  • As will be further discussed below, in one or more embodiments, the embedded or vector representation associated with a user utterance or user stimulus may assist with providing the system 100, the method 200, and/or the method 300 the capability of understanding a relational strength between the embedded representation of the user stimulus or the user utterance and the embedded response representations of the video response corpus for intelligently aiding and/or improving a conversational experience.
  • 3.20 Generating Response Inferences Utilizing One or More Pre-Trained Language Processing Models
  • S320, which includes generating a response inference, may function to provide the user communication or user stimulus as input to one or more pre-trained language models. In some embodiments, a chatbot generation service or the like may function as an intermediary between a client interface implementing a virtual dialogue agent (e.g., a subscriber system) and one or more remote or cloud-based systems implementing the one or more pre-trained language models. In such embodiments, S320 may function to provide or transmit the user stimulus from the client interface (i.e., the virtual dialogue interface) together with a global identifier or Selph ID of a subject virtual dialogue agent to the chatbot generation service and in response to a receipt of the user stimulus, the chatbot generation service may function to directly interface with the one or more pre-trained language models for generating at least one response inference based on the user stimulus.
  • In operation, S320 may function to operably communicate with (e.g., access, retrieve, or the like) one or more of the plurality of pre-trained language processing models identified within the system 100 and/or by the method 200 or the like. For example, in one or more embodiments, S320 may digitally communicate or digitally interface with one or more of a plurality of language processing models via an application programming interface (API) that may programmatically integrate both the system 100 (implementing the method 200 and method 300) and foreign or third-party systems implementing the one or more pre-trained language processing models. That is, S320 may function to access one or more of the plurality of pre-trained language processing models by requesting or generating one or more API calls that include user stimulus data to APIs of one or more of the pre-trained language processing models for producing one or more inferred responses to the user stimulus.
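Purely as an illustration, such an API integration might resemble the sketch below; the endpoint URLs, request and response fields, and authentication scheme are hypothetical stand-ins for whatever interface a given third-party language model provider actually exposes:

```python
import requests

def request_response_inference(user_stimulus: str, model_endpoint: str, api_key: str):
    """Send the user stimulus to a (hypothetical) pre-trained language model API
    and return its inferred response embedding."""
    reply = requests.post(
        model_endpoint,
        json={"input": user_stimulus},                 # illustrative request field
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    reply.raise_for_status()
    return reply.json()["response_embedding"]          # illustrative response field

# Query several distinct pre-trained models with the same user stimulus.
endpoints = ["https://model-a.example/infer", "https://model-b.example/infer"]
inferences = [
    request_response_inference("How much is delivery?", url, "API_KEY")
    for url in endpoints
]
```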
  • Accordingly, in one or more embodiments, as the plurality of pre-trained language processing models may be pre-developed and/or pre-trained whereby each of the plurality of pre-trained language processing models may have corresponding configured parameters (e.g., learned weights), the parameters (or weights) of the plurality of language processing models may vary between the distinct pre-trained language models. As a result, each of the distinct language processing models may process the user stimulus or the user utterance differently, may use a distinct embedding model to generate a distinct embedded query representation, and may compute a predicted (e.g., embeddings) response inference that may vary from those of the other language processing models. For instance, S320 may function to process the user stimulus or user utterance through a plurality of language processing models and each of the language processing models of the plurality of language processing models may be associated with a distinct embeddings model that generates a distinct user stimulus representation.
  • 3.30 Computing a Video Response to User Stimulus or User Utterance Utilizing One or More Pre-Trained Language Processing Models
  • S330, which includes computing a video response imprint (ID) based on the machine learning-based response inference, may function to intelligently identify a response to the user stimulus or user utterance based on computationally (e.g., spatially, numerically, etc.) evaluating one or more predicted responses or response inferences of the one or more pre-trained language models against the embedded response representations of the video response corpus for a subject virtual dialogue agent, as shown by way of example in FIG. 6 .
  • In one or more preferred embodiments, S330 may function to evaluate each embedded response representation or a subset of the embedded response representations of the video response corpus with reference to each response inference computed by the plurality of pre-trained language processing models based on the user stimulus.
  • In one or more embodiments, if the embedding vector values of an evaluation are based on different embedding models having different vector value ranges or the like, the embedded response representations of the video response corpus and the embedded representations of the user stimulus may be normalized to one another. In this way, S330 may function to compute a similarity metric between an inferred response and one or more embedded response representations of the video response corpus. In such embodiments, the computation of the similarity metric may include mathematically computing a distance value or a distance similarity metric between each embedded response representation of the video response corpus and each inferred response representation produced based on the user stimulus to thereby intelligently identify an optimal or most probable response based on the quantitative analysis of the mathematically computed distance therebetween. It shall be noted that a shorter distance between an embedded response representation and an inferred response representation may express that the two embedded representations signify a higher degree of similarity, and vice versa.
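A minimal sketch of such a distance-based evaluation is shown below, assuming the inferred response representation and the embedded response representations have already been normalized into comparable vector spaces; cosine distance is used here purely as one example of a distance similarity metric:

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """1 - cosine similarity; smaller values indicate more similar embeddings."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def closest_response(inferred: np.ndarray, response_space: np.ndarray, imprint_ids: list[str]):
    """Return the imprint ID whose embedded response representation lies at the
    smallest distance from the inferred response representation."""
    distances = [cosine_distance(inferred, row) for row in response_space]
    best = int(np.argmin(distances))
    return imprint_ids[best], distances[best]
```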
  • Stated differently, each response imprint of the video response corpus may be associated with a corresponding embedded response representation and preferably mapped to an embedded response space and the user stimulus may be associated with one or more inference responses or inference response vectors. In one or more embodiments, the inference response vector may sometimes be mapped to an n-dimensional space and compared or evaluated against a mapping of the embedding values of the video response corpus for a subject virtual dialogue agent.
  • Additionally, or alternatively, in some embodiments, the inference response vector may be mapped directly to the n-dimensional space of the embedding values of the video response corpus. Accordingly, S330 may function to mathematically compute, for each inference response to the user stimulus produced by the one or more pre-trained language processing models, which embedded response representation of the video response corpus provides the most likely or most probable response to the user utterance or user stimulus through a similarity (or distance) analysis.
  • For example, in one or more embodiments, S330 may perform a quantitative measurement that computes a distance between each embedded response representation of the video response corpus and each inferred response representation that may be computed based on an input of the user stimulus. Accordingly, S330 may further function to identify or select which response from the plurality of responses of the video response corpus that may be the most likely response to the user stimulus by identifying the embedded response representation with the smallest (or shortest) distance to the inferred response representation.
  • It shall be noted that, in one or more embodiments, the selected or optimal response (vector, item) to the user stimulus may be the embedded response representation that occurs with the highest frequency based on the results of the similarity analysis performed for multiple, distinct inferred responses of the plurality of distinct pre-trained language models. For instance, if three (3) distinct response inferences map to a first embedding value of a first response (R_1) of the video response corpus based on similarity and two (2) distinct response inferences map to a second embedding value of a second response (R_2), S330 may function to select the first response, R_1, as the most likely or most probable response to the user stimulus since there is a higher frequency of similarity mapping between the inferred responses of the pre-trained language models and a given embedded representation of a response imprint of the video response corpus of a given virtual dialogue agent.
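A sketch of this frequency-based selection, assuming each model's inference has already been mapped to its nearest response imprint with the hypothetical closest_response helper sketched above, might be:

```python
from collections import Counter

def select_by_frequency(inferred_embeddings, response_space, imprint_ids):
    """Map each model's inferred response to its nearest response imprint and
    return the imprint selected most frequently across the distinct models."""
    votes = Counter(
        closest_response(vec, response_space, imprint_ids)[0]
        for vec in inferred_embeddings
    )
    imprint_id, _count = votes.most_common(1)[0]
    return imprint_id
```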
  • Additionally, or alternatively, in one or more embodiments, the inferred response representation generated by each of the plurality of pre-trained language processing models may be averaged together and the averaged inferred response representation may be used for the similarity analysis computations. In one example, S330 may function to source three (3) response inferences or response predictions from three distinct pre-trained machine learning models based on a given user stimulus. In this example, S330 may function to compute an average inference vector value based on the 3 response inferences and compute a similarity metric between the average vector value and one or more embedded values of the video response corpus of a subject virtual dialogue agent.
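The averaging alternative described above might be sketched as follows, again reusing the hypothetical closest_response helper:

```python
import numpy as np

def select_by_average(inferred_embeddings, response_space, imprint_ids):
    """Average the inferred response representations from the distinct models,
    then return the response imprint nearest to that averaged vector."""
    average_inference = np.mean(np.stack(inferred_embeddings), axis=0)
    imprint_id, _distance = closest_response(average_inference, response_space, imprint_ids)
    return imprint_id
```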
  • Accordingly, based on a response computation between the one or more inferred responses of the one or more pre-trained language models and the embedding values of the transcripts of the one or more video response imprints of a video response corpus, S330 may function to identify the most probable or most likely (e.g., best or optimal) video response imprint for responding to the user stimulus. In one embodiment, S330 may function to identify a text-based transcript or the like associated with the most probable video response imprint together with an associated local identifier (e.g., imprint identifier) and return a copy of the text-based transcript of a video response imprint, the imprint ID, and/or the Selph ID to a source of the user stimulus (e.g., subscriber system, client interface, etc.).
  • 3.40 Identifying a Video Response Imprint & Generating a Response Using the Selected Video Response Imprint
  • S340, which includes selecting a video response imprint based on a video response imprint ID, may function to generate a response via the virtual dialogue agent based on the identified text-based transcript and/or imprint ID of a video response imprint.
  • In one embodiment, in response to receiving or identifying the text-based transcript of the video response imprint with a corresponding local identifier or imprint ID, S340 may function to evaluate one or more of the text-based transcript and imprint ID against the one or more video response imprints of a video response corpus. In such an embodiment, based on the evaluation, S340 may function to identify a video response imprint and/or associated time segment or partition of a video response imprint for responding to the user stimulus.
  • In the circumstance that the video response corpus includes a single video having a sequence of distinct video response imprints partitioned based on time segments and each video response imprint of the sequence having a unique imprint ID, S340 may function to evaluate and/or compare the identified imprint ID against the sequence of video response imprints or time segments and identify a video response imprint or time segment that matches the identified imprint ID.
  • Additionally, or alternatively, in the circumstance that the video response corpus includes a plurality of distinct video response imprints with each video response imprint being stored independently of one another, S340 may function to individually evaluate and/or compare the identified imprint ID to each video response imprint and an associated imprint ID to identify a video response imprint having an imprint ID that matches the identified imprint ID.
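A minimal sketch of both lookup strategies is shown below; the segment boundaries, file paths, and imprint identifiers are hypothetical:

```python
# Case 1: a single video partitioned into time segments, each tagged with an imprint ID.
segments = [
    {"imprint_id": "imprint_001", "start_s": 0.0, "end_s": 4.2},
    {"imprint_id": "imprint_002", "start_s": 4.2, "end_s": 9.8},
]

def find_segment(target_imprint_id: str) -> dict:
    """Walk the sequence of time segments and return the one whose imprint ID matches."""
    return next(s for s in segments if s["imprint_id"] == target_imprint_id)

# Case 2: independently stored video response imprints keyed by imprint ID.
imprint_files = {
    "imprint_001": "responses/delivery_fee.mp4",
    "imprint_002": "responses/vegetarian_options.mp4",
}

def find_imprint_file(target_imprint_id: str) -> str:
    """Look up the independently stored video response imprint by its imprint ID."""
    return imprint_files[target_imprint_id]
```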
  • 3.50 Implementing a Human-Personified Virtual Agent
  • S350, which includes implementing a human-personified virtual agent based on the video response, may function to generate and/or provide a response to the user stimulus/user utterance by loading and/or playing a video response imprint via the human-personified virtual dialogue agent that best (most likely) responds to the user stimulus/user utterance, as shown by way of example in FIG. 7 .
  • In one embodiment, S350 may function to identify and/or collect a target video response imprint together with an associated text-based transcript of the target video response imprint and load or transmit the video response imprint and transcript to a user system (client interface). Preferably, in such embodiments, the user system may include a display, such as a computer or mobile device display panel, that includes a window or an interface object at which the video response imprint may be communicated and/or played.
  • In one implementation, S350 may function to generate a response via the human-personified virtual dialogue agent that includes executing the video response imprint together with the text-based transcript. In such embodiments, executing the text-based transcript may include superimposing or overlaying the content of the text-based transcript on the video response imprint. In this way, a user may optionally follow the video response imprint based on viewing the text-based transcript of the video response imprint.
  • It shall be noted that, in some embodiments, S350 may only function to display the text-based transcript of the video response imprint if an accessibility setting of the human-personified virtual dialogue agent is toggled on. Conversely, in some embodiments, S350 may only function to display the video response imprint (and not the text-based transcript of the video response imprint) if the accessibility setting of the human-personified virtual dialogue agent is toggled off.
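The display behavior described above could be sketched as follows; the rendering helpers are hypothetical stand-ins for whatever playback and captioning calls the client interface actually provides:

```python
def display_transcript(transcript: str) -> None:
    # Hypothetical stand-in for the client interface's transcript display.
    print(f"[transcript] {transcript}")

def play_video(video_path: str) -> None:
    # Hypothetical stand-in for the client interface's video player.
    print(f"[playing] {video_path}")

def render_response(video_path: str, transcript: str, accessibility_on: bool) -> None:
    """Show only the text-based transcript when the accessibility setting is toggled
    on; otherwise play only the video response imprint."""
    if accessibility_on:
        display_transcript(transcript)
    else:
        play_video(video_path)
```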
  • Additionally, or alternatively, in some embodiments, the identified imprint ID may be evaluated and/or matched against a proxy response item or a display response item which may be different from an original video response imprint (anchor answer) associated with the identified imprint ID. That is, in some embodiments, an original video response imprint may be tethered to or otherwise associated with a proxy response item that may be presented in lieu of the original video response imprint.
  • Additionally, or alternatively, in one or more implementations, S350 may function to play (via stream, download, and/or the like) any combinations of a video, audio, transcript, and/or the like of a video response imprint.
  • Embodiments of the system and/or method can include every combination and permutation of the various system components and the various method processes, wherein one or more instances of the method and/or processes described herein can be performed asynchronously (e.g., sequentially), concurrently (e.g., in parallel), or in any other suitable order by and/or using one or more instances of the systems, elements, and/or entities described herein.
  • Although omitted for conciseness, the preferred embodiments may include every combination and permutation of the implementations of the systems and methods described herein.
  • As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the preferred embodiments of the invention without departing from the scope of this invention defined in the following claims.

Claims (19)

We claim:
1. A machine learning-based method of implementing a human video-personified virtual dialogue agent, the method comprising:
computing an input embedding based on receiving a user input;
computing, via a pre-trained machine learning language model, an embedding response inference based on the input embedding;
searching, based on the embedding response inference, a response imprintation embedding space that includes a plurality of distinct embedding representations of potential responses to the user input, wherein each of the plurality of distinct embedding representations is tethered to a distinct human-imprinted media response, wherein searching the response imprintation embedding space includes:
(i) defining an embedding search query using the embedding response inference as a search parameter,
(ii) searching the response imprintation embedding space based on the embedding search query, and
returning a target embedding representation from the response imprintation embedding space based on the searching of the response imprintation embedding space;
identifying a human-imprinted media response based on the target embedding representation from the response imprintation embedding space; and
executing, via a user interface, the human-imprinted media response for addressing the user input.
2. The method of claim 1, wherein:
searching the response imprintation embedding space includes:
(1) computing a distance between the embedding response inference and one or more of the plurality of distinct embedding representations of the response imprintation embedding space, and
(2) determining which embedding representation of the plurality of distinct embedding representations has a smallest distance to the embedding response inference, and
the target embedding representation returned from the response imprintation embedding space corresponds to the embedding representation having the smallest distance to the embedding response inference.
3. The method of claim 1, wherein:
searching the response imprintation embedding space includes:
(1) computing a similarity metric between the embedding response inference and each of the plurality of distinct embedding representations, and
(2) determining which embedding representation of the plurality of distinct embedding representations is most similar to the embedding response inference, and
the target embedding representation returned from the response imprintation embedding space corresponds to the embedding representation most similar to the embedding response inference.
4. The method of claim 1, wherein the pre-trained machine learning language model computes the embedding response inference based on an embedding space different from the response imprintation embedding space, the method further comprising:
normalizing the embedding response inference to the response imprintation embedding space contemporaneously with defining the embedding search query.
5. The method of claim 1, wherein the response imprintation embedding space relates to an n-dimensional vector space.
6. The method of claim 1, further comprising:
contemporaneously with computing the embedding response inference via the pre-trained machine learning language model:
computing, via one or more additional pre-trained machine learning language models, one or more additional embedding response inferences based on the input embedding,
wherein:
searching the response imprintation embedding space is based on the embedding response inference and the one or more additional embedding response inferences,
searching the response imprintation embedding space further includes:
defining one or more additional embedding search queries based on the one or more additional embedding response inferences; and
searching the response imprintation embedding space based on the one or more additional embedding search queries, and
the target embedding representation returned from the response imprintation embedding space is based on searching the response imprintation embedding space with the embedding search query and the one or more additional embedding search queries.
7. The method of claim 6, wherein:
the one or more additional embedding response inferences include a first additional embedding response inference and a second additional embedding response inference,
the one or more additional embedding search queries include a first embedding search query based on the first additional embedding response inference and a second embedding search query based on the second additional embedding response inference,
searching the response imprintation embedding space based on the first additional embedding search query includes identifying, from the plurality of distinct embedding representations, an embedding representation most similar to the first additional embedding response inference,
searching the response imprintation embedding space based on the second additional embedding search query includes identifying, from the plurality of distinct embedding representations, an embedding representation most similar to the second additional embedding response inference, and
searching the response imprintation embedding space based on the embedding search query includes identifying, from the plurality of distinct embedding representations, an embedding representation most similar to the embedding response inference.
8. The method of claim 7, wherein:
searching the response imprintation embedding space further includes:
identifying an embedding representation most frequently identified by the embedding search query and the one or more additional search queries; and
determining which embedding representation of the plurality of distinct embedding representations is closest to the embedding representation most frequently identified by the embedding search query and the one or more additional search queries, and
the target embedding representation returned from the response imprintation embedding space corresponds to the embedding representation determined to be closest to the embedding representation most frequently identified by the embedding search query and the one or more additional search queries.
9. The method of claim 8, wherein:
searching the response imprintation embedding space further includes:
computing an average embedding representation based on embedding representations identified by the embedding search query and the one or more additional search queries; and
determining which embedding representation of the plurality of distinct embedding representations is closest to the average embedding representation, and
the target embedding representation returned from the response imprintation embedding space corresponds to the embedding representation determined to be closest to the average embedding representation.
10. The method of claim 1, wherein the human-imprinted media tethered to the target embedding representation comprises a video recording of a human making a verbal statement, and executing the human-imprinted media response includes:
playing the video recording of the human-imprinted media response at the user interface; and
displaying, in association with the playing of the video component, a transcript of the human-imprinted media response.
11. The method of claim 1, wherein:
the user input is received via the user interface implementing a human video-personified machine learning-based virtual dialogue agent, and
the user input comprises one or more of an utterance or textual input.
12. A method of implementing a human-personified virtual dialogue agent, the method comprising:
computing an input embedding based on a user input;
computing, via a pre-trained machine learning language model, an embedding response inference based on the input embedding;
searching, based on the embedding response inference, a response imprintation embedding space that includes a plurality of distinct embedding representations of potential text-based responses to the user input, wherein each of the plurality of distinct embedding representations is tethered to a distinct human-imprinted media response, and searching the response imprintation embedding space includes:
(i) defining an embedding search query using the embedding response inference as a search parameter, and
(ii) executing the embedding search query to search the response imprintation embedding space, wherein the embedding search query returns a target embedding representation from the response imprintation embedding space; and
executing, via a user interface implementing a human-personified virtual dialogue agent, a human-imprinted media response tethered to the target embedding representation.
13. The method of claim 12, wherein the embedding search query returns the target embedding representation based on having an n-dimensional distance that is smallest relative to other embedding representations in the response imprintation embedding space.
14. The method of claim 1, wherein the response imprintation embedding space relates to a multi-dimensional vector space, the method further comprising:
constructing the response imprintation embedding space, wherein constructing the response imprintation embedding space includes:
identifying a human-imprinted media response corpus that includes a plurality of distinct human-imprinted media responses to likely user input;
generating, via a transcriber, a text-based transcription for each of the plurality of distinct human-imprinted media responses;
providing, as input to a pre-trained machine learning language model, the text-based transcription generated for each of the plurality of distinct human-imprinted media responses;
computing, by the pre-trained machine learning language model, an embedding representation for each of the plurality of distinct human-imprinted media responses based on the text-based transcription generated for each of the plurality of distinct human-imprinted media responses; and
mapping the embedding representation computed for each of the plurality of distinct human-imprinted media responses to the multi-dimensional vector space.
15. The method of claim 14, wherein each of the plurality of distinct human-imprinted media responses includes an audio/video (AV) component.
16. The method of claim 12, wherein:
the user input is received via the user interface of the human-personified machine learning-based virtual dialogue agent, and
executing the human-imprinted media response tethered to the target embedding representation includes playing the human-imprinted media response.
17. The method of claim 16, wherein:
in accordance with a determination that an accessibility setting of the human-personified virtual dialogue agent is toggled on:
forgoing playing the human-imprinted media response; and
displaying a text-based transcription of the human-imprinted media response at the user interface of the human-personified machine learning-based virtual dialogue agent.
18. A method of implementing a fast-generated virtual dialogue agent, the method comprising:
receiving, via a web-enabled virtual dialogue agent interface, user stimuli;
converting, by a computer implementing one or more pre-trained language machine learning models, the user stimuli to a stimuli embeddings inference;
computing a response inference based on the stimuli embeddings inference, wherein computing the response inference includes:
performing an embeddings search for a response embeddings of a plurality of distinct response embeddings based on the stimuli embeddings inference; and
generating an automated response to the user stimuli, via the web-enabled virtual dialogue agent interface, based on the response embeddings.
19. The method according to claim 18, wherein the embeddings search searches a multi-dimensional space and identifies which of the plurality of distinct response embeddings is closest to the stimuli embeddings inference.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163222169P 2021-07-15 2021-07-15
US17/849,589 US11550831B1 (en) 2021-07-15 2022-06-24 Systems and methods for generation and deployment of a human-personified virtual agent using pre-trained machine learning-based language models and a video response corpus

Publications (2)

Publication Number Publication Date
US11550831B1 US11550831B1 (en) 2023-01-10
US20230027078A1 (en) 2023-01-26
