GB2574098A - Interactive systems and methods - Google Patents

Interactive systems and methods

Info

Publication number
GB2574098A
Authority
GB
United Kingdom
Prior art keywords
sequence
providing
user
speech
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1903984.1A
Other versions
GB2574098B (en)
GB201903984D0 (en)
Inventor
Alistair Brady Peter
Allen-Vercoe Hayden
Sankarpandi Sathish
Dickson Ethan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orbital Media And Advertising Ltd
Original Assignee
Orbital Media And Advertising Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Orbital Media And Advertising Ltd filed Critical Orbital Media And Advertising Ltd
Priority to GB2009211.0A priority Critical patent/GB2581943B/en
Publication of GB201903984D0 publication Critical patent/GB201903984D0/en
Priority to EP19778590.0A priority patent/EP3743925A1/en
Priority to PCT/GB2019/052611 priority patent/WO2020193929A1/en
Priority to US17/441,791 priority patent/US11900518B2/en
Publication of GB2574098A publication Critical patent/GB2574098A/en
Application granted granted Critical
Publication of GB2574098B publication Critical patent/GB2574098B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/2053D [Three Dimensional] animation driven by audio data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/903Querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/755Deformable models or variational models, e.g. snakes or active contours
    • G06V10/7553Deformable models or variational models, e.g. snakes or active contours based on shape, e.g. active shape models [ASM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/08Speech classification or search
    • G10L15/16Speech classification or search using artificial neural networks
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/24Speech recognition using non-acoustical features
    • G10L15/25Speech recognition using non-acoustical features using position of the lips, movement of the lips or face analysis
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/06Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • G10L21/10Transforming into visible information
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/04Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04L51/046Interoperability with other network applications or services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
    • H04L51/10Multimedia information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/31Indexing; Data structures therefor; Storage structures
    • G06F16/316Indexing structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/08Speech classification or search
    • G10L2015/088Word spotting
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/06Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • G10L21/10Transforming into visible information
    • G10L2021/105Synthesis of the lips movements from speech, e.g. for talking heads
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/02User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Medical Informatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Acoustics & Sound (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Public Health (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Pathology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

A method of producing an avatar video comprises the steps of: providing a reference image of a person's face; providing a plurality of characteristic features representative of a facial model X0 of the person's face, the characteristic features defining a facial pose dependent on the person speaking; providing a target phrase to be rendered over a predetermined time period during the avatar video, together with a plurality of time intervals t within the predetermined time period; for each of said time intervals t, extracting speech features from the target phrase to provide a sequence of speech features; and generating, using the plurality of characteristic features and the sequence of speech features, a sequence of facial models Xt for each of said time intervals t, which are used to produce the avatar video. The target phrase may be provided as text or audio data, and the speech features may comprise at least one phonetic label. The speech features may be extracted with a phonetic classifier module using a Deep Convolutional Network (DCN). The plurality of characteristic features defining the facial pose may comprise at least one Active Shape Model landmark and at least one latent descriptor representing abstract appearance features, which may be extracted using a DCN. The sequence of facial models may be generated using a recursive model, such as a sequence-to-sequence encoder-decoder method, and the recursive model may be implemented with a Long Short-Term Memory (LSTM) network. A sequence of face images may be generated using a frame generator to combine the reference image with the sequence of facial models Xt, possibly using a loss function for reducing differences between the reference image and each of the facial models Xt. A second invention is described for providing an answer to a user, comprising: providing a database comprising an indexed question library and a plurality of responses; providing a correlation between the question library and the plurality of responses; receiving a user question; searching keyword information in the question library based on the input; and providing at least one response to the user based on said correlation.

Description

Interactive Systems and Methods
Background
The present disclosure relates generally to interactive health care systems and methods.
In particular, the disclosure relates to providing a speech-driven, audio-visual avatar (anthropomorphous model) of a doctor or a nurse, which may be employed in a virtual environment or in advanced man-machine interfaces such as specialised digital assistants. The avatar is not only highly realistic but can also accurately answer healthcare-related questions, being provided within a platform of machine learning and deep learning based applications that answer questions posed in natural language by users.
Prior Art
Health care providers offer health care services to patients on a daily basis. In the United Kingdom, general practitioner doctors, referred to as GPs, are experiencing increased pressure partly due to insufficient government funding available for the National Health Service (NHS). A further factor in the increased workload of health care providers is an increasingly ageing population with older patients being more likely to develop health conditions and require more visits to the GPs.
Particularly, addressing self-treatable minor ailments such as colds, coughs, flu, bad back and hay fever costs the NHS around £2bn* per annum (*PAGB). GPs are currently under immense pressure, with significant amounts of money devoted to dealing with minor ailments (51.4m minor ailment consultations every year). This comes at a time when the NHS is required to find £22 billion of efficiency savings by 2020 (*PAGB).
As it becomes harder to secure a GP appointment and more convenient to search for information online, the core population is showing a dangerous over-reliance upon the so-called 'Dr Google' to self-diagnose their symptoms. In fact, 1 in 20 Google searches is now health-related. Currently, it is estimated that one in four internet users self-diagnose on the internet instead of visiting their GP. This proliferation of internet use for health information offers a mixed bag of valuable and misleading or junk information. It has been reported that 25% of women misdiagnose themselves on the internet (Daily Mail, 2012). Misdiagnosis leads to mistreatment, potentially endangering life through misinformation, which effectively puts further strain on the NHS.
In recent years, a number of online applications or digital assistants have been developed to address these concerns. The applications aim to answer questions asked by a user and provide answers or advice. Users, however, often find such applications too generic and non-engaging (‘robotic’). Furthermore, such automated systems suffer from major reliability issues (one example is Facebook™ chatbots hitting a 70% failure rate).
Aspects of the present invention aim to address the above-mentioned problems.
Summary
Solutions to the problems set out above are provided in the claimed aspects of the invention. These relate to ground-breaking artificial intelligence (AI) and avatar video technology. Taken either individually or, preferably, in combination, these solutions can serve medically approved information from a video-realistic avatar, accessible 24/7, which is both reliable and engaging to the user.
In a first independent aspect of the present invention there is provided a method of producing an avatar video, the method comprising the steps of: providing a reference image of a person's face;
providing a plurality of characteristic features representative of a facial model X0 of the person's face, the characteristic features defining a facial pose dependent on the person speaking;
providing a target phrase to be rendered over a predetermined time period during the avatar video and providing a plurality of time intervals t within the predetermined time period;
generating, for each of said time intervals t, speech features from the target phrase, to provide a sequence of speech features; and generating, using the plurality of characteristic features and sequence of speech features, a sequence of facial models Xt for each of said time intervals t.
Advantageously, the sequence of facial models is generated using characteristic features defining a facial pose as well as speech features. This results in a speech-driven sequence of facial models and thus a highly realistic avatar video.
Preferably, the characteristic features defining a facial pose comprise landmark points (landmarks) known from Active Shape Models (ASMs), as well as latent descriptors (vectors) representing abstract appearance features such as colour, texture etc. The characteristic features define a facial pose dependent on the person speaking. A pose preferably includes both high-level positional information i.e. gaze direction, head alignment, as well as capturing specific facial features and expressions.
Preferably, the plurality of characteristic features comprises at least one Active Shape Model (ASM) landmark, and at least one latent descriptor representing abstract appearance features. The at least one latent descriptor may be extracted using a Deep Convolutional Network (DCN).
Speech features are defined as abstract quantifiers of audio information such as, but not limited to, short-time-frequency representations i.e. mel-frequency cepstral coefficients (MFCCs), per-frame local energy, delta coefficients, zero-cross rate etc. Preferably, the speech features are extracted with a phonetic classifier module using a Deep Convolutional Network (DCN).
Preferably, the method further comprises the step of generating, from the sequence of facial models Xt, a sequence of face images to produce the avatar video.
The target phrase may be provided as text data. Alternatively, or in addition to the text data, the target phrase may contain audio data.
Preferably, at least one of said speech features comprises a phonetic label. Phonetic labels are preferably generated at pre-set time intervals to provide a phonetic label for each video frame.
Preferably, the sequence of facial models Xt is generated using a recursive model. The recursive model is preferably based on Long Short-Term Memory networks (LSTMs) comprising internal contextual state cells, wherein the output of the LSTM network is modulated by the state of the contextual state cells. This is an advantageous property when the prediction of the neural network is to depend on the historical context of inputs, rather than only on the very last input.
Generating the sequence of face images may comprise using a frame generator to combine the reference image with the sequence of facial models Xt.
Preferably, the frame generator comprises a discriminator module using at least one loss function for reducing differences between the reference image and each of the facial models Xt in said sequence of facial models Xt.
In a second independent aspect of the present invention, there is provided a method for providing an answer to a user, the method comprising the steps of: providing a database comprising an indexed question library and a plurality of responses; providing a correlation between the indexed question library and the plurality of responses; receiving a question from the user as user input;
searching keyword information in the indexed question library based on the user input; and providing at least one response to the user based on said correlation.
The method may be implemented in an information retrieval system. Preferably, the method further comprises the step of:
receiving feedback input from the user in response to the at least one response provided to the user; and based on the feedback input, searching further keyword information in the indexed question library; and providing at least one further response to the user based on said correlation.
Advantageously, the answer retrieval process is guided with user feedback, for example if the system is unable to retrieve answers confidently.
Preferably, the method actively learns from user interactions. For example, every interaction of a user with the system may be fed back as a way of retraining classification models which improves accuracy as a number of interactions increases.
The correlation is preferably provided using AI algorithms, which may comprise a Long Short-Term Memory (LSTM) algorithm implemented by a Bi-directional Recurrent Neural Network.
Preferably, the AI algorithms form a high-level classifier and a low-level classifier. This provides for more accurate and efficient classification. It will be appreciated that a number of classification models may be combined to provide answers accurately. This may consist of a number of high-level classifiers and several classifiers at a lower level under each of the high-level classifiers.
Preferably, before the user input is classified, the user input is pre-processed, said pre-processing comprising the steps of tokenising the user input and vectorising the tokenised user input. This enables capturing descriptive qualities of categorical labels, such as giving similar tokens close numerical representations.
Preferably, providing at least one response comprises providing an avatar video produced according to the first independent aspect. The production of realistic avatar videos enhances user experience, whilst the interactive method according to the second independent aspect improves reliability over existing techniques. Accordingly, this combination is synergistic and advantageous over prior art approaches such as using videos which have to be shot with real persons and are therefore often lengthy and expensive to provide. The interactive systems according to aspects of the present invention increase scalability and flexibility of applications.
In a third independent aspect there is provided a system for producing an avatar video, the system comprising:
an image processing module for receiving a reference image of a person's face and for extracting a plurality of characteristic features representative of a facial model X0 of the person's face, the characteristic features defining a facial pose dependent on the person speaking;
a speech processing module for extracting a target phrase to be rendered over a predetermined time period during the avatar video and for providing a plurality of time intervals t within the predetermined time period;
the speech processing module configured to generate, for each of said time intervals t, speech features from the target phrase, to provide a sequence of speech features; and an avatar rendering module for generating, using the plurality of characteristic features and sequence of speech features, a sequence of facial models Xt for each of said time intervals t.
Advantageously, the image processing module and speech processing module are separated. This separation provides advantages in both performance and maintenance.
Preferably, the avatar rendering module is configured to represent physical dynamics of the speech features by solving a system of ordinary differential equations (ODEs). This models realistic head movement.
Preferably, the physical dynamics of speech are represented with a neural network.
In a fourth independent aspect, there is provided an interactive system (also referred to as an information retrieval system) for providing an answer to a user, the system comprising: a database comprising an indexed question library and a plurality of responses;
a processing module for providing a correlation between the indexed question library and the plurality of responses;
input means for receiving a question from the user as user input;
wherein the processing module is configured to search keyword information in the indexed question library based on the user input; and providing at least one response to the user based on said correlation.
Preferably, the plurality of responses comprises at least one avatar video produced using a system according to the third independent aspect. The avatar is presented to the user, therefore the user is provided with accurate information visually. Users may interact with the system via a mobile device or a PC, for example.
In a dependent aspect, a healthcare information system comprises an interactive system according to the fourth independent aspect. With a growing amount of data, the healthcare information system enables searching and getting relevant information quickly and accurately. A health question received from a user is processed and developed by the interactive system to fetch the related information from the database; this information is then converted to an avatar video to enhance user experience.
Dependent aspects of each of the independent aspects are provided in the dependent claims.
Particularly when taken in combination, aspects of the present invention can provide more reliable, accurate systems which, at the same time, are visual (video realistic), interactive, personal and contextual to enhance user experience.
In a comparative example, there is provided a method for providing an answer to a user related to a healthcare issue, the method comprising the steps of:
providing a database comprising an indexed symptoms library and a plurality of responses; providing a correlation between the indexed symptoms library and the plurality of responses; receiving user input related to the healthcare issue; searching keyword information in the indexed symptoms library based on the user input; and providing at least one response to the user based on said correlation.
In a subsidiary aspect, the method further comprises the steps of: receiving further user input in response to the at least one response provided to the user; and based on the further input, searching further keyword information in the indexed symptoms library; and providing at least one further response to the user based on said correlation.
In a subsidiary aspect, the correlation comprises at least one Al/machine learning algorithm. In a subsidiary aspect, at least one response includes video or avatar implementation. For example, the avatar may be a realistic video representation of a GP which may be created on the fly from a database of information or combining multiple databases of information.
Brief Description of the Drawings
The disclosure will now be described with reference to and as illustrated by the accompanying drawings in which:
Figure 1 shows a comparative example of an interactive health care system; and
Figures 2, 3A, and 3B outline examples of interactive health care systems with machine learning and avatar capabilities;
Figure 4 shows a system for producing a speech-driven audio-video avatar according to an aspect of the present invention (high-level descriptor of a dual network system);
Figure 5 shows a method of extracting point model and appearance features of a given face image;
Figure 6 shows a speech processing method including steps for feature extraction;
Figure 7 shows a sequential point-model generator used to model the dynamics in head and mouth movement during speech;
Figure 8 shows a style-transfer network used to render images with the appearance of a reference image and pose content from the generated point model representation;
Figure 9 shows an interactive health-care system according to an aspect of the present invention, including an AI engine and video/avatar database;
Figure 10 shows a process employed by sub-modules of the AI engine;
Figure 11 shows the feedback processing and active learning component of the AI engine;
Figure 12 shows the processing component of the AI engine;
Figure 13 shows the pre-processing component of the AI engine;
Figures 14 to 17 illustrate a further system for producing hyper-realistic and responsive avatars according to another aspect of the present invention.
Detailed Description
Interactive systems and methods
Figure 1 shows a first example, wherein a user query 100 is input by a user via text or voice. The query may be a question, for example, about a symptom such as cough or sore throat, or about conditions such as coughs or flu. The input is submitted to a digital platform 200 which incorporates a library of answers, in this example, videos featuring a GP which provides answers. Each video may be on a particular health topic, has associated keywords, and can be searched and output contextually as an answer 300 to the query 100 submitted. Although videos are preferred, it will be appreciated that the library of answers as well as output may have other suitable formats, including text, images, etc. It will also be appreciated that the length of the videos may vary from topic to topic.
In this example, the search is an Elasticsearch search (https://en.wikipedia.org/wiki/Elasticsearch). Advantageously, Elasticsearch is distributed, providing a scalable, near real-time search. Each video is indexed and tagged with keyword tags relevant to the health topic they address. The search accuracy may be improved by including a function for determining synonyms of the keywords in addition to the assigned keywords themselves.
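A minimal sketch of such a keyword search with synonym expansion is given below, using the official Elasticsearch Python client (7.x-style body= calls). The index name, field names and synonym list are illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch of a keyword/synonym video search as described above.
# Index name, field names and the synonym list are illustrative assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Custom analyzer with a synonym filter so queries such as "flu" also match
# videos tagged "influenza".
es.indices.create(
    index="gp_videos",
    body={
        "settings": {
            "analysis": {
                "filter": {
                    "health_synonyms": {
                        "type": "synonym",
                        "synonyms": ["flu, influenza", "sore throat, pharyngitis"],
                    }
                },
                "analyzer": {
                    "keyword_synonym": {
                        "type": "custom",
                        "tokenizer": "standard",
                        "filter": ["lowercase", "health_synonyms"],
                    }
                },
            }
        },
        "mappings": {
            "properties": {
                "keywords": {"type": "text", "analyzer": "keyword_synonym"},
                "video_url": {"type": "keyword"},
            }
        },
    },
)

def search_videos(user_query: str, top_k: int = 3):
    """Return the best-matching video documents for a free-text query."""
    result = es.search(
        index="gp_videos",
        body={"query": {"match": {"keywords": user_query}}, "size": top_k},
    )
    return [hit["_source"] for hit in result["hits"]["hits"]]
```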
Figure 2 shows a second example, wherein the user query 100 is input by a user via text or voice. In this example, the digital platform is an artificial intelligence (Al) platform 250 and outputs 350 are provided via a video or video realistic avatar sequence to provide a response to the submitted queries. Machine learning techniques are employed for the Al platform 250 to learn from user feedback training and thus to provide increasingly accurate responses over time. Particular applications envisaged include education of patients, chatbots for answering sexual health-related problems, and prediction of the likelihood of heart problems and back pain diagnostics.
In a preferred scenario, an avatar is presented to the user, prompting the user to ask their question(s). The user input 100 may be either spoken (via a microphone) or written. The system then converts the spoken or written sentences to high-dimensional vector representations of the user input 100. This is done through neural architectures such as 'word2vec' (https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) or 'glove' (https://nlp.stanford.edu/pubs/glove.pdf), where words having similar syntactical and semantic features are placed in proximity. The high-dimensional representations of the user input 100 are used by the system to interrogate a symptoms database, for example. A set of initial results is generated.
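By way of a hedged illustration, the following sketch embeds a user question by mean-pooling pre-trained GloVe vectors loaded through gensim and ranks entries of a symptoms database by cosine similarity; the model name, example entries and pooling strategy are assumptions rather than the patent's prescribed implementation.

```python
# Sketch of query vectorisation and symptom retrieval (assumed details).
import numpy as np
import gensim.downloader as api

word_vectors = api.load("glove-wiki-gigaword-100")  # pre-trained word embeddings

def embed(text: str) -> np.ndarray:
    """Mean-pool the word vectors of all in-vocabulary tokens."""
    tokens = [t for t in text.lower().split() if t in word_vectors]
    if not tokens:
        return np.zeros(word_vectors.vector_size)
    return np.mean([word_vectors[t] for t in tokens], axis=0)

# Illustrative symptoms database entries.
symptom_entries = ["persistent dry cough", "sore throat and fever", "lower back pain"]
symptom_matrix = np.stack([embed(s) for s in symptom_entries])

def query_symptoms(user_input: str, top_k: int = 2):
    """Rank symptom entries by cosine similarity to the user query."""
    q = embed(user_input)
    sims = symptom_matrix @ q / (
        np.linalg.norm(symptom_matrix, axis=1) * np.linalg.norm(q) + 1e-9
    )
    order = np.argsort(-sims)[:top_k]
    return [(symptom_entries[i], float(sims[i])) for i in order]

print(query_symptoms("I have a bad cough that will not go away"))
```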
Next, an output in the form of an avatar video is fetched (or generated) based on the set of initial results. The output may include a question to the user to request further information based on the set of initial results if the AI has low confidence in the initial results. Accordingly, the system is interactive and iterative. That is, the system continues to extract useful information from successive user inputs and uses this to interrogate the database in order to generate further, secondary queries and smaller, consecutive results subsets from the initial results set, until a single result or a small enough subset of results with high confidence is arrived at. This may include a complete reset so as to generate a fresh set of initial results if the subsequent user responses render this necessary (e.g. if subsequent queries produce a null/empty subset).
In an example, avatar image sequences are generated offline, in non-real-time, for a given text or audio speech target. This process requires storing a number of similar reference frames to be used to generate the output sequence. More frames provide greater temporal coherence and video quality at the expense of increased computation.
In an alternative, preferred example, avatar sequences are generated on the fly. On-the-fly generation aims to generate video in real time from only a single reference image and a known sequence of speech labels, provided as encoded audio sequences or from the text databases of information. The system also incorporates an active learning schema which learns actively based on the history of user inputs and AI responses, improving the AI's confidence to answer a user query/input continuously over time.
Preferred, but non-essential, system capabilities include voice recognition, avatar personalisation (including voice/dialect personalisation) and personalisation/results focusing taking into account a user's preference or medical history.
With reference to Figure 3A, the user query 100 is input by a user via text or voice. AI algorithms are implemented in a digital platform 270, which is faster, more scalable and more reliable than elastic search techniques. The output 370 may be in any form, for example, provided on a computer screen, smartphone or tablet.
Preferably, the output 370 is in the form of concise, relevant answers within an avatar video. With reference to Figure 3B, the avatar in this example is produced with an avatar sequence generator 40 using text databases and audio data. Preferably, the avatar sequence generator 40 has capabilities of ‘on the fly' sequence generation. This incorporates a number of features including real-time functionality, single reference image of targets, multi-format speech targets (text or audio) and the ability to generate footage from previously unseen targets.
The AI algorithm improves reliability over existing techniques, whilst the video-realistic avatar enhances user experience. This is advantageous compared to using videos which must be shot with real persons and are therefore often lengthy and expensive to provide. Using avatars increases scalability and flexibility of applications.
Production of speech driven, audio-visual avatars
The present section describes systems and methods according to aspects of the invention, used to create a digital avatar, using audio-visual processing for facial synthesis. From these, a database of digital avatars may be built to be used in the examples of interactive systems and methods provided above, and as will be further described with reference to Figure 9 below.
Advantageously, an interactive user interface may be therefore provided to a specialised chatbot that can answer healthcare questions. It will be appreciated, however, that the described systems and methods can also be used in standalone audio-visual processing algorithms for facial synthesis. The methods make use of modern machine learning and digital signal processing techniques.
The purpose of this aspect of the invention is to create 3-D facial models of a target subject (e.g. a doctor or nurse which a user may already be familiar with) to produce a hyper-realistic speech driven avatar of that target subject. In preferred embodiments, given a target phrase recorded as spoken by the target subject and reference appearance (e.g. an image of the subject), the system will provide videos of the target subject speaking the target phrase.
With reference to Figure 4, a system 41 for creating an avatar video comprises a plurality of modules (modular sub-systems), including: a speech processing module 50, an image processing module (‘face encoder') 60, a frame rendering (‘avatar rendering') module 70 and a post-processing and video sequencer module 80. The speech processing module 50 and the image processing module (‘face encoder') 60 are separate - this separation is important as it provides advantages in both performance and maintenance.
The modular design of the system 41 enables the system to be operable in several configurations (modes), for example for online and offline usage. In offline mode, photorealism and synchronicity are prioritised, whereas online mode aims to achieve light functionality to support mobile devices and video streaming. Advantageously, the system 41 may be provided as a service platform, e.g. in combination with a digital platform 270/AI engine 280 as outlined in Figure 3A and Figure 9, respectively, or as a stand-alone application (e.g. a plug-in).
Each module of system 41 comprises data pathway (data flow) and specialised processing. Figure 4 shows the overall data-flow as well as the I/O interface points (inputs 90 and outputs 95). Inputs 90 may include a reference appearance model of a target subject face (target face) which is a 3-D face model for example or a face image. Inputs 90 may also include a target phrase, which may be provided in any suitable form such as text or raw audio data.
The image processing module 60 is configured to extract a plurality of key descriptive parameters (descriptors) from the reference model of the target face (the ‘reference image'). The descriptive parameters may include characteristic features referred to as landmark points (landmarks) known from Active Shape Models (ASMs), as well as latent descriptors (vectors) representing abstract appearance features (such as colour, texture etc.). ASMs are statistical models of the shape of objects which iteratively deform to fit to an example of the object in a new image. The latent descriptors may be extracted using a pre-trained Deep Convolutional Network (DCN).
In alternative embodiments, where no reference appearance model is supplied (e.g. as a reference face image), pre-extracted parameters may be used instead, as available. Advantageously, subjective appearance features may thus be separated from general shape features, which are dependent on speech (i.e. which change whilst the target face is speaking).
Historically the parameters used are the location of key-points such as mouth corners, nose edges, etc. In existing parametric models, such as ASMs these are compressed with Principal Component Analysis (PCA) to reduce the dimensionality and standardize representations. The PCA encoded features can then be clustered into distinct modes (i.e. most frequent/dense distributions). These modes of variation capture common expressions and poses. The advantages of this approach are efficiency and relatively low computational time. The disadvantages of this approach are that each model is subjective, requiring large amounts of very similar data for accurate reconstruction, and that rendering new images from point models requires a separate process.
Active Appearance Models (AAMs) attempt to resolve this by parametrising texture maps of the image; however, this is a limiting factor. In contrast, the fully data-driven approach common in modern computer vision does not attempt to parameterise the subject model and instead is focused on producing images from the outset. This involves learning how pixels are typically distributed in an image. As such, the features are learned directly from the images and are more abstract - typically in the form of edges and gradients that describe low-level image data. A disadvantage is that these models are highly specific to the training task and may function unpredictably on new data. Further restrictions include a need to fix image resolution.
The speech processing module 50 receives an input target phrase. The input target phrase may be generated (e.g. by a chatbot backend) using Natural Language Processing. Alternatively, the input target phrase may be specified by a user.
This input target phrase 90 may be supplied as a text input and/or audio waveform for example. Where no audio recording is available the target phrase may be generated with Text-To-Speech (TTS) software. From the audio waveform, phoneme labels are preferably generated, with a phonetic classifier module 51, at pre-set time intervals - this advantageously provides a phoneme label for each video frame. A phoneme label (also referred to as a phonetic label) is a type of class label indicating fundamental sounds common in speech.
From the input target phrase, the speech processing module 50 extracts speech features and, optionally, phoneme labels. Speech features are defined as abstract quantifiers of audio information such as, but not limited to, short-time-frequency representations, e.g. mel-frequency cepstral coefficients (MFCCs), per-frame local energy, delta coefficients, zero-cross rate, etc.
An avatar rendering module 70 receives the extracted descriptive parameters from the image processing module 60 (which include landmarks) and the extracted speech features and phonetic labels from the speech processing module 50. The avatar rendering module 70 comprises a point model sequencer 71 which receives the descriptive parameters (point model) from the image processing module 60 and the extracted speech features and phonetic labels from the speech processing module 50.
The point model sequencer 71 preferably uses a recursive model (‘pose-point model') to generate a sequence of landmarks giving the face position and pose at each time interval of the avatar video. A ‘pose’ refers to both the high-level positional information i.e. gaze direction, head alignment, as well as capturing specific facial features and expression. The recursive model is preferably based on Long Short-Term Memory networks (LSTMs), which are known as a special type of recurrent neural networks comprising internal contextual state cells that act as long-term or short-term memory cells. The output of the LSTM network is modulated by the state of these cells. This is an advantageous property when the prediction of the neural network is to depend on the historical context of inputs, rather than only on the very last input.
The avatar rendering module 70 further comprises a frame generating model 72 (‘frame generator') which receives the output of the point model sequencer 71, that is, the sequence of landmarks giving the face position and pose at each time interval of the avatar video - additionally we colour code high level semantic regions such as lips, eyes, hair etc. The frame generator renders these into full frames using a specialised style-transfer architecture (as will be described below with reference to Figure 8).
System 41 further comprises a post-processing and video sequencer module 80 which receives the generated frames from the frame generator 72 of the avatar rendering module 70. Following 'light' post-processing such as image and temporal smoothing, colour correction, etc., module 80 encodes these frames together with a target audio input into an avatar video. The target audio input provided to the module 80 may be supplied or generated. In an example, the 'Text-To-Speech' capability of the speech processing module 50 is used to supply the target audio input to the module 80.
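As an illustration of this final sequencing step, rendered frames on disk could be muxed with the target audio track using the ffmpeg command-line tool; the frame rate, file paths and codec choices below are assumptions for the sketch, not requirements of module 80.

```python
# Sketch of the video-sequencing step: mux numbered frames with an audio track.
import subprocess

def encode_avatar_video(frame_pattern: str, audio_path: str, out_path: str, fps: int = 25):
    """Combine numbered frame images (e.g. frame_0001.png) with a speech audio track."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-framerate", str(fps), "-i", frame_pattern,   # image sequence input
            "-i", audio_path,                              # target speech audio
            "-c:v", "libx264", "-pix_fmt", "yuv420p",      # widely playable video encoding
            "-c:a", "aac", "-shortest",                    # encode audio, stop at shorter stream
            out_path,
        ],
        check=True,
    )

encode_avatar_video("frames/frame_%04d.png", "speech.wav", "avatar.mp4")
```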
Turning to Figure 5, an exemplary method used by the face encoder 60 is illustrated. At step 600, a face is identified within an image, using for example a pre-trained DCN. At step 610, the identified face image is segmented using a binary mask. The binary mask advantageously removes background from the face image, which reduces variance during training, therefore providing for a more accurate identification of faces. At step 620, the segmented image is cropped and scaled to a predetermined size.
At step 630, a landmark detector DCN extracts landmark points (landmarks), which represent key parameters, from the image output at step 620. This provides the point model to be input to the point model sequencer 71 of the avatar rendering module 70.
Separately (in parallel to step 630), an appearance encoder network is used, at step 640, to encode the image appearance features as an appearance vector. The appearance vector is input to the frame generator module 72 of the avatar rendering module 70.
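The following is a rough stand-in for this face-encoder pipeline, using dlib's detector, 68-point landmark predictor and ResNet face descriptor in place of the pre-trained DCNs described above; the model file names are the standard dlib downloads and are deployment assumptions rather than part of the disclosure.

```python
# Sketch of the Figure 5 pipeline with dlib substituting for the described DCNs.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
landmark_predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
appearance_net = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

def encode_face(image_path: str, size: int = 256):
    img = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)

    # Step 600: detect the face region (assumes exactly one face is present).
    face = detector(img, 1)[0]

    # Steps 610/620: crop to the detected face and rescale to a fixed size
    # (a simple crop stands in for the binary segmentation mask).
    crop = img[max(face.top(), 0):face.bottom(), max(face.left(), 0):face.right()]
    crop = cv2.resize(crop, (size, size))

    # Step 630: landmark points -> point model for the sequencer 71.
    shape = landmark_predictor(crop, dlib.rectangle(0, 0, size, size))
    landmarks = np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float32)

    # Step 640: latent appearance descriptor -> input to the frame generator 72.
    appearance = np.array(appearance_net.compute_face_descriptor(crop, shape))

    return landmarks, appearance
```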
Turning to Figure 6, an exemplary method used by the speech processing module 50 is illustrated. At step 500, the module checks if the input target phrase 90 includes audio data. If the input 90 contains no audio but only text input, a Text-To-Speech (TTS) encoder may be used to produce a waveform specified by the user query. It will be appreciated by the skilled person that there are many envisaged ways of producing the audio data, such as vocoders, concatenated speech, and fully generative methods such as WaveNet.
At step 510, feature extraction is performed using a speech classification algorithm, as shown in Figure 6. In this sequential pipeline example, pre-processing, post-processing and feature extraction are combined (steps 505).
At steps 505, the audio input 90 is first re-sampled, for example by decimation or frequency-based interpolation, to a fixed sample rate of 16 kHz. Following this, the signal is passed through an anti-aliasing filter (e.g. with an 8 kHz cut-off). Pre-emphasis is performed, for example with a simple high-pass filter, to amplify the higher frequencies that are better descriptive of speech. Finally, the signal is RMS normalised and separated into short time frames synchronised to the video frame rate.
The feature extraction processing involves discrete Fourier transforms on these frames to obtain a spectrogram. The per-frame energy is extracted here. As the frequency is logarithmically scaled, higher frequencies are less impactful and as such can be grouped into energy bands. This is the inspiration behind the mel-cepstral spectrogram, wherein a filter bank is used to group frequencies into increasingly wider bands. This severely reduces dimensionality and increases robustness. The mel-frequencies are then passed through a discrete-cosine-transform (DCT-II) to provide the MFCCs. Post-processing can then be applied per-speaker to transform each feature to a normally distributed variable.
In this example, the speech classification algorithm is used to extract mel-frequency cepstral coefficient (MFCC) audio features and the time derivatives are linearly approximated with a 2nd order symmetric process. These features are then concatenated, at step 510, to give a local contextual window containing the speech features from time steps either side of the specific frame. This has the benefit of increasing the scope of each frame.
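A minimal sketch of this pre-processing and feature-extraction chain, assuming librosa as the signal-processing library, is given below; the frame rate, coefficient counts and context-window width are illustrative assumptions.

```python
# Sketch of speech pre-processing and MFCC feature extraction (assumed settings).
import librosa
import numpy as np

def speech_features(audio_path: str, video_fps: int = 25, context: int = 2):
    # Re-sample to a fixed 16 kHz rate (librosa applies anti-aliasing internally).
    y, sr = librosa.load(audio_path, sr=16000)

    # Pre-emphasis (simple high-pass) and RMS normalisation.
    y = librosa.effects.preemphasis(y)
    y = y / (np.sqrt(np.mean(y ** 2)) + 1e-8)

    # Frames synchronised to the video frame rate.
    hop = sr // video_fps

    # Mel-frequency cepstral coefficients plus local energy and deltas.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=hop)
    energy = librosa.feature.rms(y=y, hop_length=hop)
    delta = librosa.feature.delta(mfcc, order=1)          # symmetric derivative approximation
    feats = np.vstack([mfcc, energy, delta]).T            # (frames, features)

    # Per-speaker standardisation to zero mean / unit variance.
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)

    # Concatenate a local contextual window of +/- `context` frames.
    padded = np.pad(feats, ((context, context), (0, 0)), mode="edge")
    window = [padded[i:i + len(feats)] for i in range(2 * context + 1)]
    return np.concatenate(window, axis=1)                 # (frames, features * (2*context+1))
```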
At step 520, phonetic labels are generated with the phonetic classifier module 51. In an example, a 1D Convolutional Network is used to provide softmax classifications of the predicted phoneme. This uses an autoencoder to predict the probability distribution across the phonetic labels for a given set of speech features. In addition, Bayesian inference may be applied by modelling a prior distribution of likely phonemes from the text annotation to improve performance. At step 530, the output of this network is a sequence of phoneme labels {P0, ..., Pi, ..., Pn}, one for each video frame interval.
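A hedged sketch of such a per-frame phonetic classifier is shown below as a small 1D convolutional network with a softmax output; the layer sizes, feature dimension and phoneme inventory size are assumptions.

```python
# Sketch of a 1D-CNN phonetic classifier producing softmax phoneme probabilities.
import tensorflow as tf

NUM_PHONEMES = 40          # assumed size of the phonetic label set
FEATURE_DIM = 135          # assumed speech-feature dimension per frame

def build_phonetic_classifier(window: int = 5) -> tf.keras.Model:
    inputs = tf.keras.Input(shape=(window, FEATURE_DIM))      # contextual window of frames
    x = tf.keras.layers.Conv1D(128, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv1D(128, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling1D()(x)
    outputs = tf.keras.layers.Dense(NUM_PHONEMES, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

# Applying the classifier frame by frame yields the label sequence {P0, ..., Pn}.
```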
Turning to Figure 7, a method to be carried out by the point model sequencer 71 is illustrated. The point model sequencer 71 receives the phoneme label sequence {P0, ..., Pi, ..., Pn} and an initial face pose model X0, and generates face pose models Xt for each frame at time t. The point model sequencer 71 uses a recursive model, preferably based on Long Short-Term Memory networks (LSTMs). In effect this is an application of the sequence-to-sequence (seq2seq) encoder-decoder framework, and as such a bi-directional LSTM is a preferred implementation. The output of the point model sequencer 71 is thus a sequence of face pose models Xt, one for each frame, which represents framewise positional information.
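One possible realisation of such a sequencer is sketched below: a bidirectional LSTM consumes the per-frame speech/phoneme features, conditioned on the initial pose X0 through its initial states, and emits flattened landmark coordinates per frame. The landmark count, feature size and layer widths are assumptions.

```python
# Sketch of a bi-LSTM seq2seq-style point model sequencer (assumed dimensions).
import tensorflow as tf

NUM_LANDMARKS = 68   # assumed ASM point count
FEATURE_DIM = 135    # assumed per-frame speech feature size
UNITS = 256

def build_point_model_sequencer() -> tf.keras.Model:
    speech_seq = tf.keras.Input(shape=(None, FEATURE_DIM), name="speech_features")
    initial_pose = tf.keras.Input(shape=(NUM_LANDMARKS * 2,), name="X0")

    # Encode the reference pose X0 into the LSTM's initial states so every
    # predicted frame is conditioned on it (encoder-decoder flavour).
    h = tf.keras.layers.Dense(UNITS, activation="tanh")(initial_pose)
    c = tf.keras.layers.Dense(UNITS, activation="tanh")(initial_pose)

    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(UNITS, return_sequences=True)
    )(speech_seq, initial_state=[h, c, h, c])

    # Per-frame pose models Xt (flattened landmark coordinates).
    poses = tf.keras.layers.TimeDistributed(
        tf.keras.layers.Dense(NUM_LANDMARKS * 2)
    )(x)
    return tf.keras.Model([speech_seq, initial_pose], poses)

model = build_point_model_sequencer()
model.compile(optimizer="adam", loss="mse")
```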
Turning to Figure 8, a method to be carried out by the frame generator 72 is illustrated. Each of the face pose models Xt is input into the frame generator 72 to be combined with the initial face pose model X0 (reference appearance model) to produce full frames. This may be achieved with a specialised style-transfer architecture comprising a discriminator sub-module 723, which uses specialised loss functions and discriminator networks. Specifically, the aim is to minimise the difference between appearance encodings of different frames of the same subject.
Advantageously, a generalised face discriminator ensures realism. A face-discriminator takes single colour images and detects realism. Furthermore, a temporal coherence network may be used to score the neighbouring frames and pose errors. A temporal discriminator is a 2D convolutional encoder that takes a sequence of grayscale images stacked in the channel axis to score the relative temporal consistency. As such, this detects inconsistent movements between frames.
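To make these loss terms concrete, the sketch below combines an appearance-consistency term, an adversarial realism term from a face discriminator and a temporal term from a discriminator over channel-stacked grayscale frames; the network handles and loss weights are assumptions, not values disclosed in the patent.

```python
# Sketch of combined frame-generator losses (appearance, realism, temporal coherence).
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(appearance_encoder, face_disc, temporal_disc,
                   generated_frames, reference_frame,
                   w_app=10.0, w_adv=1.0, w_temp=1.0):
    # Appearance consistency: encodings of generated frames should match the
    # encoding of the reference frame of the same subject.
    ref_code = appearance_encoder(reference_frame[None, ...])
    gen_codes = appearance_encoder(generated_frames)
    appearance = tf.reduce_mean(tf.abs(gen_codes - ref_code))

    # Face realism: the face discriminator should score generated frames as real.
    realism_scores = face_disc(generated_frames)
    adversarial = bce(tf.ones_like(realism_scores), realism_scores)

    # Temporal coherence: grayscale frames stacked on the channel axis are
    # scored by the temporal discriminator for consistent motion between frames.
    gray = tf.image.rgb_to_grayscale(generated_frames)            # (T, H, W, 1)
    stacked = tf.transpose(gray[..., 0], [1, 2, 0])[None, ...]    # (1, H, W, T)
    temporal_scores = temporal_disc(stacked)
    temporal = bce(tf.ones_like(temporal_scores), temporal_scores)

    return w_app * appearance + w_adv * adversarial + w_temp * temporal
```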
Figures 14 to 17 illustrate a further system 4100 for producing hyper-realistic and responsive avatar videos according to another aspect of the present invention. The system 4100 may be an independent component (module) capable of producing a video realistic and responsive avatar to engage users interactively. The system 4100 combines the strengths of physical models and neural network approaches.
Figure 14 is a schematic block diagram of system 4100, which comprises three main modules: an image processing module ('parametric model module') 6000, a speech processing ('Automatic Speech Recognition (ASR)') module 5000 and a frame rendering module ('frame renderer') 7000. The inputs to the system in this example are a single reference image and an arbitrary-length mono audio waveform of the speech to be given. Alternatively, the system can use a text response by incorporating a text-to-speech (TTS) module such as the WORLD vocoder, Tacotron, AWS TTS, etc. The output is a video sequence of the avatar speaking the query phrase.
The speech recognition module 5000 transforms the audio input into a sequence of descriptors in a multi-stage sequence as exemplified in Figure 15. First, the audio input in this example is normalised to a predefined rms power and divided into equally spaced frames at intervals of time t. From these frames, features descriptive of speech are extracted, such as MFCCs, i-vectors, instantaneous fundamental frequency, etc. These descriptive features are preferably standardised to equivalent scales. Features from concatenated frames over a short temporal window are fed as inputs to a specialised convolutional recurrent encoder for example, where they are embedded into a latent space to produce a sequence of embeddings. These embeddings are used by both the parametric model 6000 and in a phoneme level classifier 5001 to produce a sequence of per-frame phoneme labels.
The parametric model module 6000 is a temporal version of the physical models used in AAMs and similar. We estimate both a descriptive physical representation and the temporal dynamics as a function of speech. The process employed by the parametric model 6000 is outlined with reference to Figure 16.
The parametric model 6000 represents the physical dynamics of speech with a first-order Ordinary Differential Equation (ODE). This allows the position of face vertices to change in response to speech. In the data flow, an initial estimate is first extracted from a reference image - while not a necessary requirement, it is preferred that the initial image is frontally aligned, well-lit and in a neutral or resting pose. With the speech embeddings from the ASR network, the framewise derivatives for each vertex are estimated such that, by adding these derivatives to the current model, we arrive at the vertex positions at the next frame.
This can be done auto-regressively for arbitrary length sequences at arbitrary frame rates to produce a temporal sequence of face poses and expressions.
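A minimal numpy sketch of this auto-regressive, first-order ODE roll-out is given below; the derivative network is left as a placeholder callable, and the frame rate and shapes are assumptions.

```python
# Sketch of forward-Euler roll-out of the speech-driven vertex dynamics.
import numpy as np

def rollout_face_model(x0, speech_embeddings, derivative_fn, fps=25.0):
    """
    x0                : (V, 3) initial vertex positions from the reference image
    speech_embeddings : (T, D) per-frame embeddings from the ASR module
    derivative_fn     : callable (x_t, e_t) -> dx/dt of shape (V, 3) (assumed learned network)
    """
    dt = 1.0 / fps
    x_t = x0.copy()
    sequence = [x_t.copy()]
    for e_t in speech_embeddings:
        # Forward-Euler step of the first-order ODE: x_{t+1} = x_t + f(x_t, e_t) * dt
        x_t = x_t + derivative_fn(x_t, e_t) * dt
        sequence.append(x_t.copy())
    return np.stack(sequence)     # (T + 1, V, 3) face poses over time
```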
As these physical models do not contain texture maps or high-resolution detail, rendering is done separately, in the frame renderer module 7000, as exemplified in Figure 17. The rendering module 7000 takes semantic information from the previous modules 5000, 6000 pertaining to face pose, expression and speech content alongside a reference image (as was shown in Figure 14) and transforms the semantic maps into photorealistic, temporally smooth video frames. Advantageously, the method renders photorealistic images in arbitrary poses, whilst preserving appearance and identity.
It will be appreciated that systems 41, 4100 as described above may be used in standalone applications outside healthcare, for example to provide avatars for any virtual environments, video-communications applications, video games, TV productions and advanced man-machine user interfaces.
AI systems and methods for interactive health care systems
The present section describes systems and methods according to aspects of the invention for providing an AI module to be used in the examples of interactive systems and methods provided above, and particularly, in combination with the avatar database.
The purpose is to create a system architecture and process that can accurately and quickly answer questions posed by the user in natural language. Advantageously, an interactive user interface may therefore be provided to a specialised chatbot that can accurately answer healthcare questions through a realistic avatar. The system may be referred to as an 'interactive healthcare system'. It will be appreciated, however, that the described systems and methods can also be used in standalone applications outside healthcare. The systems and methods make use of modern machine learning techniques.
With reference to Figure 9, an interactive system comprises an AI module 280 ('AI engine') and a database 480, which may include avatar videos produced with the systems and techniques described in the previous section. A user question may be received as an input 100 to the AI module 280 in the form of text or audio data (voice). The AI module 280 processes and analyses the user question and fetches the relevant answer(s) from the video avatar database 480. As will be described in this section, AI algorithms are implemented by the AI module 280 to provide a faster, more scalable and more reliable solution than non-machine-learning techniques.
The answer(s) fetched from the database 480 may be presented to the user in the form of an output 380 as an avatar video, a normal video or text, based on availability. Preferably, the output 380 is in the form of concise, relevant answers within a realistic avatar video. The output 380 may be presented in any form, for example on a computer screen, smartphone or tablet.
Turning to Figure 2, a data flow for the AI module 280 is shown. Users can use voice and/or text to input a question, for example. If the input 100 comprises audio data, then this is converted to text using commercially available audio-to-text platforms.
The input 100 is then provided to a processing sub-module 281 of the AI module 280. The processing sub-module 281 runs machine learning and/or deep learning algorithms. Before the input 100 is provided to the machine learning algorithm of the processing sub-module 281, the input is pre-processed with a pre-processing sub-module 282 (shown in Figures 12 and 13).
With reference to Figure 13, the main functionality of the pre-processing module 282 is to divide a question sentence into words, groups of words and characters, a process known as tokenisation. Tokenising methods are known; in this example TensorFlow modules have been used to undertake tokenisation. Once the input 100 is tokenised, it is then vectorised using known techniques such as a word2vec/GloVe language model. Vectorisation is a technique used to transform categorical data into enumerated representations. Good vectorisation captures descriptive qualities of the categorical labels, such as giving similar tokens close numerical representations.
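A minimal sketch of the tokenise-then-vectorise pre-processing step, using the TensorFlow utilities mentioned above, is given below; the toy questions, the out-of-vocabulary token and the embedding dimension are illustrative assumptions.

```python
import tensorflow as tf

questions = ["what are the symptoms of hay fever",
             "how do I treat a sore throat"]

# Tokenisation: split each question into word tokens and map them to integer ids
tokenizer = tf.keras.preprocessing.text.Tokenizer(oov_token="<unk>")
tokenizer.fit_on_texts(questions)
sequences = tokenizer.texts_to_sequences(questions)
padded = tf.keras.preprocessing.sequence.pad_sequences(sequences, padding="post")

# Vectorisation: an embedding layer gives each token a dense representation in
# which similar tokens can sit close together (pre-trained word2vec/GloVe
# weights could be loaded here instead of training from scratch)
embedding = tf.keras.layers.Embedding(input_dim=len(tokenizer.word_index) + 1,
                                      output_dim=64)
vectors = embedding(padded)             # shape: (num_questions, max_len, 64)
```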
Once pre-processed, the input 100 is then provided to the machine learning algorithm of the processing sub-module 281 for training and prediction. The machine learning algorithm used in this example is Bi-LSTM, which combines Long Short-Term Memory (LSTM) units with a Bi-directional Recurrent Neural Network (RNN). As the name suggests, bi-directional RNNs are trained on both the forward and backward passes of a sequence simultaneously. The bi-directional LSTM is similar, but also includes internal gating, including forget gates, allowing features to be carried through long sequences more easily. Bi-LSTM is a development of artificial neural networks specialised for processing sequence and time-series data. It will be appreciated that the algorithm used will constantly evolve and that other suitable algorithms may be used.
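A minimal Keras sketch of a Bi-LSTM question classifier of the kind described above follows; the layer sizes, vocabulary size and number of categories are illustrative assumptions.

```python
import tensorflow as tf

def build_bilstm_classifier(vocab_size, num_categories, embed_dim=64, lstm_units=128):
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embed_dim, mask_zero=True),
        # The Bidirectional wrapper runs the LSTM over the forward and backward
        # passes of the sequence simultaneously
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm_units)),
        tf.keras.layers.Dense(64, activation="relu"),
        # Softmax output provides a confidence value per category
        tf.keras.layers.Dense(num_categories, activation="softmax"),
    ])

model = build_bilstm_classifier(vocab_size=10000, num_categories=10)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```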
A hierarchical set of Bi-LSTM algorithms forms the classification architecture of the processing sub-module 281. The classification system is divided depending on the number of categories to be answered. With reference to Figure 12, the processing sub-module 281 comprises a high-level classifier sub-module 283 and a low-level classifier sub-module 284. For example, if there are 100 categories, the high-level classifier module 283 performs a classification into 10 broad categories and the low-level classifier sub-module 284 categorises the 10 specific categories within each of those 10. Alternatively, the high-level classifier module 283 categorises into 20 broad categories and the low-level classifier sub-module 284 categorises the 5 specific categories within each of those 20. This mainly depends on the architecture used, and it will be appreciated that this configuration may change based on performance. As previously mentioned, the AI module 280 is translatable, therefore the specific division will depend on the application's domain.
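The two-stage classification could be sketched as below, assuming one trained Bi-LSTM model per level (for example, models built with the build_bilstm_classifier helper sketched above, which is itself only an assumed name); the 10-by-10 split is illustrative.

```python
def classify_question(padded_question, high_level_model, low_level_models):
    # padded_question: integer id array of shape (1, max_len)
    # Stage 1: the high-level classifier picks a broad category (e.g. 1 of 10)
    high_probs = high_level_model.predict(padded_question)[0]
    coarse = int(high_probs.argmax())

    # Stage 2: the low-level classifier for that broad category picks the
    # specific category (e.g. 1 of 10 within the chosen broad category)
    low_probs = low_level_models[coarse].predict(padded_question)[0]
    fine = int(low_probs.argmax())
    confidence = float(low_probs.max())
    return coarse, fine, confidence
```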
With reference to Figures 11 and 12, the classification architecture produces output(s) 285. For each output 285 generated, a confidence value 286 is provided. A confidence value represents the probability of the output(s) 285 being associated with the input question. A predetermined threshold for the confidence level to be met may be set by the system administrator (a manual input). The threshold may be decided based on the application, and is usually set above 95%. If the threshold is met, then the answer(s) 285 are displayed as an output 380 to the user. If the threshold is not met, the user is provided with a list of options to choose from, so as to guide the system in fetching the correct answer.
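The confidence-threshold branch described above can be sketched as follows; the 95% figure follows the text, while the answer_db and fallback_options structures are hypothetical names introduced only for illustration.

```python
CONFIDENCE_THRESHOLD = 0.95

def select_response(fine_category, confidence, answer_db, fallback_options):
    if confidence >= CONFIDENCE_THRESHOLD:
        # Confident prediction: fetch the matching answer / avatar video
        return {"type": "answer", "content": answer_db[fine_category]}
    # Otherwise guide the user with a list of options to choose from
    return {"type": "options", "content": fallback_options}
```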
Once the answer is displayed as output 380, the user is asked for feedback 385. An exemplary process of providing user feedback is shown in Figure 11. All user feedback may be stored in the database 480 or other suitable databases.
To improve the performance of the system, an active learning scheme is implemented. An analysis is preferably carried out on the feedback data. For example, the feedback data is 'yes' where the user is happy with the results obtained and 'no' otherwise. If the feedback data is 'yes', then the question and answer are stored in a retraining database. The retraining database also stores failure cases, along with the response, for review and model validation. If the feedback is 'no', then the case is flagged for a manual check and then added to the retraining database for algorithm retraining.
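The feedback-driven retraining flow can be sketched as below; the record structure and the retraining_db and review_queue objects are hypothetical and introduced only for illustration.

```python
def handle_feedback(question, answer, feedback, retraining_db, review_queue):
    record = {"question": question, "answer": answer, "feedback": feedback}
    if feedback == "yes":
        # Successful interactions are stored for retraining and model validation
        retraining_db.append(record)
    else:
        # Failures are flagged for a manual check before being used for retraining
        review_queue.append(record)
        retraining_db.append(record)
    return record
```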
Applications and interpretation
The examples and embodiments of the present invention described herein may be implemented, for example, in GP triage rooms. However, the foregoing examples and descriptions of embodiments have been presented only for purposes of illustration and description. They are not intended to be exhaustive or to limit the present invention to the forms disclosed. Accordingly, modifications and variations will be apparent to practitioners skilled in the art. In particular, it is envisaged that the search and machine learning principles may be applied to topics outside healthcare, such as sex education, product marketing, customer support and so on.
Further, the AI algorithms and avatars may be located on a client computing device. It will be understood, however, that not all of the logic for implementing the AI algorithms and/or avatar needs to be located on the client computing device; it can be based on one or more server computer systems, with a user interface being provided locally on the client computing device. Similarly, logic for implementing the avatar can be stored locally on the client computing device, while the information learned by the system (the AI part) can be stored partially or entirely on one or more servers. The specific manner in which the AI algorithms and avatars are respectively hosted is not essential to the disclosure.
Those skilled in the art will further appreciate that aspects of the invention may be implemented in computing environments with many types of computer system configurations including personal computers, desktop computers, laptops, hand-held devices, multi-processor systems or programmable consumer electronics, mobile telephones, tablets and the like.

Claims (24)

1. A method of producing an avatar video, the method comprising the steps of: providing a reference image of a person's face;
providing a plurality of characteristic features representative of a facial model X0 of the person's face, the characteristic features defining a facial pose dependent on the person speaking;
providing a target phrase to be rendered over a predetermined time period during the avatar video and providing a plurality of time intervals t within the predetermined time period;
generating, for each of said time intervals t, speech features from the target phrase, to provide a sequence of speech features; and generating, using the plurality of characteristic features and sequence of speech features, a sequence of facial models Xt for each of said time intervals t.
2. A method according to claim 1, further comprising the step of generating, from the sequence of facial models Xt, a sequence of face images to produce the avatar video.
3. A method according to claim 1 or claim 2, wherein the target phrase is provided as text data and/or audio data.
4. A method according to any preceding claim, wherein at least one of said speech features comprises a phonetic label.
5. A method according to any preceding claim, wherein the speech features are extracted with a phonetic classifier module using a Deep Convolutional Network (DCN).
6. A method according to any preceding claim, wherein the plurality of characteristic features comprises at least one Active Shape Model landmark, and at least one latent descriptor representing abstract appearance features.
7. A method according to claim 6, wherein the at least one latent descriptor is extracted using a Deep Convolutional Network (DCN).
8. A method according to any preceding claim, wherein the sequence of facial models Xt is generated using a recursive model.
9. A method according to claim 8, wherein the recursive model comprises a sequence-to-sequence encoder-decoder method.
10. A method according to claim 9, wherein the recursive model is generated with a Long Short-Term Memory network.
11. A method according to any of claims 2 to 10, wherein generating the sequence of face images comprises using a frame generator to combine the reference image with the sequence of facial models Xt.
12. A method according to claim 11, wherein the frame generator comprises a discriminator module using at least one loss function for reducing differences between the reference image and each of the facial models Xt in said sequence of facial models Xt.
13. A method for providing an answer to a user, the method comprising the steps of: providing a database comprising an indexed question library and a plurality of responses; providing a correlation between the indexed question library and the plurality of responses; receiving a question from the user as user input;
searching keyword information in the indexed question library based on the user input; and providing at least one response to the user based on said correlation.
14. A method according to claim 13, wherein the method further comprises the steps of:
receiving feedback input from the user in response to the at least one response provided to the user; based on the feedback input, searching further keyword information in the indexed question library; and providing at least one further response to the user based on said correlation.
15. A method according to claim 13 or claim 14, wherein the correlation is provided using AI algorithms comprising a Long Short-Term Memory (LSTM) algorithm implemented by a Bi-directional Recurrent Neural Network.
16. A method according to claim 15, wherein the AI algorithms form a high-level classifier and a low-level classifier.
17. A method according to any of claims 13 to 16, wherein, after the user input is received, the user input is pre-processed, said pre-processing comprising the steps of tokenizing the user input and vectorising the tokenised user input.
18. A method according to any of claims 13 to 17, wherein providing at least one response comprises providing an avatar video produced according to any of claims 1 to 12.
19. A system for producing an avatar video, the system comprising: an image processing module for receiving a reference image of a person's face and for extracting a plurality of characteristic features representative of a facial model X0 of the person's face, the characteristic features defining a facial pose dependent on the person speaking;
a speech processing module for extracting a target phrase to be rendered over a predetermined time period during the avatar video and for providing a plurality of time intervals t within the predetermined time period;
the speech processing module configured to generate, for each of said time intervals t, speech features from the target phrase, to provide a sequence of speech features; and an avatar rendering module for generating, using the plurality of characteristic features and sequence of speech features, a sequence of facial models Xt for each of said time intervals t.
20. A system for producing an avatar video according to claim 19, wherein the avatar rendering module is configured to represent physical dynamics of the speech features by solving a system of ordinary differential equations (ODEs).
21. A system according to claim 20, wherein the physical dynamics of speech are represented with a neural network.
22. An interactive system for providing an answer to a user, the system comprising: a database comprising an indexed question library and a plurality of responses;
a processing module for providing a correlation between the indexed question library and the plurality of responses;
input means for receiving a question from the user as user input;
wherein the processing module is configured to search keyword information in the indexed question library based on the user input and to provide at least one response to the user based on said correlation.
23. An interactive system according to claim 22, wherein the plurality of responses comprises at least one avatar video produced using a system according to any of claims 19 to 22.
24. A healthcare information system comprising an interactive system according to claim 23.
GB1903984.1A 2018-03-26 2019-03-22 Interactive systems and methods Active GB2574098B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
GB2009211.0A GB2581943B (en) 2018-03-26 2019-03-22 Interactive systems and methods
EP19778590.0A EP3743925A1 (en) 2018-03-26 2019-09-17 Interactive systems and methods
PCT/GB2019/052611 WO2020193929A1 (en) 2018-03-26 2019-09-17 Interactive systems and methods
US17/441,791 US11900518B2 (en) 2018-03-26 2019-09-17 Interactive systems and methods

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GBGB1804807.4A GB201804807D0 (en) 2018-03-26 2018-03-26 Interaactive systems and methods

Publications (3)

Publication Number Publication Date
GB201903984D0 GB201903984D0 (en) 2019-05-08
GB2574098A true GB2574098A (en) 2019-11-27
GB2574098B GB2574098B (en) 2020-09-30

Family

ID=62068149

Family Applications (3)

Application Number Title Priority Date Filing Date
GBGB1804807.4A Ceased GB201804807D0 (en) 2018-03-26 2018-03-26 Interaactive systems and methods
GB1903984.1A Active GB2574098B (en) 2018-03-26 2019-03-22 Interactive systems and methods
GB2009211.0A Active GB2581943B (en) 2018-03-26 2019-03-22 Interactive systems and methods

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GBGB1804807.4A Ceased GB201804807D0 (en) 2018-03-26 2018-03-26 Interaactive systems and methods

Family Applications After (1)

Application Number Title Priority Date Filing Date
GB2009211.0A Active GB2581943B (en) 2018-03-26 2019-03-22 Interactive systems and methods

Country Status (4)

Country Link
US (1) US11900518B2 (en)
EP (1) EP3743925A1 (en)
GB (3) GB201804807D0 (en)
WO (1) WO2020193929A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110941945A (en) * 2019-12-02 2020-03-31 百度在线网络技术(北京)有限公司 Language model pre-training method and device

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10943100B2 (en) * 2017-01-19 2021-03-09 Mindmaze Holding Sa Systems, methods, devices and apparatuses for detecting facial expression
GB201804807D0 (en) 2018-03-26 2018-05-09 Orbital Media And Advertising Ltd Interaactive systems and methods
WO2023073596A1 (en) * 2021-10-27 2023-05-04 WingNut Films Productions Limited Audio source separation processing workflow systems and methods
US11763826B2 (en) 2021-10-27 2023-09-19 WingNut Films Productions Limited Audio source separation processing pipeline systems and methods
US11900519B2 (en) * 2021-11-17 2024-02-13 Adobe Inc. Disentangling latent representations for image reenactment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0993197A2 (en) * 1998-10-07 2000-04-12 CSELT Centro Studi e Laboratori Telecomunicazioni S.p.A. A method and an apparatus for the animation, driven by an audio signal, of a synthesised model of human face
US20050057570A1 (en) * 2003-09-15 2005-03-17 Eric Cosatto Audio-visual selection process for the synthesis of photo-realistic talking-head animations
GB2406470A (en) * 2003-09-25 2005-03-30 Canon Res Ct Europ Ltd Display of facial poses associated with a message to a mobile
US20180253881A1 (en) * 2017-03-03 2018-09-06 The Governing Council Of The University Of Toronto System and method for animated lip synchronization
CN109308731A (en) * 2018-08-24 2019-02-05 浙江大学 The synchronous face video composition algorithm of the voice-driven lip of concatenated convolutional LSTM
US20190057533A1 (en) * 2017-08-16 2019-02-21 Td Ameritrade Ip Company, Inc. Real-Time Lip Synchronization Animation

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008141125A1 (en) * 2007-05-10 2008-11-20 The Trustees Of Columbia University In The City Of New York Methods and systems for creating speech-enabled avatars
US20120130717A1 (en) * 2010-11-19 2012-05-24 Microsoft Corporation Real-time Animation for an Expressive Avatar
US9613450B2 (en) * 2011-05-03 2017-04-04 Microsoft Technology Licensing, Llc Photo-realistic synthesis of three dimensional animation with facial features synchronized with speech
US20140006012A1 (en) * 2012-07-02 2014-01-02 Microsoft Corporation Learning-Based Processing of Natural Language Questions
CN105144205B (en) * 2013-04-29 2018-05-08 西门子公司 The apparatus and method of natural language problem are answered using multiple selected knowledge bases
CN105528349B (en) * 2014-09-29 2019-02-01 华为技术有限公司 The method and apparatus that question sentence parses in knowledge base
US11113598B2 (en) * 2015-06-01 2021-09-07 Salesforce.Com, Inc. Dynamic memory network
US10799186B2 (en) * 2016-02-12 2020-10-13 Newton Howard Detection of disease conditions and comorbidities
WO2017210753A1 (en) * 2016-06-10 2017-12-14 Local Knowledge-app Pty Ltd A system for the automated semantic analysis processing of query strings
US10586368B2 (en) * 2017-10-26 2020-03-10 Snap Inc. Joint audio-video facial animation system
GB201804807D0 (en) * 2018-03-26 2018-05-09 Orbital Media And Advertising Ltd Interaactive systems and methods

Also Published As

Publication number Publication date
GB2574098B (en) 2020-09-30
GB2581943A (en) 2020-09-02
EP3743925A1 (en) 2020-12-02
GB202009211D0 (en) 2020-07-29
GB2581943B (en) 2021-03-31
US20220172710A1 (en) 2022-06-02
GB201903984D0 (en) 2019-05-08
GB201804807D0 (en) 2018-05-09
WO2020193929A1 (en) 2020-10-01
US11900518B2 (en) 2024-02-13

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20201217 AND 20201223