WO2019227099A1 - Method and system for building artificial and emotional intelligence systems - Google Patents

Method and system for building artificial and emotional intelligence systems

Info

Publication number
WO2019227099A1
Authority
WO
WIPO (PCT)
Prior art keywords
framework
aei
human
bot
input
Prior art date
Application number
PCT/US2019/034184
Other languages
French (fr)
Inventor
Carlos A. Nevarez
Sanggyoon OH
Pierre RICADAT
Thibault Louis Alexis DECKERS
Original Assignee
Bpu Holdings Corp.
Application filed by Bpu Holdings Corp.
Publication of WO2019227099A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Definitions

  • This disclosure relates to methods and systems for building artificial and emotional intelligence systems.
  • AEI systems facilitate communication between humans and non-human entities.
  • AEI systems include virtual assistants and bots.
  • Bots are automatons with the ability to acquire skills and interact with humans and other entities.
  • Virtual assistants and bots are capable of autonomously sending messages to and communicating directly with humans based on their skills as opposed to providing a facility for sending messages between two human users. The bots acquire skills through programming and continuous learning over time.
  • individuals may build AEI systems/virtual assistants to communicate with target groups of humans/users.
  • a virtual assistant is capable of communicating directly with humans/users in a conversational way such as speech or texting/SMS. Unlike a human assistant, a virtual assistant never gets tired, sleepy, or bored.
  • the virtual assistant may use scripts and dialogues that it has been programmed to recognize.
  • a doctor wishing to utilize an AI assistant to provide a service for his patients must have the technical skills to build the AEI system/virtual assistant or liaise with developers/experts that are able to do so. This may be time consuming and prohibitively expensive. Further, the doctor may have difficulties communicating with the developer and the developer may not fully understand the doctor’s requirements. The doctor may also wish to deploy the same program on different communication platforms so that users may access it in different ways. For instance, the doctor may want a virtual assistant to be available via text messaging on a mobile device using Apple’s Siri and via a smart speaker using Google’s Home.
  • the developer may need to create different versions of the AEI system/virtual assistant utilizing different types of programming languages and computing environments.
  • One developer may not have all the necessary skills to develop an AEI system that operates across the platforms.
  • the framework helps non-developers easily express requirements and create AEI systems/virtual assistants; the framework leverages learning from existing systems/virtual assistants and simplifies and streamlines communication between developers and non-developers.
  • the present invention is directed at a framework that provides a system and method for building AEI systems and virtual assistants.
  • the framework operates as a robust networking system and architecture designed to integrate with human and non-human participants using APIs that make the system easily accessible to participants and make it easy to adapt existing architectures, devices, and systems.
  • the framework supports extensible security protocols so that the system is able to adapt in both unsecured and strongly regulated network environments.
  • the use of the framework eliminates the need for developers to spend time and resources developing menial but necessary fundamental services when developing AEI systems/applications.
  • the artificial and emotional intelligence (AEI) framework includes an agent, an artificial intelligence application, and a server that includes an I/O processor, at least one API, an event manager API that manages the API, and an event bus.
  • the framework is in communication with a network via the event bus, which is also connected to the server, agent, I/O processor, and event manager API.
  • the framework is configured to integrate human and nonhuman participants, and the I/O processor is configured to accept human input and nonhuman input.
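  • The component relationships described above can be illustrated in code. The following TypeScript sketch is illustrative only; the interface names (EventBus, IoProcessor, Agent) and their shapes are assumptions for exposition and are not defined by this disclosure.

```typescript
// Illustrative sketch of the core AEI framework components: an event bus that
// connects the server, agent, I/O processor and event manager, an I/O processor
// that accepts human and nonhuman input, and an agent (e.g., a bot).
// All names and signatures are assumed for exposition.

interface FrameworkEvent {
  topic: string;                       // e.g. "service.weather.request"
  payload: Record<string, unknown>;
}

interface EventBus {
  publish(event: FrameworkEvent): void;
  subscribe(topic: string, handler: (event: FrameworkEvent) => void): void;
}

interface IoProcessor {
  // Normalizes human input (speech, text) or non-human input (device signals) to text.
  acceptInput(source: "human" | "nonhuman", raw: string): string;
}

interface Agent {
  // A bot is one kind of agent; it reacts to normalized input and raises
  // events on the bus to reach services.
  handle(input: string, bus: EventBus): void;
}

// A trivial in-memory bus, enough to wire the pieces together for illustration.
class InMemoryEventBus implements EventBus {
  private handlers = new Map<string, Array<(e: FrameworkEvent) => void>>();
  publish(event: FrameworkEvent): void {
    (this.handlers.get(event.topic) ?? []).forEach((h) => h(event));
  }
  subscribe(topic: string, handler: (e: FrameworkEvent) => void): void {
    const list = this.handlers.get(topic) ?? [];
    list.push(handler);
    this.handlers.set(topic, list);
  }
}

const demoBus = new InMemoryEventBus();
demoBus.subscribe("demo", (e) => console.log("received", e.payload));
demoBus.publish({ topic: "demo", payload: { hello: "world" } });
```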
  • the server of the framework includes an identity data management component configured to abstract how things are authenticated, managed, and authorized.
  • the server of the framework includes a virtual service cloud.
  • the framework includes a natural language processor.
  • event bus of the framework is connected to at least one service, wherein the event bus can call upon the event manager API to connect to the at least one service.
  • the framework includes framework nodes capable of implementing services.
  • the I/O processor of the framework can be configured to operate as a single input system that accommodates input/output artifacts that accept language or code using API parameters as operational I/O components.
  • the agent of the framework can include a bot that utilizes a root dialogue to drive interactions with the human participant.
  • the AI application is configured to monitor and analyze input from the human participant created by the root dialogue and modify the root dialogue based upon the analysis of the input.
  • the framework can include an interactive virtual assistant that utilizes the root dialogue and captures interactions with the human participant.
  • the interactive virtual assistant is configured to capture input that represents the mood of the human participant.
  • the interactive virtual assistant can be further configured to apply natural language processing to the input of the human participant.
  • when the interactive virtual assistant provides multi-media content, it can capture the mood of the human participant in response to the multi-media content provided.
  • the bot is configured to ask the human participant to provide more information related to the input for clarification.
  • FIG. 1 is a schematic representation of the framework ecosystem.
  • FIG. 2 is a schematic representation of the framework microservice architecture.
  • FIG. 3 is a schematic representation of the framework overview.
  • FIG. 4 is a schematic representation of the framework and compatible devices used to support an application.
  • FIG. 5 is a schematic representation of services provided using the framework.
  • FIG. 6A - 6D are schematic representations of services within the framework.
  • FIG. 7A - 7B are schematic representations of client nodes within the framework.
  • FIG. 8 is a schematic representation of a traditional application and a bot.
  • FIG. 9 is an illustrative root dialogue utilizing the framework.
  • FIG. 10 is an illustrative conversational user interface utilizing the framework.
  • FIG. 11 is an illustrative Telegram user interface implementing a bot.
  • FIG. 12 is a schematic representation of the conversational Artificial Intelligence landscape.
  • FIG. 13 is a schematic representation of the dialogue conversation subsystem.
  • FIG. 14 is an exemplary medical virtual assistant.
  • FIG. 15 is a schematic representation of the dialogue/coaching system.
  • FIG. 16 is an illustrative AEI Studio Graphical User Interface.
  • FIG. 17A-B are illustrative screenshots from AEI Studio showing the Intent
  • the word “comprise” and variations of the word, such as “comprising” and “comprises,” mean “including but not limited to,” and are not intended to exclude, for example, other additives, components, integers or steps.
  • “Exemplary” means “an example of” and is not intended to convey an indication of a preferred or ideal embodiment. “Such as” is not used in a restrictive sense, but for explanatory purposes.
  • the methods and systems may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
  • the methods and systems may take the form of a computer program product on a computer-readable storage medium having computer-readable program instructions (e.g., computer software) embodied in the storage medium.
  • the present methods and systems may take the form of web- implemented computer software.
  • the present methods and systems may be implemented by centrally located servers, remote located servers, user devices, or cloud services. Any suitable computer-readable storage medium may be utilized including hard disks, CD-ROMs, optical storage devices, or magnetic storage devices.
  • the methods and systems discussed below can take the form of function specific machines, computers, and/or computer program instructions.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including computer-readable instructions for implementing the function specified in the flowchart block or blocks.
  • the computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer- implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.
  • the computer program instructions, logic, intelligence can also be stored and implemented on a chip or other hardware components.
  • blocks of the block diagrams and flowchart illustrations support combinations of means for performing the specified functions, combinations of steps for performing the specified functions and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, can be implemented by special purpose hardware-based computer systems that perform the specified functions or steps, or combinations of special purpose hardware and computer instructions.
  • The methods and systems that have been introduced above, and discussed in further detail below, have been and will be described as comprised of units. One skilled in the art will appreciate that this is a functional description and that the respective functions can be performed by software, hardware, or a combination of software and hardware.
  • a unit can be software, hardware, or a combination of software and hardware.
  • the units can comprise a computer.
  • This exemplary operating environment is only an example of an operating environment and is not intended to suggest any limitation as to the scope of use or functionality of operating environment architecture. Neither should the operating environment be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment.
  • the processing of the disclosed methods and systems can be performed by software components.
  • the disclosed systems and methods can be described in the general context of computer-executable instructions, such as program modules, being executed by one or more computers or other devices.
  • program modules comprise computer code, routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types.
  • the disclosed methods can also be practiced in grid-based and distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules can be located in both local and remote computer storage media including memory storage devices.
  • Adapt/adaption refers to a process where an interactive/computing system (adaptive system) adapts its behavior to individual users based on information acquired about its user(s) and its environment.
  • An agent is a program that acts on behalf of a user or another program in a relationship of agency.
  • Bots are a type of agent.
  • An agent may or may not be embodied (paired with a robot body) or may be software executing on a computing device (for example, Apple's Siri).
  • An agent may be autonomous or work with other agents or humans.
  • Bots may aggregate other bots and act as a single bot.
  • Analytics refers to analytics specific to the domain of software systems, taking into account source code, static and dynamic characteristics (e.g., software metrics), as well as related processes of their development and evolution.
  • An application is software for computers and computing devices to perform a group of coordinated functions, tasks or activities.
  • An Application Programming Interface comprises a set of definitions, communication protocols and tools for building software that serves as a set of defined methods of communication between components within a system. It serves as the “building blocks” for a system that is generated or built by a programmer/developer.
  • An API abstracts the underlying implementation, presenting information as objects or actions for the programmer/developer to manipulate and allows him to build a computer system or program without understanding all the underlying operations.
  • An API must be supported by an accompanying “embodiment” such as a server or (more commonly referred to) a service. (References to a/the service refer to the embodiment (unless otherwise indicated), and references to how the service interacts with external entities refer to the API.)
  • AEI Artificial and Emotional Intelligence
  • a bot/chatbot is an autonomous program on a network (especially the Internet) that can interact with computer systems and/or human users.
  • a cache is hardware or software used to store data temporarily in a computing environment/system.
  • the stored data is a replica of data that is persistently resident on a system opaque to the user of the cache. Opaque means that a user may or may not know whether they’re reading from the cache or the original source of the data.
  • a client is a part of computer hardware or software that accesses a service made available by a server.
  • the server is often (but not always) on another computer system, in which case the client accesses the service by way of a network.
  • Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user.
  • the term is generally used to describe data centers available to many users over the Internet.
  • Large clouds, predominant today, often have functions distributed over multiple locations from central servers.
  • a service cloud provides on-demand availability of services (APIs) provided to clients by servers.
  • the cloud may make a server directly visible to a client of the server, or it may hide the actual server and rather provide an abstract service API by which the functions provided by the service can be used.
  • Context in computer programming is a structure, instance or object that contains a minimal set of attributes, properties or states that allow for the execution or management of a defined set of operations or tasks.
  • Implicit context is that provided by a system in the form of well-known, accessible APIs and does not need to be included in the invocation of a program that understands the implicit context.
  • Explicit context is included in the invocation of a program that requires the context, in this case the context is specified in the API for the program being invoked.
  • Emotional Intelligence is defined as tools and services that (a) promote self-awareness, (b) promote personal development from self-awareness, (c) promote relationship awareness from knowing oneself better, and (d) promote relationship development from better understanding of the relationship. As part of (a), personal awareness, a person must learn to identify core values from an emotional perspective, to understand the difference between feeling, emotion and mood, and to use this knowledge to improve themselves and their relationships with others.
  • GPU Graphics Processing Unit
  • GPUs differ significantly from CPUs in that they include more robust chipsets involving math processing units and optimized memory architectures, allowing for much faster processing of the mathematical models often used for graphical applications as well as Artificial Intelligence models.
  • GUI Graphical User Interface
  • IDM Identity Management System
  • I/O Input/output
  • Signals/data received by a system are referred to as inputs and signals/data sent from a system are referred to as outputs.
  • Intent refers to what a user is commanding the virtual assistant/system to do, i.e., what function/service the user wants the system to perform or call upon. For example, if a user submits the command “Tell me the weather now in Atlanta, GA,” the intent is commanding the weather service to report the weather at this moment.
  • the Internet of Things is the extension of network connectivity into physical devices and everyday objects. Embedded with electronics, network connectivity, and other forms of hardware (such as sensors), these devices can communicate and interact with others, and they can be remotely monitored and controlled.
  • Micro Service Architecture is a variant of the service-oriented architectural style and a software development technique that structures an application as a collection of loosely coupled services. Services are fine-grained and the protocols are lightweight. This provides modularity and makes the application more resilient to architecture erosion. MSA breaks systems into the smallest independent modules possible and provides light-weight protocols and APIs to bind them together into a macro service/application.
  • Natural Language Processing concerns interactions between computers and humans in human (i.e. natural) languages. It involves speech recognition, natural language understanding and natural language generation.
  • a proxy server/service is a dedicated computer or a software system running on a computer that acts as an intermediary between an endpoint device, such as a computer, and another server from which a user or client is requesting a service.
  • a proxy has the property of acting on behalf of either its client or the server. Typically, neither the client nor the server is aware of the proxy, which acts as a complete duplicate of either the client or the server for the scope of the proxy functions.
  • a service is a mechanism to enable access to one or more capabilities or software functionalities, where the access is provided using a prescribed interface/API and is exercised consistent with constraints and policies as specified by the service description.
  • Skills are capabilities taught to autonomous programs and/or AEI systems such as “report the weather.” Skills are developed as bots interact with people, services, etc. Skills may be added as programs using the AEI Framework API, or they can be created using high-level tools such as the AEI Studio.
  • the framework 100 provides a method and system for quickly and easily building AEI systems 260 and virtual assistants 262.
  • a microservices architecture (MSA) 120 is used to build services 220 intended to be provided over a network 160 via a cloud 250.
  • the framework’s 100 core architecture comprises networking capabilities, IDM 255 and an Event Management System/Event Bus 140.
  • AEI systems 260 facilitate communication between humans/users 300 and non-human entities such as bots 240.
  • the framework 100 operates as a robust networking system and architecture designed to integrate with human and non-human participants using APIs 280 that are easily accessible to participants and make it easy to adapt existing architectures, devices, and systems.
  • the framework 100 supports extensible security protocols so that it is possible to adapt in both unsecured and strongly regulated network environments. The use of the framework 100 eliminates the need for developers 315 to spend time and resources developing menial and fundamental necessary services 220 when developing AEI systems 260 applications 105 and assistants 262.
  • the framework 100 supports APIs 280 that enable AEI system/virtual assistants 260, 262 to develop skills 200 in addition to the development of services 220.
  • the framework 100 may be utilized to assemble bots 240 for a specific system 260 such as Alexa, Google Home etc.
  • the framework 100 is capable of integrating with existing AI applications, systems and computing devices.
  • the framework 100 is capable of supporting complete applications including mobile applications.
  • the framework 100 is configured
  • the framework 100 utilizes an event bus 140 which offers an open service API that is available to any other virtual machine, programming language or framework capable of invoking an interface/API.
  • the framework 100 comprises higher level interfaces such as GUIs 620 which allow abstracted access to the underlying services 220, and lower level interfaces. Both high and low levels can include APIs 280.
  • the lower levels of the framework 100 are compatible with all programming languages for example JavaScript, PHP, and NodeJS.
  • the higher level interfaces are accessible through web pages and web services using web socket APIs. For example the higher level interfaces may run inside Scala containers on any platform hosting a compatible JVM.
  • the framework 100 may be provided as a component within manufactured computer chips.
  • the framework 100 may be included in the GPU of a computing device which allows for the building of image recognition algorithms that learn from human expressions and can learn to identify emotions from expressions.
  • Image recognition algorithms utilize a lot of processing power and including them in the GPU optimizes the performance of the device.
  • the framework architecture 120 comprises an input/output (I/O) processor 340, an event manager API 290, a Framework event bus 140, bots 240, skills 200, IoT 230, cloud(s) 250, IDM 255, NLP 265, and analytics 275.
  • the I/O processor 340 can be configured to communicate with a human/user 300 and non-human input 320. When inputs are received by the I/O processor 340, they are converted to text.
  • the processor 340 determines the intent 640 in context 130.
  • the bot 240 receives the input(s) from the I/O processor 340 and sends a message over the Framework event bus 140 to the desired service 220 and waits for a reply.
  • a bot 240 may go to the Framework event bus 140 and request connection to the weather service 220.
  • the system may reply with a default or a list of available services 220 (which may also be preconfigured or prepaid) in order to “resolve” the services 220 (i.e., map services 220 to the needs of the bot 240).
  • the bot 240 can either directly call the services 220 through its API entry points (taking out the abstraction of the bus 140 which protects the bot 240 from hanging on to outdated service references) or invoke the API 280 directly.
  • the bot 240 may also take the input, set up the parameters for an event, and fire the event requesting fulfilment of the request.
  • the service 220 sits and listens for incoming requests from the bus 140. When it is alerted to a request, the service 220 checks that the request is valid (valid licenses, payment current, etc.). If it is valid, the service 220 honors the request, fills up the parameters and fires the event response. If the request is transmitted through the API 280, the same thing happens, except that the service 220 replies through the web socket.
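  • The request/response flow described above can be sketched as follows. This is a minimal illustration using Node's built-in EventEmitter as a stand-in for the Framework event bus 140; the topic names, payload shapes, and validity check are assumptions, not part of the disclosure.

```typescript
import { EventEmitter } from "node:events";

// A bot fires a request event with its parameters; the service listens on the
// bus, validates the request (licenses, payment, etc.), and fires a response.

interface ServiceRequest {
  requestId: string;
  params: Record<string, string>;
  license?: string;
}

const bus = new EventEmitter();

// The weather service sits and listens for incoming requests from the bus.
bus.on("service.weather.request", (req: ServiceRequest) => {
  const valid = req.license === "demo-license";   // placeholder validity check
  bus.emit("service.weather.response", {
    requestId: req.requestId,
    ok: valid,
    body: valid ? `Sunny in ${req.params.city}` : undefined,
  });
});

// The bot sets up the parameters, fires the event, and waits for the reply.
bus.on("service.weather.response", (res: { requestId: string; ok: boolean; body?: string }) => {
  console.log("bot received:", res);
});

bus.emit("service.weather.request", {
  requestId: "r1",
  params: { city: "Atlanta" },
  license: "demo-license",
});
```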
  • the bot 262 may use its own I/O system or it can use the framework’s I/O processor 340 to obtain inputs and display output.
  • the bus 140 can be used to easily create connectors for IoT 230, known AIs, algorithms, web services, and other programs.
  • the I/O processor 340 communicates with the event manager API 290.
  • the event manager API 290 can be configured to work with several APIs 280, which the event manager API 290 can access via a database (not shown).
  • the framework 100 provides a bot container (not shown) that automatically uses the framework’s I/O processor 340, NLP 265 and related services that are configured at install time. If the client purchases services 220, they are “wired” at install so the bots 240 can quickly make use of them. Similar to the installation of a Windows program, the installer may ask if you need the Visual C runtime or Java runtime and whether you want it downloaded and installed. It will then install the necessary .DLLs (modules) that make the installed bots 240 work. In the public case, users will typically use a single bot 262, or they will customize and configure one. In the corporate case, they will probably preconfigure the bot sets by user type.
  • a hospital sets up their access controls so that the system administrators will have access to everything, but the administrators will not be able to see patient data. Doctors can see everything but cannot configure the bots 240, nurses and staff can potentially add entries to the patient’s care record, but cannot modify or delete without an administrator/doctor’s approval and staff may have cursory access.
  • the framework event bus 140 communicates with the APIs 280 associated with the various other components connected to the framework MSA 120, including, but not limited to, bots 240, skills 200, IoT 230, the cloud(s) 250, and other services 220, including the IDM 255, NLP 265, and analytics 275.
  • All services 220 are offered via an API 280 through the bus 140.
  • the components that comprise the MSA 120 are substitutable.
  • the NLP 265 can coexist with or be replaced with a new/as yet undeveloped NLP 265.
  • using the framework 100, the developer can leverage the cross-platform environment to deploy skills 200 across AEI boundaries.
  • a developer can create an Alexa skill 200 that uses the features of the framework 100 within the constraints allowed by Alexa’s framework.
  • the bus 140 can support multiple versions of the system running simultaneously and without interference.
  • FIG. 3 shows a schematic representation of the framework 100 overview as it relates to a particular AEI system/application 260.
  • the system 260 comprises framework services 220 and an API(s) 280, as well as other components as shown in FIG. 2.
  • the use of an MSA 120 facilitates the integration of future components and systems with the framework 100.
  • there are no general purpose APIs for GPUs because it was not anticipated that technologies like blockchain and AI would require access to GPUs.
  • OS architectures have to be changed to accommodate the GPU driven architectures and provide APIs that can, for instance, allow the user to set up Direct Memory Access pipelines from the CPU on the mother board, to a GPU on a separate card.
  • the MSA 120 allows for incorporation of new technologies and new systems without a fundamental redesign. APIs 280 will be compatible with new hardware and software capabilities.
  • a virtual assistant 262 via the framework 100, can speak with various wearable devices 400.
  • the framework 100 facilitates communication with a Spire wearable device 400, which monitors breathing and stress levels.
  • a virtual assistant 262 (not shown) can administer a questionnaire to a user 300.
  • Biofeedback 410 obtained from the device 400 can be utilized by the virtual assistant 262 in its conversation with the user 300.
  • the system 260 can add heart rate data captured with a smart watch monitor 400 to correlate the heart rate to other biological functions and levels 410. GPS sensors may capture location awareness and feed it back to the system 260 for greater insight.
  • signals from a Muse headband 400 which monitors brain waves, can be included and with signals across all devices 400 correlated.
  • the MSA 120 can support a virtual assistant 262 that can correlate biometric information 410 obtained via wearable devices 400 with location based and other information about a user 300.
  • the MSA 120 of the framework 100 can use third party devices as subsystems of a medical application 105 using the event management system/Framework event bus 140. Signals from the devices 400 may be imported and the devices 400 incorporated through APIs 280 for a final assistive virtual assistant 262.
  • the framework 100 comprises a network 160 with at least one server 180 capable of running necessary framework services 220 and capable of connecting to other machines running necessary framework services 220.
  • Agents/clients 110, 115 access framework and other services 220 through API(s) 280.
  • Agents 110, clients 115 and servers 180 serve as framework nodes 125 capable of implementing framework 100 services 220.
  • the framework 100 communicates with code in the same virtual machine or with other virtual machines (i.e., the server 180 can be physical or virtual) depending on desired configurations, capabilities of the physical host, and requirements of the application.
  • the framework 100 is able to support a plurality of network architecture configurations including client/server, peer-to-peer, distributed, and the like.
  • Nodes 125 act as part of the framework’s I/O processing system. As discussed above, the nodes 125 act as a client 115 or a server 180, and support the Framework API(s) 280.
  • when an AEI Framework server 180 (or servers) is installed, it comprises a service cloud 250.
  • the service cloud 250 can be configured to operate with typical public web parameters like any other web service 220, or it can be configured to operate in connection with any supported corporate infrastructure. Both are accomplished by using the IDM 255.
  • the IDM abstracts how things are authenticated, managed and authorized. Also, it is possible to take one service cloud 250 and connect it to other service clouds 250 using what is called a federated access control or federation. Access points (servers) are identified which create the necessary context 130 for one cloud 250 to access the services on another cloud 250, and vice-versa.
  • a virtual service cloud 250 is installed in a hospital
  • the hospital may take advantage of skills 200 and bots 240, both discussed below, in use in the global AEI Framework cloud.
  • An installer may choose to download services 220 and install them locally on the private cloud 250, or to federate them from the public cloud 250. For example, if the hospital wants to provide entertainment services from an entertainment bot 240 from Netflix, or just videos directly from the Netflix service, and the policies of the hospital allow access to the public internet service, then the installer can simply open up the access (like a firewall) to the Netflix video service.
  • the installer can configure the payment gateways, and the necessary certificates to facilitate the patient’s transactions when accessing content.
  • if the hospital wants to use a patient profile skill 200 that is used to obtain and manage patient medical records, and it is not legal to share these with other hospitals and not safe to transmit these over public lines due to regulations, then the installer may download the skill 200 (or bot 240 depending on how it is licensed and sold) and set up its own licensing and payment system.
  • Bots 240 are similar to .EXE files in the sense that they are self-contained, full programs that run autonomously, whereas skills are similar to .DLLs, which are run-time loadable libraries and modules that are used by the bot. In this way, the system is able to easily support public, private, or hybrid clouds via the IDM 255.
  • a single input/output system accommodates input/output artifacts 350 that can accept language or code using API parameters as operational I/O components.
  • the framework 100 is configured to have a single input system that is configured to work with various artifact devices, including well known devices and services (e.g., Siri, Google Assistant, etc.), standard I/O connections, or customized speech recognition and text-to-speech synthesizers.
  • bots 240 are used for the input, and are configured to convert a user’s input into intents 640, entities 650 and the necessary parameters 645 to do their job.
  • NLP 265 can be utilized to determine the intent 640 of the content, entities 650, and the other data to form parameters 645. These components are used to call a corresponding service 220, or utilize the event bus 140 to find the corresponding API(s) 280.
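  • A minimal sketch of this conversion is shown below. The keyword rules, intent names and parameter shapes are placeholders standing in for the NLP 265; a real deployment would delegate this to the NLP service.

```typescript
// Convert a user utterance into an intent (640), entities (650) and
// parameters (645) before dispatching to the corresponding service.

interface ParsedInput {
  intent: string;                      // e.g. "get_weather"
  entities: Record<string, string>;    // e.g. { city: "Atlanta" }
  parameters: Record<string, string>;  // values handed to the service call
}

function parseUtterance(text: string): ParsedInput {
  const lower = text.toLowerCase();
  if (lower.includes("weather")) {
    const cityMatch = text.match(/in ([A-Za-z ]+)/);
    const city = cityMatch ? cityMatch[1].trim() : "unknown";
    return {
      intent: "get_weather",
      entities: { city },
      parameters: { city, when: lower.includes("now") ? "now" : "today" },
    };
  }
  return { intent: "unknown", entities: {}, parameters: {} };
}

console.log(parseUtterance("Tell me the weather now in Atlanta"));
// -> { intent: "get_weather", entities: { city: "Atlanta" },
//      parameters: { city: "Atlanta", when: "now" } }
```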
  • the bus 140 can keep track of servers 180 (as well as what speaks what version of what API 280), whereas when calling a service 220 directly, the client 115 may lose function if the server changes unexpectedly.
  • FIGS. 6A - 6C are schematic representations of services 220 within the framework 100.
  • the framework 100 is configured to call upon various necessary services 220, including, but not limited to, IDM/identity security 255, events communication 292, AI 297, algorithms 295, bots 240, and skills 200.
  • the core network services 222 may comprise connection services 224, session management services 225, network management services 226 and routing/naming/transport services 227.
  • FIGS. 7A - 7B are a schematic representations of client nodes 115/125 within the framework 100. Input/output is converted as needed, and passed along to the bots 240, which can then use the input/output in the manner for which they have been configured.
  • an IDM 255 provides the ability to manage accounts (create, delete, authorize, authenticate and delegate) and enables and provisions communication and networking between nodes 125.
  • Skills 200, bots 240, algorithms 295 and base AEI services 220 may be accessed remotely using Identity 242 and Events.
  • the bots 240 can be assigned an Identity 242, which is then published on the Event system 290, and can be registered as an API provider.
  • when a user 300 uses the bot 240, the user 300 is assigned an identity; for operations on behalf of the user, the bot 240 can then assume the identity of its user 300 as the user’s agent (delegate). This is important for auditing purposes because, when running code, it is important to know under what Identity 242 it is running: the user (who has all rights to his/her data) or the agent (who has only limited access to that data). For example, a developer 315 going through the system’s logs can see that someone activated an Event but cannot tell who; the log does show what agent did it, and the developer can query the agent to receive whatever information is publicly available. In an aspect, everything utilizing the framework 100 must have an identity 242.
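  • The delegation and auditing behavior described above can be sketched as follows. The Identity type, audit-log shape, and example identifiers are assumptions for illustration; the disclosure only requires that every actor carries an Identity 242.

```typescript
// A bot assumes the identity of its user as the user's delegate, and every
// action is logged under the acting identity for auditing.

interface Identity {
  id: string;
  kind: "user" | "agent";
  delegatedFrom?: string;   // set when an agent acts on behalf of a user
}

interface AuditEntry {
  actor: Identity;
  event: string;
  timestamp: string;
}

const auditLog: AuditEntry[] = [];

function actAs(agent: Identity, user: Identity): Identity {
  return { id: agent.id, kind: "agent", delegatedFrom: user.id };
}

function record(actor: Identity, event: string): void {
  auditLog.push({ actor, event, timestamp: new Date().toISOString() });
}

const user: Identity = { id: "patient-42", kind: "user" };
const bot: Identity = { id: "care-bot", kind: "agent" };

record(actAs(bot, user), "read-care-record");
// A developer reviewing the log sees which agent acted, not who the user is.
console.log(auditLog);
```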
  • An AEI system 260 having at least one bot 240 may require both client/server 115, 180 or client/cloud 115, 250 components due to fault-tolerance or security restrictions. From an architectural perspective, the cloud 250 provides resilience and fault-tolerance by insulating the client from dependencies on a specific server 180.
  • a client node 115/125 has the main responsibility of communicating with the human 300 or device 190 and is responsible for providing I/O processing.
  • the client node 115/125 can use the voice services 220 on an iPhone (using the virtual assistant, Siri) to deliver text to speech input/output.
  • the user speaks and Siri converts the voice to text and conversely, when the system returns an output, Siri converts the text to voice.
  • speech synthesis via a speech synthesizer is applied to emulate a person’s voice when returning an output.
  • the speech synthesizer service can include any text-to-speech service provider (e.g., Siri, Google Assistant, Sonos speaker, and the like).
  • the configuration of the virtual assistant/bot/application 262/240/105 is determined by desired outcomes and requirements that are specified by the programmer/developer 315 based upon scripts or directions provided by experts 310 associated with the bot 240. For example, for hospital settings, a developer will work with a doctor to take a policy to be implemented, with the doctor providing the policy and the developer 315 providing the coding for implementation. There may be other parties involved: the patient has a role, as do the doctor, the hospital administrator, and the HIPAA compliance officer, so the developer 315 has to create the code that will make the bot 240 operate in accordance with all those specific roles, definitions, and desired outcomes.
  • an application 105 may embed the use of the framework API(s) 280 to invoke services 220 directly, in which case the application 105 is responsible for managing content such as session information, API keys, login credentials, etc. Some chipsets may use this mechanism to “attach” themselves to the services 220 or devices 190 of the framework 100.
  • the framework 100 may provide client implementations for particular platforms in which case client code may provide emulation or redirection functionality. For example, by implementing a simple file system interface, the client 115 can allow the user input 350 to be written to a remote file for asynchronous processing. The response is received when a response file is received similar to an email system sending messages back and forth using files as the communication vehicle.
  • the bots 240 can implement redirection when sending files back and forth with the framework to share files between users.
  • the bots 240 can be trained by uploading files with sample dialog, with the files organized in a specific format that allows the bot 240 to“ingest” the file, allowing the bot 240, via the NLP 265, to be“upgraded” with new knowledge.
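  • The disclosure does not prescribe the sample-dialogue file format; the sketch below assumes a simple JSON layout mapping intents to sample phrasings and responses, purely to illustrate how a bot might “ingest” such a file.

```typescript
// Ingest a sample-dialogue file into an in-memory knowledge base.
// The JSON layout is an assumption for illustration.

interface DialogueSample {
  intent: string;
  samples: string[];    // example user phrasings
  responses: string[];  // candidate bot replies
}

type KnowledgeBase = Record<string, { samples: Set<string>; responses: Set<string> }>;

function ingest(kb: KnowledgeBase, fileContents: string): KnowledgeBase {
  const entries = JSON.parse(fileContents) as DialogueSample[];
  for (const entry of entries) {
    const slot = kb[entry.intent] ?? { samples: new Set(), responses: new Set() };
    entry.samples.forEach((s) => slot.samples.add(s));
    entry.responses.forEach((r) => slot.responses.add(r));
    kb[entry.intent] = slot;
  }
  return kb;
}

const kb = ingest({}, JSON.stringify([
  { intent: "greeting", samples: ["hello", "hi there"], responses: ["Hello! How do you feel today?"] },
]));
console.log(Object.keys(kb)); // -> [ "greeting" ]
```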
  • context 130 can be maintained in real time sessions between nodes 125 with login and access controls being held securely in a session context 130.
  • Some applications may utilize a hybrid system employing both asynchronous and real time capabilities as described above.
  • context 130 can include an API key that is created and saved to allow a client node 115 to communicate with a server node in subsequent calls to the server node.
  • Context 130 can be transient and volatile, or it can be persistent and have longer scope than just a single session.
  • the footprint of the node 125 may need to be minimal or behave differently from an“always on” network.
  • nodes 125 may communicate logic using a very light-weight agent 110, as opposed to a heavy weight agent.
  • a heavy weight agent contains all sorts of necessary instruments to carry on its work, this could be user credentials, certificates, session keys, API keys, etc.
  • a light-weight agent is distributed, i.e.
  • FIG. 6D illustrates the relationship between the heavy-weight agent 130 and a light weight agent 110.
  • the code run by the agent 110 is very minimal and a very light-weight context 130 is maintained, with some remote node(s) 127 handling the heavy-weight parts of the context 130 on behalf of the agent 110.
  • Distribution of logic between nodes 125 allows for extremely light-weight implementations on devices 190 that may have limited resources.
  • a remote node 127 can provide hardened proxy services 220 on behalf of the constrained node 125.
  • a customer is wearing an Apple watch, which is powerful enough to run apps, but has very limited resources, and is not a hardened, secure hardware platform.
  • the framework 100 can rely on another bot 240 or remote node 127 that does have a secure hardware platform. In this way, the framework 100 operates as a strong, secure and scalable distributed network router.
  • caching is used.
  • Caches 135 can be manual, automatic, synchronized, loosely-coupled, etc.
  • the framework 100 doesn’t require a particular cache 135 implementation but may support different types of caching to address a number of known scenarios. For example, if a hospital requires that a device 190 provide a response to a patient’s action within a prescribed amount of time, and the required service 220 using the framework 100 cannot offer a Quality of Service guarantee to meet the specified response, then the assistant (in this case the developer 315 building the assistant 262) cannot assume that it will have access to remote nodes 127 at runtime. In this case, the framework 100 must be configured to provide sufficient resources to implement the “cached” API 280 and behaviors (e.g., locally accessible versions of the server or at least local replicas that synchronize back with the server when connectivity is available) expected by the application 260.
  • the hospital may require that the cache 135 be refreshed at specific intervals, or when the device 190 is in a particular state.
  • Normal caching protocols and known algorithms may be used to satisfy the policies of the installation.
  • the framework 100 provides installation options that support the multiple caching modes.
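  • One of the caching modes described above, a local replica that serves responses within a freshness window and synchronizes with the remote node when connectivity allows, is sketched below. The class name, refresh policy, and fallback behavior are assumptions, not requirements of the framework.

```typescript
// Local-replica cache: serve locally while fresh, refresh from the remote
// node when possible, and fall back to the stale replica when offline.

interface CacheEntry<T> {
  value: T;
  fetchedAt: number;
}

class ReplicaCache<T> {
  private entries = new Map<string, CacheEntry<T>>();

  constructor(
    private fetchRemote: (key: string) => Promise<T>,
    private maxAgeMs: number,
  ) {}

  async get(key: string): Promise<T> {
    const hit = this.entries.get(key);
    if (hit && Date.now() - hit.fetchedAt < this.maxAgeMs) {
      return hit.value;                 // serve locally within the freshness window
    }
    try {
      const value = await this.fetchRemote(key);
      this.entries.set(key, { value, fetchedAt: Date.now() });
      return value;
    } catch {
      if (hit) return hit.value;        // offline: fall back to the stale replica
      throw new Error(`no cached value for ${key}`);
    }
  }
}
```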
  • the framework 100 may be combined with analytics 275 to provide an AEI system 260 capable of guiding a user 300 through information pertaining to Emotional Intelligence.
  • the framework 100 is used to write assistants that help a person understand themselves and their relationships better, and through the effort, by capturing large samples of this data across many different types of people, build large computer models that are based on the emotional models mentioned previously.
  • the framework 100 can utilize classification of sentiment into negative and positive emotion; once the natural language analysis, via a NLP 265, can accurately identify positive and negative language, the framework 100 can classify the emotions into distinct emotions using currently accepted emotional models.
  • the actual model is less relevant than testing for accuracy and picking the model found to be the most accurate most of the time; once the models are accurate, the framework 100 can move beyond analyzing text to analyzing other things: facial expressions, body language, behaviors, and even correlating emotion to biofeedback 410 from wearables 400.
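  • A two-stage classification of the kind described above (polarity first, then a distinct emotion under whichever emotional model is selected) might look like the naive sketch below. The word lists and emotion labels are placeholders standing in for the NLP 265 and the chosen model.

```typescript
// Stage 1: classify polarity (positive/negative). Stage 2: map to a
// finer-grained emotion label. Word lists are illustrative only.

type Polarity = "positive" | "negative" | "neutral";
type Emotion = "joy" | "sadness" | "anger" | "neutral";

const POSITIVE = ["happy", "great", "wonderful", "love"];
const NEGATIVE = ["sad", "terrible", "angry", "hate"];

function polarity(text: string): Polarity {
  const words = text.toLowerCase().split(/\W+/);
  const pos = words.filter((w) => POSITIVE.includes(w)).length;
  const neg = words.filter((w) => NEGATIVE.includes(w)).length;
  if (pos > neg) return "positive";
  if (neg > pos) return "negative";
  return "neutral";
}

function emotion(text: string): Emotion {
  const p = polarity(text);
  if (p === "positive") return "joy";
  if (p === "negative") return text.toLowerCase().includes("angry") ? "anger" : "sadness";
  return "neutral";
}

console.log(emotion("today is a wonderful day")); // -> "joy"
```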
  • the framework 100 serves as a substrate that promotes the development of applications 105 that interact in a human-like way, such as being able to convey empathy or sympathy in what normally would have been robot-like interactions.
  • the framework 100 also serves as a substrate that facilitates the use of natural language expressions by AEI systems/applications and virtual assistants 260, 105, 262.
  • this capability may be combined with speech synthesizing technology to emulate a person’s voice.
  • a speaking virtual assistant 262 for administering care to a person with Alzheimer’s disease or dementia may combine biological information 410 obtained using wearables 400 and administer tests while communicating empathetically with the user in the voice of a loved one.
  • a system 260 like the Aroma Shooter may be integrated with a virtual assistant 262 to recreate a person’s natural scent or provide alternative calming/soothing scents depending on the assistant’s 262 assessment of the user’s 300 state of mind.
  • a main screen 520 drives the interaction with the rest of the application through the use of menus, selections and screens.
  • a bot 240 uses a root dialogue 245 to drive interactions. Even where a traditional application 500 and bot application 240 provide access to the same information, the bot 240 delivers its content 560 through natural dialogue with the person, rather than providing a navigation system on the screen.
  • the main interaction of the framework 100 takes place in the form of a user interface/dialogue screen 540 between the virtual assistant 262 and the human/user 300 through which conversation takes place in a natural, human language interaction.
  • the virtual assistant 262 speaks or sends messages to the human/user 300 as naturally as a human assistant sends messages to his boss, or her colleague.
  • the framework 100 provides sample user interfaces 540.
  • This may be a “chat” or “messenger” interface to interact with a patient, also commonly referred to as a conversational or chatbot interface.
  • FIG. 9 shows an example framework root dialogue screen 245 on a smart phone 190 (not shown).
  • Elements typically found in the navigation areas of computer applications are the screen name (Home 545), a link to access additional options (three vertical dots 547) and a settings icon (gear 549).
  • a bot 240 can include many of the same GUIs 620 and functionality as a traditional application. However, the big difference is that the bot 240 is dedicated for the use of the conversational AI and the user for the“chat.”
  • the framework 100 is used to provide a virtual assistant 262 for the treatment of delirium.
  • various other disorders and diseases can be monitored and treated. Mistakes made in the treatment of delirium cost medical facilities billions of dollars in claims attributed to falls, involuntary bowel movements, broken bones, soiled and infected beds and equipment, premature release of patients with reduced cognitive abilities, etc.
  • the protocols at most hospitals require that a very simple test be administered to patients a certain number of times per day; however, tests are often not administered. Human nurses may forget, choose to rely on their own observations, skip testing altogether, or may be embarrassed/feel badly for the patient.
  • An example virtual assistant 262 wakes up every so often to administer the prescribed test.
  • the test can be determined from doctors, implemented into the framework 100 by developers 315, and fine-tuned through AI. If the patient passes, this is entered into the patient’s records; if not, it is also entered and hospital staff is alerted. Over time, the virtual assistant also looks for patterns. For example, how long after a medication is administered does the failure happen? Can it be a drug interaction? Who is around the patient? Where is the patient? These are easy and unobtrusive observations that provide tremendous insight and have a huge, positive impact on solving a problem that is really caused by human error.
  • a doctor would provide a dialogue script which includes dialogue for the bot and the expected responses from the patients. Based upon the responses, the bot 240 takes the prescribed action.
  • NLP 265 can handle a large number of linguistic variations in a response, so a script can still be used if a patient’s response isn’t exact.
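  • The doctor-provided script and the tolerance for linguistic variation can be sketched as follows. The step structure, matcher, and action names are assumptions standing in for the script format and the NLP 265.

```typescript
// A prescribed dialogue step: the prompt, the expected answers, and the action
// to take. A tolerant matcher stands in for the NLP so non-exact patient
// responses still resolve to the prescribed action.

interface ScriptStep {
  prompt: string;
  expected: { answers: string[]; action: string }[];
  fallbackAction: string;   // e.g. record failure and alert hospital staff
}

function resolveAction(step: ScriptStep, patientResponse: string): string {
  const normalized = patientResponse.trim().toLowerCase();
  for (const branch of step.expected) {
    if (branch.answers.some((a) => normalized.includes(a.toLowerCase()))) {
      return branch.action;
    }
  }
  return step.fallbackAction;
}

const orientationCheck: ScriptStep = {
  prompt: "Can you tell me what day of the week it is?",
  expected: [
    { answers: ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday"],
      action: "record-pass" },
  ],
  fallbackAction: "record-fail-and-alert-staff",
};

console.log(resolveAction(orientationCheck, "I think it's Tuesday")); // -> "record-pass"
```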
  • the bot 240 can continuously learn based upon the NLP input, and remember specific things about a patient over the course of the patient’s treatment.
  • the framework 100 can include a programming application 600 with tools for training and learning which can be described as an AEI studio 600.
  • the AEI studio 600 can be used to quickly and easily generate an interactive virtual assistant 262 with programmable icons and titles.
  • a bot interface 540 can be as rich and textured as a traditional application 500 user interface.
  • the bot interface 540 in FIG. 9 features an icon/picture representing the virtual assistant 262, text bubbles 535 through which the virtual assistant 262 communicates, and a data entry dialogue window 550 to allow the user 300 to communicate back.
  • the conversational interface 540 provides a scrolling window/page where gestures are easily applied to navigation.
  • the bot 240 is configured to interact in a more conversational setting, using the applications and components discussed above.
  • an assistant 262 is created to manage an individual's time and tasks, and utilizes a conversational approach to do so.
  • the bot 240 may reside on the user’s computer or smart device (i.e., iPhone, iPad, etc.). When the bot 240“sees” the user typing in a date and time, the bot 240 might ask the user if the user needs to schedule something for that date and time in their calendar, or create a task.
  • the method of interacting with the user generates a social reaction that could not have happened from a program with a blank page where the user could write anything they wanted to. Instead, simple questions like “how do you feel?” are provided, with replies similar to “I’m sorry you don’t feel well, what do you think you could do to feel better?” or “why do you think that is?” supplied in turn. These simple questions elicit deep thinking and responses from the users that further assist the AEI framework 100 in developing Emotional Intelligence.
  • the virtual assistant 262 can provide faces that represent the user’s mood.
  • the user 300 is able to select a face for whichever mood matches his/her current state, or can simply write anything in response to the“how do you feel” question being posed by the virtual assistant 262.
  • the virtual assistant 262 is provided with many possible responses to questions.
  • the system applies context 130 to the query (much like a human does).
  • the context 130 is provided by the data already known and collected by the virtual assistant 262. For example, previous answers provided during psychological evaluation and even the person’s profile can be used to provide the context 130.
  • the virtual assistant 262 can know if it is the user’s birthday, or based upon previous answers, whether the user is an introvert or extrovert, and adjust accordingly (e.g., not asking as many questions if the user is an introvert).
  • the virtual assistant 262 is given a score whenever it receives an appropriate response to a question.
  • the response is given a score and a weight based on internal algorithms for natural language processing; this all takes place inside the Natural Language Processor 265, so when the NLP 265 returns a number to the bot 240, the bot 240 knows that the response returned by the natural language system (the script analyzer) is 95% accurate.
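  • How a bot might act on that score is sketched below. The threshold value and clarification wording are assumptions; the disclosure only states that the NLP returns a score the bot can rely on.

```typescript
// Act on the confidence score returned by the NLP: trust the matched response
// above a threshold, otherwise ask the human participant for clarification.

interface NlpResult {
  matchedResponse: string;
  confidence: number;   // 0.0 .. 1.0, e.g. 0.95 for a 95% match
}

function chooseReply(result: NlpResult, threshold = 0.9): string {
  if (result.confidence >= threshold) {
    return result.matchedResponse;
  }
  // Low confidence: asking for clarification also yields new training data.
  return "I'm not sure I understood. Could you say that another way?";
}

console.log(chooseReply({ matchedResponse: "Glad to hear it!", confidence: 0.95 }));
console.log(chooseReply({ matchedResponse: "Glad to hear it!", confidence: 0.42 }));
```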
  • the AI detects a correct interaction between the bot 240 and the human 300, and as it detects fewer and fewer incorrect interactions, where the human no longer has to keep asking a question or the bot 240 doesn’t keep saying “I don’t know that,” the particular interaction (question/response) is considered well-trained, which reduces the need for further input.
  • the NLP engine is searched for interactions that are either unused, or need more input. This allows the developers 315 to also become more confident that the AI is able to interact with people accurately.
  • FIG. 10 shows another exemplary conversational AI application 105 created using the framework 100 that functions as a personal content curator.
  • the conversational screen 540 can be effectively used to deliver complete web page abstracts, articles, and multi-media content 560.
  • the user 300 uses icons to indicate when they like or dislike a piece of content 560.
  • the user inputs are used to yield improved content selection.
  • the AI application 105 is able to capture specific emotions such as whether the content makes the user sad, happy or angry, etc. In an aspect, the specific emotions are captured based upon the response of the user, that is, the input provided by the user to the AI application 260. For example, as a user reads articles, and as they identify articles they like, the AI application 260 builds a better understanding of the kinds of articles the user likes.
  • the AI application 105 records the response and immediately or later asks for enough information to determine whether this is a new mood, or part of an existing one.
  • the framework 100 also provides the ability for the virtual assistant 262 to ask an expert 310 who can further refine the dialogue 210.
  • the response will be sent to an appropriate expert associated with the development team who provided the scripts/parameters/dialogue of the treatment/program originally. This can be sent via another virtual assistant 262 or other traditional means (e.g., email, text, etc.).
  • the expert can provide new dialogue, or programming, to answer the question posed.
  • An AEI training bot 240 may be designed and implemented using the AEI studio 600.
  • FIG. 11 shows a chat bot 240 implemented using the popular messaging application Telegram. In other embodiments, other messaging applications can be utilized.
  • With Telegram, it is very easy to build an AEI training bot 240 where any human 300 wishing to have a training session with the bot 240 can provide conversational inputs 350 within Telegram. Initially, the human 300 may provide both sides of a conversation, and the bot 240 will register these inputs 350 as being the original samples and request more examples from human participants 300. Next, the bot 240 solicits additional samples from human participants via Telegram who are willing to participate. Once the initial dialogue is completed, the bot 240 will ask the person 300 if they want to help provide more samples.
  • the bot 240 may say, “Hi John, I recently learned this phrase: ‘today is a wonderful day’. Here are the sample responses I currently have: 1. ‘It’s an awesome day’, 2. ‘I love this day’. Can you provide me with more examples please? One complete phrase per response please.”
  • the bot 240 is using the existing scripts/intent 640 and other entities 650 to ask for alternative ways to say the same thing.
  • the bot 240 is able to find humans 300 on an existing messaging platform to teach it how to respond to natural human conversation.
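  • The paraphrase-collection loop run by such a training bot could be sketched as below. The storage shape and prompt wording are assumptions, and the messaging-platform plumbing (Telegram, iMessage, etc.) is omitted; only the accumulation logic is shown.

```typescript
// Accumulate alternative phrasings for a phrase the bot recently learned.

interface PhraseSamples {
  canonical: string;       // the phrase the bot recently learned
  alternatives: string[];  // paraphrases contributed by human participants
}

function promptFor(entry: PhraseSamples, participant: string): string {
  const listed = entry.alternatives.map((a, i) => `${i + 1}. "${a}"`).join(", ");
  return `Hi ${participant}, I recently learned the phrase "${entry.canonical}". ` +
         `Current sample responses: ${listed}. Can you give me one more example?`;
}

function addSample(entry: PhraseSamples, sample: string): PhraseSamples {
  const trimmed = sample.trim();
  if (trimmed && !entry.alternatives.includes(trimmed)) {
    entry.alternatives.push(trimmed);
  }
  return entry;
}

const entry: PhraseSamples = {
  canonical: "today is a wonderful day",
  alternatives: ["It's an awesome day", "I love this day"],
};
console.log(promptFor(entry, "John"));
addSample(entry, "What a beautiful day");
console.log(entry.alternatives.length); // -> 3
```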
  • the bot 240 may engage with individuals outside the Telegram application and may engage with other bots 240.
  • Bots 240 may converse through other existing messaging platforms such as iMessage, FaceBook Messenger, Slack and the like.
  • chat bots 240 are typically limited to their function as bots 240, whereas the framework 100 allows the underlying AEI system 260 to interact (as a bot 240) across different systems and platforms with humans 300, software, hardware, and other bots.
  • chatbots 240 use an automated dialogue system/technology to communicate in a conversational style that mimics natural human language.
  • FIG. 12 illustrates different elements that may comprise Conversational Artificial Intelligence 297 including chatbots 240, intelligent dialogue, speech recognition, speech synthesis, dialogue management, information extraction and natural language understanding.
  • the AEI systems 260 use one or more root dialogue systems 245 to interact, provide services 220 and learn from entities (humans 300, software, other bots 240, etc.).
  • the dialogue system 245 may be domain specific, meaning it relates to a specific field/art (medical, financial, technical etc.).
  • the dialogue system 245 manages the various root dialogues in which a user may engage. For example, a user may start with a telemed root dialogue that then needs access to a pharmacy dialogue after being completed.
  • the dialogue system 245 manages the connections and interactions between the different dialogues.
  • the dialogue system 245 is independent of the input/output processing system 340 and the AEI system/virtual assistant 260, 262 (See FIG. 15).
  • Dialogue is pre-programmed, meaning that an AEI system 260 has to be trained on what to say and how to respond.
  • the assistant 262 may also be capable of self-training dialogue as discussed above (e.g., ask questions of the user or refer to an expert).
  • local dialogue sets may be provided within the programming code of the AEI system 260.
  • the framework 100 may utilize a replica design pattern (i.e., a local copy synchronized with a master copy somewhere on at least one known source location, which can be set up by the system administrator).
  • the framework 100 has the ability to use NLP 265 at run time.
  • this functionality requires that the device 190 have sufficient storage and processing power.
  • the dialogue system 245 utilizes a store-and-forward dialogue design which is asynchronous (similar to email).
  • Dialogue sets 210 containing offline conversational elements may be downloaded periodically from servers/the cloud 180, 250 when the functionality can be supported (for example, internet access, sufficient storage or processing power).
  • the dialogue sets 210 can be added via subscription services to provide updated events or have administrators supply them.
  • the dialogue system 245 utilizes an API 280 that abstracts the location of the dialogue files 210, and exposes a mechanism for storing new dialogue 210 for programmatic use.
  • the stored dialogues 210 are uploaded, and the updated dialogues 210 re-downloaded.
  • the architecture 120 therefore allows for real-time NLP and training of terms for dialogue 210, serving as a “dialogue cache”.
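By way of a non-limiting illustration, the following Python sketch shows one possible shape of a local "dialogue cache" following the replica pattern described above: new dialogue 210 is stored locally for programmatic use, then uploaded and re-downloaded on the next synchronization. All names are hypothetical, and the fetch/push callables stand in for whatever transport a deployment actually uses.

    # Illustrative sketch only (hypothetical names): a local dialogue replica
    # synchronized with a master copy on a known server.
    import json
    import os

    class DialogueCache:
        def __init__(self, path, fetch_master, push_master):
            self.path = path                  # local replica file
            self.fetch_master = fetch_master  # callable returning the master dialogue dict
            self.push_master = push_master    # callable uploading locally stored dialogue
            if os.path.exists(path):
                with open(path) as f:
                    self.local = json.load(f)
            else:
                self.local = {}
            self.pending = {}                 # new dialogue stored while offline

        def store(self, intent, responses):
            """Store new dialogue for programmatic use; queued until the next sync."""
            self.pending[intent] = responses
            self.local[intent] = responses

        def sync(self):
            """Upload stored dialogues, then re-download the updated master copy."""
            if self.pending:
                self.push_master(self.pending)
                self.pending = {}
            self.local = self.fetch_master()
            with open(self.path, "w") as f:
                json.dump(self.local, f)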
  • FIG. 13 shows how the dialogue system 245 uses a conversation sub-system (e.g., NLP) that includes appropriate responses to expected interactions.
  • the conversation sub-system hands the interaction to the “unknown” system, which attempts to resolve the interaction.
  • the system 260 is continually learning, and it is possible that a new trained response has been entered but not updated through the network of nodes 125 on the framework 100. If the interaction is resolved, then the conversation system adds the interaction to the known interactions and is then able to respond appropriately. If the interaction is not resolved, then the NLP system 265 may be invoked, which can teach the new terms and responses to the dialogue system 245, or request help from a human operator 300.
  • the NLP system 265 may create a new entity 650 and resubmit it as an addition to the root dialogue 245 (see FIG. 17B).
  • the NLP system 265 would add another “then” statement or perhaps even another condition depending on whether it is modifying an intent, or a story.
  • the dialogue system 245 can then request help from an expert 310. In either case, the system 260 can either update the running system, or it can add a new dialogue root with the new interactions to the known dialogue 210.
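By way of a non-limiting illustration, the following Python sketch traces the known/unknown resolution flow described above (and shown in FIG. 13): a known interaction is answered directly, an unknown one is handed to NLP 265, and if still unresolved the request is referred to an expert 310. All names are hypothetical.

    # Illustrative sketch only (hypothetical names) of the resolution flow.
    def handle_interaction(text, known, nlp_resolve, ask_expert):
        if text in known:                       # known interaction: respond immediately
            return known[text]
        resolved = nlp_resolve(text)            # "unknown" system: try NLP
        if resolved is not None:
            known[text] = resolved              # teach the new term to the dialogue system
            return resolved
        return ask_expert(text)                 # otherwise request help from a human expert

    if __name__ == "__main__":
        known = {"hello": "Hi there!"}
        print(handle_interaction("hello", known, lambda t: None, lambda t: "escalated"))
        print(handle_interaction("howdy", known, lambda t: "Hi there!", lambda t: "escalated"))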
  • since the learning system needs to incorporate human learning, it is possible to have synchronous (i.e., live and immediate conversation) and/or asynchronous communication (conversation that happens piecemeal when someone responds - e.g., email) with a human 300.
  • the human 300 can teach the system 260 simply by chatting with it, through text messaging or another mechanism such as email, or voice message.
  • the formats of these messages may be prescriptive to allow the AEI system 260 to unequivocally understand new terms and what possible interactions might be required.
  • Interactions may be either simple or complex. Simple interactions are fairly prescriptive in that a specific term prescribes specific answers. More complex interactions may require the understanding or knowledge of frame of reference. Frame of reference has to do with the situation in which the conversation is taking place. For example, if a parent tells a child, “I love you”, then the child will most certainly want to say, “I love you too.” In this case, where the frame of reference is well-known, the interaction is simple. However, if a man tells a woman “I love you,” more information is needed to provide the appropriate response. Therefore, the agent/bot 110/240 has to be able to determine whether interactions are simple or complex.
  • the system 260 over time is taught via the scripts, NLP 265, and experts 310, as to whether the interactions are simple or complex.
  • the nature of the interactions can be more natural as the AEI System 260 learns more about a person.
  • the more one knows a person the more one can rely on that knowledge to provide frame of reference which includes implicit information about interactions.
  • the framework 100 supports relationship identification and development of relationship matrices/groups.
  • a person’s relationship matrix is critical for Emotional Intelligence. Understanding how a person feels about others, and others about them is incredibly useful in helping a person understand themselves and help themselves build better relationships. Matrices can be built based upon self-identification from profile set ups, as well as requests between users to establish a recognized relationship within the system.
  • the dialogue system uses the relationship parameters (e.g., what type of relationship does the user have with others, professional, emotional, familial, etc.) to influence the terms used in the interactions.
  • there are often many different synonyms and similar expressions that can create much more efficient dialogue. In other situations, for example, medical situations, where a patient expects a more professional approach, the range of terms and expressions would be much narrower than the personal situations.
  • Conversations/interactions may be scripted (as discussed above) or unscripted. Unscripted conversations can be generated from social conversations between a user and a bot 240. In either case the AEI system 260 uses trained conversation guides to direct the flow of an interaction. The system 260 can be programmed to use a single conversation guide, or multiple. When using multiple guides, the AEI system 260 may use one at a time (exclusive), or it may use many guides at once. For example, if the bot 240 allows social conversations (e.g., idle chitchat), chitchat may only be allowed when the other parts of the bot 240 are not active.
  • the root dialogue 245 can take care of enabling or disabling chitchat at the prescribed moment.
  • when the event “consultation started” is fired, the root dialogue 245 can subscribe to this event and disable chitchat; when the event “consultation ended” is fired, the root dialogue 245 re-enables chitchat.
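By way of a non-limiting illustration, the following Python sketch shows how a root dialogue 245 might subscribe to the "consultation started" and "consultation ended" events to disable and re-enable chitchat. The EventBus and RootDialogue classes are hypothetical stand-ins, not the framework's event bus 140 API.

    # Illustrative sketch only (hypothetical names): event-driven enable/disable of chitchat.
    class EventBus:
        def __init__(self):
            self.handlers = {}
        def subscribe(self, name, handler):
            self.handlers.setdefault(name, []).append(handler)
        def fire(self, name, payload=None):
            for handler in self.handlers.get(name, []):
                handler(payload)

    class RootDialogue:
        def __init__(self, event_bus):
            self.chitchat_enabled = True
            event_bus.subscribe("consultation started", self.on_start)
            event_bus.subscribe("consultation ended", self.on_end)
        def on_start(self, event):
            self.chitchat_enabled = False       # no idle chitchat during a consultation
        def on_end(self, event):
            self.chitchat_enabled = True        # re-enable chitchat afterwards

    if __name__ == "__main__":
        bus = EventBus()
        root = RootDialogue(bus)
        bus.fire("consultation started")
        print(root.chitchat_enabled)   # False
        bus.fire("consultation ended")
        print(root.chitchat_enabled)   # True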
  • Conversation guides may include events and timers with the ability to allow the assistant 262 to create and deliver reminders. Time triggers may initiate conversations and may include additional functions such as prescription reminders and the like or as mentioned in the example, to enable or disable functions.
  • a virtual assistant 262 manages prescriptions on behalf of a medical practitioner.
  • a doctor may submit a prescription to the AEI system/virtual assistant 262 indicating that a patient take youngtal (fictitious drug name) 3 times weekly (morning, noon and night).
  • the virtual assistant 262 is also programmed to verify the prescription for any interactions with other medications currently being taken by the patient.
  • the virtual assistant 262 can call upon a prescription verification bot 240 that is in communication with or is part of the virtual assistant 262 to verify the prescriptions, as well as call upon a drug interaction service to identity potential issues between the various prescriptions assigned to the user.
  • Since a patient can have new prescriptions all the time, the virtual assistant 262 must first check if the patient has received any new prescribed medications since the previous use and, if so, re-verify known interactions. If there is a known interaction, the virtual assistant 262 will inform the doctor and hold the prescription until the doctor confirms the prescription. The doctor has the ability to cancel or approve the prescription. A doctor may require warnings and additional information for the virtual assistant 262 to communicate to the patient, for example, “Mrs. Jones, Doctor Gomez recommends that you take this medication with food.” Further, when setting up the reminders, the assistant 262 will remind the patient that they should have had some food prior to taking their medication, and will ask the patient if they want to be reminded to eat 15 or 20 minutes prior to the medication.
  • the virtual assistant 262 can set this up based upon the doctor’s recommendation of eating food with the drug, and knowledge of the hospital’s timeline for food orders. At the prescribed time the assistant 262 will prompt the patient to take his/her medication, and may provide additional information and assistance. For instance, the assistant 262 may inquire about the patient’s emotional/physical state. The assistant 262 may make enquiries in response to noted changes in the patient’s biological state based on inputs from wearable devices. The assistant 262 may schedule appointments for the patient, send messages to the doctor on the patient’s behalf, set up a call to fulfill a prescription, use psychologist prescribed tests to help the patient detect emotional stress and so on. The assistant 262 may look for signs of depression in the patient’s responses or detect changes in altitude that indicate a fall.
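By way of a non-limiting illustration, the following Python sketch shows how the assistant 262 could check a new prescription against the patient's current medications and build reminder times, including an optional pre-meal reminder. The drug names and interaction table are fictitious, and all function names are hypothetical rather than part of the framework.

    # Illustrative sketch only (hypothetical names, fictitious drugs).
    from datetime import datetime, timedelta

    KNOWN_INTERACTIONS = {("youngtal", "examplamine")}   # fictitious interacting pair

    def verify_prescription(new_drug, current_drugs):
        """Return known interactions; an empty list means no issue was found."""
        return [d for d in current_drugs
                if (new_drug, d) in KNOWN_INTERACTIONS or (d, new_drug) in KNOWN_INTERACTIONS]

    def schedule_reminders(dose_hours, remind_to_eat_minutes=None):
        """Build reminder timestamps for today; optionally add a pre-meal reminder."""
        reminders = []
        for hour in dose_hours:
            dose_time = datetime.now().replace(hour=hour, minute=0, second=0, microsecond=0)
            if remind_to_eat_minutes:
                reminders.append(("eat something", dose_time - timedelta(minutes=remind_to_eat_minutes)))
            reminders.append(("take medication", dose_time))
        return reminders

    if __name__ == "__main__":
        issues = verify_prescription("youngtal", ["examplamine"])
        if issues:
            print("Hold prescription and inform the doctor; interacts with:", issues)
        else:
            for label, when in schedule_reminders([8, 12, 20], remind_to_eat_minutes=15):
                print(label, when)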
  • a doctor’s prescribed dialogue 210 and a psychologist’s dialogue 210 are used to guide the interactions between the assistant 262 and the patient.
  • the assistant 262 can be used for tracking emotional and physical states across time, and provide daily, weekly, and monthly reports (or more).
  • if the assistant 262 records something other than expected, the information will be forwarded to the doctor(s) to assist in research and development of more efficient medicines with more continuous updates.
  • it is very difficult for doctors and care providers to get feedback on a patient’s well-being and health unless it is initiated by the patient.
  • an AEI system 260 with its virtual assistant 262 can keep track of a patient’s temperature, moods, heart rate, breath, and other biometric measurements 410 easily and inexpensively.
  • conversation developers 315 design guided conversations using the runtime coach.
  • the AEI system 260 may adapt terms and responses.
  • a professional may design prescriptive scripts to be followed faithfully by the assistant.
  • Developers 315 may seek input from experts 310 and supplement the scripts with programming that can take advantage of any service 220.
  • the experts 310 may prescribe the dialogue 210 and the actions to be taken, and developers 315 can then create programs that carry out those actions, using online emulators.
  • a professional may perform testing to check whether the AEI system 260 is behaving as expected, or whether changes are needed.
  • a virtual assistant 262 in a mobile or wearable device 400 provides the ability to use biological profile information 410 in real-time that can benefit the person 300 directly, and also helps communicators to personalize conversations for targeted audiences/persons.
  • the framework 100 facilitates the design of highly individualized/personalized AEI systems 260 and virtual assistants 262.
  • the assistants 262 may provide different language sets for children, teens, young adults, adults, the elderly, etc., thus helping narrow linguistic gaps between the age groups.
  • An AEI system 260 may also employ emotional intelligence to communicate effectively with individuals of different backgrounds, be trained to respond to the different biases (using different languages, understanding cultural references, etc.), and allow human users 300 to interface with and supplement the capabilities of the AEI system 260.
  • the I/O processor/system 340 is separate from the assistant 262. This allows for the use of different mechanisms to interact with a person 300 or entities. For example, iOS users may use Siri to interact with the assistant 262 or other AEI systems 260.
  • the framework 100 may be embedded in a robot/robotic device.
  • the system nodes 125 use APIs 280 to interact with each other and converse.
  • the framework 100 is able to accept/receive and process a wide variety of inputs 350 and facilitates the availability of the tools to help accelerate the expansion of conversational AEI systems 260.
  • the AEI studio 600 is an interactive application that can significantly reduce the work and complexity of creating scripts.
  • the framework 100 can be accessed through APIs 280, as more people create skills 200 (i.e., the things a bot/assistant can do - the programmatic equivalents of functions or modules in programming) for bots 240 and applications 260, 262, the skills 200 are made available through AEI studio 600 and can be reused in new applications 105, systems 260 and bots 240.
  • the skills 200 can be modified by adding different variables. Users utilize a GUI 620 as shown in FIG. 16 to build skills 200 in an organized fashion.
  • AEI systems 260 process inputs by dividing them into various components including intents 640 (what is the person trying to accomplish), entities 650 (what are the objects of the intent 640) and triggers 670 (what are the events that can cause this skill 200 to be activated and operate).
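By way of a non-limiting illustration, the following Python sketch shows one way to represent the components extracted from an input: the intent 640, its entities 650, and the trigger 670 that activates a skill 200. The data structures are hypothetical and are not the AEI studio 600 format.

    # Illustrative sketch only (hypothetical names): parsed input and a matching skill.
    from dataclasses import dataclass, field

    @dataclass
    class ParsedInput:
        intent: str                                   # what the person is trying to accomplish
        entities: dict = field(default_factory=dict)  # objects of the intent
        trigger: str = ""                             # event that causes the skill to activate

    @dataclass
    class Skill:
        name: str
        intents: list                                 # intents this skill can fulfil
        def matches(self, parsed: ParsedInput) -> bool:
            return parsed.intent in self.intents

    weather_skill = Skill(name="ask weather", intents=["ask weather now"])
    parsed = ParsedInput(intent="ask weather now",
                         entities={"location": "Atlanta, GA", "time": "now"},
                         trigger="user utterance")
    print(weather_skill.matches(parsed))   # True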
  • the language trainer breaks out phrases into intents 640, entities 650, triggers, etc. for the AEI studio in the different subscreens of the skills builder.
  • the AEI studio 600 provides a way to teach the language analyzer 265 how to recognize when a human 300 intends to ask for the weather as shown in FIG. 17A and 17B.
  • the user 300 may include information like date and time, geo location, locality or region (entities).
  • the AEI system 260 is provided with examples for each entity 650.
  • the different components, such as entities 650, intents 640, and the like, can be color coded.
  • AEI Studio 600 provides access to memory slots/variables so that skills 200 can retain responses, inputs, and hold information that can be transferred to other skills 200.
  • the phrase “Hello John” is comprised of an intent 640 (“hello”) and an entity 650 (“John”).
  • the entity 650 would be saved in a memory slot called name, so throughout the bot, any time we wanted to display the name, we simply use the slot assigned to name, and John will display on the screen.
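By way of a non-limiting illustration, the following Python sketch extracts the "hello" intent 640 and the "John" entity 650 from the phrase and keeps the entity in a memory slot named "name" so later skills 200 can reuse it. The parsing is deliberately simplified and the names are hypothetical.

    # Illustrative sketch only (hypothetical names): slot-based entity reuse.
    import re

    slots = {}   # memory slots shared across skills

    def parse_greeting(utterance):
        match = re.match(r"hello\s+(\w+)", utterance, re.IGNORECASE)
        if match:
            slots["name"] = match.group(1)      # save the entity in the "name" slot
            return {"intent": "hello", "entities": {"name": match.group(1)}}
        return {"intent": "unknown", "entities": {}}

    parse_greeting("Hello John")
    print(f"Nice to meet you, {slots['name']}!")   # the slot is reused anywhere in the bot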
  • the designer/developer 315 can enter sample appropriate responses with variations, much like it uses samples for inputs, to form a dialogue 210, also known as a story 660 in the AEI Studio 600 environment.
  • a story 660 comprises the identification of at least one intent 640 and possible inputs and an impetus to take an action.
  • the skills comprise intents 640. For example, where “ask weather now” is programmed, the intent 640 is identified as asking the weather now, which is part of the ask weather skill.
  • the story 660 then invokes the weather service through a trigger 670 - that is, when a user states or enters “what is the weather now”, the root dialogue recognizes this and initiates the skill 200 of providing the weather.
  • the weather service 200 will wait for a request for the weather.
  • the AEI system 260 will then pull inputs from slots to find out whether it should return default weather information (here, now) or some other variant.
  • This skill 200 can be included by designers/developers 315 of a different application 105, system 260 or bot 240 instead of being reprogrammed.
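By way of a non-limiting illustration, the following Python sketch shows a story 660 of this kind: when the "ask weather now" intent is recognized, the weather trigger fires and the skill falls back to default slot values ("here", "now") if the user supplies no variant. The weather service below is a stand-in, not a real service 220.

    # Illustrative sketch only (hypothetical names): a weather story with slot defaults.
    def weather_service(location, time):
        # stand-in for the real service; a deployment would call a weather API
        return f"Weather for {location} at {time}: sunny, 22 C"

    def weather_story(parsed, slots):
        if parsed.get("intent") != "ask weather now":
            return None
        location = parsed.get("entities", {}).get("location", slots.get("location", "here"))
        time = parsed.get("entities", {}).get("time", "now")
        return weather_service(location, time)

    print(weather_story({"intent": "ask weather now", "entities": {}},
                        {"location": "Atlanta, GA"}))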

Abstract

The present invention is directed at a framework that provides a system and method for building artificial and emotional intelligence systems and virtual assistants. In an aspect, the framework operates as a robust networking system and architecture designed to integrate with human and non-human participants using APIs that are easily accessible to participants and make it easy to adapt existing architectures, devices, and systems.

Description

METHOD AND SYSTEM FOR BUILDING ARTIFICIAL AND EMOTIONAL
INTELLIGENCE SYSTEMS
FIELD OF THE INVENTION
[0001] This disclosure relates to a methods and systems for building artificial and emotional intelligence systems.
BACKGROUND
[0002] Artificial and Emotional Intelligence (AEI) systems facilitate communication between humans and non-human entities. AEI systems include virtual assistants and bots. Bots are automatons with the ability to acquire skills and interact with humans and other entities. Virtual assistants and bots are capable of autonomously sending messages to and communicating directly with humans based on their skills as opposed to providing a facility for sending messages between two human users. The bots acquire skills through programming and continuous learning over time. Presently, individuals may build AEI systems/virtual assistants to communicate with target groups of humans/users. A virtual assistant is capable of communicating directly with humans/users in a conversational way such as speech or texting/SMS. Unlike a human assistant, a virtual assistant never gets tired, sleepy, or bored. The virtual assistant may use scripts and dialogues that it has been programmed to recognize.
[0003] Presently, a doctor wishing to utilize an AI assistant to provide a service for his patients must have the technical skills to build the AEI system/virtual assistant or liaise with developers/experts that are able to do so. This may be time consuming and prohibitively expensive. Further, the doctor may have difficulties communicating with the developer and the developer may not fully understand the doctor’s requirements. The doctor may also wish to deploy the same program on different communication platforms so that users may access it in different ways. For instance, the doctor may want a virtual assistant to be available via text messaging on a mobile device using Apple’s Siri and via a smart speaker using Google’s Home. In order to work with different types of devices that operate different computing systems, the developer may need to create different versions of the AEI system/virtual assistant utilizing different types of programming languages and computing environment. One developer may not have all the necessary skills to develop an AEI system that operates across the platforms. [0004] There is a need for a way to quickly and easily build AEI system/virtual assi slants utilizing a framework that does not require programming from scratch. Further, there is a need for the ability to provide AEI system/virtual assistants using a framework that can integrate easily across different types of systems, programs and computing devices. The framework helps nondevelopers easily express requirements and create AEI systems/virtual assistants utilizing the framework that leverages learning from existing systems/virtual assistants and simplifies and streamlines communication between developers and non-developers.
SUMMARY
[0005] The present invention is directed at a framework that provides a system and method for building AEI systems and virtual assistances. In an aspect, the framework operates as a robust networking system and architecture designed to integrate with human and non-human participants using APIs that make the system easily accessible to participants and make it easy to adapt existing architectures, devices, and systems. The framework supports extensible security protocols so that the system is able to adapt in both unsecured and strongly regulated network environments. The use of the framework eliminates the need for developers to spend time and resources developing menial and fundamental necessary services when developing AEI systems/applications.
[0006] In an aspect, the artificial and emotional intelligence (AEI) framework includes an agent, an artificial intelligence application, and a server that includes an I/O processor, at least one API, an event manager API that manages the API, and an event bus. The framework is in communication with a network via the event bus, which is also connected to the server, agent, I/O processor, and event manager API. The framework is configured to integrate human and nonhuman participants. In such aspects, the I/O processor is configured to accept human input and nonhuman input.
[0007] In an aspect, the server of the framework includes an identity data management component configured to abstract how things are authenticated, managed, and authorized. In another aspect, the server of the framework includes a virtual service cloud. In another aspect, the framework includes a natural language processor. In one aspect, the event bus of the framework is connected to at least one service, wherein the event bus can call upon the event manager API to connect to the at least one service. In another aspect, the framework includes framework nodes capable of implementing services. The I/O processor of the framework can be configured to operate as a single input system that accommodates input/output artifacts that accept language or code using API parameters as operational I/O components.
[0008] In an aspect, the agent of the framework can include a bot that utilizes a root dialogue to drive interactions with the human participant. In such instances, the AI application is configured to monitor and analyze input from the human participant created by the root dialogue and modify the root dialogue based upon the analysis of the input. In another instance, the framework can include an interactive virtual assistant that utilizes the root dialogue and captures interactions with the human participant. In one embodiment, the interactive virtual assistant is configured to capture input that represents the mood of the human participant. The interactive virtual assistant can be further configured to apply natural language processing to the input of the human participant. In cases when the interactive virtual assistant provides multi-media content, it can capture the mood provided by the human participant in response to the multi-media content provided. In an aspect, the bot is configured to ask the human participant to provide more information related to the input for clarification.
[0009] These and other objects and advantages of the invention will become apparent from the following detailed description of the preferred embodiment of the invention.
[0010] Both the foregoing general description and the following detailed description are exemplary and explanatory only and are intended to provide further explanation of the invention as claimed. The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute part of this specification, illustrate several embodiments of the invention, and together with the description serve to explain the principles of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] The following drawings show generally, by way of example, but not by way of limitation, various examples discussed in the present disclosure. In the drawings:
[0012] FIG. 1 is a schematic representation of the framework ecosystem.
[0013] FIG. 2 is a schematic representation of the framework microservice architecture.
[0014] FIG. 3 is a schematic representation of the framework overview.
[0015] FIG. 4 is a schematic representation of the framework and compatible devices used to support an application.
[0016] FIG. 5 is a schematic representation of services provided using the framework.
[0017] FIG. 6A - 6D are schematic representations of services within the framework.
[0018] FIG. 7A - 7B are schematic representations of client nodes within the framework.
[0019] FIG. 8 is a schematic representation of a traditional application and a bot.
[0020] FIG. 9 is an illustrative root dialogue utilizing the framework.
[0021] FIG. 10 is an illustrative conversational user interface utilizing the framework.
[0022] FIG. 11 is an illustrative Telegram user interface implementing a bot.
[0023] FIG. 12 is a schematic representation of the conversational Artificial Intelligence landscape.
[0024] FIG. 13 is a schematic representation of the dialogue conversation subsystem.
[0025] FIG. 14 is an exemplary medical virtual assistant.
[0026] FIG. 15 is a schematic representation of the dialogue/coaching system.
[0027] FIG. 16 is an illustrative AEI Studio Graphical User Interface.
[0028] FIG. 17A-B are illustrative screenshots from AEI Studio showing the Intent and Story features.
DETAILED DESCRIPTION
[0029] The present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, which are intended to be read in conjunction with this detailed description, the summary, and any preferred and/or particular embodiments specifically discussed or otherwise disclosed. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Instead, these embodiments are provided by way of illustration only and so that this disclosure will be thorough, complete and will fully convey the full scope of the invention to those skilled in the art.
[0030] As used in the specification and the appended claims, the singular forms“a,”“an” and“the” include plural referents unless the context clearly dictates otherwise. Ranges may be expressed herein as from“about” one particular value, and/or to“about” another particular value. When such a range is expressed, another embodiment includes from the one particular value and/or to the other particular value. Similarly, when values are expressed as approximations, by use of the antecedent“about,” it will be understood that the particular value forms another embodiment. It will be further understood that the endpoints of each of the ranges are significant both in relation to the other endpoint, and independently of the other endpoint.
[0031] “Optional” or“optionally” means that the subsequently described event or circumstance may or may not occur, and that the description includes instances where said event or circumstance occurs and instances where it does not.
[0032] Throughout the description and claims of this specification, the word “comprise” and variations of the word, such as “comprising” and “comprises,” means “including but not limited to,” and is not intended to exclude, for example, other additives, components, integers or steps. “Exemplary” means “an example of” and is not intended to convey an indication of a preferred or ideal embodiment. “Such as” is not used in a restrictive sense, but for explanatory purposes.
[0033] Disclosed are components that can be used to perform the disclosed methods and systems. These and other components are disclosed herein, and it is understood that when combinations, subsets, interactions, groups, etc., of these components are disclosed that while specific reference of each various individual and collective combinations and permutation of these may not be explicitly disclosed, each is specifically contemplated and described herein, for all methods and systems. This applies to all aspects of this application including, but not limited to, steps in disclosed methods. Thus, if there are a variety of additional steps that can be performed it is understood that each of these additional steps can be performed with any specific embodiment or combination of embodiments of the disclosed methods.
[0034] As will be appreciated by one skilled in the art, the methods and systems may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the methods and systems may take the form of a computer program product on a computer-readable storage medium having computer-readable program instructions (e.g., computer software) embodied in the storage medium. More particularly, the present methods and systems may take the form of web- implemented computer software. In addition, the present methods and systems may be implemented by centrally located servers, remote located servers, user devices, or cloud services. Any suitable computer-readable storage medium may be utilized including hard disks, CD-ROMs, optical storage devices, or magnetic storage devices. In an aspect, the methods and systems discussed below can take the form of function specific machines, computers, and/or computer program instructions.
[0035] Embodiments of the methods and systems are described below with reference to block diagrams and flowchart illustrations of methods, systems, apparatuses, and computer program products. It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by computer program instructions. These computer program instructions may be loaded onto a special purpose computer, special purpose computers and components found in cloud services, or other specific programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create a means for implementing the functions specified in the flowchart block or blocks.
[0036] These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including computer-readable instructions for implementing the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer- implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks. The computer program instructions, logic, intelligence can also be stored and implemented on a chip or other hardware components.
[0037] Accordingly, blocks of the block diagrams and flowchart illustrations support combinations of means for performing the specified functions, combinations of steps for performing the specified functions and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, can be implemented by special purpose hardware-based computer systems that perform the specified functions or steps, or combinations of special purpose hardware and computer instructions. [0038] The methods and systems that have been introduced above, and discussed in further detail below, have been and will be described as comprised of units. One skilled in the art will appreciate that this is a functional description and that the respective functions can be performed by software, hardware, or a combination of software and hardware. A unit can be software, hardware, or a combination of software and hardware. In one exemplary aspect, the units can comprise a computer. This exemplary operating environment is only an example of an operating environment and is not intended to suggest any limitation as to the scope of use or functionality of operating environment architecture. Neither should the operating environment be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment.
[0039] The processing of the disclosed methods and systems can be performed by software components. The disclosed systems and methods can be described in the general context of computer-executable instructions, such as program modules, being executed by one or more computers or other devices. Generally, program modules comprise computer code, routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. The disclosed methods can also be practiced in grid-based and distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules can be located in both local and remote computer storage media including memory storage devices.
Definitions
[0040] Adapt/adaption refers to a process where an interactive/computing system (adaptive system) adapts its behavior to individual users based on information acquired about its user(s) and its environment.
[0041] An agent is a program that acts on behalf of a user or another program in a relationship of agency. Bots are a type of agent. An agent may or may not be embodied (paired with a robot body) or may be software executing on a computing device (for example, Apple's Siri). An agent may be autonomous or work with other agents or humans. Bots may aggregate other bots and act as a single bot.
[0042] Analytics refers to analytics specific to the domain of software systems, taking into account source code, static and dynamic characteristics (e.g., software metrics) as well as related processes of their development and evolution.
[0043] An application is software for computers and computing devices to perform a group of coordinated functions, tasks or activities.
[0044] An Application Programming Interface (API) comprises a set of definitions, communication protocols and tools for building software that serves as a set of defined methods of communication between components within a system. It serves as the“building blocks” for a system that is generated or built by a programmer/developer. An API abstracts the underlying implementation, presenting information as objects or actions for the programmer/developer to manipulate and allows him to build a computer system or program without understanding all the underlying operations. An API must be supported by an accompanying“embodiment" such as a server or (more commonly referred to) a service. (References to a/the service refer to the embodiment (unless otherwise indicated) and references to how the service interacts with external entities refer to the API.
[0045] Artificial and Emotional Intelligence (AEI) refers to machines and software that mimic functions typically associated with human capabilities such as learning and problem solving and gives them the ability to act autonomously and display/understand human emotion.
[0046] A bot/chatbot is an autonomous program on a network (especially the Internet) that can interact with computer systems and/or human users.
[0047] A cache is hardware or software used to store data temporarily in a computing environment/system. The stored data is a replica of data that is persistently resident on a system opaque to the user of the cache. Opaque means that a user may or may not know whether they are reading from the cache or the original source of the data.
[0048] A client is a part of computer hardware or software that accesses a service made available by a server. The server is often (but not always) on another computer system, in which case the client accesses the service by way of a network.
[0049] Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. The term is generally used to describe data centers available to many users over the Internet. Large clouds, predominant today, often have functions distributed over multiple locations from central servers. A service cloud provides on-demand availability of services (APIs) provided to clients by servers. The cloud may make a server directly visible to a client of the server, or it may hide the actual server and rather provide an abstract service API by which the functions provided by the service can be used.
[0050] Context in computer programming is a structure, instance or object that contains minimal set of attributes, properties or states that allow for the execution or management of a defined set of operations or tasks. In the normal operations of computer systems, as one program invokes other programs, the context keeps growing and maintaining these states, attributes and properties that allow the program being invoked to receive information necessary to carry out its programming. Implicit context is that provided by a system in the form of well-known, accessible APIs and does not need to be included in the invocation of a program that understands the implicit context. Explicit context is included in the invocation of a program that requires the context, in this case the context is specified in the API for the program being invoked.
[0051] Emotional Intelligence is defined as tools and services that a. promote self awareness, b. promote personal development from self-awareness, c. promotes relationship awareness from knowing oneself better, and d. promotes relationship development from better understanding of the relationship. Because of (a) personal awareness, a person must learn to identify core values from an emotional perspective, to understand the difference between feeling, emotion and mood and use this knowledge to improve themselves and their relationships with others.
[0052] A Graphics Processing Unit (GPU) is a specialized electronic circuit capable of quickly manipulating memory to accelerate the creation of images intended for output to a display device. GPUs differ significantly from CPUs in that they include more robust chipsets involving math processing units and optimized memory architectures allowing for much faster processing of mathematical models often used for graphical applications as well as Artificial Intelligence models.
[0053] A Graphical User Interface (GUI) is a form of user interface that allows users to interact with electronic devices through graphical icons and visual indicators such as secondary notation.
[0054] Identity Management System (IDM) describes the management of individual identities, their authentication, authorization, roles and privileges within or across system and enterprise boundaries with the goal of increasing security and productivity.
[0055] Input/output (I/O) describes the communication between information processing systems or entities. Signals/data received by a system are referred to as inputs and signals/data sent from a system are referred to as outputs.
[0056] Intent refers to what a user is commanding the virtual assistant/system to do i.e. what function/service the user wants the system to perform or call upon. For example, if a user submits the command“Tell me the weather now in Atlanta, GA” the intent is commanding the weather service to report the weather at this moment.
[0057] The Internet of Things (IoT) is the extension of network connectivity into physical devices and everyday objects. Embedded with electronics, network connectivity, and other forms of hardware (such as sensors), these devices can communicate and interact with others, and they can be remotely monitored and controlled.
[0058] Micro Service Architecture (MSA) is a variant of the service-oriented architectural style and a software development technique that structures an application as a collection of loosely coupled services. Services are fine-grained and the protocols are lightweight. This provides modularity and makes the application more resilient to architecture erosion. MSA breaks systems into the smallest independent modules possible and provides light-weight protocols and APIs to bound them together into a macro service / application.
[0059] Natural Language Processing (NLP) concerns interactions between computers and humans in human (i.e. natural) languages. It involves speech recognition, natural language understanding and natural language generation.
[0060] A proxy server/service is a dedicated computer or a software system running on a computer that acts as an intermediary between an endpoint device, such as a computer, and another server from which a user or client is requesting a service. A proxy has the property of acting on behalf of either its client or the server. Typically, neither the client nor the server are aware of the proxy which acts as a complete duplicate of either the client or the server for the scope of the proxy functions. [0061] A service is a mechanism to enable access to one or more capabilities or software functionalities, where the access is provided using a prescribed interface/API and is exercised consistent with constraints and policies as specified by the service description.
[0062] Skills are capabilities taught to autonomous programs and/or AEI systems such as“report the weather.” Skills are developed as bots interact with people, services, etc. Skills may be added as programs using the AEI Framework API, or they can be created using high-level tools such as the AEI Studio.
The Framework
[0063] The framework 100 provides a method and system for quickly and easily building AEI systems 260 and virtual assistants 262. A microservices architecture (MSA) 120 is used to build services 220 intended to be provided over a network 160 via a cloud 250. The core architecture of the framework 100 comprises networking capabilities, IDM 255 and an Event Management System/Event Bus 140. AEI systems 260 facilitate communication between humans/users 300 and non-human entities such as bots 240. The framework 100 operates as a robust networking system and architecture designed to integrate with human and non-human participants using APIs 280 that are easily accessible to participants and make it easy to adapt existing architectures, devices, and systems. The framework 100 supports extensible security protocols so that it is possible to adapt in both unsecured and strongly regulated network environments. The use of the framework 100 eliminates the need for developers 315 to spend time and resources developing menial and fundamental necessary services 220 when developing AEI systems 260, applications 105, and assistants 262.
[0064] As illustrated in FIG. 1, the framework 100 supports APIs 280 that enable AEI system/virtual assistants 260, 262 to develop skills 200 in addition to the development of services 220. The framework 100 may be utilized to assemble bots 240 for a specific system 260 such as Alexa, Google Home, etc. The framework 100 is capable of integrating with existing AI applications, systems and computing devices. In an aspect, the framework 100 is capable of supporting complete applications including mobile applications. The framework 100 is configured to work with all virtual machines and programming languages, for example the Java Virtual Machine and Scala programming language. In an aspect, the framework 100 utilizes an event bus 140 which offers an open service API that is available to any other virtual machine, programming language or framework capable of invoking an interface/API.
[0065] In an aspect, the framework 100 comprises higher level interfaces such as GUIs 620 which allow abstracted access to the underlying services 220, and lower level interfaces. Both high and low levels can include APIs 280. The lower levels of the framework 100 are compatible with all programming languages for example JavaScript, PHP, and NodeJS. The higher level interfaces are accessible through web pages and web services using web socket APIs. For example the higher level interfaces may run inside Scala containers on any platform hosting a compatible JVM.
[0066] The framework 100 may be provided as a component within manufactured computer chips. The framework 100 may be included in the GPU of a computing device which allows for the building of image recognition algorithms that learn from human expressions and can learn to identify emotions from expressions. Image recognition algorithms utilize a lot of processing power and including them in the GPU optimizes the performance of the device.
[0067] In an aspect, as shown in FIG. 2, the framework architecture 120 comprises an input/output (I/O) processor 340, an event manager API 290, a Framework event bus 140, bots 240, skills 200, IoT 230, cloud(s) 250, IDM 255, NLP 265, and analytics 275. The I/O processor 240 can be configured to communicate with a human/user 300 and non-human input 320. When inputs are received by the I/O processor 340, they are converted to text. The processor 340 determines the intent 640 in context 130. The bot 240 receives the input(s) from the I/O processor 340 and sends a message over the Framework event bus 140 to the desired service 220 and waits for a reply. If the intent 640 requires the invocation of a skill 200, then it invokes the skill 200 with the parameters provided. For example, a bot 240 may go to the Framework event bus 140 and request connection to the weather service 220. The system may reply with a default or a list of available services 220 (which may also be preconfigured or prepaid) in order to“resolve” the services 220 (i.e. map services 220 to the needs of the bot 240). Once the mapping is completed/services resolved, the bot 240 can either directly call the services 220 through its API entry points (taking out the abstraction of the bus 140 which protects the bot 240 from hanging on to outdated service references) or invoke the API 280 directly. The bot 240 may also take the input, setup the parameters to an event and fire the event requesting the request fulfilment. [0068] On the server side, the service 220 sits and listens for incoming requests from the bus 140. When it is alerted to a request, the service 220 checks that the request is valid (valid licenses, payment current, etc.). If it is valid, the service 220 honors the request, fills up the parameters and fires the event response. If the request is transmitted through the API 280, the same thing happens, except that the service 220 replies through the web socket. The bot 262 may use its own I/O system or it can use the framework’s I/O processor 340 to obtain inputs and display output. It can call the services 220 directly or through the event bus 140, and similarly the services 220 can opt to use the bus 140 or not. The bus 140 can be used to easily create connectors for IoT 130, known AIs, algorithms, web services, and other programs. The I/O processor 340 communicates with the event manager API 290. The event manager API 290 can be configured to work with several APIs 280, which the event manager API 290 can access via a database (not shown).
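By way of a non-limiting illustration, the following Python sketch mirrors the request/response pattern described above: a bot 240 resolves a service 220 by name, the service validates the request (e.g., licensing or payment), and returns its response. The synchronous call stands in for the event/response exchange over the event bus 140, and all names are hypothetical.

    # Illustrative sketch only (hypothetical names): bot-to-service request over a bus.
    class FrameworkEventBus:
        def __init__(self):
            self.services = {}
        def register(self, name, handler):
            self.services[name] = handler
        def request(self, name, params):
            handler = self.services.get(name)
            if handler is None:
                raise LookupError(f"no service resolves '{name}'")
            return handler(params)      # synchronous stand-in for the event/response exchange

    def weather_service(params):
        if not params.get("license_valid", True):     # validity check (license, payment, etc.)
            return {"error": "request rejected"}
        return {"weather": f"sunny in {params.get('location', 'here')}"}

    bus = FrameworkEventBus()
    bus.register("weather", weather_service)
    print(bus.request("weather", {"location": "Atlanta, GA", "license_valid": True}))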
[0069] In an aspect, the framework 100 provides a bot container (not shown) that automatically uses the framework’s I/O processor 340, NLP 265 and related services that are configured at install time. If the client purchases services 220, they are “wired” at install so the bots 240 can quickly make use of them. Similar to the installation of a Windows program, the installer may ask if you need the Visual C runtime or Java runtime and if you want it downloaded and installed. It will then install the necessary .DLLs (modules) that make the installed bots 240 work. In the public case, users will typically use a single bot 262, or they will customize and configure one. In the corporate case, they will probably preconfigure the bot sets by user type.
[0070] For example, a hospital sets up their access controls so that the system administrators will have access to everything, but the administrators will not be able to see patient data. Doctors can see everything but cannot configure the bots 240, nurses and staff can potentially add entries to the patient’s care record, but cannot modify or delete without an administrator/doctor’s approval and staff may have cursory access. To make all this work, there are a plurality of databases that are configured and managed to operate the system. The framework event bus 140 communicates with the APIs 280 associated with the various other components connected to the framework MSA 120, including, but not limited to, bots 240, skills 200, IoT 230, the cloud(s) 250, and other services 220, including the IDM 255, NLP 265, and analytics 275. AH services 220 are offered via an API 280 through the bus 140. In an aspect, the components that comprise the MSA 120 are substitutable. For instance, the NLP 265 can coexist with or be replaced with a new/as yet undeveloped NLP 265. By utilizing the framework 100, the developer can leverage the cross-platform environment to deploy skills 200 across AEI boundaries. For instance, a developer can create an Alexa skill 200 that uses the features of the framework 100 within the constraints allowed by Alexa’s framework. Further, when a service is configured to run with the bus 140, the bus 140 can support multiple versions of the system running at once simultaneously and without interference.
[0071] FIG, 3 shows a schematic representation of the framework 100 overview as it relates to a particular AEI system/application 260. The system 260 comprises framework services 220 and an API(s) 280, as well as other components as shown in FIG. 2. The use of an MSA 120 facilitates the integration of future components and systems with the framework 100. For example, there are no general purpose APIs for GPUs because it was not anticipated that technologies like blockchain and AI would require access to GPUs. Presently, OS architectures have to be changed to accommodate the GPU driven architectures and provide APIs that can, for instance, allow the user to set up Direct Memory Access pipelines from the CPU on the mother board, to a GPU on a separate card. The MSA 120 allows for incorporation of new technologies and new systems without a fundamental redesign. APIs 280 will be compatible with new hardware and software capabilities.
[0072] Unlike existing solutions, the framework 100 provides access to IoT devices 230, legacy systems, web services 220 and any other device that uses an API 280 because the framework 100 integrates with services 220 outside conversational assistance. Any API 280 can easily be integrated using the Framework Event Bus 140 (See FIG. 2). This allows AEI systems 260to be incorporated into conversational applications with wearables 400 and similar devices that have biometric and other sensors.
[0073] For example, as illustrated in FIG. 4, a virtual assistant 262, via the framework 100, can speak with various wearable devices 400. In an aspect, the framework 100 facilitates communication with a Spire wearable device 400, which monitors breathing and stress levels. A virtual assistant 262 (not shown) can administer a questionnaire to a user 300. Biofeedback 410 obtained from the device 400 can be utilized by the virtual assistant 262 in its conversation with the user 300. For more accuracy, the system 260 can add heart rate data captured with a smart watch monitor 400 to correlate the heart rate to other biological functions and levels 410. GPS sensors may capture location awareness and feed it back to the system 260 for greater insight. For even greater accuracy, signals from a Muse headband 400, which monitors brain waves, can be included and with signals across all devices 400 correlated. In this way, the MSA 120 can support a virtual assistant 262that can correlate biometric information 410 obtained via wearable devices 400 with location based and other information about a user 300. In this sense, the MSA 120 of the framework 100 can use third party devices as subsystems of a medical application 105 using the event management system/Framework event bus 140. Signals from the devices may be imported and incoiporate the devices 400 through APIs 280 for a final assistive virtual assistant 262.
[0074] As shown in FIG. 5, the framework 100 comprises a network 160 with at least one server 180 capable of running necessary framework services 220 and capable of connecting to other machines running necessary framework services 220. Agents/clients 110, 115 access framework and other services 220 through API(s) 280. Agents 110, clients 115 and servers 180 (and anything connecting to the framework 100 through an API 280) serve as framework nodes 125 capable of implementing framework 100 services 220. The framework 100 communicates with code in the same virtual machine or with other virtual machines (i.e., the server 180 can be physical or virtual) depending on desired configurations, capabilities of the physical host, and requirements of the application. The framework 100 is able to support a plurality of network architecture configurations including client/server, peer-to-peer, distributed, and the like. Nodes 125 act as part of the framework’s I/O processing system. As discussed above, the nodes 125 act as a client 115 or a server 180, and support the Framework API(s) 280.
[0075] In an exemplary embodiment, when an AEI Framework server 180 (one or more) is installed, it/they comprise a service cloud 250. The service cloud 250 can be configured to operate with typical public web parameters like any other web service 220, or it can be configured to operate in connection with any supported corporate infrastructure. Both are accomplished by using the IDM 255. The IDM abstracts how things are authenticated, managed and authorized. Also, it is possible to take one service cloud 250 and connect it to other service clouds 250 using what is called a federated access control or federation. Access points (servers) are identified which create the necessary context 130 for one cloud 250 to access the services on another cloud 250, and vice-versa.
[0076] In an aspect, where a virtual service cloud 250 is installed in a hospital, the hospital may take advantage of skills 200 and hots 240, both discussed below, in use in the global AEI Framework cloud. An installer may choose to download services 220 and install them locally on the private cloud 250, or to federate them from the public cloud 250. If the hospital wants to provide entertainment services provided by an entertainment bot 240 from Netflix, or just videos directly from the Netflix service, if the policies of the hospital allow the access to the public internet service, then the installer can simply open up the access (like a firewall) to the Netflix video service. If the hospital wants to participate in the Netflix affiliate program and offer video on demand for payment, also at this point, the install can configure the payment gateways, and the necessary certificates to facilitate the patient’s transactions when accessing content. On the other hand, if the hospital wants to use a patient profile skill 200 that is used to obtain and manage patient medical records and it’s not legal to share these with other hospitals, and not safe to transmit these over public lines due to regulations, then the installer may download the skill 200 (or bot 240 depending on how it’s licensed and sold) and set up its own licensing and payment system. Bots 240 are similar to .EXE files in the sense that they are self-contained, full programs that run autonomously whereto skills are similar to .DLLs which are run-time loadable libraries and modules that are used by the bot. In this way, the system is able to easily support public, private, or hybrid clouds via the IDM 255.
[0077] In an aspect, a single input/output system accommodates input/output artifacts 350 that can accept language or code using API parameters as operational I/O components. In an aspect, the framework 100 is configured to have a single input system that is configured to work with various artifact devices, including well known devices and services (e.g., Siri, Google Assistant, etc.), standard I/O connections, or customized speech recognition and text to speech synthesizers. In an aspect, bots 240 are used for the input, and are configured to convert a user’s input into intents 640, entities 650 and the necessary parameters 645 to do their job. In an aspect, once the text is acquired, NLP 265 can be utilized to determine the intent 640 of the content, entities 650, and the other data to form parameters 645. These components are used to call a corresponding service 220, or utilize the event bus 140 to find the corresponding API(s) 280. In an aspect, the bus 140 can keep track of servers 180 (as well as what speaks what version of what API 280), whereas when calling a service 220 directly, the client 115 may lose function if the server changes unexpectedly.
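By way of a non-limiting illustration, the following Python sketch shows the conversion of acquired text into an intent 640, entities 650, and parameters 645, and the choice between routing the request through a bus (which tracks servers and API versions) or calling a service directly. The understand function is a simplified stand-in for NLP 265, and all names are hypothetical.

    # Illustrative sketch only (hypothetical names): text -> intent/entities/parameters -> routing.
    def understand(text):
        # stand-in for NLP; a deployment would call the framework's NLP service
        if "weather" in text.lower():
            return {"intent": "ask weather now",
                    "entities": {"location": "Atlanta, GA"},
                    "parameters": {"units": "metric"}}
        return {"intent": "unknown", "entities": {}, "parameters": {}}

    def route(parsed, bus=None, direct_service=None):
        if bus is not None:                        # the bus resolves the right server/API version
            return bus.request(parsed["intent"], parsed)
        return direct_service(parsed)              # direct calls risk stale server references

    print(understand("Tell me the weather now in Atlanta, GA"))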
[0078] FIGS. 6A - 6C are schematic representations of services 220 within the framework 100. The framework 100 is configured to call upon various necessary services 220, including, but not limited to, IDM/identity security 255, events communication 292, AI 297, algorithms 295, bots 240, and skills 200. As shown in FIG. 6C, the core network services 222 may comprise connection services 224, session management services 225, network management services 226 and routing/naming/transport services 227.
[0079] FIGS. 7A - 7B are schematic representations of client nodes 115/125 within the framework 100. Input/output is converted as needed, and passed along to the bots 240, which can then use the input/output in the manner for which they have been configured.
[0080] In order to ensure that the framework 100 can identify and provide secure communications between nodes 125 (i.e., the MSA 120), an IDM 255 provides the ability to manage accounts (create, delete, authorize, authenticate and delegate) and enables and provisions communication and networking between nodes 125. Skills 200, bots 240, algorithms 295 and base AEI services 220 may be accessed remotely using Identity 242 and Events. Using an IDM 255, the bots 240 can be assigned an Identity 242, which is then published on the Event system 290, and can be registered as an API provider. In addition, when a user 300 uses the bot 240, the user 300 is assigned an identity; for operations on behalf of the user, the bot 240 can then assume the identity of its user 300 as the user's agent (delegate). This is important for auditing purposes because, when running code, it is important to know under what Identity 242 it is running: the user (who has all rights to his/her data) or the agent (who has only limited access to that data). For example, a developer 315 going through the system's logs can see that someone activated an Event; the logs cannot tell who, but they do show what agent did it, and the agent can be queried to receive whatever information is publicly available. In an aspect, everything utilizing the framework 100 must have an identity 242. An AEI system 260, having at least one bot 240, may require both client/server 115, 180 or client/cloud 115, 250 components due to fault-tolerance or security restrictions. From an architectural perspective, the cloud 250 provides resilience and fault-tolerance by insulating the client from dependencies on a specific server 180.
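As an illustrative sketch only (the Identity and Bot classes and the audit_log structure below are hypothetical, not the framework's actual IDM API), the following Python fragment shows how a bot might be assigned an Identity 242 and assume its user's identity as a delegate, while the audit log records only the acting agent:

import uuid
from datetime import datetime, timezone

class Identity:
    """Minimal identity record as might be issued by an IDM (names are illustrative)."""
    def __init__(self, name: str, kind: str):
        self.id = str(uuid.uuid4())
        self.name = name
        self.kind = kind  # "user" or "bot"

audit_log = []

class Bot:
    def __init__(self, identity: Identity):
        self.identity = identity
        self.delegated_from = None
    def act_as(self, user_identity: Identity):
        """Assume the user's identity as the user's agent (delegate)."""
        self.delegated_from = user_identity
        return self
    def fire_event(self, event_name: str):
        # The log records which agent acted; who delegated is only queryable from the agent.
        audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "event": event_name,
            "agent_id": self.identity.id,
        })

alice = Identity("Alice", "user")
nurse_bot = Bot(Identity("nurse-bot", "bot")).act_as(alice)
nurse_bot.fire_event("test_administered")
print(audit_log)                       # shows the acting agent, not the user
print(nurse_bot.delegated_from.name)   # the agent can reveal publicly available info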
[0081] A client node 115/125 has the main responsibility of communicating with the human 300 or device 190 and is responsible for providing I/O processing. For example, the client node 115/125 can use the voice services 220 on an iPhone (using the virtual assistant, Siri) to deliver text to speech input/output. The user speaks and Siri converts the voice to text; conversely, when the system returns an output, Siri converts the text to voice. This is what is termed I/O processing. In some implementations, speech synthesis via a speech synthesizer is applied to emulate a person's voice when returning an output. The speech synthesizer service can include any speech to text service provider (e.g., Siri, Google Assistant, Sonos speaker, and the like).
[0082] The configuration of the virtual assistant/bot/application 262/240/105 is determined by desired outcomes and requirements that are specified by the programmer/developer 315 based upon scripts or directions provided by experts 310 associated with the bot 240. For example, in hospital settings, a developer will work with a doctor to implement a policy, with the doctor providing the policy and the developer 315 providing the coding for implementation. Other parties may also be involved: the patient has a role, as do the doctor, the hospital administrator, and the HIPAA compliance officer, so the developer 315 has to create the code that will make the bot 240 operate in accordance with all those specific roles, definitions, and desired outcomes.
[0083] In an aspect, an application 105 may embed the use of the framework API(s) 280 to invoke services 220 directly, in which case the application 105 is responsible for managing content such as session information, API keys, login credentials, etc. Some chipsets may use this mechanism to "attach" themselves to the services 220 or devices 190 using the framework 100 functionality. Alternatively, the framework 100 may provide client implementations for particular platforms, in which case client code may provide emulation or redirection functionality. For example, by implementing a simple file system interface, the client 115 can allow the user input 350 to be written to a remote file for asynchronous processing. The response is received when a response file is received, similar to an email system sending messages back and forth using files as the communication vehicle. By implementing such a system, users do not need to use actual coding language. The bots 240 can implement redirection when sending files back and forth, with the framework used to share files between users. The bots 240 can be trained by uploading files with sample dialog, with the files organized in a specific format that allows the bot 240 to "ingest" the file, allowing the bot 240, via the NLP 265, to be "upgraded" with new knowledge. For a more sophisticated application 105, context 130 can be maintained in real time sessions between nodes 125, with login and access controls being held securely in a session context 130. Some applications may utilize a hybrid system employing both asynchronous and real time capabilities as described above. In an aspect, context 130 can include an API key that is created and saved to allow a client node 115 to communicate with a server node in subsequent calls to the server node. Context 130 can be transient and volatile, or it can be persistent and have longer scope than just a single session.

[0084] The footprint of the node 125 may need to be minimal or behave differently from an "always on" network. In this "distributed" model, nodes 125 may communicate logic using a very light-weight agent 110, as opposed to a heavy-weight agent. A heavy-weight agent contains all sorts of necessary instruments to carry on its work; these could include user credentials, certificates, session keys, API keys, etc. A light-weight agent is distributed, i.e., it uses a very small amount of local memory to store a reference to a remote instance of the heavy-weight agent, which then acts as a proxy for the light-weight agent (client). FIG. 6D illustrates the relationship between the heavy-weight agent 130 and a light-weight agent 110. In this mode, the code run by the agent 110 is very minimal, and a very light-weight context 130 is maintained, with some remote node(s) 127 handling the heavy-weight parts of the context 130 on behalf of the agent 110. Distribution of logic between nodes 125 allows for extremely light-weight implementations on devices 190 that may have limited resources.
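The following Python sketch illustrates, under assumed class names (HeavyWeightAgent, LightWeightAgent) that are not part of the framework itself, how a light-weight agent on a constrained device might hold only a reference to a remote heavy-weight agent that carries the full context and performs the heavy processing on its behalf:

class HeavyWeightAgent:
    """Runs on a remote node 127 and holds the full context (credentials, session keys, etc.)."""
    def __init__(self):
        self.context = {"session_key": "s-123", "api_key": "k-456", "credentials": "cert"}
    def handle(self, request: str) -> str:
        # Full processing happens here, using the heavy context.
        return f"processed '{request}' with session {self.context['session_key']}"

class LightWeightAgent:
    """Runs on a constrained device 190 and stores only a reference to its remote proxy."""
    def __init__(self, remote: HeavyWeightAgent):
        self._remote = remote   # tiny local footprint: just a handle
    def handle(self, request: str) -> str:
        return self._remote.handle(request)   # delegate everything to the remote node

remote = HeavyWeightAgent()
device_agent = LightWeightAgent(remote)
print(device_agent.handle("schedule medication reminder"))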
[0085] Where a node 125 cannot meet the security constraints of a system 260, a remote node 127 can provide hardened proxy services 220 on behalf of the constrained node 125. For example, a customer is wearing an Apple watch, which is powerful enough to run apps, but has very limited resources and is not a hardened, secure hardware platform. The framework 100 can rely on another bot 240 or remote node 127 that does have a secure hardware platform. In this way, the framework 100 operates as a strong, secure and scalable distributed network router. When using remote nodes 127 to provide hardened proxy service, caching is used. Caches 135 can be manual, automatic, synchronized, loosely-coupled, etc. The framework 100 does not require a particular cache 135 implementation but may support different types of caching to address a number of known scenarios. For example, if a hospital requires that a device 190 provide a response to a patient's action within a prescribed amount of time, and the required service 220 using the framework 100 cannot offer a Quality of Service guarantee to meet the specified response time, then the assistant (in this case the developer 315 building the assistant 262) cannot assume that it will have access to remote nodes 127 at runtime. In this case, the framework 100 must be configured to provide sufficient resources to implement the "cached" API 280 and behaviors (e.g., locally accessible versions of the server, or at least local replicas that synchronize back with the server when connectivity is available) expected by the application 260. In another example, the hospital may require that the cache 135 be refreshed at specific intervals, or when the device 190 is in a particular state. Normal caching protocols and known algorithms may be used to satisfy the policies of the installation. In short, the framework 100 provides installation options that support the multiple caching modes.
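One minimal sketch of such a local replica, under assumed names (ReplicaCache, remote_fetch) and without prescribing any particular framework cache 135 implementation, might queue writes while offline and refresh from the server at configured intervals:

import time

class ReplicaCache:
    """Loosely-coupled local replica of a remote service; serves cached answers when
    offline and synchronizes back with the server when connectivity returns
    (illustrative only; not a prescribed framework cache implementation)."""
    def __init__(self, remote_fetch, refresh_seconds=300):
        self.remote_fetch = remote_fetch
        self.refresh_seconds = refresh_seconds
        self.store = {}
        self.last_refresh = 0.0
        self.pending_writes = []
    def read(self, key, online: bool):
        if online and (time.time() - self.last_refresh) > self.refresh_seconds:
            self.refresh()
        return self.store.get(key)
    def write(self, key, value, online: bool):
        self.store[key] = value
        if online:
            self.push()
        else:
            self.pending_writes.append((key, value))   # synchronize later
    def refresh(self):
        self.store.update(self.remote_fetch())
        self.last_refresh = time.time()
    def push(self):
        # In a real deployment this would send pending_writes back to the server.
        self.pending_writes.clear()

cache = ReplicaCache(remote_fetch=lambda: {"protocol_version": "2.1"})
cache.write("patient_response", "passed", online=False)   # device temporarily offline
print(cache.read("protocol_version", online=True))        # refreshed from the server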
[0086] The framework 100 may be combined with analytics 275 to provide an AEI system 260 capable of guiding a user 300 through information pertaining to Emotional Intelligence. In an aspect, the framework 100 is used to write assistants that help a person understand themselves and their relationships better, and through that effort, by capturing large samples of this data across many different types of people, build large computer models that are based on the emotional models mentioned previously. In an aspect, the framework 100 can utilize classification of sentiment into negative and positive emotion; once the natural language analysis, via a NLP 265, can accurately identify positive and negative language, the framework 100 can classify the emotions into distinct emotions using currently accepted emotional models. In an aspect, the particular model used is less important than testing for accuracy and selecting the model found to be the most accurate most of the time; once the models are accurate, the framework 100 can move beyond analyzing text to analyzing other things: facial expressions, body language, behaviors, and even correlating emotion to biofeedback 410 from wearables 400.
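A toy, lexicon-based Python sketch of this two-stage idea (polarity first, then a distinct emotion from an assumed keyword mapping) is shown below; the word lists and the joy/sadness/anger/fear labels are illustrative stand-ins, not the framework's actual models:

POSITIVE = {"happy", "wonderful", "love", "great", "calm"}
NEGATIVE = {"sad", "angry", "terrible", "afraid", "lonely"}

# Toy second stage: map polarity onto distinct emotions from a chosen emotional model.
EMOTION_KEYWORDS = {
    "joy": {"happy", "wonderful", "love", "great"},
    "sadness": {"sad", "lonely"},
    "anger": {"angry", "terrible"},
    "fear": {"afraid"},
}

def classify(text: str):
    words = set(text.lower().split())
    polarity = "positive" if len(words & POSITIVE) >= len(words & NEGATIVE) else "negative"
    scores = {emotion: len(words & keys) for emotion, keys in EMOTION_KEYWORDS.items()}
    emotion = max(scores, key=scores.get) if any(scores.values()) else "neutral"
    return polarity, emotion

print(classify("I feel sad and lonely today"))   # ('negative', 'sadness')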
[0087] In an aspect, the framework 100 serves as a substrate that promotes the development of applications 105 that interact in a human-like way, such as being able to convey empathy or sympathy in what normally would have been robot-like interactions. The framework 100 also serves as a substrate that facilitates the use of natural language expressions by AEI systems/applications and virtual assistants 260, 105, 262. Additionally, this capability may be combined with speech synthesizing technology to emulate a person's voice. For example, a speaking virtual assistant 262 for administering care to a person with Alzheimer's disease or dementia may combine biological information 410 obtained using wearables 400 and administer tests while communicating empathetically with the user in the voice of a loved one. Additionally, a system 260 like the Aroma Shooter may be integrated with a virtual assistant 262 to recreate a person's natural scent or provide alternative calming/soothing scents depending on the assistant's 262 assessment of the user's 300 state of mind.
A. Case Study
[0088] Referring to FIG. 8, in a traditional application 500, a main screen 520 drives the interaction with the rest of the application through the use of menus, selections and screens. In contrast, a bot 240 uses a root dialogue 245 to drive interactions. Even where a traditional application 500 and bot application 240 provide access to the same information, the bot 240 delivers its content 560 through natural dialogue with the person, rather than providing a navigation system on the screen.
[0089] The main interaction of the framework 100 takes place in the form of a user interface/dialogue screen 540 between the virtual assistant 262 and the human/user 300, through which conversation occurs as a natural, human language interaction. The virtual assistant 262 speaks or sends messages to the human/user 300 as naturally as a human assistant sends messages to his boss, or her colleague.
[0090] To deploy this solution, the framework 100 provides sample user interfaces 540. This may be a "chat" or "messenger" interface to interact with a patient, also commonly referred to as a conversational or chatbot interface. FIG. 9 shows an example framework root dialogue screen 245 on a smart phone 190 (not shown). In the top banner area, typically found in the navigation areas of computer applications, are the screen name (Home 545), a link to access additional options (three vertical dots 547) and a settings icon (gear 549). As shown, a bot 240 can include many of the same GUIs 620 and functionality as a traditional application. However, the big difference is that the bot 240 is dedicated to the conversational AI, and the user to the "chat."
[0091] In the following case study, the framework 100 is used to provide a virtual assistant 262 for the treatment of delirium. In other embodiments, various other disorders and diseases can be monitored and treated. Mistakes made in the treatment of delirium cost medical facilities billions of dollars in claims attributed to falls, involuntary bowel movements, broken bones, soiled and infected beds and equipment, premature release of patients with reduced cognitive abilities, etc. The protocols at most hospitals require that a very simple test be administered to patients a certain number of times per day; however, tests are often not administered. Human nurses may forget, choose to rely on their own observations, skip testing altogether, or may be embarrassed/feel badly for the patient.
[0092] An example virtual assistant 262 wakes up every so often to administer the prescribed test. As discussed above, the test can be determined from doctors, implemented into the framework 100 by developers 315, and fine-tuned through AI. If the patient passes, this is entered into the patient's records; if not, it is also entered and hospital staff is alerted. Over time, the virtual assistant also looks for patterns. For example, how long after a medication is administered does the failure happen? Could it be a drug interaction? Who is around the patient? Where is the patient? These are easy and unobtrusive observations that provide tremendous insight and have a huge, positive impact on solving a problem that is really caused by human error. Having an automatic, friendly, empathic bot 240 wake up and interact with a patient, administer tests, offer access to menu choices, entertainment and even just simple chitchat guided by psychology experts (i.e., experts provide the instructions or things to test) to provide positive conversation and reinforcement provides a tremendous improvement at very little cost to the medical facility. For example, a doctor would provide a dialogue script which includes dialogue for the bot and the expected responses from the patients. Based upon the responses, the bot 240 takes the prescribed action. NLP 265 can handle a large number of linguistic variations in a response, so a script can still be used if a patient's response isn't exact. The bot 240 can continuously learn based upon the NLP input, and remember specific things about a patient over the course of the patient's treatment.
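A minimal Python sketch of this flow, under assumed names (administer_test, alert_staff, matches_expected) and using simple fuzzy matching in place of the NLP 265, might look like the following; it is illustrative only and not a prescribed hospital protocol:

import difflib

QUESTION = "Can you tell me what day of the week it is?"   # simple orientation test item

def matches_expected(answer: str, expected: list, threshold: float = 0.8) -> bool:
    """Tolerate linguistic variation by fuzzy-matching each word of the answer."""
    for word in answer.lower().split():
        if difflib.get_close_matches(word, expected, n=1, cutoff=threshold):
            return True
    return False

def alert_staff(patient_name: str):
    print(f"ALERT: {patient_name} failed the orientation test; notify hospital staff.")

def administer_test(patient_record: dict, answer: str, actual_day: str) -> bool:
    """Record the result in the patient's record and alert staff on a failed test."""
    passed = matches_expected(answer, [actual_day.lower()])
    patient_record.setdefault("results", []).append(
        {"question": QUESTION, "answer": answer, "passed": passed})
    if not passed:
        alert_staff(patient_record["name"])
    return passed

record = {"name": "Mrs. Jones"}
administer_test(record, "I think it's Wedensday", actual_day="Wednesday")  # misspelling still matches
administer_test(record, "It is July", actual_day="Wednesday")              # fails, staff alerted
print(record["results"])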
[0093] The framework 100 can include a programming application 600 with tools for training and learning which can be described as an AEI studio 600. The AEI studio 600 can be used to quickly and easily generate an interactive virtual assistant 262 with programmable icons and titles. As shown in FIG. 9, a bot interface 540 can be as rich and textured as a traditional application 500 user interface. The bot interface 540 in FIG. 9 features an icon/picture representing the virtual assistant 262, text bubbles 535 through which the virtual assistant 262 communicates, and a data entry dialogue window 550 to allow the user 300 to communicate back. The conversational interface 540 provides a scrolling window/page where gestures are easily applied to navigation.
[0094] In an aspect, the bot 240 is configured to interact in a more conversational setting, using the applications and components discussed above. For example, an assistant 262 is created to manage an individual's time and tasks, and utilizes a conversational approach to do so. The bot 240 may reside on the user's computer or smart device (i.e., iPhone, iPad, etc.). When the bot 240 "sees" the user typing in a date and time, the bot 240 might ask the user if the user needs to schedule something for that date and time in their calendar, or create a task. This is a differentiating factor because people react differently to conversations. With the growing popularity of text/SMS messages, and of Facebook, WhatsApp, or similar messaging applications, people are now accustomed to these asynchronous narrative (conversational) interfaces, and having an automaton interact with a human who can't really tell it's an automaton, because its linguistic capabilities are so close to a human's, makes the bots more welcomed by the user.
[0095] In other words, the method of interacting with the user generates a social reaction that could not have happened from a program with a blank page where the user could write anything they wanted to. Instead, simple questions like "how do you feel?" are provided, with replies similar to "I'm sorry you don't feel well, what do you think you could do to feel better?" or "why do you think that is?" These simple questions elicit deep thinking and responses from the users that further assist the AEI framework 100 in developing Emotional Intelligence.
[0096] In this example, the virtual assistant 262 can provide faces that represent the user's mood. The user 300 is able to select a face for whichever mood matches his/her current state, or can simply write anything in response to the "how do you feel" question being posed by the virtual assistant 262. The virtual assistant 262 is provided with many possible responses to questions. The system applies context 130 to the query (much like a human does). The context 130 is provided by the data already known and collected by the virtual assistant 262. For example, previous answers provided during psychological evaluation and even the person's profile can be used to provide the context 130. For example, the virtual assistant 262 can know if it is the user's birthday, or based upon previous answers, whether the user is an introvert or extrovert, and adjust accordingly (e.g., not asking as many questions if the user is an introvert).
[0097] To assist with making the virtual assistant 262 as effective as possible, the virtual assistant 262 is given a score whenever it receives an appropriate response to a question. When a response is processed, rather than looking for an exact match, the response is given a score and a weight based on internal algorithms for natural language processing. This all takes place inside the Natural Language Processor 265, so when the NLP 265 returns a number to the bot 240, the bot 240 knows, for example, that the response returned by the natural language system (the script analyzer) is 95% accurate. At this point developers decide if the 5% margin of error is enough to say "I don't know the answer to that", "can you repeat the question?", or "did you mean to ask ....". For example, hello, hi, howdy, what's up, wassup, yo, etc. are all variants of salutations. As the virtual assistant/bot 262, 240 converses more and more, and the responses receive higher and higher weights by receiving appropriate responses, it becomes more confident and needs less training. In other words, when the AI detects a correct interaction between the bot 240 and the human 300, and as it detects fewer and fewer incorrect interactions, where the human no longer has to keep asking a question or the bot 240 doesn't keep saying "I don't know that", then the particular interaction (question/response) is considered well-trained, and reduces the need for further input. From time to time, the NLP engine is searched for interactions that are either unused or need more input. This allows the developers 315 to also become more confident that the AI is able to interact with people accurately.
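The thresholding described above can be illustrated with the following Python sketch, in which a toy similarity score stands in for the NLP 265 confidence value and the 95% cutoff and fallback phrases are assumed values chosen by a developer:

import difflib

TRAINED_RESPONSES = {
    "hello": "Hi there! How are you feeling today?",
    "what is the weather now": "Let me check the current weather for you.",
}

FALLBACKS = [
    "I don't know the answer to that.",
    "Can you repeat the question?",
]

def score_input(user_text: str):
    """Return the best-known phrase and a 0-1 confidence score (stand-in for the NLP 265)."""
    best, best_score = None, 0.0
    for phrase in TRAINED_RESPONSES:
        score = difflib.SequenceMatcher(None, user_text.lower(), phrase).ratio()
        if score > best_score:
            best, best_score = phrase, score
    return best, best_score

def respond(user_text: str, threshold: float = 0.95):
    phrase, confidence = score_input(user_text)
    if confidence >= threshold:     # e.g., developers accept a 5% margin of error
        return TRAINED_RESPONSES[phrase]
    return FALLBACKS[0] if confidence < 0.5 else FALLBACKS[1]

print(respond("what is the weather now"))   # high confidence, trained answer
print(respond("wassup"))                     # low confidence, fallback reply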
[0098] FIG. 10 shows another exemplary conversational AI application 105 created using the framework 100 that functions as a personal content curator. The conversational screen 540 can be effectively used to deliver complete web page abstracts, articles, and multi-media content 560. The user 300 uses icons to indicate when they like or dislike a piece of content 560. The user inputs are used to yield improved content selection. The AI application 105 is able to capture specific emotions, such as whether the content makes the user sad, happy or angry, etc. In an aspect, the specific emotions are captured based upon the response of the user, that is, the input provided by the user to the AI application 260. For example, as a user reads articles, and as they identify articles they like, the AI application 260 builds a better understanding of the kinds of articles the user likes. It will ask, if the user read article a and article b, what are the main differences? If it then looks at article c, and the tone of the article is more like article a, then it will expect a high score from the user, and if it gets it, then it starts trying to find more articles like those already identified. Unlike traditional programming, where the computer was looking only for an exact match or a failure, the AI application 105 is able to say "good enough match".
[0099] When a user 300 responds to a query, if a response is not recognized, the AI application 105 records the response and immediately or later asks for enough information to determine whether this is a new mood, or part of an existing one. The framework 100 also provides the ability for the virtual assistant 262 to ask an expert 310 who can further refine the dialogue 210. When a response is logged as not recognized, the response will be sent to an appropriate expert associated with the development team who provided the scripts/parameters/dialogue of the treatment/program originally. This can be sent via another virtual assistant 262 or other traditional means (e.g., email, text, etc.). Upon receipt, the expert can provide new dialogue, or programming, to answer the question posed.

[00100] An AEI training bot 240 may be designed and implemented using the AEI studio 600. FIG. 11 shows a chat bot 240 implemented using the popular messaging application Telegram. In other embodiments, other messaging applications can be utilized. Using Telegram, it is very easy to build an AEI training bot 240 where any human 300 wishing to have a training session with the bot 240 can provide conversational inputs 350 within Telegram. Initially, the human 300 may provide both sides of a conversation, and the bot 240 will register these inputs 350 as being the original samples and request more examples from human participants 300. Next, the bot 240 solicits additional samples from human participants via Telegram who are willing to participate. Once the initial dialogue is completed, the bot 240 will ask the person 300 if they want to help provide more samples. If the person 300 agrees, the bot 240 may say "Hi John, I recently learned this phrase: 'today is a wonderful day'; here are the current sample responses I currently have: 1. 'It's an awesome day', 'I love this day'; can you provide me with more examples please? One complete phrase per response please." In other words, the bot 240 is using the existing scripts/intent 640 and other entities 650 to ask for alternative ways to say the same thing. For example, when the phrase is initially entered, the intent 640 will probably be called "a good day", that being what is intended to be communicated; the entity 650 is "day", and the entity 650 is found in all the phrases, so the NLP engine 265 will assume that "great", "I love" and "awesome" are somewhat akin to "good" when used with the entity "day". In this way, the bot 240 is able to find humans 300 on an existing messaging platform to teach it how to respond to natural human conversation. The bot 240 may engage with individuals outside the Telegram application and may engage with other bots 240. Bots 240 may converse through other existing messaging platforms such as iMessage, FaceBook Messenger, Slack and the like. Presently, chat bots 240 are limited to their function as bots 240. The framework 100 allows the underlying AEI system 260 to interact (as a bot 240) across different systems and platforms with humans 300, software, hardware, and other bots.
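A small Python sketch of this solicitation loop is given below; the training_data dictionary and the solicit_samples and register_sample functions are hypothetical stand-ins (no actual Telegram API is used), intended only to show how new sample phrases containing the intent's entity might be gathered and stored:

training_data = {
    "a good day": {                     # intent 640
        "entity": "day",                # entity 650
        "samples": ["today is a wonderful day", "It's an awesome day", "I love this day"],
    }
}

def solicit_samples(intent: str, participant_name: str) -> str:
    """Build the prompt the training bot would send to a willing human participant 300."""
    entry = training_data[intent]
    known = ", ".join(f'"{s}"' for s in entry["samples"][1:])
    return (f"Hi {participant_name}, I recently learned this phrase: "
            f'"{entry["samples"][0]}". Here are the sample responses I currently have: '
            f"{known}. Can you provide more examples please? One complete phrase per response.")

def register_sample(intent: str, new_phrase: str) -> bool:
    """Accept a new sample only if it contains the intent's entity, then store it."""
    entry = training_data[intent]
    if entry["entity"] in new_phrase.lower():
        entry["samples"].append(new_phrase)
        return True
    return False

print(solicit_samples("a good day", "John"))
print(register_sample("a good day", "What a great day"))   # True, stored as a new sample
print(register_sample("a good day", "I am hungry"))        # False, entity "day" not present
print(training_data["a good day"]["samples"])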
Conversational AEI Architecture / Dialogue Technology
[00101] Within the framework 100, chatbots 240 (also referred to as Artificial Conversation Entities) use an automated dialogue system/technology to communicate in a conversational style that mimics natural human language. FIG. 12 illustrates different elements that may comprise Conversational Artificial Intelligence 297 including chatbots 240, intelligent dialogue, speech recognition, speech synthesis, dialogue management, information extraction and natural language understanding.
[00102] The AEI systems 260 use one or more root dialogue systems 245 to interact, provide services 220 and learn from entities (humans 300, software, other bots 240, etc.). The dialogue system 245 may be domain specific, meaning it relates to a specific field/art (medical, financial, technical, etc.). In an aspect, the dialogue system 245 manages the various root dialogues in which a user may engage. For example, a user may start with a telemed root dialogue that then needs access to a pharmacy dialogue after being completed. The dialogue system 245 manages the connections and interactions between the different dialogues. The dialogue system 245 is independent of the input/output processing system 340 and the AEI system/virtual assistant 260, 262 (See FIG. 15). Dialogue is pre-programmed, meaning that an AEI system 260 has to be trained on what to say and how to respond. The assistant 262 may also be capable of self-training dialogue as discussed above (e.g., ask questions of the user or refer to an expert).
[00103] Additionally, particularly for mission-critical assistants such as medical assistants, local dialogue sets may be provided within the programming code of the AEI system 260. In order for the AEI system 260 to work, there has to be at least one sample dialogue present for the system to start learning. The framework 100 may utilize a replica design pattern (i.e., a local copy synchronized with a master copy somewhere on at least one known source location, which can be set up by the system administrator). In such aspects, the framework 100 has the ability to use NLP 265 at run time. However, this functionality requires that the device 190 have sufficient storage and processing power. Alternatively, the dialogue system 245 utilizes a store-and-forward dialogue design which is asynchronous (similar to email). Dialogue sets 210 containing offline conversational elements may be downloaded periodically from servers/the cloud 180, 250 when the functionality can be supported (for example, internet access, sufficient storage or processing power). The dialogue sets 210 can be added via subscription services to provide updated events, or administrators can supply them. The dialogue system 245 utilizes an API 280 that abstracts the location of the dialogue files 210, and exposes a mechanism for storing new dialogue 210 for programmatic use. In an aspect, when the system goes offline and then becomes re-connected, the stored dialogues 210 are uploaded, and the updated dialogues 210 re-downloaded. The architecture 120 therefore allows for real-time NLP and training of terms for dialogue 210, serving as a "dialogue cache"; a minimal sketch of such a cache is provided below.

[00104] FIG. 13 shows how the dialogue system 245 uses a conversation sub-system (e.g., NLP) that includes appropriate responses to expected interactions. When an unexpected interaction occurs, the conversation sub-system hands the interaction to the "unknown" system, which attempts to resolve the interaction. The system 260 is continually learning, and it is possible that a new trained response has been entered but not yet updated through the network of nodes 125 on the framework 100. If the interaction is resolved, then the conversation system adds the interaction to the known interactions and is then able to respond appropriately. If the interaction is not resolved, then the NLP system 265 may be invoked, which can teach the new terms and responses to the dialogue system 245, or request help from a human operator 300. For example, if the NLP system 265 is able to get enough information back from a user, or whatever source, to be able to classify the interaction and break it down into intent 640, entities 650, and associated parameters/data 645, the NLP system 265 via the dialogue system 245 may create a new entity 650 and resubmit it as an addition to the root dialogue 245 (see FIG. 17B). Here, the NLP system 265 would add another "then" statement or perhaps even another condition, depending on whether it is modifying an intent or a story. If the NLP system 265 is not able to gather enough information to create a new addition, the dialogue system 245 can then request help from an expert 310. In either case, the system 260 can either update the running system, or it can add a new dialogue root with the new interactions to the known dialogue 210.
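For illustration only, the DialogueCache class below (a hypothetical name, with upload/download callables standing in for the server/cloud 180, 250 APIs) sketches the store-and-forward behavior: new dialogue 210 is stored locally while offline and exchanged with the master copy on reconnect:

import json, os, tempfile

class DialogueCache:
    """Store-and-forward dialogue sets 210: new dialogue is stored locally while offline
    and exchanged with the server/cloud when connectivity returns (illustrative sketch)."""
    def __init__(self, path: str):
        self.path = path
        self.local = self._load()
        self.pending_upload = []
    def _load(self):
        if os.path.exists(self.path):
            with open(self.path) as f:
                return json.load(f)
        return {}
    def store_new_dialogue(self, intent: str, responses: list):
        self.local[intent] = responses
        self.pending_upload.append(intent)
        with open(self.path, "w") as f:
            json.dump(self.local, f)
    def on_reconnect(self, upload, download):
        for intent in self.pending_upload:          # upload stored dialogues
            upload(intent, self.local[intent])
        self.pending_upload.clear()
        self.local.update(download())               # re-download updated dialogues

cache = DialogueCache(os.path.join(tempfile.gettempdir(), "dialogues.json"))
cache.store_new_dialogue("greet", ["Hello!", "Hi, how can I help?"])
cache.on_reconnect(upload=lambda intent, responses: print("uploaded", intent),
                   download=lambda: {"farewell": ["Goodbye!"]})
print(sorted(cache.local))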
[00105] In the case where the learning system needs to incorporate human learning, it’s possible to have synchronous (i.e., live and immediate conversation) and/or asynchronous communication (conversation that happens piecemeal when someone responds - e.g., email) with a human 300. The human 300 can teach the system 260 simply by chatting with it, through text messaging or another mechanism such as email, or voice message. The formats of these messages may be prescriptive to allow the AEI system 260 to unequivocally understand new terms and what possible interactions might be required.
[00106] Interactions may be either simple or complex. Simple interactions are fairly prescriptive in that a specific term prescribes specific answers. More complex interactions may require the understanding or knowledge of frame of reference. Frame of reference has to do with the situation in which the conversation is taking place. For example, if a parent tells a child, "I love you", then the child will most certainly want to say, "I love you too." In this case, where the frame of reference is well-known, the interaction is simple. However, if a man tells a woman "I love you," more information is needed to provide the appropriate response. Therefore, the agent/bot 110/240 has to be able to determine whether interactions are simple or complex. As discussed above, the system 260 over time is taught via the scripts, NLP 265, and experts 310, as to whether the interactions are simple or complex. As the AEI system 260 builds more knowledge of human relationships, the nature of the interactions can be more natural as the AEI System 260 learns more about a person. As with human to human communications, the more one knows a person, the more one can rely on that knowledge to provide frame of reference, which includes implicit information about interactions.
[00107] The framework 100 supports relationship identification and development of relationship matrices/groups. A person's relationship matrix is critical for Emotional Intelligence. Understanding how a person feels about others, and others about them, is incredibly useful in helping a person understand themselves and helping them build better relationships. Matrices can be built based upon self-identification from profile set-ups, as well as requests between users to establish a recognized relationship within the system. When a relationship matrix is available, the dialogue system uses the relationship parameters (e.g., what type of relationship does the user have with others: professional, emotional, familial, etc.) to influence the terms used in the interactions. In natural language, there are often many different synonyms and similar expressions that can create much more efficient dialogue. In other situations, for example, medical situations, where a patient expects a more professional approach, the range of terms and expressions would be much narrower than in personal situations.
Conversation
[00108] Conversations/interactions may be scripted (as discussed above) or unscripted. Unscripted conversations can be generated from social conversations between a user and a bot 240. In either case the AEI system 260 uses trained conversation guides to direct the flow of an interaction. The system 260 can be programmed to use a single conversation guide, or multiple guides. When using multiple guides, the AEI system 260 may use one at a time (exclusive), or it may use many guides at once. For example, if the bot 240 allows social conversations (e.g., idle chitchat), chitchat may only be allowed when the other parts of the bot 240 are not active. For instance, during a doctor-patient interaction, the chitchat bot will remain disabled until the official consultation is over. The root dialogue 245 can take care of enabling or disabling chitchat at the prescribed moment. This again illustrates the power of using an event system: when a patient starts a consultation, the event "consultation started" is fired, and the root dialogue 245 can subscribe to this event and disable chitchat; when the event "consultation ended" is fired, the root dialogue 245 re-enables chitchat. Conversation guides may include events and timers with the ability to allow the assistant 262 to create and deliver reminders. Time triggers may initiate conversations and may include additional functions such as prescription reminders and the like or, as mentioned in the example, to enable or disable functions.
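This event-driven enabling and disabling can be sketched in a few lines of Python; the EventSystem and RootDialogue classes below are hypothetical illustrations, not the framework's actual event system 290 API:

class EventSystem:
    """Tiny publish/subscribe event system (names illustrative only)."""
    def __init__(self):
        self._subscribers = {}
    def subscribe(self, event_name: str, handler):
        self._subscribers.setdefault(event_name, []).append(handler)
    def fire(self, event_name: str):
        for handler in self._subscribers.get(event_name, []):
            handler()

class RootDialogue:
    def __init__(self, events: EventSystem):
        self.chitchat_enabled = True
        events.subscribe("consultation started", self.disable_chitchat)
        events.subscribe("consultation ended", self.enable_chitchat)
    def disable_chitchat(self):
        self.chitchat_enabled = False
    def enable_chitchat(self):
        self.chitchat_enabled = True

events = EventSystem()
root = RootDialogue(events)
events.fire("consultation started")
print(root.chitchat_enabled)   # False during the official consultation
events.fire("consultation ended")
print(root.chitchat_enabled)   # True again once the consultation ends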
Virtual Medical Assistant Example
[00109] As shown in FIG. 14, a virtual assistant 262 manages prescriptions on behalf of a medical practitioner. A doctor may submit a prescription to the AEI system/virtual assistant 262 indicating that a patient take youngtal (fictitious drug name) 3 times weekly (morning, noon and night). The virtual assistant 262 is also programmed to verify the prescription for any interactions with other medications currently being taken by the patient. In such aspects, the virtual assistant 262 can call upon a prescription verification bot 240 that is in communication with or is part of the virtual assistant 262 to verify the prescriptions, as well as call upon a drug interaction service to identify potential issues between the various prescriptions assigned to the user. Since a patient can have new prescriptions at any time, the virtual assistant 262 must first check if the patient has received any new prescribed medications since the previous use and, if so, re-verify known interactions. If there is a known interaction, the virtual assistant 262 will inform the doctor and hold the prescription until the doctor confirms the prescription. The doctor has the ability to cancel or approve the prescription. A doctor may require warnings and additional information for the virtual assistant 262 to communicate to the patient, for example, "Mrs. Jones, Doctor Gomez recommends that you take this medication with food." Further, when setting up the reminders, the assistant 262 will remind the patient that they should have had some food prior to taking their medication, and will ask the patient if they want to be reminded to eat 15 or 20 minutes prior to the medication. The virtual assistant 262 can set this up based upon the doctor's recommendation of eating food with the drug, and knowledge of the hospital's timeline for food orders. At the prescribed time the assistant 262 will prompt the patient to take his/her medication, and may provide additional information and assistance. For instance, the assistant 262 may inquire about the patient's emotional/physical state. The assistant 262 may make enquiries in response to noted changes in the patient's biological state based on inputs from wearable devices. The assistant 262 may schedule appointments for the patient, send messages to the doctor on the patient's behalf, set up a call to fulfill a prescription, use psychologist prescribed tests to help the patient detect emotional stress and so on. The assistant 262 may look for signs of depression in the patient's responses or detect changes in altitude that indicate a fall.
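A simplified Python sketch of the verification-and-hold logic is shown below; the KNOWN_INTERACTIONS set, the fictitious drug names, and the verify_prescription function are all hypothetical and stand in for the prescription verification bot 240 and drug interaction service described above:

KNOWN_INTERACTIONS = {frozenset({"youngtal", "oldazol"})}   # fictitious interacting drug pair

def verify_prescription(new_drug: str, current_drugs: set, notify_doctor, notify_patient):
    """Hold a new prescription when a known interaction is found; otherwise schedule reminders."""
    for drug in current_drugs:
        if frozenset({new_drug, drug}) in KNOWN_INTERACTIONS:
            notify_doctor(f"Known interaction between {new_drug} and {drug}; prescription held.")
            return "held"
    notify_patient(f"Reminder set: take {new_drug} as prescribed, with food if recommended.")
    return "scheduled"

status = verify_prescription(
    "youngtal",
    current_drugs={"oldazol"},
    notify_doctor=print,
    notify_patient=print,
)
print(status)   # "held" until the doctor cancels or approves the prescription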
[00110] In this example, a doctor's prescribed dialogue 210 and a psychologist's dialogue 210 are used to guide the interactions between the assistant 262 and the patient. The assistant 262 can be used for tracking emotional and physical states across time, and provide daily, weekly, and monthly reports (or more). In the case of medications, if the assistant 262 records something other than expected, the information will be forwarded to the doctor(s) to assist in research and development of more efficient medicines with more continuous updates. Presently, it is very difficult for doctors and care providers to get feedback on a patient's well-being and health unless it is initiated by the patient. Using the framework 100, an AEI system 260 with its virtual assistant 262 can keep track of a patient's temperature, moods, heart rate, breath, and other biometric measurements 410 easily and inexpensively.
[00111] Referring to FIG. 15, conversation developers 315 design guided conversations using the runtime coach. The AEI system 260 may adapt terms and responses. For prescriptive scripts, particularly for professional applications where miscommunication can be dangerous for the user, a professional (doctor, engineer, lawyer, etc.) may design prescriptive scripts to be followed faithfully by the assistant. Developers 315 may seek input from experts 310 and supplement the scripts with programming that can take advantage of any service 220. The experts 310 may prescribe the dialogue 210 and the actions to be taken, and developers 315 can then create programs that carry out those actions, using online emulators. A professional may perform testing to check whether the AEI system 260 is behaving as expected, or whether changes are needed. This not only solves many end-user problems, but it brings together the subject matter experts with the technology experts in an environment rich with content of all kinds: programs, AIs, videos, pictures, music, web services 220, etc. This is accomplished rather easily using the framework 100, as the programmer has a variety of tools available, for example, the webhook. Using a webhook (i.e., a remote API endpoint), a programmer can invoke any service available on the internet, including new services 220 that the developer might write. The MSA 120 makes the possibilities endless. At run-time, because an AEI system 260 might not be able to understand every possible response, there is a feedback mechanism that allows the assistant 262 to interact with the trainer/coach when the bot 240 and the person might be having a hard time understanding or following a conversation.
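By way of illustration only, a webhook call from a skill might look like the following Python sketch using only the standard library; the endpoint URL and the payload fields are placeholders, not an actual framework or third-party API:

import json
import urllib.request

def call_webhook(url: str, intent: str, entities: dict) -> dict:
    """POST the recognized intent and entities to a remote API endpoint (webhook).
    The URL passed in is a placeholder; any internet-reachable service could be invoked."""
    payload = json.dumps({"intent": intent, "entities": entities}).encode("utf-8")
    request = urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.loads(response.read().decode("utf-8"))

# Example usage (hypothetical endpoint, commented out so the sketch runs without a network):
# result = call_webhook("https://example.com/hooks/weather",
#                       intent="ask_weather", entities={"city": "Seoul"})
# print(result)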
[00112] In addition to programming versatility, a virtual assistant 262 in a mobile or wearable device 400 provides the ability to use biological profile information 410 in real-time that can benefit the person 300 directly, and also helps communicators to personalize conversations for targeted audiences/persons. In contrast with existing AEI systems 260 that provide a single AEI System 260 to interact with all users (Alexa, Google Home and Siri), the framework 100 facilitates the design of highly individualized/personalized AEI systems 260 and virtual assistants 262. By allowing the targeting of scripts, the assistants 262 may provide different language sets for children, teens, young adults, adults, the elderly, etc., thus helping narrow linguistic gaps between the age groups. An AEI system 260 may also employ emotional intelligence to communicate effectively with individuals of different backgrounds, be trained to respond to different biases (using different languages, understanding cultural references, etc.), and allow human users 300 to interface with and supplement the capabilities of the AEI system 260.
[00113] As shown in FIG. 15, the I/O processor/system 340 is separate from the assistant 262. This allows for the use of different mechanisms to interact with a person 300 or entities. For example, iOS users may use Siri to interact with the assistant 262 or other AEI systems 260. The framework 100 may be embedded in a robot/robotic device. The system nodes 125 use APIs 280 to interact with each other and converse. In summary, the framework 100 is able to accept/receive and process a wide variety of inputs 350 and facilitates the availability of the tools to help accelerate the expansion of conversational AEI systems 260.
AEI Studio
[00114] The AEI studio 600 is an interactive application that can significantly reduce the work and complexity of creating scripts. Though the framework 100 can be accessed through APIs 280, as more people create skills 200 (i.e., the things a bot/assistant can do - the programmatic equivalents of functions or modules in programming) for bots 240 and applications 260, 262, the skills 200 are made available through AEI studio 600 and can be reused in new applications 105, systems 260 and bots 240. The skills 200 can be modified by adding different variables. Users utilize a GUI 620 as shown in FIG. 16 to build skills 200 in an organized fashion. In order to build a skill 200, Natural Language Understanding is necessary as a skill does what it does by listening to instructions as the human interacts with the dialogues (usually the root dialogue 245). AEI systems 260 process inputs by dividing them into various components including intents 640 (what is the person trying to accomplish), entities 650 (what are the objects of the intent 640) and triggers 670 (what are the events that can cause this skill 200 to be activated and operate). The language trainer breaks out phrases into intents 640, entities 650, triggers, etc. for the AEI studio in the different subscreens of the skills builder. The AEI studio 600 provides a way to teach the language analyzer 265 how to recognize when a human 300 intends to ask for the weather as shown in FIG. 17A and 17B.
[00115] For example, when asking the AEI system 260 for weather information, the user 300 may include information like date and time, geo location, locality or region (entities). The AEI system 260 is provided with examples for each entity 650. To distinguish between entities 650, intents 640, and the like, the different components can be color coded. By providing the information in this way, a developer 315 does not need sophisticated computer programming abilities to utilize the studio 600 and create skills 200. AEI Studio 600 provides access to memory slots/variables so that skills 200 can retain responses, inputs, and hold information that can be transferred to other skills 200.
[00116] For example, the phrase "Hello John" comprises an intent 640 ("hello") and an entity ("John"). The entity 650 would be saved in a memory slot called name, so throughout the bot, any time the name is to be displayed, the slot assigned to name is simply used, and "John" will display on the screen. The designer/developer 315 can enter sample appropriate responses with variations, much as it uses samples for inputs, to form a dialogue 210, also known as a story 660 in the AEI Studio 600 environment.
[00117] A story 660 comprises the identification of at least one intent 640, possible inputs, and an impetus to take an action. The skills comprise intents 640. For example, where "ask weather now" is programmed, the intent 640 is identified as asking for the weather now, which is part of the ask weather skill. The story 660 then invokes the weather service through a trigger 670; that is, when a user states or enters "what is the weather now", the root dialogue recognizes this and initiates the skill 200 of providing the weather. In the framework 100, the weather service 200 will wait for a request for the weather. The AEI system 260 will then pull inputs from slots to find out whether it should return default weather information (here, now) or some other variant. This skill 200 can be included by designers/developers 315 of a different application 105, system 260 or bot 240 instead of being reprogrammed.
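For illustration, a skill definition of this kind could be represented with a simple data structure such as the Python sketch below; the dictionary layout, the sample phrases, and the memory_slots mapping are assumptions made for this example and do not reflect AEI Studio 600's actual file format:

ask_weather_skill = {
    "skill": "ask weather",
    "intents": {
        "ask weather now": {
            "samples": ["what is the weather now", "how's the weather", "weather please"],
            "entities": {"location": None, "datetime": None},   # defaults to here/now when empty
        }
    },
    "trigger": "weather_request",
    # Descriptive story steps; a real runtime would execute these in order.
    "story": ["recognize intent", "fill slots from memory", "invoke weather service", "speak result"],
}

memory_slots = {"name": "John", "location": "Seoul"}   # slots retained from earlier interactions

def run_story(skill: dict, user_text: str) -> str:
    intent = skill["intents"]["ask weather now"]
    if any(sample in user_text.lower() for sample in intent["samples"]):
        location = memory_slots.get("location") or "here"
        return f"{memory_slots.get('name', 'there')}, the weather in {location} right now is sunny."
    return "Sorry, I did not recognize that request."

print(run_story(ask_weather_skill, "Hey, what is the weather now?"))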
[00118] Having thus described illustrative embodiments of the present invention, those skilled in the art will appreciate that the disclosures are illustrative only and that various other alternatives, adaptations, and modifications may be made within the scope of the present invention. Accordingly, the present invention is not limited to the specific embodiments as illustrated herein, but is only limited by the following claims.

Claims

CLAIMS
What is claimed is:
1. An artificial and emotional intelligence (AEI) framework (100) for building an AEI agent/bot, the framework comprising:
a. an agent;
b. a network;
c. an artificial intelligence (AI) application; and
d. a server comprising;
i. an input/output (I/O) processor;
ii. at least one application programming interface (API);
iii. an event manager API, the event manager API configured to manage the at least one API; and
iv. an event bus connected to the server, agent, the network, the I/O processor and the event manager API,
wherein the framework is configured to integrate human and non-human participants.
2. The AEI framework of claim 1, wherein the server further comprises an identity data management component configured to abstract how things are authenticated, managed, and authorized.
3. The AEI framework of claim 1, wherein the server comprises a virtual service cloud.
4. The AEI framework of claim 1, further comprising a natural language processor.
5. The AEI framework of claim 1, wherein the I/O processor is configured to accept human input and non-human input.
6. The AEI framework of claim 1, wherein the event bus is connected to at least one service, wherein the event bus can call upon the event manager API to connect to the at least one service.
7. The AEI framework of claim 1, further comprising framework nodes capable of implementing services.
8. The AEI framework of claim 1, wherein the I/O processor is configured to operate as a single input system that accommodates input/output artifacts that accept language or code using API parameters as operational I/O components.
9. The AEI framework of claim 1, wherein the agent comprises a bot, wherein the bot utilizes a root dialogue to drive interactions with the human participant.
10. The AEI framework of claim 9, wherein the AI application is configured to monitor and analyze input from the human participant created by the root dialogue and modify the root dialogue based upon the analysis of the input.
11. The AEI framework of claim 9, further comprising an interactive virtual assistant that utilizes the root dialogue and captures interactions with the human participant.
12. The AEI framework of claim 11, wherein the interactive virtual assistant is configured to capture input that represents the mood of the human participant.
13. The AEI framework of claim 12, wherein the interactive virtual assistant is configured to apply natural language processing to the input of the human participant.
14. The AEI framework of claim 12, wherein the interactive virtual assistant is providing multi-media content, wherein the mood provided by the human participant is in response to the multi-media content provided.
15. The AEI framework of claim 10, wherein the bot is configured to ask the human participant to provide more information related to the input for clarification.
PCT/US2019/034184 2018-05-25 2019-05-28 Method and system for building artificial and emotional intelligence systems WO2019227099A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862676621P 2018-05-25 2018-05-25
US62/676,621 2018-05-25

Publications (1)

Publication Number Publication Date
WO2019227099A1 true WO2019227099A1 (en) 2019-11-28

Family

ID=68616497

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/034184 WO2019227099A1 (en) 2018-05-25 2019-05-28 Method and system for building artificial and emotional intelligence systems

Country Status (1)

Country Link
WO (1) WO2019227099A1 (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090248399A1 (en) * 2008-03-21 2009-10-01 Lawrence Au System and method for analyzing text using emotional intelligence factors
US8738739B2 (en) * 2008-05-21 2014-05-27 The Delfin Project, Inc. Automatic message selection with a chatbot
US20120016678A1 (en) * 2010-01-18 2012-01-19 Apple Inc. Intelligent Automated Assistant
US20120117005A1 (en) * 2010-10-11 2012-05-10 Spivack Nova T System and method for providing distributed intelligent assistance
US9749267B2 (en) * 2012-02-14 2017-08-29 Salesforce.Com, Inc. Intelligent automated messaging for computer-implemented devices
US20140244712A1 (en) * 2013-02-25 2014-08-28 Artificial Solutions Iberia SL System and methods for virtual assistant networks
US20170293864A1 (en) * 2016-04-08 2017-10-12 BPU International, Inc. System and Method for Searching and Matching Content Over Social Networks Relevant to an Individual

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220309240A1 (en) * 2021-03-24 2022-09-29 International Business Machines Corporation Auto generation of conversational artifacts from specifications
US11748559B2 (en) * 2021-03-24 2023-09-05 International Business Machines Corporation Auto generation of conversational artifacts from specifications

Similar Documents

Publication Publication Date Title
Schroeder et al. Pocket skills: A conversational mobile web app to support dialectical behavioral therapy
CN109313665B (en) Creation of computer messaging robots
US9729592B2 (en) System and method for distributed virtual assistant platforms
US11037545B2 (en) Interactive personal assistive devices and systems with artificial intelligence, and related methods
Miller et al. Apps, avatars, and robots: The future of mental healthcare
US20150066817A1 (en) System and method for virtual assistants with shared capabilities
Davis et al. Embodied cognition as a practical paradigm: introduction to the topic, the future of embodied cognition
US20150067503A1 (en) System and method for virtual assistants with agent store
JP2021119468A (en) Transitioning between private state and non-private state
Neerincx et al. Socio-cognitive engineering of a robotic partner for child's diabetes self-management
Dingler et al. The use and promise of conversational agents in digital health
Calvaresi et al. EREBOTS: Privacy-compliant agent-based platform for multi-scenario personalized health-assistant chatbots
Lo et al. e-Babylab: An open-source browser-based tool for unmoderated online developmental studies
KR20200076720A (en) Use a distributed state machine for automatic assistants and human-to-computer conversations to protect personal data
Yang et al. The human-centric metaverse: A survey
WO2015039105A1 (en) System and method for distributed virtual assistant platforms
Viduani et al. Chatbots in the field of mental health: challenges and opportunities
WO2019227099A1 (en) Method and system for building artificial and emotional intelligence systems
Bublyk et al. Decision Support System Design For Low-Voice Emergency Medical Calls At Smart City Based On Chatbot Management In Social Networks.
Rojc et al. Multilingual chatbots to collect patient-reported outcomes
Robbins Vygotsky's Non‐classical Dialectical Metapsychology
Maroto-Gómez et al. A biologically inspired decision-making system for the autonomous adaptive behavior of social robots
Mukhiya et al. A reference architecture for data-driven and adaptive internet-delivered psychological treatment systems: Software architecture development and validation study
Lang et al. Technological innovations in the education and treatment of persons with intellectual and developmental disabilities
Lanovaz Some characteristics and arguments in favor of a science of machine behavior analysis

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19806447

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19806447

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 210521)
