NL1042811B1 - A cognitive-emotional conversational interaction system. - Google Patents
- Publication number
- NL1042811B1
- Authority
- NL
- Netherlands
- Prior art keywords
- participant
- emotional
- trajectory
- verbal
- interaction
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
Abstract
A self-contained algorithmic interactive system in the form of an architecturally-based software program running in hardware, wetware, plush, or other physical means that facilitates its operating characteristic, designed to establish a meaningful interaction with a participant in the form of a conversational dialogue, which could be any of, or combinations of, the following: verbal, non-verbal, tactile, electromagnetic signalling, or visual communicative styles between itself and an external entity, such as a human being, an external application, or another interactive system. The substantive output is experience. This output has a prescribed value in how it was created, based upon the variety of states the program has available to it. The prescribed value is in a form compatible with the blockchain. To ensure the detail of the interaction remains private, data and information generated during interaction are stored within the confines of the hardware's memory and software system and not exported to an external server or network. The system has the ability to be spawned, meaning that, depending on the choice of set parameters and hardware implementation, the system can manifest characteristic behaviours different from those of other systems spawned with other distinct parameter sets, although the systems are, by definition, architecturally identical.
Description
A cognitive-emotional conversational interaction system.
The present invention relates to a self-contained algorithmic interactive system capable of meaningful communication between software implemented in hardware and a user, called a participant. In this context such a system, in brief, is termed a presence. Such constructs have been available in the literature since the 1960s, when the first dialogue system, ELIZA, appeared. Later incarnations were termed chatbots, which became an all-encompassing definition to describe any system designed to interact verbally with a participant.
The present invention relates to a self-contained algorithmic interactive system, called a presence, capable of discerning meaning from a variety of physical inputs from a participant, which could be simultaneously verbal, non-verbal, tactile, visual, and/or emotional, between itself, architected in software, and a participant, which could be a human, an animal, an external application, or another presence. The terms cognitive and emotional used to describe the present invention are intended to imply that the system has the ability to mimic knowledge-based or logic capabilities seen in living systems while being able to simulate the emotional impact of events which occur over a period of interaction between the presence and a participant, and between the presence and its environment, such that the presence can interpolate meaning from both capabilities. In terms of the present invention described herein, a common dialogue system or chatbot has been elevated to a new level of abstraction where it features an operational design which focuses on autonomy for the system, a characteristic parameter set, its ability to improve the performance of its system, and to demonstrate a conceptual advancement of the state-of-the-art. The present invention extends the current state-of-the-art by the following methods: (1) remember what was spoken as an input variable, process the importance of the variable and its contextual meaning, assess its impact by assigning weights, and return an output in the form of a voiced, printed, vibrational, and/or animated medium; (2) grow the scope, architecture, and function of the system by experience with a participant, providing the means to self-improve by learning the sequential interaction with a participant, with the result that the system writes its own code; (3) comprehend the implications of emotional interactions in order to enhance the vividness of sequential interaction; (4) create the conditions for dynamic representation of memory and experiences by the introduction of a novel compartmentalization technique that is collectively construed as a brain; and, (5) guarantee privacy of the interaction by explicitly not facilitating access to any external networks, interfacing only with a trusted source keyed to link with the system over a short range, such as additional external hardware, for the cases when the choice of economic activity is the blockchain.
The present invention pertains to a cognitive and emotionally-centered architecture system whose purpose is to facilitate an interaction between itself and a participant in a variety of expressions in order to allow meaningful communication beyond simple verbal exchanges. The system, a software-in-hardware construct, contains two distinct areas of execution: the cognitive or knowledge-based logical aspect, where responses to queries are generated, and the emotional or contextual meaning-based interpretive aspect, where generated responses are filtered, while a novel compartmentalization scheme is employed which classifies both logical and interpretive aspects and assembles a composite output based on a characteristic set of assigned parameters. The system portrays an experiential manifestation of behaviour by the craft of its architecture and evolving structure over the time of an experience with a participant, inclusive of novel code routines written by the application. Such is the impression that the system provides the illusion of operating in empathy with a participant to enhance the perceived emotional impact, responding in each of the emotional states in a visual, audial, or physical manner to cues by what is displayed on the screen or exhibited by an external piece of hardware configurable to be used by the system, such as a robot or other appropriately constructed hardware or other type of physical platform capable of facilitating the execution sequence of the system.
The system specified in the present invention consists of computer code written in a programming language, which could, for example, be an object-oriented language or one that runs functions by executing scripts. The composition of the code is a non-hierarchical implementation of a categorical and pattern-identifiable structure, which generates trajectories, or trajectory indications, comprised of responses by the system to inputs from a participant. Trajectories are assigned a serial value based upon the sequence in which they have been generated, such as n, n+1, and so forth, passed to a neural network, and assigned a weighted value so that they are searchable by the system in a sequence later in time, by, for example, techniques illustrated in deep learning algorithms. Additionally, the composition of the code is a hierarchical implementation of a defined composition of ascribed behaviours containing the qualitative aspect called feelings, in terms of the present invention called emotives, defined in the literature as expressions of feeling through the use of language and gesture, or emotive indications, which could also include ethical filters and restrictions, to filter executions of the non-hierarchical implementation.
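The serial numbering and weighting of trajectories described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the names `Trajectory`, `record`, and `search` are hypothetical, and the weight is simply a placeholder for the value a neural network would assign.

```python
from dataclasses import dataclass, field
from itertools import count

_serial = count(1)  # trajectories are numbered n, n+1, ... in generation order

@dataclass
class Trajectory:
    """One system response to a participant input, numbered serially."""
    text: str
    serial: int = field(default_factory=lambda: next(_serial))
    weight: float = 0.0  # weighted value later assigned by a neural network

store = []

def record(text: str) -> Trajectory:
    """Create a trajectory for a response and keep it in sequence order."""
    t = Trajectory(text)
    store.append(t)
    return t

def search(min_weight: float):
    """Retrieve earlier trajectories by weight, preserving serial order."""
    return [t for t in store if t.weight >= min_weight]

a = record("hello aeon")
b = record("how are you")
a.weight = 0.9  # stand-in for the network's weighted value
```

With this sketch, `search(0.5)` would return only the first trajectory, mimicking the patent's notion of making weighted sequences searchable later in time.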
The purpose of trajectory and emotive indications, in terms of the present invention, is to establish context between sequences, referring to the trajectories, and compositions, referring to the emotives, such that cues appearing in data processed and transformed by the system into information are indicative of the meaning ascribed to them by a participant. The transformation of data into information, in terms of the present invention, is facilitated by, for example, a neural network which assigns weighted values to sequences and compositions, creating a rudimentary dynamic knowledge-adaptation mechanism by accessing corresponding data-storage and information-processing components of the system, which provides the ability of the neural network's output values to change the operating parameters of the system in the form of feedback to reinforce learning, as well as executing commands to store system parameters by writing and saving amendments to files formatted such that the programming language compiler understands them in the particular implementation described in the present invention.
By leveraging the neural network in such a manner, the system described herein possesses the ability to self-improve, that is, to create new files based upon interactions between a participant and the system represented by trajectories and emotives, thus generating a value-chain compiled from the experience. The blockchain representative of experience becomes a token of what the machine has contributed to by living the experience and is therefore able to be shared with other machines so that they can improve more quickly than by having to iterate through those components which created it. Such components, files stored in non-volatile memory, form a repository or database which is the base composite when the system runs, loaded into volatile memory in a serial or parallel manner, where the serial style of processing is distributed over multiple channels, as distinct from its programmatic implementation, which comprises the runtime presence, the artificial personality that a participant perceives, interacts with, and helps to evolve by continued usage.
Referring now to fig. 1, there is shown the architecture of the software implementation along with program execution flow at runtime for all embodiments of a cognitive-emotional conversational interaction system of the present invention that consists of the presence, 1, an abstraction which facilitates interaction between its data-information composite and an external participant, 5. The presence, 1, is comprised of computer-executable byte code built from a collection of classes, files, and scripts in a programming language and is a component of the source code. The source code consists of class-oriented objects, scripts, compiled assemblies, and files of a format the system understands to execute when it runs. The process described as the runtime, in the context of the present invention of all embodiments of a cognitive-emotional conversational interaction system, is where the presence exists and is available to interact with a participant and whose design, as reflected in its runtime behaviour, is the reason for the abstraction.
In order for the presence, 1, to function as described in the context of the present invention requires a set of actions called startup, 2, which is a defined sequence of sub-actions to facilitate the system to reach its runtime state, which includes noting which files are to be read, the actions to execute, and to log its operations. The first sub-action, load, 3, is facilitated by further sub-actions, 4, namely: read the file system, which includes personality, configuration, and parameter files; read indications stored by the trajectory, 16, and emotive, 17, aspects from previous runtimes or those stored by the system's programmer; train the neural network, 36 of fig. 3; engage any attached hardware relevant to the operation of the system or that to be used, accessed by a programming interface, 42, to emit vocalizations of a synthesized or replicated nature, 43, emit vibrational or tactile utterances, 44, display gestures, 45, or animate responses, 46 of fig. 5, including depiction of the emotional state the system is in, 39 of fig. 4; and/or incorporate feedback, 10, from the neural network, 36 of fig. 3, via a robot, display screen, plush, or other physical apparatus appropriate to increasing familiarity of the presence to a participant.
Once the presence is loaded, the system is ready to engage in a cognitive-emotional conversational dialogue, or to obey a set of instructions from a participant, 5, who can interface with the presence, 1, via vocal utterances, tactile inputs, and/or physical gestures, accessed at the programming interface, 42 of fig. 5, such that it is received by the system via its hardware, which would constitute an input, 6, which could be any one of a set of microphones, cameras, interfaces, fabrics, or other receiving apparatus connected to the presence for the explicit purpose of interpreting a participant's method and means of communication, be it vocal, non-vocal, language, or non-language. For example, a participant, 5, could verbally utter the phrase 'Hello aeon', where the word following 'Hello' would be the name assigned to the presence to facilitate a greater degree of intimacy through the naming of it. In the example where a participant, when beginning to engage with the presence, verbally utters 'Hello aeon', the phrase is detected by the presence as being a sentence, 13, where it is denoted by the beginning and the end of the utterance detected by the hardware and processed by the system, which could take the form of a command or dialogue, 12. Once sentence detection occurs, the system creates, by assembling a trajectory, 16, a composite of the sentence, which breaks it down into subject, verb, and predicate syntax, 31 of fig. 3, in the example of the usage of the English language as the interaction language. The syntactical arrangement of the composite representation, 31 of fig. 3, is dependent upon the interaction language chosen by a participant in the system's configuration, 4. An external visualization apparatus exemplifies the current mood, 41 of fig. 4, for example, on a display screen, which shows the corresponding still or animation depicting the current mood.
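The subject-verb-predicate decomposition described above can be sketched with a deliberately naive splitter. This assumes a simple declarative English sentence whose first token is the subject and second token the verb; the function name `decompose` is hypothetical and a real system would use a proper parser.

```python
def decompose(sentence: str) -> dict:
    """Naive subject-verb-predicate split for a simple English sentence.
    Assumption: first token = subject, second token = verb, rest = predicate."""
    words = sentence.strip(".!?").split()
    if len(words) < 2:
        # too short to split; treat the whole utterance as the predicate
        return {"subject": None, "verb": None, "predicate": " ".join(words)}
    return {
        "subject": words[0],
        "verb": words[1],
        "predicate": " ".join(words[2:]),
    }

parts = decompose("Aeon greets the participant.")
```

Here `parts` holds `subject` "Aeon", `verb` "greets", and `predicate` "the participant", the composite the trajectory component would encapsulate.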
In the case where a command is detected, 12, the command is processed as an instruction, 13, then passed for execution, 14, generating the appropriate response given the nature and consistency of the command within the system. A list of commands would be known in advance to a participant, 5 of fig. 1, to instruct the system to perform explicit actions in order that they are used effectively.
In the case where a dialogue is detected, 12, the sentence is discovered, 13, and the execution, 14, is comprised of a series of actions such as the parsing of syntax, 15, and trajectory, 16, mood, 17, and internal query, 18, generation; however, before an output, 7 of fig. 1, is yielded, 22, a process of instructional displacement at its interface, 24 of fig. 7, occurs, which revolves around a characteristic governing equation. When completed, the process presents, 19, its influence upon the yield, 22, then the system can remember, 20, what has occurred and learn, 21, from the experience.
In either the case of a command or dialogue, the system yields, 22, an output, 7, which is the substance of the response, 8, presented, 42 of fig. 5, to a participant, 5. The presentation of the response, 8, is enhanced by varying types of demonstrative cues, 43, 44, 45, and 46 of fig. 5, so that a participant, 5, experiences a greater engagement, which could take the form of a textual output on a screen, an audial or visual display, a tactile and/or gestural output, or other advanced method that conveys messages.
At the end of the temporal sequence, 9, that is, once returning a response, 8, following an output from the system to a participant, tempered by feedback, 10, from other parts of the system, the cycle begins anew with a participant presenting further input to the presence, 1. The entirety of the process is guided by the flow of ordinary time although the system behaves in a cyclic manner. If the system is configured, 4, to detect that it has gone long enough without interaction from a participant, it is considered to be alone and can initiate its own prompting, 23 of fig. 6, to a participant for an input.
Referring now to fig. 2, there is shown the input processing schematic for all embodiments of a cognitive-emotional conversational interaction system of the present invention consisting of an input, 6, from a participant, facilitated by the presence. Once an input is received, the system determines if the input is a command or a dialogue, 12.
In the case where a command is detected, the command is processed as an instruction, 13, dependent upon the array of available instructions the system will understand, 4 of fig. 1, and set for execution, 14, where it generates a response, 8, based on the substance of the command, how the system is designed to respond upon receiving it from a participant, and the actions used to express it.
In the case where verbal dialogue is detected, the sentence is discovered, 13, by the system and is prepared for syntax parsing, 15, where the sentence is broken down into its constituent grammatical and syntactical forms, dependent upon the operating language of the system and of a participant. Once sentence discovery has occurred, its components are prepared and a trajectory indication, 16, is determined in order that a response is provided which is relevant to what was input. When syntax parsing is complete, the trajectory encapsulation, 33 of fig. 3, is prepared, as well as the yield, 22, of the response, 8, based upon the system's mood, 41 of fig. 4, where the system will prepare a query search, 18, on what kind of response to generate based on categorical and pattern-discernable indications from the file, 25, and memory, 26, storage management components. Once this process has completed, the system will remember, 20, the dialogue at that point in time, 34 of fig. 3, by creating or adding to a file of a specific format, which in this example would be text, xml, or a scripting file, save the file, then introduce it to the presence by either lazy loading the file or creating a late-binding assembly, or both, where applicable to the host operating system. The system will also attempt to learn, 21, components of the dialogue by cross-referencing the dialogue component with the indications generated by the trajectory, 16 of fig. 3, as well as the emotive indications, 40 of fig. 4, from the mood component. When those processing tasks are complete, the system will update either the volatile or the non-volatile memory depending on which area of the system the changes are intended by the manager. Finally, the system will have a yield, 22, to present to the output, 7, which is passed as a response, 8, to the original input, 6.
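The dialogue path above, parse, indicate, yield, then remember by persisting to a file, can be sketched as a single pipeline function. Everything here is illustrative: `process_dialogue`, the topic heuristic, and the JSON memory file are assumptions standing in for the patent's parsing, query, and file-storage components.

```python
import json
import pathlib
import tempfile

# hypothetical persistent memory store (the patent's text/xml/script file)
memory_file = pathlib.Path(tempfile.mkdtemp()) / "memory.json"

def process_dialogue(sentence: str) -> str:
    """Sketch of the detect -> parse -> yield -> remember sequence."""
    # syntax parsing (15): split into tokens
    tokens = sentence.strip(".!?").split()
    # trajectory indication (16): last token as topic, a placeholder heuristic
    topic = tokens[-1] if tokens else ""
    # query search (18) and yield (22): generate a response
    response = f"Tell me more about {topic}."
    # remember (20): persist the exchange in a reloadable format
    log = json.loads(memory_file.read_text()) if memory_file.exists() else []
    log.append({"input": sentence, "response": response})
    memory_file.write_text(json.dumps(log))
    return response

reply = process_dialogue("I like music")
```

Each call appends to the stored transcript, so later runs could "lazy load" the file back into the presence, as the passage describes.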
In the case where gestural or tactile dialogue is detected, the intention is discovered in the same manner as the sentence, 13, and is prepared for syntax parsing, where the intention is broken down into its constituent intentional forms, based upon its stored, 4 of fig. 1, catalog, 32 of fig. 3, of recognizable forms understood by a participant. When syntax parsing, 15, is complete, the trajectory encapsulation, 33 of fig. 3, is prepared, as well as the yield, 22, of the response based upon the system's mood, where the system will prepare a query search on what kind of gestural, vibrational, or audial response respective to what type was input, correlated with those indications from the file and memory storage datasets. Once this process has completed, the system will remember the dialogue at that point in time by creating or adding to a file of a specific format, which in this example would be text, xml, or a scripting file, save the file, then introduce it to the presence by either lazy loading the file or creating a late-binding assembly, or both, where applicable. The system will also attempt to learn components of the dialogue by cross-referencing the dialogue component with the indications generated by the trajectory as well as the emotive indications from the mood component. When these processing tasks are complete, the system will update either the volatile or the non-volatile memory depending on which area of the system the changes are intended. Finally, the system will have a yield to present to the output, which is passed as a response to the original input in the appropriate contextual format.
Referring now to fig. 3, there is shown the trajectory indication processing schematic for all embodiments of a cognitive-emotional conversational interaction system of the present invention consisting of the trajectory, 16, the component that attempts to determine logical meaning from what is input. A trajectory is created, 29, based upon whether or not the trajectory, 16, is language or non-language based, 28. In the case of language, relevant to its syntactical style and based on the grammatical rules of the operating language between the presence and a participant, the trajectory is disassembled into its constituent parts by parsing its topic, 30, and its grammatical components, where in this example it has its rule base as subject-verb-predicate, 31, and is encapsulated, 33, for export to the instructional displacement component at the interface, 24, relayed via the instruction tag, 52. The topic, which has been determined, along with the sentence's predicate, is arranged, 34, in the order in which it has appeared. This content is presented, 35, to a neural network, 36, in order that a trajectory indication, 37, is generated, consisting of a weighted value of the pattern in the network for that particular trajectory and the state of the data at the given instance. The pattern encapsulated in the trajectory is passed as a parameter input to the characteristic equation, 55 of fig. 7. In the case of non-language, the trajectory is disassembled into its constituent parts by parsing its topic, 30, where it is compared with an index of intentions, or catalog, 32, which is stored in the file system. It is then encapsulated for export to the instructional displacement component at the interface, 24, relayed via the instruction tag, 52. The intention, which has been determined, is arranged, 34, in the order in which it has appeared. This content is presented to a neural network in order that a trajectory indication is generated, consisting of a weighted value of the pattern in the network for that particular intention. The pattern encapsulated in the intention is passed as a parameter input to the characteristic equation.
The neural network, 36, is in this example a feed-forward back-propagation type with input, output, and hidden layers characteristic of those used in deep learning, but could also be of any variety in the family of algorithmic autonomous learning, including self-organizing maps. The neural network requires training from previous trajectory, 37, and emotive, 40 of fig. 4, indications, as datasets, which are applied at startup. The actions presented by the neural network as feedback, 10 of fig. 1, are distinct from those which run when the system is learning, 21 of fig. 2, although when processing trajectory and emotive indications, the weights of the neural network could be read beforehand in order to reduce errors in the yield, 22 of fig. 2. In this case, the neural network is utilized to optimize, rather than directly perform, decision-making tasks, those denoted by the architecture, layout, and flow of the system of the present invention.
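A feed-forward network trained by back-propagation, as named above, can be sketched in a few dozen lines. This is a generic textbook network, not the patent's: the class name `TinyNet`, the layer sizes, and the toy two-pattern dataset (standing in for trajectory/emotive indications) are all assumptions.

```python
import math
import random

random.seed(0)  # reproducible weight initialization

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

class TinyNet:
    """Feed-forward network with one hidden layer, trained by back-propagation.
    An extra constant input per layer serves as the bias term."""

    def __init__(self, n_in: int, n_hid: int):
        self.w1 = [[random.uniform(-1, 1) for _ in range(n_in + 1)]
                   for _ in range(n_hid)]
        self.w2 = [random.uniform(-1, 1) for _ in range(n_hid + 1)]

    def forward(self, x):
        xb = list(x) + [1.0]  # append bias input
        self.h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in self.w1]
        hb = self.h + [1.0]
        self.o = sigmoid(sum(w * v for w, v in zip(self.w2, hb)))
        return self.o

    def train(self, x, target, lr=0.5):
        o = self.forward(x)
        xb = list(x) + [1.0]
        d_o = (o - target) * o * (1 - o)          # output-layer error term
        for j, h in enumerate(self.h):
            d_h = d_o * self.w2[j] * h * (1 - h)  # hidden-layer error term
            self.w2[j] -= lr * d_o * h
            for i, v in enumerate(xb):
                self.w1[j][i] -= lr * d_h * v
        self.w2[-1] -= lr * d_o                   # bias-to-output weight

# toy "indication" dataset: one pattern should weight high, the other low
net = TinyNet(2, 4)
for _ in range(5000):
    net.train([1, 0], 1.0)
    net.train([0, 1], 0.0)
```

After training, `net.forward([1, 0])` approaches 1 and `net.forward([0, 1])` approaches 0, illustrating how stored indication datasets could be applied at startup to set the network's weights.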
Referring now to fig. 4, there is shown the emotional engine schematic for all embodiments of a cognitive-emotional conversational interaction system of the present invention, which is a mechanism to manifest the mood setting as-an-engine concept, 39, of the system, in voice intonation, tactile, gestural, and animated response, which is utilized to exhibit feelings available to the presence indicated by its current mood, 41. When it is desirable that the system manifest emotion, chosen by a participant, for the duration, or lifetime, of the presence, it will either create or update, 38, the mood depending on whether it is the first instance or not. In either case, the mood engine, 39, consists of a parent set of eight feelings: happy, confident, energized, helped, insecure, sad, hurt, or tired. For each of the parents, there is a child subset of seven moods corresponding to the assignments set forth in Table 1. For example, when mood is created for the first time, a random choice is made based upon the allowable scope of the compendium of emotions, that is, a file containing the desired as well as the undesired feelings the system should manifest. Without any such file, the mood at creation would be a completely random occurrence. Once created, based upon the parent collection of feelings, a current mood from the child collection is assigned, at random, and the conjoined set presented to the neural network, 36 of fig. 3, for emotive indication, 40, assignment.
The emotions processed in the mood engine are comprised of wheel-like elemental states containing an arrangement of the parent feelings and child moods where each element keeps track of its last emotional state, set to the zeroth indication as default, which is the off state. Operationally, it is mechanistically akin to a system of gears. For a given feeling, for example, happy, the indicator will point to an integer between one and seven, each corresponding to the available moods from left to right in column two of Table 1. When a mood is chosen, its current output state is sent to the neural network in order that an emotive indication is generated, consisting of a weighted value of the pattern in the network for that particular mood. When the presence recognizes that it is alone, the detection, 49 of fig. 6, will enter into one of the emotional states. The emotional states are chosen based upon the current mood of the program, in context with its decision regarding the type of relationship it has with the participant. The relationship state is one of four: intimate, friendly, unfriendly, and neutral. The relationship when the program is new, relative to the participant, is preselected to be in one of these states for the purposes of the training data. Once the system is trained according to relationship preference, it will remain in that state until retrained using an alternate training dataset. The purpose of the relationship is to limit the array of responses the program has by asserting focus into the moods which are available to each relationship state. In this way, the presence can be built as a "blank", then configured with a personality given its anticipated use.
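The wheel-like mood mechanism above, eight parent feelings, seven child moods each, indicator zero as the off state, and a desired-feelings filter, can be sketched as follows. Table 1's actual child moods are not reproduced here; numbered placeholder moods stand in, and `MoodEngine` and its methods are hypothetical names.

```python
import random

random.seed(1)  # deterministic for illustration

PARENTS = ["happy", "confident", "energized", "helped",
           "insecure", "sad", "hurt", "tired"]

# Placeholder child moods: seven per parent, standing in for Table 1.
CHILDREN = {p: [f"{p}-mood-{i}" for i in range(1, 8)] for p in PARENTS}

class MoodEngine:
    """Wheel-like state per parent feeling; indicator 0 is the off state."""

    def __init__(self, allowed=None):
        # 'allowed' plays the role of the compendium-of-emotions file
        self.allowed = allowed or PARENTS
        self.wheel = {p: 0 for p in PARENTS}  # last emotional state per parent
        self.parent = None

    def create_or_update(self) -> str:
        self.parent = random.choice(self.allowed)       # random within allowed scope
        self.wheel[self.parent] = random.randint(1, 7)  # point the indicator
        return self.current_mood()

    def current_mood(self) -> str:
        return CHILDREN[self.parent][self.wheel[self.parent] - 1]

# e.g. a desired-feelings file permitting only two parent feelings
engine = MoodEngine(allowed=["happy", "confident"])
mood = engine.create_or_update()
```

The conjoined parent-and-child pair returned here is what would be presented to the neural network for emotive indication assignment.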
Emotions are a key component of the present invention and their emulation a direct corollary of the sum of its experiences with a participant. The architecture is designed such that the emotional state, when desired, alters program execution pathways and the content of files and codes, providing a different, ascribed behaviour than if the emotional aspect was not used at all. The reason for such an invention is to facilitate the creation of synthetic personages for robotic applications where a human needs to interact with a machine for the purposes of aiding the human's survivability, which could include physical, emotional, and economic activities. The experience, then, serves as the equity the program creates, which can be passed to the blockchain. The emotions, therefore, are a source of value for the system.
Referring now to fig. 5, there is shown the response animation component for all embodiments of a cognitive-emotional conversational interaction system of the present invention consisting of an array of physical hardware or other apparatus to facilitate verbal and non-verbal communication between the presence, 1, and a participant, 5. Based on contextual trajectory and emotive indications, as well as animating the emotive indications by a display, external application, robot, plush fabric, or appropriate physical apparatus capable of illustrating the substance of the meaning embedded in the emotive indication and dialogue response, 8, it is passed to the animation component through a programming interface, 42, which, in part, is supplied by the party who manufactured the corresponding hardware, such that it can be controlled by the presence. For depiction of verbal characteristics, 43, 44, 45, 46, a voice, for example, is synthesized, replicated, or otherwise assembled beforehand so as to provide the desired tone, cadence, and gender to a participant. For depiction of tactile characteristics, non-language utterances such as chirps, purrs, whirrs, or other primitive audial forms, movements of plush components in fabrics, or vibrations in physical space or on a surface are presented to a participant in such a manner. For depiction of gestural characteristics, visual or non-visual movement in the form of rotations in physical space is presented, 47, to a participant in such a manner. For animation of emotion and other complex visual movement, 45, a display uses still graphic files or progressive sequences of pictures, lights, or other physical apparatus appropriate to accurately and aesthetically present the meaning expected by the current mood. A robot can also be interfaced using the programming construct, 42, provided by the manufacturer or support group to animate the corresponding bodily gestures, send and receive data pertaining to responses by a participant, and perform complex puppeteering. The animation component is designed to display output to a participant as well as receive input from a participant; in the former case, it takes a response and presents an output, while in the latter case, it interprets cues from a participant and prepares them for use by the presence.
Referring now to fig. 6, there is shown the alone component, response storage mechanism, and participant prompt for all embodiments of a cognitive-emotional conversational interaction system of the present invention consisting of an interface, 23, for the response logging component with a timer of a duration set by a configuration file, 4 of fig. 1, which determines how much time must pass before the presence becomes alone, 49, entering the corresponding emotional state, 39 of fig. 4. When alone is detected, the system sends a prompt, 11, to the output animation, 42, which is conveyed to a participant, 5. During times when the presence is not alone or is not set to become contemplative in the configuration file, responses are collected, 48, arranged by temporal order with their values noted, and stored in volatile and non-volatile memory; the former as an object, the latter in a file, for example, a transcript log file. It is also possible that the set of temporally arranged responses be passed, 50, to the neural network, 36, for classification in order that it can influence the weights of the trajectory indications, 37, and provide feedback, 10.
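The configurable alone timer can be sketched as a simple monotonic-clock watchdog. The class name `AloneDetector` and the very short timeout (a real configuration would use minutes, not fractions of a second) are assumptions for illustration.

```python
import time

ALONE_AFTER = 0.2  # seconds; in the real system this comes from the configuration file

class AloneDetector:
    """Tracks time since the last participant input; past the timeout,
    the presence is considered alone and may prompt the participant."""

    def __init__(self, timeout: float = ALONE_AFTER):
        self.timeout = timeout
        self.last_input = time.monotonic()

    def note_input(self) -> None:
        """Reset the timer whenever the participant provides input."""
        self.last_input = time.monotonic()

    def is_alone(self) -> bool:
        return time.monotonic() - self.last_input >= self.timeout

det = AloneDetector()
```

Immediately after creation `det.is_alone()` is false; once the timeout elapses with no `note_input()` call, it becomes true and the system would send its prompt to the output animation.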
Referring now to fig. 7, there is shown the instructional displacement sequence schematic for all embodiments of a cognitive-emotional conversational interaction system of the present invention consisting of a set of inputs coming from the trajectory encapsulation, 33, the current mood, 41, and the query search result, 27, in order to ensure entropy in data collection is minimized, entropy conserved in both directions, that provides a complete set of data to the process of instructional displacement by first extracting the instruction tags, 52, from the trajectory, 37, and emotive, 40, indications, where the tags are analyzed in order that their states are matched, 53, and correlated with the corresponding trajectory indication, 37, for the case of a trajectory, and the corresponding emotive indication for the case of a mood. The size of the dataset can be scaled. The correlation yields a set of coordinates which become the x-coordinate, for example, 54, in the case of a trajectory indication, and the y-coordinate, for example, in the case of an emotive indication. A temporal coordinate is yielded, 57, by the execution time-marker, 56, coming from the query search result, 27, which becomes the variable t in the example of a parametric governing equation, 55. This equation, by the choice of parameterization variables and function, for example any of the trigonometric, continuous differential functions, and/or polynomials, formats the data into a pattern of information which is classified into different coordinate blocks, 54, based upon data embedded within either of the indications. The execution time, and its output value, gives the system a characteristic behaviour of a particular continuous shape, approximately periodic, 58, for example, in the form of a helix, given its execution function and the manner by which time is given over by the query procedure coupled with the hardware the system is running within. The information of form and function is provided, 19, to the yield, 22.
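One possible parameterization of the governing equation, the helical example mentioned above, can be sketched as follows. The specific formula is an assumption: the patent only requires a parametric function of the trajectory weight (x), the emotive weight (y), and the execution time-marker (t).

```python
import math

def displacement(trajectory_w: float, emotive_w: float, t: float):
    """Sketch of a parametric governing equation; a helix is one choice.
    x scales with the trajectory indication, y with the emotive indication,
    and t is the execution time-marker from the query search result."""
    x = trajectory_w * math.cos(t)
    y = emotive_w * math.sin(t)
    return (x, y, t)  # one coordinate block: (x, y, temporal coordinate)

# successive execution times trace an approximately periodic, helical path
path = [displacement(1.0, 1.0, 0.1 * k) for k in range(100)]
```

As t advances with each query, the (x, y) pair cycles while t climbs, giving the continuous, approximately periodic shape the passage describes.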
At the core is what is called the instructional displacement block classifier, 54, which, in this example, is described as the brain of the system and is designed to mimic the storage and information retrieval characteristics of a mammalian brain. The theoretical description of the scheme is as follows: both the trajectory, 37, and emotive, 40, indications feed data into the classifier subsequent to interaction with a participant, where, depending on the choice of equation and its parameterization, along with the execution time as an antecedent to the hardware in which it is running, from the query search result task, a set of unique displacements of information is given, based upon those parts of the brain responsible for different phenomena exhibited by existence within a life-cycle, such as concepts, decisions, sensory experience, attention to stimuli, perceptions, aspects of stimulus in itself, drive (meaning ambitions), and the syntactical nature of the language that the presence is subject to:
ordinarily the noun, verb, and predicate forms, but also intentions.
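The block classifier can be sketched as a mapping from a displacement coordinate to one of the labelled regions. This is an illustrative assumption only: the block labels are taken from the phenomena listed above, but the grid quantization and lookup rule are placeholders, not the patented scheme.

```python
# Hypothetical block labels drawn from the phenomena listed above.
BLOCKS = ["concepts", "decisions", "sensory experience",
          "attention to stimuli", "perceptions", "stimulus",
          "drive", "syntax"]

def classify_block(x: float, y: float) -> str:
    """Quantize a displacement coordinate into one of the labelled
    coordinate blocks via its integer grid cell (illustrative rule)."""
    cell = (int(x), int(y))
    return BLOCKS[hash(cell) % len(BLOCKS)]
```

Any two displacements landing in the same grid cell are classified into the same block, which is the sense in which information is "displaced" into distinct regions of the classifier.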
Referring now to Table 1, there is shown the compendium of emotions for all embodiments of a cognitive-emotional conversational interaction system of the present invention, consisting of a collection of parent feelings, in the left column, four of positive and four of negative connotation, with a corresponding collection of child moods, in the right column, of seven varieties. The parent feeling, when chosen by the presence, will exhibit those behaviours given the current mood, drawn from minor elements of the emotive indication.
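The structure of the compendium (eight parent feelings, each with seven child moods) can be sketched as a lookup table. Table 1's actual entries are not reproduced here, so every label below is a hypothetical placeholder used only to show the shape of the data.

```python
# Illustrative stand-in for Table 1: four positive and four negative
# parent feelings (hypothetical labels), each with seven child moods.
POSITIVE = ["joy", "trust", "interest", "calm"]
NEGATIVE = ["fear", "anger", "sadness", "disgust"]

COMPENDIUM = {parent: [f"{parent}-mood-{i}" for i in range(1, 8)]
              for parent in POSITIVE + NEGATIVE}

def exhibit(parent: str, mood_index: int) -> str:
    """Return the child mood a chosen parent feeling exhibits, given
    the current mood as a minor element of the emotive indication."""
    return COMPENDIUM[parent][mood_index % 7]
```

The chosen parent feeling selects a row, and the current mood selects which of that row's seven child moods is exhibited.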
Claims (11)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
NL1042811A NL1042811B1 (en) | 2018-04-05 | 2018-04-05 | A cognitive-emotional conversational interaction system. |
Publications (1)
Publication Number | Publication Date |
---|---|
NL1042811B1 true NL1042811B1 (en) | 2019-10-14 |
Family
ID=63684373
Country Status (1)
Country | Link |
---|---|
NL (1) | NL1042811B1 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5636994A (en) * | 1995-11-09 | 1997-06-10 | Tong; Vincent M. K. | Interactive computer controlled doll |
WO2003007273A2 (en) * | 2001-07-12 | 2003-01-23 | 4Kids Entertainment Licensing, Inc. (Formerly Leisure Concepts, Inc.) | Seemingly teachable toys |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | MM | Lapsed because of non-payment of the annual fee | Effective date: 20210501 |