CN116057544A - Custom interactive language learning system using four-value logic - Google Patents

Custom interactive language learning system using four-value logic

Info

Publication number
CN116057544A
Authority
CN
China
Prior art keywords
user
story
language
blackboard
phrase
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080104205.8A
Other languages
Chinese (zh)
Inventor
罗杰·密德茂尔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Luo JieMidemaoer
Original Assignee
Luo JieMidemaoer
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Luo JieMidemaoer filed Critical Luo JieMidemaoer
Publication of CN116057544A publication Critical patent/CN116057544A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/02 Methods for producing synthetic speech; Speech synthesisers
    • G10L13/027 Concept to speech synthesisers; Generation of natural phrases from machine-based concepts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/02 Knowledge representation; Symbolic representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/06 Foreign languages
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/042 Knowledge-based neural networks; Logical representations of neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00 Computing arrangements based on specific mathematical models
    • G06N7/01 Probabilistic graphical models, e.g. probabilistic networks
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification techniques
    • G10L17/16 Hidden Markov models [HMM]
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification techniques
    • G10L17/18 Artificial neural networks; Connectionist approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Educational Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Machine Translation (AREA)

Abstract

A customized interactive language learning system teaches a user a personalized vocabulary word set through interactive images and stories. Interactions are modeled by probabilistic rules in semantic and neural networks with objects and relationships. Dialogs and narratives are dynamically generated based on the states of the interactive story model using phrase rewrite rules and neural networks implementing a four-valued logic system in which truth values for objects and relationships are encoded as true, false, defined, and undefined in a single memory array.

Description

Custom interactive language learning system using four-value logic
Technical Field
The present disclosure relates to the field of language instruction, and more particularly to a system for teaching new vocabulary words to students acquiring a primary or secondary language through an interactive story modeled with a blackboard architecture that combines a semantic network with a neural network implemented with a four-value logic system.
Background
Language learning programs are valuable tools for teaching students new vocabulary words or even entirely new languages. However, most existing programs teach every student the same words in the same way. For example, such programs typically teach from a predefined list of universal vocabulary words. When they include an audiovisual presentation, it is typically pre-recorded or uses the same script for every student.
Students, especially young children, typically learn through interactive scenes. However, many existing programs are either entirely predefined or offer only limited options for students to interact with the presentation, ask questions, or obtain more information about what they are seeing or hearing.
Furthermore, most systems do not customize the taught word list to each student's specific academic level. This can limit the student's understanding of the target language. For example, teaching from only a fixed list of the most common words in the target language may leave students unable to understand the less common words that are more representative of how the language is used in practice, because speakers fluent in a language do not limit themselves to only its most common words.
What is needed is a language learning system that presents an interactive story containing new words to learn in a manner that optimizes student learning. The system should present interactive dialogs to students using a vocabulary personalized for each student. Because the interactive story is modeled in a semantic network and a neural network using a four-value logic system, the system can generate convincing answers when a dialogue is generated or when a student asks questions about the story.
Drawings
Fig. 1 illustrates an embodiment of a language learning system.
FIG. 2 shows a non-limiting example of a knowledge source that may add data to a blackboard of a language learning system, modify data on a blackboard, and/or access data stored on a blackboard.
FIG. 3 shows a non-limiting example of a process for generating a vocabulary using a text analyzer.
FIG. 4 shows a non-limiting example of a diagram showing the Heaps-Herdan law constructed from classes created by a random partition function.
Fig. 5A shows a logic table for a negative operation in a four-valued logic system.
FIG. 5B illustrates a logic table for a conjunctive operation in a four-valued logic system.
Fig. 5C shows a logic table for a disjunctive operation in a four-valued logic system.
FIG. 6A illustrates a 3-element data triplet in a semantic network.
FIG. 6B illustrates a 2-element data triplet in a semantic network.
FIG. 7 illustrates an example of encoding multiple truth values associated with a particular object or relationship using a single memory array.
FIG. 8 illustrates a model of phrase rewrite rules.
Fig. 9 illustrates a method of evaluating the left-hand side of a phrase rewrite rule based on a particular argument to determine if the phrase rewrite rule should be applied.
FIG. 10 illustrates an exemplary embodiment of a phrase rewrite rule list that may be used to generate a target language sentence based on an input argument triplet.
Fig. 11 illustrates an example of feature assignments between cells on the right-hand side of various exemplary phrase rewrite rules.
FIG. 12 illustrates a process for modeling the state of an interactive story with a language learning system.
Fig. 13 shows a non-limiting example of the vocabulary ratio C at page 324 of Gustav Herdan's "The Advanced Theory of Language as Choice and Chance", where the random partitioning function is set to five.
Detailed Description
Fig. 1 illustrates an embodiment of a language learning system 100. Language learning system 100 may present dynamically generated interactive stories to a user through audio and/or visual content capable of teaching the user vocabulary words.
The language learning system 100 may include one or more input components 102, one or more output components 104, and a processing system including a blackboard 106, a control module 108, an audio processor 110, a graphics processor 112, and one or more knowledge sources 114. The blackboard 106 can be a database or other central memory location accessible by other components of the language learning system 100.
The input component 102 can be a component through which a user can input data into the blackboard 106, such as a camera, microphone, touch screen, keyboard, mouse, or any other component for inputting images, audio, or other data into the blackboard 106. As non-limiting examples, the language learning system 100 may include a camera capable of capturing images and/or video of a user as the user interacts with the language learning system 100, and/or a microphone capable of recording audio of words spoken by the user.
The output component 104 can be a component, such as a visual display and speakers, through which a user can perceive images and/or audio generated by the language learning system 100. In some embodiments, the output component 104 for the generated image may be a projector, such as a two-dimensional or stereoscopic 3D projector. In other embodiments, the output component 104 for the generated image may be a holographic display, a 3D television or monitor, a 2D television or monitor, an augmented and/or virtual reality headset, or any other type of display or connection to an external display.
In some embodiments, visual content may be presented through an output component 104 that makes the visual content immersive to the user, which may increase the chance that the user engages with the interactive story and thus better learns the language being taught. In some embodiments, the immersive visual content may be presented by a virtual reality headset or other head-mounted display, while in other embodiments a holographic display, 3D television, or 3D projector may present the immersive visual content to the user. As a non-limiting example, some virtual reality headsets have age restrictions or guidelines, so in some embodiments the language learning system may present interactive stories to a child through immersive visual content using a 3D projector.
In embodiments where output component 104 is a stereoscopic 3D projector, one image may be projected for the left eye of the viewer while another image is projected for the right eye of the viewer. The viewer may wear corresponding 3D glasses, such as stereoscopic 3D glasses, polarized 3D glasses, or shutter 3D glasses. The 3D glasses may block or filter light such that the left eye of the viewer sees the projected left eye image and the right eye of the viewer sees the projected right eye image. When the left eye image and the right eye image depict the same scene from different viewpoints spaced apart by a distance between human eyes, a viewer can perceive the scene in three dimensions.
The blackboard 106 can be a central memory location accessible to the other components of the language learning system 100 such that those components can add data to the blackboard 106, access data stored on the blackboard 106, and/or modify data stored on the blackboard 106. As will be discussed further below, the blackboard 106 can store data representing the semantic network 116, the neural network 119, and/or the phrase rewrite rules 120, which analyze or model data using the four-value logic system 118. As a non-limiting example, the semantic network 116, the neural network 119, and/or the phrase rewrite rules 120 may model the state of an interactive story presented by the language learning system 100, where the state of the interactive story is changed over time using the four-value logic system 118.
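The blackboard pattern described above can be sketched in a few lines. The following is an illustrative Python sketch, not code from the patent; the class and method names (`Blackboard`, `can_contribute`, `contribute`) are invented for the example, under the assumption that knowledge sources poll a shared store and react to its state.

```python
# Minimal sketch of a blackboard architecture: a central store that
# registered knowledge sources read from and write to independently.
# All names here are illustrative, not from the patent.

class Blackboard:
    """Central memory location shared by all knowledge sources."""
    def __init__(self):
        self.data = {}
        self.sources = []

    def register(self, source):
        self.sources.append(source)

    def post(self, key, value):
        self.data[key] = value

    def run_cycle(self):
        # Control loop: let each knowledge source react to current state.
        for source in self.sources:
            if source.can_contribute(self.data):
                source.contribute(self)

class EchoSource:
    """Toy knowledge source: responds when a user utterance is posted."""
    def can_contribute(self, data):
        return "utterance" in data and "response" not in data

    def contribute(self, board):
        board.post("response", f"You said: {board.data['utterance']}")

board = Blackboard()
board.register(EchoSource())
board.post("utterance", "hello")
board.run_cycle()
print(board.data["response"])
```

In a fuller system, each module of FIG. 2 (audio generation, language identification, visual recognition, and so on) would be one such source, contributing to the shared story state rather than calling each other directly.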
The control module 108 may include one or more Central Processing Units (CPUs) or other processors linked to the blackboard 106. As will be discussed further below, the control module 108 may perform operations on data stored in the blackboard 106 and/or activate other components to perform such operations or to assist the blackboard 106.
One or more audio processors 110 may be linked to the blackboard 106. In some embodiments, the audio processor 110 may be a dedicated audio card or processor, while in other embodiments, the audio processor 110 may be part of the control module 108, another CPU, or other processor.
The audio processor 110 can process existing audio data stored in the blackboard 106. As a non-limiting example, the audio processor 110 may process input audio data captured by a microphone or other input component 102 that has been stored in the blackboard 106, such as processing the input audio data for speech recognition as described below.
The audio processor 110 may also or alternatively generate new audio data for the interactive story. As non-limiting examples, audio processor 110 may generate audio for the voices of the story character and/or mix the generated audio with music, sound effects, or other audio for story play. In some embodiments, audio generated by the audio processor 110 may be passed through the blackboard 106 to speakers for playback. In an alternative embodiment, the audio processor 110 may be directly linked to the speakers so that the generated audio may be directly played by the speakers.
One or more graphics processors 112 may be linked to the blackboard 106. In some embodiments, graphics processor 112 may be a dedicated graphics card or a dedicated Graphics Processor (GPU), while in other embodiments graphics processor 112 may be part of control module 108, another CPU, or other processor.
The graphics processor 112 may process graphics, images, visual data, or neural networks stored in the blackboard 106. As a non-limiting example, the graphics processor 112 may process input images, such as still images or video of a user, captured by a camera or other input component 102.
Graphics processor 112 may also or alternatively generate new graphics, images, and/or other visual data for the interactive story. As a non-limiting example, graphics processor 112 may generate visual content that depicts the current state of a story. In some embodiments, the graphics processor 112 may pass the generated visual content through the blackboard 106 to a projector, screen, or other output component 104. In an alternative embodiment, the graphics processor 112 may be linked directly to the output component 104 such that the generated image is displayed directly.
One or more knowledge sources 114 may be linked to the blackboard 106 such that each knowledge source 114 may independently contribute to, modify, and/or extract data stored at the blackboard 106. In some embodiments, the input component 102, the output component 104, the control module 108, the audio processor 110, and/or the graphics processor 112 of the language learning system can act as a knowledge source 114 such that they can add data to the blackboard 106, modify data on the blackboard 106, or output data from the blackboard 106.
FIG. 2 shows a non-limiting example of a knowledge source 114 that may add data to the blackboard 106, modify data on the blackboard 106, and/or access data stored on the blackboard 106. Knowledge source 114 may include audio asset 202, audio generation module 204, language identification module 206, text analyzer 208, visual asset 210, visual generation module 212, and/or visual identification module 214.
The audio asset 202 may be used by the audio processor 110 or the neural network 119 to generate audio for an interactive story. In some embodiments, the audio asset 202 may be a pre-recorded sound sample of dialog, sound effects, music, or other types of audio. In other embodiments, the audio asset 202 may be an audio model or algorithm by which audio may be dynamically generated.
The audio generation module 204 may use the current state of the interactive story modeled in the semantic network 116 or the neural network 119, together with the corresponding audio assets 202, to generate audio for the interactive story. The audio generation module 204 may generate music and/or sound effects for the current scene as well as narration and dialogue spoken by story characters. As will be discussed in greater detail below, the audio generation module 204 may dynamically generate narration and dialogue according to the phrase rewrite rules 120 and vocabulary 122, such that the words used in the story help the user learn new vocabulary words in the target language. In some embodiments, the audio generation module 204 may be part of the audio processor 110, while in other embodiments, the audio generation module 204 may inform a separate audio processor 110, via the blackboard 106, which sounds to generate and/or which audio assets 202 to use.
In some embodiments, text associated with the generated audio may be displayed via the output component in addition to or in lieu of playing back the generated audio. As non-limiting examples, text such as subtitles, dialogue bubbles, or other text may be displayed with or instead of an audible version of the playback text.
The language identification module 206 may use an acoustic Markov model or the neural network 119 to identify words in recorded audio data added to the blackboard 106 through an input component 102 such as a microphone. When a word is identified and added to the blackboard 106, other knowledge sources 114 can analyze the word to determine an appropriate response. As a non-limiting example, when a user asks a question about an interactive story, the language learning system 100 may pause the story to respond to the user's question. In some embodiments, random partitioning may be applied to reduce the number of bits per encoded word, which may improve the accuracy of the Markov model and the neural network 119.
The text analyzer 208 may generate phrase rewrite rules 120 and vocabulary 122 based on one or more text source files. When an interactive story is presented with language learning system 100 (such as when language learning system 100 generates sentences for a narration or dialogue), rules 120 and vocabulary 122 may be rewritten using the generated phrases.
In some embodiments, phrase rewrite rules 120 and vocabulary 122 generated by text analyzer 208 may be stored on blackboard 106. As a non-limiting example, each generated phrase rewrite rule 120 and/or unique words in the generated vocabulary 122 may be stored in the semantic network 116 or may be stored in other locations in the blackboard 106. In alternative embodiments, phrase rewrite rules 120 and/or vocabulary 122 may be stored as a separate knowledge source 114.
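As a rough illustration of how stored phrase rewrite rules and a vocabulary could drive sentence generation, the following Python sketch samples from a small probabilistic rewrite grammar. The rules, probabilities, and words are invented for the example; the patent's actual rules 120 and vocabulary 122 are derived from source texts as described with respect to FIG. 3.

```python
import random

# Hypothetical probabilistic phrase rewrite rules: each nonterminal maps
# to weighted right-hand sides; terminals draw words from a vocabulary.
RULES = {
    "S":  [(1.0, ["NP", "VP"])],
    "NP": [(0.7, ["Det", "N"]), (0.3, ["Name"])],
    "VP": [(0.5, ["V", "NP"]), (0.5, ["V"])],
}
VOCAB = {
    "Det": ["the"], "N": ["dog", "story"],
    "V": ["runs", "reads"], "Name": ["Mia"],
}

def expand(symbol, rng):
    """Recursively rewrite a symbol into a list of words."""
    if symbol in VOCAB:                      # terminal: pick a word
        return [rng.choice(VOCAB[symbol])]
    r, acc = rng.random(), 0.0
    for prob, rhs in RULES[symbol]:          # sample a rule by weight
        acc += prob
        if r <= acc:
            break
    return [w for part in rhs for w in expand(part, rng)]

print(" ".join(expand("S", random.Random(7))))
```

Re-weighting the probabilities attached to each right-hand side, as described in the text-analyzer discussion below, would bias generation toward grammatical forms that showcase the student's new vocabulary.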
FIG. 3 shows a non-limiting example of a process for generating phrase rewrite rules 120 and vocabulary 122 using text analyzer 208.
At step 302, the text analyzer 208 may load one or more source text files into memory. The source text file may be a book, an article, or any other type of source text. In some embodiments, the source text file may be selected based on content or genre such that the grammar and words derived from the source text file for the phrase rewrite rules 120 and vocabulary 122 relate to a particular educational topic or objective. As shown in FIG. 13, the text analyzer 208 may also analyze the input text file using the vocabulary ratio C 1301 derived from the random partition function, and then automatically search other databases for similar texts of the same genre. The text analyzer 208 may use a Good-Turing frequency estimate of the repetition rate, or Gustav Herdan's formula for the Yule characteristic, which is dual to the Good-Turing repetition rate, to automatically rate texts found in the same genre for lexical richness. Texts rated with a lower repetition rate have a richer vocabulary to sample from. The blackboard 106 can train the neural network 119 for text generation using the texts collected by the text analyzer 208, the vocabulary ratio C 1301, and the random partitioning function.
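The repetition-rate rating described in step 302 can be illustrated with Yule's characteristic K, a standard lexical-richness statistic that is comparatively robust to sample size; whether the patent's Herdan formulation is exactly this statistic is an assumption of the sketch.

```python
from collections import Counter

def yule_k(tokens):
    """Yule's characteristic K, a repetition-rate measure:
    K = 1e4 * (sum_m m^2 * V_m - N) / N^2, where V_m is the number of
    word types occurring exactly m times and N is the total token count.
    Lower K indicates a richer (less repetitive) vocabulary."""
    counts = Counter(tokens)                  # type -> frequency
    n = sum(counts.values())                  # total tokens N
    spectrum = Counter(counts.values())       # m -> V_m
    s2 = sum(m * m * vm for m, vm in spectrum.items())
    return 1e4 * (s2 - n) / (n * n)

rich = "the quick brown fox jumps over a lazy dog near the river".split()
repetitive = "the cat and the dog and the cat and the dog".split()
print(yule_k(rich) < yule_k(repetitive))  # richer text scores lower
```

A text analyzer could rank candidate same-genre texts by this score and prefer the lowest-K texts as sampling sources for new vocabulary.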
At step 304, the text analyzer 208 may generate a vocabulary 400 from the source text file. The text analyzer 208 may generate a list of words found in the source text file and generate statistical calibration data from the source text using the Good-Turing frequency repetition rate, or Gustav Herdan's formula for the Yule characteristic, which is dual to the Good-Turing repetition rate, together with random partitioning, which accounts for the different sample sizes of the texts and their semantic content. The text analyzer 208 may use this information to generate the vocabulary 400. As shown in FIG. 4, a vocabulary 400 constructed from the random partitioning function may model the number of distinct words in a source text file as a function of the length of the text source file, with the logarithm of the number of words on a first axis and the logarithm of the text length on a second axis.
At step 306, the text analyzer 208 may subdivide the vocabulary 400 along the axis of the log of text length into a student's knowledge level line 404 and a target ratio line 406. Subdividing the vocabulary 400 along this axis associates the student's knowledge level line 404 with a vocabulary level, so that the student is presented with a mix of words they already know and new words from the random partition classes, included at a ratio determined by the target ratio line 406. The text analyzer 208 may estimate the student's overall knowledge level of the target language based on previous interactions with the language learning system 100 or by administering tests similar to standardized language tests such as the SAT. The estimate of the student's knowledge level may be used to establish a vertical knowledge level line 404 on the vocabulary 400 along the axis of the log of text length, as shown in FIG. 4. The text analyzer 208 may also establish a vertical target ratio line 406 on the graph to the right of the knowledge level line 404. The target ratio line 406 can then be used, via the Heaps-Herdan law of vocabulary growth, to select classes of new vocabulary items that are consistent with each other, using the ratios provided by the random partition function classes 411, 412, 413, and 414 at the target ratio line 406, and to present the new vocabulary to the learner in an optimal manner. The random partition ratios give the number of words to be drawn from each random partition class of the text that have not yet been included in the vocabulary 122 presented to the user. The ratios are determined from classes generated by a random partitioning function, where the number of classes may be any number from two to the total number of words in the text. As a non-limiting example, four classes labeled 411, 412, 413, and 414 illustrate these ratios for a text divided into four classes.
The words from the target line area 406 are then merged into the vocabulary 122.
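The vocabulary-growth reasoning of steps 304 to 306 can be sketched as follows. The Heaps-Herdan law relates text length N (in tokens) to vocabulary size V as V(N) = K * N^beta; the knowledge level line and target ratio line are positions along the log-N axis, and words whose first occurrence falls between them are candidates for the student's new vocabulary. The constants K and beta, the helper names, and the toy token stream are illustrative assumptions, not values from the patent.

```python
# Sketch of selecting new vocabulary between a student's knowledge
# level line and a target line on the Heaps-Herdan growth curve.
# K and beta below are typical illustrative values.

def heaps_vocab(n_tokens, k=10.0, beta=0.5):
    """Heaps-Herdan law: expected vocabulary size V(N) = K * N**beta."""
    return k * n_tokens ** beta

def new_word_candidates(token_stream, known_line, target_line):
    """Return words whose first occurrence falls between the student's
    knowledge level line and the target line (token positions)."""
    seen, candidates = set(), []
    for pos, word in enumerate(token_stream):
        if word not in seen:
            seen.add(word)
            if known_line <= pos < target_line:
                candidates.append(word)
    return candidates

tokens = "a b a c d b e f a g h".split()
print(heaps_vocab(10_000))               # expected vocabulary at 10k tokens
print(new_word_candidates(tokens, 3, 8)) # first-seen words in [3, 8)
```

The selected candidates would then be merged into the student's personalized vocabulary at the ratios dictated by the random partition classes.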
The text analyzer 208 may also use the target ratio line 406 to find the probabilities of example grammatical forms and use those probabilities and accompanying relevant data as inputs to generate or modify the probabilistic phrase rewrite rules 120, which are likewise adjusted to the student's knowledge level. As a non-limiting example, existing phrase rewrite rules 120 may be modified by re-weighting the probabilities of rules associated with grammatical forms based on the frequency of occurrence of new vocabulary terms in the target vocabulary area 402. In some embodiments, when an existing phrase rewrite rule 120 covers a grammatical form found at the target ratio line 406, the existing phrase rewrite rule 120 may be used, while when a new grammatical form is found at the target ratio line 406, a new phrase rewrite rule 120 may be generated.
At step 308, the text analyzer 208 may output the vocabulary 122 generated from the text source file and the new or modified phrase rewrite rules 120 to the blackboard 106 or as the knowledge source 114.
In alternative embodiments, phrase rewrite rules 120 and/or vocabulary 122 may be generated or modified in any other manner, such as manually creating a list of rules or words, or identifying the most frequently occurring words in one or more text source files.
Graphics processor 112 may use visual asset 210 to render images of an interactive story. Visual asset 210 may be a two-dimensional image or 3D model of a character, item, setting, background, or other story element. Visual asset 210 may also be an animation file, font, or any other asset that may be used by graphics processor 112 to generate an image.
The visual generation module 212 may use the current state of the interactive story modeled in the semantic network 116 and/or the neural network 119 and the corresponding visual assets 210 to render images of the interactive story with the graphics processor 112. In some embodiments, the visual generation module 212 may be part of the graphics processor 112, while in other embodiments, the visual generation module 212 may inform the individual graphics processor 112, via the blackboard 106, which images to render and/or which visual assets 210 to use.
The visual recognition module 214 may use visual data captured by the input component 102 to track the user's physical movements over time and to recognize gestures made by the user. As non-limiting examples, a camera may capture 2D or stereoscopic still images, infrared data, or video frames of the user, and the visual recognition module 214 may update a 3D model of the user over time based on the captured visual data to recognize the user's gestures. The visual recognition module 214 may also use the generated story images stored on the blackboard 106, which are also displayed via the output component 104, to correlate recognized gestures with the images the user is seeing. As a non-limiting example, the visual recognition module 214 may track the user's movements to identify when the user makes a pointing gesture, track the direction of the gesture to identify the point of interest at which the user is pointing, and examine the generated story image to determine which story objects are displayed at that point of interest, so that the visual recognition module 214 can identify the particular story object at which the user is pointing. In some embodiments, the visual recognition module 214 may be part of the graphics processor 112, while in other embodiments, the visual recognition module 214 may use gesture data recognized by the input component 102 and/or a separate graphics processor 112 and stored in the blackboard 106.
In some embodiments, when language recognition is performed using the language identification module 206, the visual recognition module 214 may additionally or alternatively analyze the user's physical environment for visual cues. In some embodiments, the visual recognition module 214 may identify objects in the user's vicinity, which may assist the language learning system 100 in interpreting the context of the user's statement or question.
In some embodiments, language learning system 100 may be self-contained within a unit such as a projector. As a non-limiting example, in some embodiments, the components of the language learning system 100 shown in fig. 1 may be housed within the body of a 3D projector. In other embodiments, some components of the language learning system 100 may be in separate devices or housings and connected to external displays and/or speakers.
Fig. 5A to 5C show logic tables for the logical operations used in the four-value logic system 118. The language learning system 100 may use the four-value logic system 118 to store and evaluate data within the blackboard 106, the semantic network 116, the neural network 119, and the phrase rewrite rules 120. In some embodiments, the four-value logic system 118 may be used to evaluate proposition attributes when modeling an interactive story. As a non-limiting example, the number of proposition attributes for realistic human character simulation may be in the range of hundreds of thousands to millions, although in some embodiments fewer or more may be used for interactive stories presented by the language learning system 100. The propositional four-value logic system 118 described herein can also be used as a theorem prover.
The four-value logic system 118 may be used to evaluate and operate on variables having one of four possible values: true (T), false (F), defined (D), and undefined (U). As a non-limiting example, the four-value logic system 118 may be used during conditional testing of a proposition attribute that designates a variable as true, false, defined, or undefined. A variable designated as having a defined value must be either true or false. When a conditional test of the proposition attribute is performed, a variable designated as having an undefined value may hold any of the four truth values. As a non-limiting example, an undefined variable may take any of the four truth values during a state transition implemented by the phrase rewrite rules 120, as discussed below.
FIG. 5A shows a logic table for the negation operation, also referred to as logical NOT (¬), in the four-value logic system 118. In the four-value logic system 118: ¬F evaluates to T; ¬T evaluates to F; ¬D evaluates to D; and ¬U evaluates to U.
Fig. 5B shows a logic table of the conjunction operation in the four-value logic system 118, also referred to as a logical AND (Λ) operation. In the four-value logic system 118: F Λ F evaluates to F; F Λ T evaluates to F; F Λ U evaluates to F; F Λ D evaluates to F; T Λ F evaluates to F; T Λ T evaluates to T; T Λ U evaluates to U; T Λ D evaluates to D; U Λ F evaluates to F; U Λ T evaluates to U; U Λ U evaluates to U; U Λ D evaluates to F; D Λ F evaluates to F; D Λ T evaluates to D; D Λ U evaluates to F; and D Λ D evaluates to D.
Fig. 5C shows a logic table of the disjunction operation in the four-value logic system 118, also referred to as a logical OR (∨) operation. In the four-value logic system 118: F ∨ F evaluates to F; F ∨ T evaluates to T; F ∨ U evaluates to U; F ∨ D evaluates to D; T ∨ F evaluates to T; T ∨ T evaluates to T; T ∨ U evaluates to T; T ∨ D evaluates to T; U ∨ F evaluates to U; U ∨ T evaluates to T; U ∨ U evaluates to U; U ∨ D evaluates to T; D ∨ F evaluates to D; D ∨ T evaluates to T; D ∨ U evaluates to T; and D ∨ D evaluates to D.
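The three truth tables above can be transcribed into lookup tables and cross-checked mechanically. The following Python sketch is illustrative only (the patent does not specify an implementation language); the table contents are taken from the descriptions of FIGS. 5A-5C above.

```python
# Illustrative lookup-table transcription of the four-valued logic
# tables of FIGS. 5A-5C.  The dictionary layout is an assumption.
T, F, D, U = "T", "F", "D", "U"

# FIG. 5A: negation swaps T/F and D/U.
NOT = {F: T, T: F, U: D, D: U}

# FIG. 5B: conjunction (logical AND).
AND = {
    (F, F): F, (F, T): F, (F, U): F, (F, D): F,
    (T, F): F, (T, T): T, (T, U): U, (T, D): D,
    (U, F): F, (U, T): U, (U, U): U, (U, D): F,
    (D, F): F, (D, T): D, (D, U): F, (D, D): D,
}

# FIG. 5C: disjunction (logical OR).
OR = {
    (F, F): F, (F, T): T, (F, U): U, (F, D): D,
    (T, F): T, (T, T): T, (T, U): T, (T, D): T,
    (U, F): U, (U, T): T, (U, U): U, (U, D): T,
    (D, F): D, (D, T): T, (D, U): T, (D, D): D,
}

# A De Morgan-style check: NOT(a AND b) == NOT(a) OR NOT(b) holds for
# all sixteen pairs, suggesting the tables are internally consistent.
for a in (T, F, D, U):
    for b in (T, F, D, U):
        assert NOT[AND[(a, b)]] == OR[(NOT[a], NOT[b])]

print(AND[(T, U)])   # U
```

Note that, as in the tables above, U ∨ D evaluates to T and U Λ D evaluates to F, consistent with D and U being complementary under negation.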
Fig. 6A and 6B illustrate an embodiment of a data triplet 600 that may be formed from data stored in the semantic network 116. Semantic network 116 may include objects 602 and relationships 604. As shown in fig. 6A-6B, the triplet 600 may include two elements or three elements.
The object 602 may be a node in the semantic network 116. An object 602 may represent an entity such as a story character or item, a basic element such as the number zero, a class data structure, or any other type of data. In some embodiments, an object 602 may point to another object 602 or triplet 600 in the semantic network 116.
The relationship 604 may represent an attribute of an object 602, a function applicable to other objects 602 and/or relationships 604, or a relationship between two objects 602. As a non-limiting example, the relationship 604 may be a function that operates on truth values associated with an object 602 or another relationship 604, such as the logical operators of the four-valued logic system 118 described above with respect to FIGS. 5A-5C. Although in some embodiments each relationship 604 may represent a basic primitive function, more complex functions may be constructed by linking smaller functions together.
Each object 602 or relationship 604 in the semantic network 116 and/or the neural network 119 may be associated with a plurality of truth values, such as truth values indicating attributes of the object and/or whether relationships with other objects 602 or functions represented by relationships 604 are applicable to the object 602. The truth values may be the true, false, defined, and undefined values used in the four-value logic system 118.
FIG. 7 illustrates an example of encoding multiple truth values associated with a particular object 602 or relationship 604 using a single memory array 700. A single memory array 700 may be defined for each object 602 and/or relationship 604 and/or data triplet 600, where each memory array 700 has a plurality of index locations, each index location being two bits in size. The memory structure 700 may be an array, vector, list, or any other similar data structure.
Two bits in the memory array 700 may be used to encode a particular truth value associated with an object 602, relationship 604, or triplet 600. Since each bit may be 0 or 1, four possible values corresponding to the four possible truth values used in the four-value logic system 118 may be encoded at a particular index position. As a non-limiting example, in some embodiments, a "0" in the first bit position and a "0" in the second bit position of the memory array 700 may indicate a truth value of T, a "1" in the first bit position and a "1" in the second bit position may indicate a truth value of F, a "0" in the first bit position and a "1" in the second bit position may indicate a truth value of D, and a "1" in the first bit position and a "0" in the second bit position may indicate a truth value of U. Multiple bit positions in the memory array 700 may be combined to form scalar variables or bits for floating point calculations. In some embodiments, the size of the memory array 700 may be limited by word size or memory limitations in the computer architecture, which may introduce a chunking factor in the theoretical runtime computation of the system.
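The two-bit encoding described above can be sketched as bit manipulation over a byte array. This is a hypothetical illustration: the `set_truth`/`get_truth` helpers and the packing of four truth values per byte are assumptions, while the bit-pair-to-truth-value mapping follows the non-limiting example in the text.

```python
# Sketch of the two-bit truth-value encoding described for memory
# array 700.  Bit pairs follow the example mapping in the text:
# 00 -> T, 11 -> F, 01 -> D, 10 -> U (first bit is the high bit).
ENCODE = {"T": 0b00, "F": 0b11, "D": 0b01, "U": 0b10}
DECODE = {v: k for k, v in ENCODE.items()}

def set_truth(array: bytearray, index: int, value: str) -> None:
    """Store a truth value at a 2-bit index position (4 per byte)."""
    byte, pos = divmod(index, 4)
    shift = pos * 2
    array[byte] = (array[byte] & ~(0b11 << shift)) | (ENCODE[value] << shift)

def get_truth(array: bytearray, index: int) -> str:
    """Read the truth value stored at a 2-bit index position."""
    byte, pos = divmod(index, 4)
    return DECODE[(array[byte] >> (pos * 2)) & 0b11]

arr = bytearray(2)          # room for 8 truth values
set_truth(arr, 5, "D")
print(get_truth(arr, 5))    # D
print(get_truth(arr, 0))    # T (all-zero bits decode to true)
```

Packing four two-bit values per byte is one way the word-size "chunking factor" mentioned above could arise in practice.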
Returning to FIG. 6A, in some embodiments, a triple 600 may include two objects 602 and a relationship 604. Thus, in some cases, the triplet 600 may represent a particular relationship between the target object 602 and another object 602 in the semantic network 116 and/or the neural network 119 based on the relationship 604.
As a non-limiting example, when a first object 602 of a triplet represents a story character named "Bob", a second object 602 of the triplet represents a dog character in the story, and the relationship 604 of the triplet indicates that the first object 602 "likes" the second object 602, the triplet 600 will indicate that Bob likes the dog. Another triplet 600 in the semantic network 116 may invert the two objects 602, with the same or a different relationship 604, such that the triplet 600 may indicate a relationship from the perspective of the dog. For example, while one triplet 600 may indicate that Bob likes the dog, another triplet 600 may indicate that the dog dislikes Bob.
In other embodiments, the triplet 600 may include one object 602 and two relationships 604. These types of triplets 600 may be considered secondary triplets 600 in natural language processing, while triplets 600 having two objects 602 and one relationship 604 may be considered primary triplets 600. As a non-limiting example, a primary triplet may represent "I saw man", where the objects 602 represent "I" and "man" and the relationship 604 represents "saw", while a secondary triplet linked to the "saw" relationship 604 of the primary triplet may represent "saw with telescope", where the relationships 604 represent "saw" and "with" and the object 602 represents "telescope". In some embodiments, a relationship 604 representing a word such as "with" may be modeled using a partial recursive function, and thus in some embodiments the secondary triplets 600 may be limited to partial recursive functions.
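A minimal sketch of primary and secondary triplets, using the "I saw man" / "saw with telescope" example above. The `Obj`, `Rel`, and `Triplet` class names are hypothetical stand-ins for objects 602, relationships 604, and triplets 600.

```python
# Illustrative data structures for primary (object-relationship-object)
# and secondary (relationship-relationship-object) triplets.
from dataclasses import dataclass
from typing import Union, Optional

@dataclass
class Obj:
    """An object 602: a node in the semantic network."""
    name: str

@dataclass
class Rel:
    """A relationship 604: an attribute, function, or link."""
    name: str

@dataclass
class Triplet:
    """A data triplet 600 with two or three elements."""
    first: Union[Obj, Rel]
    second: Union[Obj, Rel]
    third: Optional[Union[Obj, Rel]] = None

# Primary triplet: "I saw man" -- two objects joined by one relationship.
saw = Triplet(Obj("I"), Rel("saw"), Obj("man"))

# Secondary triplet linked to the "saw" relationship: "saw with
# telescope" -- one object and two relationships.
with_scope = Triplet(Rel("saw"), Rel("with"), Obj("telescope"))
```

The secondary triplet shares the `Rel("saw")` element with the primary one, mirroring how a secondary triplet 600 attaches to a relationship 604 of a primary triplet 600.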
Returning to FIG. 6B, a two-element triplet 600 may include an object 602 and a relationship 604. Thus, in some cases, the relationship 604 of a two-element triplet may identify a particular function that may be applied to the object 602 of the triplet.
In some embodiments, the values associated with an object 602 or relationship 604 may be encoded using the true, false, defined, or undefined truth values described above with respect to the four-value logic system 118. As a non-limiting example, a truth value of T for a "likes" relationship 604 may indicate that the target object 602 romantically likes the other object 602 in the triplet 600; a truth value of F for the "likes" relationship 604 may indicate that the target object 602 romantically dislikes the other object 602 in the triplet 600; a truth value of D for the "likes" relationship 604 may indicate that the target object 602 has romantic feelings toward the other object 602 in the triplet 600 and may either romantically like or romantically dislike it; and a truth value of U for the "likes" relationship 604 may indicate that romantic liking is unimportant to the relationship.
As described above, the relationship 604 may represent a function that may operate on one or more objects 602. The relationship of a three-element triplet 600 may be a primitive recursive operation or a general recursive operation, as it may take two objects 602 as operands. However, the relationship of a two-element triplet 600 may be limited to primitive recursive operations taking one object 602 as an operand. As non-limiting examples, the relationship 604 in a two-element triplet 600 may be a successor function that increments a value by one, or a phrase rewrite rule 120 that takes a single object 602 as an operand and checks whether it is correctly quantified.
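As a toy illustration of a two-element triplet whose relationship is a primitive recursive function applied to its single object operand, consider the successor function mentioned above (the tuple representation and names are assumptions):

```python
# A two-element triplet modeled as (relationship, object), where the
# relationship is the successor function -- a primitive recursive
# operation taking one operand.
def successor(n: int) -> int:
    """Increment a value by one."""
    return n + 1

two_element_triplet = (successor, 3)   # (relationship 604, object 602)
rel, obj = two_element_triplet
print(rel(obj))    # 4
```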
In some embodiments, the data structure of the triplets 600 may represent a directed acyclic graph of a predicate calculus. The predicate calculus represented by the triplets 600 may use the propositional calculus of the four-valued logic system 118 described above. The predicate calculus can implement both classical and intuitionistic inference systems, with both inference systems utilized to compare and contrast mathematical theorems in different branches of logic.
Thus, the four-value logic system 118 may be used to test and evaluate propositions involving objects 602 and/or relationships 604. As a non-limiting example, a phrase rewrite rule 120 and a specific input argument may both be evaluated using the four-value logic system 118 to determine whether the phrase rewrite rule 120 should be applied based on the input argument.
Fig. 8 shows a model of a phrase rewrite rule 120. A phrase rewrite rule 120 may have a left-hand side (LHS) and a right-hand side (RHS). The left-hand side may accept an argument, such as a 2-element triplet 600, a 3-element triplet 600, an object 602, or a relationship 604. The left-hand side may have one cell accepting the argument, and the right-hand side may have one or more cells. When the rule is applied, the right-hand cells may replace the single left-hand cell. The replacement right-hand cells may inherit elements of the input argument as their own arguments, such that each cell on the right-hand side may be evaluated based on its argument using a different corresponding phrase rewrite rule 120. In some phrase rewrite rules 120, a replacement right-hand cell may inherit elements of the left-hand argument for vertical transmission of feature inheritance in the grammar. As a non-limiting example, in the example phrase rewrite rules 120 shown in fig. 10 and 11, a right-hand cell prefixed with an asterisk may indicate a cell that inherits an element of the input argument.
In some embodiments, the left-hand side of a phrase rewrite rule 120 may be stored as a relationship 604 in the semantic network 116, while the argument to be evaluated by the phrase rewrite rule 120 may be represented by an object 602 in the semantic network 116 (such as a separate object 602 or an object 602 that points to a particular 2-element or 3-element triplet 600). Thus, a 2-element triplet 600 may express a potential application of a phrase rewrite rule 120, where the relationship 604 of the triplet indicates the phrase rewrite rule 120 and the object 602 of the triplet indicates the argument to be evaluated by the phrase rewrite rule 120. The four-value logic system 118 may accordingly be used to evaluate the potential application of the phrase rewrite rule 120 to determine whether the phrase rewrite rule 120 should in fact be applied. By implementing the phrase rewrite rules 120 with the four-value logic system 118 in the semantic network 116, where the truth values are stored in parallel memory structures 700, the phrase rewrite rules 120 constructively model the duality for computing by analogy.
Fig. 9 illustrates a method of evaluating the left-hand side of a phrase rewrite rule 120 based on a particular argument to determine whether the phrase rewrite rule 120 should be applied. The evaluation of a rule according to the process of FIG. 9 may be performed in constant time O(C), or simply in O(1) time if the chunking factor imposed by the word size of the computer is ignored. In some embodiments, the process of FIG. 9 may be performed by determining whether the bit values in the single memory array 700 associated with the argument in the semantic network 116 are correctly quantified.
At step 900, the argument may be passed to the left hand side of phrase rewrite rules 120. An argument may be a 2-element triplet 600, a 3-element triplet 600, or a separate object or relationship. Some phrase rewrite rules 120 may anticipate a particular type of argument.
At step 902, language learning system 100 may evaluate the argument to determine whether all content expected to be set to true by the left hand side is set to true in a single memory array 700 of the argument. If so, the language learning system 100 may move to step 904. If not, the language learning system 100 may move to step 910 and determine that the phrase rewrite rules 120 are not to be applied.
At step 904, language learning system 100 may evaluate the argument to determine if all content expected to be set to false by the left hand side is set to false in the single memory array 700 of the argument. If so, the language learning system 100 may move to step 906. If not, the language learning system 100 may move to step 910 and determine that the phrase rewrite rules 120 are not to be applied.
At step 906, the language learning system 100 may evaluate whether the horizontal features, scalar values, and/or probability information, as well as other types of information, expected by the left-hand side are correctly quantified in the propositional calculus in the single memory array 700 of the argument. Such information may be encoded in the four-value logic system 118 as defined truth values. If the expected features are properly encoded, the language learning system 100 may move to step 908 and apply the phrase rewrite rule 120. If they are not, the language learning system 100 may move to step 910 and determine that the phrase rewrite rule 120 is not to be applied.
At step 908, if the truth values of the argument match the criteria expected by the left-hand side, the language learning system 100 may apply the phrase rewrite rule 120 by replacing the single cell on the left-hand side with the one or more cells on the right-hand side. The right-hand cells may inherit some or all of the elements of the argument originally passed to the left-hand side. As a non-limiting example, when the left-hand side accepts a 2-element triplet 600 and the right-hand side has two cells, the argument triplet 600 may be decomposed such that its object 602 may be used as the argument for the first cell on the right-hand side and its relationship 604 may be used as the argument for the second cell on the right-hand side. As another non-limiting example, when the left-hand side accepts a 3-element triplet 600 and the right-hand side has two cells, the argument triplet 600 may be decomposed such that the first object 602 of the argument may be used as the argument for the first cell on the right-hand side while the relationship 604 and the second object 602 of the argument may be used as the argument for the second cell on the right-hand side. As will be described in more detail below, in some phrase rewrite rules, features of the argument inherited by a designated cell on the right-hand side may also be distributed among the other arguments on the right-hand side.
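The left-hand-side test of FIG. 9 (steps 902-906) can be sketched as a comparison between the truth values a rule expects and the truth values an argument carries. This is an illustrative reconstruction: the dictionary representation and the `np_rule` pattern are hypothetical, but the three checks mirror the true/false/defined tests described above.

```python
# Sketch of the FIG. 9 left-hand-side test: a rule applies only when
# the argument's truth values match the pattern the rule expects.
def lhs_matches(expected: dict, argument: dict) -> bool:
    """Check an argument's truth values against a rule's expectations.

    expected maps attribute names to "T", "F", or "D"; "D" means the
    attribute must be defined, i.e. resolved to either T or F.
    Attributes absent from the argument are treated as undefined ("U").
    """
    for attr, want in expected.items():
        have = argument.get(attr, "U")
        if want in ("T", "F") and have != want:   # steps 902 and 904
            return False
        if want == "D" and have not in ("T", "F"):  # step 906
            return False
    return True

# Hypothetical noun-phrase rule: the argument must be a noun, must not
# be a verb, and its number (singular/plural) must be defined.
np_rule = {"is_noun": "T", "is_verb": "F", "is_plural": "D"}

print(lhs_matches(np_rule, {"is_noun": "T", "is_verb": "F", "is_plural": "F"}))  # True
print(lhs_matches(np_rule, {"is_noun": "T", "is_verb": "F", "is_plural": "U"}))  # False
```

Because each check is a fixed number of bitwise comparisons over the argument's memory array, an implementation along these lines could run in the constant time described above.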
In some embodiments, the blackboard 106 can have a list of phrase rewrite rules 120. The language learning system 100 may evaluate the proposition starting with one phrase rewrite rule 120 at the top of the list and then move to other phrase rewrite rules 120 as appropriate based on whether the earlier phrase rewrite rule is applied and/or whether the left-hand cell of the earlier applied rule is decomposed into other cells on the right-hand side.
Fig. 10-11 illustrate exemplary embodiments of a list of phrase rewrite rules 120 that may be used to generate a target language sentence based on an input argument triplet 600. This list is merely exemplary, as some embodiments of language learning system 100 may use more phrase rewrite rules 120 and/or different phrase rewrite rules 120. Fig. 10 shows that the left-hand side of each phrase rewrite rule 120 is replaced with the right-hand side, while fig. 11 shows horizontal feature inheritance within the right-hand side of each phrase rewrite rule 120.
Phrase rewrite rules 120 may be used to generate sentences that can be expressed to students during an interactive story, such as sentences that describe the current state of objects 602, relationships 604, or triplets 600 in the semantic network 116 and/or neural network 119 when the interactive story is modeled. In some embodiments, the words used to generate the sentences come from a neural network trained on text selected by the text analyzer 208, after having searched other databases and selected the text for training, using the vocabulary ratio C1301 from the random partitioning function as a test of similarity in the semantic ensemble.
Fig. 10 shows each phrase rewrite rule in terms of syntax units. A syntax unit may represent a start rule or a specific syntax type. As a non-limiting example, "S" may indicate a start rule, "N" may indicate a noun or noun phrase, "V" may indicate a verb or verb phrase, "Prep" may indicate a preposition or prepositional phrase, and "Det" may indicate a determiner. As a non-limiting example, rule 1 shown in fig. 10 is the start rule that produces noun and verb units on the right-hand side. The noun and verb units on the right-hand side each inherit a specified portion of the triplet 600 passed as the argument to the left-hand side of the start rule. In fig. 10, the portion of the right-hand side of each rule following the "/" symbol indicates such inheritance, and lines connect the right-hand units to the types of elements they inherit from the argument. As a non-limiting example, in rule 1 of FIG. 10, the noun unit on the right-hand side may inherit an object 602 ("O") from the input argument, while the verb unit inherits a relationship 604 ("R") from the input argument. Other rules (such as rule 2) having a noun as their left-hand unit may then be used to evaluate the noun unit and its argument. Similarly, other rules (such as rule 4) having a verb as their left-hand unit may be used to evaluate the verb unit and its argument.
In some embodiments, a right-hand cell of a phrase rewrite rule 120 may indicate that the features of the argument inherited by that cell are to be distributed among the other cells on the right-hand side. As a non-limiting example, an asterisk preceding a syntax unit in fig. 10 indicates the vertical inheritance of features from the left-hand side that will be distributed among the other arguments on the right-hand side. A feature may be an attribute of the input argument, such as an indication that the input object 602: is singular or plural; is living; is human; should be expressed with or without qualifiers or markers; or any other attribute. Thus, when the features of the input argument triplet 600 are inherited and distributed among the arguments of the right-hand units of a rule, words expressing these grammar units may be selected from the vocabulary 122 so that they agree with each other with respect to the distributed features. As a non-limiting example, when the initial argument includes a plural object 602, plurality may be maintained throughout the process via the inherited and distributed features, such that the final words used to express the state of the object 602 in the generated sentence consistently indicate that the object 602 is plural.
Fig. 11 illustrates an example of horizontal feature distribution among the right-hand cells of various exemplary phrase rewrite rules 120. As a non-limiting example, as shown in rule 1, when the input argument that initiates the "S" rule is a 2-element triplet 600 in which the object 602 is plural, the object 602 may be inherited by the "NP" unit on the right-hand side of the rule. "VP" may inherit the relationship 604 of the argument, as shown in rule 1 in FIG. 10. However, because the "NP" unit is preceded by an asterisk, the features of the argument inherited by the "NP" unit may also be distributed to the argument of the "VP" unit (as shown in FIG. 11), thereby marking the relationship 604 of the "VP" unit with the plural feature. Thus, the language learning system 100 may consider this feature when selecting a word to express the relationship 604, such that the word selected for the verb agrees with the plural feature of the associated object. As such, distributing features after inheritance may allow long-range dependencies in a phrase structure grammar. The presence of these features can be ensured by testing defined truth values when determining whether to apply a phrase rewrite rule.
In some embodiments, a subscript or another symbol associated with each syntax element in phrase rewrite rule 120 may indicate a priority level. The language learning system 100 may first attempt to apply the higher priority phrase rewrite rules 120. If phrase rewrite rules 120 are found not to apply to its input arguments, language learning system 100 may move to a lower priority phrase rewrite rule 120 to, in turn, determine whether the phrase rewrite rule applies to the argument.
In some embodiments, the language learning system 100 may use a stack data structure in the blackboard 106 when evaluating phrase rewrite rules 120 to generate sentences. As a non-limiting example, the blackboard 106 can first push an "S" unit to the stack when generating a sentence for the input triplet 600. When an "S" rule is found to apply to the input triplet 600, the "S" may be popped from the stack and the syntax units on its right-hand side may be pushed to the stack, such as the "NP" and "VP" units from rule 1 shown in FIG. 10. "NP" may be popped from the stack and similarly evaluated using a rule with "NP" on its left-hand side, with the replacement units from the applicable rule's right-hand side pushed to the stack. When no additional phrase rewrite rule 120 is applicable to a popped unit, a word matching the grammar type of that unit may be selected from the vocabulary 122 and added to the output sentence. Features that have been inherited by the grammar unit (such as singular, plural, living, human, or other features) can be considered to select an appropriate word having the inherited features. The language learning system 100 may then move to the next unit on the stack. When the stack is empty, the generated sentence is complete. The text of the generated sentence may be visually displayed to the user via the output component 104 and/or corresponding audio may be generated using the audio generation module 204 and the audio assets 202 for playback via the output component 104.
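The stack-driven generation loop described above can be sketched as follows. The miniature grammar and vocabulary are hypothetical stand-ins for the phrase rewrite rules 120 and vocabulary 122; feature inheritance is omitted for brevity.

```python
# Illustrative stack-based sentence generation: rewrite rules expand
# syntax units until only word-level units remain.
RULES = {                     # left-hand unit -> right-hand units
    "S":  ["NP", "VP"],
    "NP": ["Det", "N"],
    "VP": ["V", "NP"],
}
VOCAB = {"Det": "the", "N": "dog", "V": "sees"}   # hypothetical vocabulary

def generate(start: str = "S") -> str:
    stack, words = [start], []
    while stack:
        unit = stack.pop()
        if unit in RULES:
            # Push the right-hand units in reverse so they pop
            # left-to-right, mirroring replacement of the left-hand cell.
            stack.extend(reversed(RULES[unit]))
        else:
            # No rewrite rule applies: emit a word of this grammar type.
            words.append(VOCAB[unit])
    return " ".join(words)

print(generate())   # the dog sees the dog
```

When the stack empties, the sentence is complete, matching the termination condition described above.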
Fig. 12 illustrates a process for modeling the state of an interactive story with language learning system 100. As described above, the blackboard 106, which is comprised of the semantic network 116 and/or the neural network 119, can model the state of the interactive story and, as the story progresses, can present audio and/or visual content to the user that represents the current state of the story via the output component 104.
At step 1202, the language learning system 100 may be initialized with the data in the blackboard 106 and/or the semantic network 116 and/or the neural network 119 and/or the phrase rewrite rules 120 using the four-value logic system 118. Semantic network 116 and/or neural network 119 may also be initialized with objects 602 and relationships 604 representing story roles, items, settings, relationships, and/or other story elements.
In some embodiments, the objects 602 and/or relationships 604 of the story element may be initialized according to a preset initial state or according to one of a plurality of possible preset initial states. In other embodiments, at least some aspects of the objects 602 and/or relationships 604 may be randomized or dynamically selected. As a non-limiting example, in some embodiments, names of characters, items, and other story elements may be randomly selected from a vocabulary 122 (shown in FIG. 3) and/or a preset list generated by text analyzer 208 from a text source.
The blackboard 106 can also be initialized with probability rules as follows: assuming a known state in the semantic network 116, a probability is defined that the state of one or more objects and/or relationships 604 changes or remains unchanged. As a non-limiting example, a rule may be defined in semantic network 116 that indicates that a particular character in a story simulation will pick up an item in the simulation 50% of the time when the character is considered to be close to the item and has not yet held the item in the simulation.
At step 1204, the language learning system 100 may begin the story simulation. Starting from the initial story state, the language learning system 100 may evaluate the set of probability rules to change the state of the story. As a non-limiting example, when the probability rule described above is evaluated and the state of the semantic network 116 indicates that it is true that the position of the character in the simulation is near the position of the item and that the character is not already holding the item, the rule may be applied such that, 50% of the time, the relationship 604 between the character object 602 and the item object 602 in a 3-element triplet 600 changes to indicate that the character is now holding the item.
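The probability-rule step described above can be sketched as follows, using the 50% pick-up rule from the non-limiting example; the dictionary-based story state and function names are assumptions for illustration.

```python
# Illustrative sketch of one probability rule in the story simulation:
# when a character is near an item and not yet holding it, the item is
# picked up 50% of the time.  Truth values use the T/F convention of
# the four-value logic system.
import random

def step_pickup(state: dict, rng: random.Random) -> dict:
    """Apply the hypothetical 50% pick-up rule for one simulation step."""
    near = state["character_near_item"] == "T"
    holding = state["character_holds_item"] == "T"
    if near and not holding and rng.random() < 0.5:
        # Update the relationship: the character now holds the item.
        state = dict(state, character_holds_item="T")
    return state

state = {"character_near_item": "T", "character_holds_item": "F"}
state = step_pickup(state, random.Random())   # holds the item ~50% of the time
```

A full simulation would evaluate many such rules against the semantic network on each step; this sketch shows only the conditional-probability mechanism.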
When the semantic network 116 is initialized and updated in the blackboard 106, the knowledge sources 114 can access the state of the objects 602 and relationships 604 such that output audio and/or visual content can be generated and presented to the user via the output component 104. As a non-limiting example, the visual generation module 212 may instruct the graphics processor 112 to render images showing characters, items, or other story elements in the current state modeled by the semantic network 116 using the appropriate visual assets 210, and these images may be displayed via the 3D projector or other output component 104. Similarly, the audio generation module 204 may instruct the audio processor 110 to use the appropriate audio assets 202 to generate audio that may be played back via speakers. As a non-limiting example, when story characters interact with each other or with items in the story simulation, dialogue or narration expressing the state of the story may be generated according to the phrase rewrite rules 120 described above, such that the user reads and/or hears words in the target language corresponding to the story being performed.
At step 1206, the language learning system 100 can check the blackboard 106 to determine if any user input has been received via the input component and added to the blackboard 106. If no user input is received, language learning system 100 may move to step 1208 and continue modeling the story according to the probability rules described above, and then return to step 1206 to check for new user input. If user input is detected at step 1206, the language learning system 100 may move to step 1210.
At step 1210, the language learning system 100 may interpret the new user input added to the blackboard 106. When the new user input is a voice recording of a user's question or statement, the control module 108 may activate the language recognition module 206 to interpret the recorded voice. The control module 108 may similarly activate the visual recognition module 214 to recognize the location on the generated visual content at which the user points in conjunction with the recorded statement or question.
At step 1212, the language learning system 100 may determine whether a response should be presented to the user in response to the new user input. When a response is not required, the language learning system 100 may move directly to step 1208 to continue modeling the story simulation. As a non-limiting example, when the user input is a statement unrelated to the story, the user input may be ignored and the language learning system 100 may continue to model the story simulation without responding. However, when the language learning system 100 determines that the user's input is a question or statement to which it may respond, the language learning system 100 may move to step 1214 to formulate and present a response. In some embodiments, when the language learning system 100 cannot determine what the user's question or statement refers to, the language learning system 100 may generate a question using the phrase rewrite rules 120 to ask the user for more information.
At step 1214, when the user input is a question, the language learning system 100 may attempt to respond to the user's question. As a non-limiting example, when the visual content depicting the current state of the story shows a character smelling a flower and the input component captures the user asking "what is that?" while pointing to the flower, the language learning system 100 can recognize the question and identify that the user is pointing to a portion of the generated visual content that represents the flower object 602 in the semantic network. The language learning system 100 may then use the phrase rewrite rules to generate a sentence indicating that the item is a flower, and play back corresponding audio that says "that is a flower". Because objects 602 in the semantic network can be dynamically generated and/or named based on the vocabulary 122 from the text source, users can learn new words in the target language from the vocabulary 122 by interacting with the story and asking questions about what they see and hear. In some embodiments, the language learning system 100 may pause the story simulation and story presentation while formulating and presenting a response. After presenting the response, the language learning system 100 may move to step 1208 to continue modeling the story simulation.
When the user input is a statement regarding the current state of the story at step 1214, in some embodiments, the language learning system 100 may record the statement but proceed to step 1208 to continue modeling the story simulation without directly responding to the user. As a non-limiting example, when a user points to a story item in the generated visual content and says "that is beautiful," the language learning system 100 may record the user's preference for the item in the associated object 602 in the semantic network 116. Such preferences may be used in conjunction with the probability rules such that preferred objects 602 are more likely to reappear during the interactive story or to receive favorable treatment relative to other objects 602 in the simulation.
While the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, the invention as described and claimed hereinafter is intended to embrace all such alternatives, modifications and variations as fall within the spirit and broad scope of the appended claims.
Reference to the literature
The following listed references are incorporated herein by reference:
Good, I.J., "Studies in the History of Probability and Statistics. XXXVII: A.M. Turing's statistical work in World War II", Biometrika, Vol. 66, No. 2, 1979, pp. 393-396.

Good, I.J., "Statistics of Language", Encyclopaedia of Linguistics, Information and Control, Meetham, A.R. and R.A. Hudson, eds., New York: Pergamon Press, 1969.

Herdan, G., "The Advanced Theory of Language as Choice and Chance", New York: Springer-Verlag, 1966.

Herdan, G., "Quantitative Linguistics", Belfast: Butterworth & Co., 1964.

Kleene, S., "Introduction to Metamathematics", New York: American Elsevier Publishing Company, 1974.

Kleene, S., "Mathematical Logic", New York: Dover Publications, 2002.

Klein, S., "Automatic Paraphrasing in Essay Format", Mechanical Translation and Computational Linguistics, 8:68-83.

Klein, S., "FORWARD: The History of MESSY", in the Meta-symbolic Simulation System User Manual, M.A. Applebaum, UWCS Technical Report #272, p. 169.

Klein, S., Aeschliman, Applebaum, Balsiger, Curtis, Foster, Kalish, Kamin, Lee and Price, "Simulation d'hypothèses émises par Propp et Lévi-Strauss en utilisant un système de simulation méta-symbolique" ["Simulation of hypotheses proposed by Propp and Lévi-Strauss using a meta-symbolic simulation system"], Informatique et Sciences Humaines, No. 28, pp. 63-133, Mars.

Klein, S., "The Analogical Foundations of Creativity in Language, Culture & the Arts: The Upper Paleolithic to 2100 CE", in Language, Vision & Music, McKevitt, Mulvihill and Ó Nualláin, eds., Amsterdam: John Benjamins, pp. 347-371.

Steedman, M., "Categorial Grammar", University of Pennsylvania Computer and Information Science Technical Report MS-CIS-92-52, 1992.

Yngve, V., "From Grammar to Science: New Foundations for General Linguistics", Amsterdam: John Benjamins, 1996.

Claims (16)

1. A language teaching processing method, comprising:
modeling semantic and neural networks for interactive imagery for dynamic, interactive story generation using triplets having one object and one relationship, two objects and one relationship, or one object and two relationships, wherein objects, relationships, and triplets associated with truth values are expressed using a four-valued logic system that encodes truth values, false values, defined values, and undefined values using a single memory array;
defining a set of phrase rewrite rules that accept the input triples as arguments;
applying a specific phrase rewrite rule to a specific argument if:
an element expected to be true by the phrase rewrite rule is set to true in the argument,
an element expected to be false by the phrase rewrite rule is set to false in the argument, and
an element expected to be quantified by the phrase rewrite rule is set to defined in the argument;
when the phrase rewrite rule is applied, replacing the left-hand-side grammar unit of the specific phrase rewrite rule with the right-hand-side grammar units, passing elements of its argument down to the right-hand-side grammar units, and assigning features of the passed-down argument from designated grammar units to other right-hand-side grammar units;
generating a sentence of the target language by selecting words from the vocabulary to express a grammar unit generated by applying a phrase rewrite rule or using a neural network; and
the sentence is audibly and/or visually presented to the user.
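The four-valued gate that decides whether a phrase rewrite rule fires can be sketched as below. The rule structure and element names are illustrative assumptions, not taken from the patent.

```python
# A minimal sketch of the four-valued check in claim 1: a phrase rewrite
# rule is applied to an argument only when every element it expects to be
# true, false, or quantified carries the matching truth value.
TRUE, FALSE, DEFINED, UNDEFINED = "T", "F", "D", "U"

def rule_applies(rule, argument):
    """argument maps element name -> one of the four truth values."""
    return (all(argument.get(e) == TRUE for e in rule["expect_true"])
            and all(argument.get(e) == FALSE for e in rule["expect_false"])
            and all(argument.get(e) == DEFINED for e in rule["expect_defined"]))

# A rule that verbalizes "the dragon flies" only when the flying relation
# is true and the dragon object is quantified (defined):
rule = {"expect_true": ["flies(dragon)"],
        "expect_false": [],
        "expect_defined": ["dragon"]}

argument = {"dragon": DEFINED, "flies(dragon)": TRUE}
```

With the argument shown, every expectation is met, so the rule would be applied; if `dragon` were undefined, the rule would be skipped.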
2. The language teaching processing method of claim 1, further comprising displaying visual content representing the status of the interactive story via a 3D projector.
3. The language teaching processing method of claim 1, further comprising updating the state of the objects and/or relationships in the semantic and neural networks over time by probabilistic rules and user interactions.
4. The language teaching processing method of claim 1, wherein a single memory array is used to encode the truth value associated with each object and/or relationship, wherein the value of two bit positions in the single memory array indicates an encoded true value, an encoded false value, an encoded defined value, or an encoded undefined value.
5. The language teaching processing method of claim 4, wherein conditional tests and quantification checks of the truth values and variables encoded in the memory arrays associated with the elements of the argument determine whether the phrase rewrite rule is to be applied in constant time O(C), or simply in time O(1) if computer word size is ignored.
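The two-bits-per-element encoding of claims 4-5 can be illustrated with a small sketch. The particular bit patterns chosen here are an assumption; the claims only require that two bit positions distinguish the four values.

```python
# Hedged sketch of a "single memory array" holding four-valued states:
# one integer serves as the array, with bits 2i and 2i+1 holding the
# truth value of element i. Reads and writes are a shift and a mask,
# i.e. constant time per element.
UNDEFINED, FALSE, TRUE, DEFINED = 0b00, 0b01, 0b10, 0b11

def set_value(array, index, value):
    array &= ~(0b11 << (2 * index))       # clear the two bits for this element
    return array | (value << (2 * index))

def get_value(array, index):
    return (array >> (2 * index)) & 0b11  # O(1): shift and mask

array = 0
array = set_value(array, 0, TRUE)
array = set_value(array, 3, DEFINED)
```

Elements never written remain `UNDEFINED` (0b00), which matches the role of the undefined value as the default state.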
6. The language teaching processing method according to claim 1, further comprising:
generating a vocabulary for user-guided text selection based on one or more source texts, using the Good-Turing repetition rate, the vocabulary ratio C, and random partitioning;
subdividing the vocabulary into target ratio lines;
adding new words to the vocabulary based on words in the target ratio line; and
the set of phrase rewrite rules is defined or modified based on the grammatical form and the probabilities of the modified vocabulary.
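The Good-Turing machinery that claim 6's repetition rate builds on can be sketched with the classic adjusted-count estimator. How the patent combines it with the vocabulary ratio C and random partitioning is not detailed here, so this only shows the estimator itself.

```python
from collections import Counter

# Good-Turing adjusted count: r* = (r + 1) * N_{r+1} / N_r, where N_r is
# the number of word types occurring exactly r times in the source text.
def good_turing_adjusted_counts(tokens):
    freqs = Counter(tokens)            # word -> raw count r
    n_r = Counter(freqs.values())      # r -> number of types with count r
    adjusted = {}
    for word, r in freqs.items():
        if n_r.get(r + 1):
            adjusted[word] = (r + 1) * n_r[r + 1] / n_r[r]
        else:
            adjusted[word] = float(r)  # no N_{r+1} observed: keep raw count
    return adjusted

tokens = "the cat sat on the mat the cat".split()
adjusted = good_turing_adjusted_counts(tokens)
```

The adjusted counts discount the observed frequencies of rare words, reserving probability mass for unseen vocabulary, which is what makes the estimator useful for deciding which new words a text is likely to introduce.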
7. The language teaching processing method according to claim 1, further comprising:
receiving a voice recording of a user captured by a microphone;
identifying the spoken word in the recorded user input according to a Markov model or neural network;
determining a state of the interactive story related to the recognized spoken word;
generating a response to said spoken word using said phrase rewrite rules and a neural network; and
the response is presented to the user.
8. The language teaching processing method according to claim 7, further comprising:
displaying visual content representing a state of the interactive story;
receiving an image of a user captured by a camera;
identifying a gesture performed by a user in the image;
determining a designated area of the visual content pointed to by the user's gesture, and using the visual content at the designated area to determine the object in the semantic network that the user's spoken word refers to when generating a response to the spoken word.
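The referent resolution of claim 8 can be sketched as a lookup from the pointed-at screen region to the semantic-network object rendered there. The region layout and object names are illustrative assumptions.

```python
# Hypothetical sketch: map a gesture's screen coordinate to the displayed
# story object, so a deictic utterance ("that is beautiful") can be
# grounded in the semantic network.
scene_regions = {
    (0, 0, 100, 100): "castle",    # (x1, y1, x2, y2) -> rendered object
    (100, 0, 200, 100): "dragon",
}

def resolve_referent(point):
    """Return the object rendered at the pointed-at coordinate, if any."""
    x, y = point
    for (x1, y1, x2, y2), obj in scene_regions.items():
        if x1 <= x < x2 and y1 <= y < y2:
            return obj
    return None
```

A gesture at (150, 40) lands in the dragon's region, so the system would treat the user's utterance as referring to the dragon object when generating its response.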
9. A language learning system, comprising:
a blackboard memory area for central control and command, the blackboard memory area storing data representing:
a four-value logic system that allows true, false, defined, and undefined values;
a semantic network encoding the four-valued propositional logic;
a neural network encoding the four-valued propositional logic;
phrase rewrite rules list;
visual interaction and dynamic interaction stories;
An audio generation module linked to the blackboard memory area, the audio generation module configured to generate interactive dialog and synthesized story audio for an avatar based on the phrase rewrite rules matching the status of the interactive story and to add the story audio to the blackboard memory area;
a speaker linked to the blackboard memory area, the speaker configured to play the story audio from the blackboard memory area to a user;
a visual generation module linked to the blackboard memory area, the visual generation module configured to render visual conversations and story visual content representing the status of the interactive story and add the story visual content to the blackboard memory area;
a display component linked to the blackboard memory area, the display component configured to present the story visual content from the blackboard memory area to a user, wherein the semantic network models the interactive story by objects and relationships, the association between objects and relationships being represented by a 2-element triplet having one object and one relationship or a 3-element triplet having two objects and one relationship or one object and two relationships,
Wherein the state of the interactive story is modeled over time by probability rules that change the state of the object and/or the relationship, and
wherein the four-value logic system is used to determine whether to apply a particular phrase rewrite rule based on a particular input triplet.
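The blackboard architecture of claim 9 can be sketched as modules sharing one memory area, each reading the entries it consumes and posting the entries it produces. The module and key names are illustrative, not from the patent.

```python
# Minimal sketch of the blackboard pattern: a shared store for central
# control, with generation modules that read story state and post their
# outputs back for the speaker and display component to consume.
class Blackboard:
    def __init__(self):
        self.entries = {}

    def post(self, key, value):
        self.entries[key] = value

    def read(self, key):
        return self.entries.get(key)

def audio_generation_module(board):
    state = board.read("story_state")
    board.post("story_audio", f"narration for: {state}")

def visual_generation_module(board):
    state = board.read("story_state")
    board.post("story_visuals", f"rendering of: {state}")

board = Blackboard()
board.post("story_state", "dragon lands on castle")
audio_generation_module(board)
visual_generation_module(board)
```

Because every module communicates only through the blackboard, the microphone, camera, and recognition modules of claims 14-16 can be added without the generation modules needing to know about them.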
10. The language learning system of claim 9, wherein the display component is a 3D projector.
11. The language learning system of claim 9 wherein the truth value associated with each object and/or relationship is encoded using a single memory array, wherein values encoded using two bit positions in the memory array indicate an encoded truth value, an encoded false value, an encoded defined value, or an encoded undefined value.
12. The language learning system of claim 9 wherein one or more of the phrase rewrite rules are applied to triples to generate target language statements expressing logical properties or actions of triples in the semantic network; and/or generating a target language statement or completing a logical function represented by a triplet using a neural network trained to represent logical properties or actions of the triplet.
13. The language learning system of claim 12 wherein the phrase rewrite rules, neural network, and words selected for the sentence are based on a target ratio line that can be directed to a particular semantic domain through user interaction, using the vocabulary ratio C and the Good-Turing repetition rate for automatic text selection;
presenting new vocabulary to the user from the selected text in an optimal manner by creating a graph of text-length/vocabulary-size pairs and selecting a starting point in the graph for the user based on previous interactions with the system; and
using the ratios provided by the classes created by the random partitioning function to select the new vocabulary to be presented to the user.
14. The language learning system of claim 9, further comprising:
a microphone linked to the blackboard memory area, the microphone configured to add recorded user input to the blackboard memory area; and
an audio recognition module linked to the blackboard memory area, the audio recognition module configured to recognize the spoken word in the recorded user input according to a Markov model or a neural network, and to add the recognized word to the blackboard memory area.
15. The language learning system of claim 14 wherein the audio generation module generates a sentence using the phrase rewrite rules or neural network in response to the spoken word identified by the audio recognition module in the recorded user input, and adds the generated sentence to the blackboard memory area for playback through the speaker.
16. The language learning system of claim 14, further comprising:
a camera linked to the blackboard memory area, the camera configured to add an image of a user to the blackboard memory area; and
a visual recognition module linked to the blackboard memory area, the visual recognition module configured to recognize gestures directed to portions of the story visual content displayed by the display component.
CN202080104205.8A 2020-06-07 2020-06-07 Custom interactive language learning system using four-value logic Pending CN116057544A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2020/036521 WO2021251940A1 (en) 2020-06-07 2020-06-07 Tailored interactive language learning system using a four-valued logic

Publications (1)

Publication Number Publication Date
CN116057544A true CN116057544A (en) 2023-05-02

Family

ID=78846343

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080104205.8A Pending CN116057544A (en) 2020-06-07 2020-06-07 Custom interactive language learning system using four-value logic

Country Status (2)

Country Link
CN (1) CN116057544A (en)
WO (1) WO2021251940A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118132729B (*) 2024-04-28 2024-08-16 Alipay (Hangzhou) Information Technology Co., Ltd. Answer generation method and device based on medical knowledge graph

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050255431A1 (en) * 2004-05-17 2005-11-17 Aurilab, Llc Interactive language learning system and method
US9575951B2 (en) * 2013-09-03 2017-02-21 Roger Midmore Methods and systems of four valued analogical transformation operators used in natural language processing and other applications
US10446055B2 (en) * 2014-08-13 2019-10-15 Pitchvantage Llc Public speaking trainer with 3-D simulation and real-time feedback
WO2018203912A1 (en) * 2017-05-05 2018-11-08 Midmore Roger Interactive story system using four-valued logic

Also Published As

Publication number Publication date
WO2021251940A1 (en) 2021-12-16

Similar Documents

Publication Publication Date Title
US20200302827A1 (en) Tailored Interactive Learning System Using A Four-Valued Logic
US10249207B2 (en) Educational teaching system and method utilizing interactive avatars with learning manager and authoring manager functions
US10679626B2 (en) Generating interactive audio-visual representations of individuals
US7778948B2 (en) Mapping each of several communicative functions during contexts to multiple coordinated behaviors of a virtual character
Matthews An introduction to natural language processing through Prolog
Keizer et al. Dialogue act recognition with bayesian networks for dutch dialogues
US20220301250A1 (en) Avatar-based interaction service method and apparatus
CN110832570B (en) Interactive story system using four-value logic
Oliveira et al. Co-PoeTryMe: interactive poetry generation
Skantze Galatea: A discourse modeller supporting concept-level error handling in spoken dialogue systems
CN117216234A (en) Artificial intelligence-based speaking operation rewriting method, device, equipment and storage medium
CN116057544A (en) Custom interactive language learning system using four-value logic
Vyas An Approach of Using Embodied Conversational Agent for Personalized Tutoring
Macias-Huerta et al. CARLA: Conversational Agent in Virtual Reality with Analytics.
Alsubayhay et al. A review on approaches in Arabic chatbot for open and closed domain dialog
Satria et al. EFL learning media for early childhood through speech recognition application
DeMara et al. Towards interactive training with an avatar-based human-computer interface
Maniatis et al. VOXReality: Immersive XR Experiences Combining Language and Vision AI Models
Dündar A robot system for personalized language education. implementation and evaluation of a language education system built on a robot
RU2807436C1 (en) Interactive speech simulation system
Xu Language technologies in speech-enabled second language learning games: From reading to dialogue
KR20190106011A (en) Dialogue system and dialogue method, computer program for executing the method
Alghamdi et al. Natural Language Processing for a Personalised Educational Experience in Virtual Reality
Landragin et al. Relevance and perceptual constraints in multimodal referring actions
Nijholt Agent assistance: from problem solving to music teaching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination