CN110832570A - Interactive story system using four-value logic - Google Patents

Interactive story system using four-value logic

Info

Publication number
CN110832570A
CN110832570A (application CN201780092872.7A)
Authority
CN
China
Prior art keywords
story
phrase
storage area
blackboard
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201780092872.7A
Other languages
Chinese (zh)
Other versions
CN110832570B (en)
Inventor
罗杰·密德茂尔 (Roger Midmore)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of CN110832570A publication Critical patent/CN110832570A/en
Application granted granted Critical
Publication of CN110832570B publication Critical patent/CN110832570B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/06 Foreign languages
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 Digital computers in general; Data processing equipment in general
    • G06F15/76 Architectures of general purpose stored program computers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/205 Parsing
    • G06F40/216 Parsing using statistical methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/40 Processing or translation of natural language
    • G06F40/55 Rule-based translation
    • G06F40/56 Natural language generation
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00 Electrically-operated teaching apparatus or devices working with questions and answers

Abstract

A language learning system that teaches personalized vocabulary to users through an interactive story. The interactive story is modeled by probabilistic rules in a semantic network having objects and relationships. Dialogs and narratives are dynamically generated based on states of a model of the interactive story using phrase rewrite rules evaluated using a four-valued logic system in which truth values of objects and relationships are coded as true, false, defined and undefined in a parallel storage structure.

Description

Interactive story system using four-value logic
Technical Field
The present disclosure relates to the field of language instruction, and in particular to a system for teaching new vocabulary to students through an interactive story modeled in a semantic network using a four-valued logic system.
Background
Language learning programs are valuable tools for teaching students new words or even completely new languages. However, most existing programs teach every student the same words in the same manner. For example, such tutoring programs often teach from a predefined list of generic words. When they include audiovisual presentations, the presentations are usually pre-recorded or follow the same script for every student.
Students, especially young children, often learn best through interactive scenarios. However, many existing programs are either entirely predefined or give students only limited options to interact with the presentation, ask questions, or get more information about what they hear or see.
In addition, most systems do not customize the word lists taught to each student. This may limit the student's understanding of the target language. For example, teaching only from a fixed list of the most common words in the target language may leave students unable to understand the less common words that better reflect how the language is used in practice, because fluent speakers do not restrict themselves to only the most common words in the language.
There is a need for a language learning system that presents interactive stories to students. The system should present the interactive story to the students using a vocabulary list that is personalized for each student. The interactive story should be modeled in a semantic network so that the state of the modeled story can be evaluated using a four-value logic system when a dialog is generated or when a student asks a question about the story and the system generates an answer.
Drawings
FIG. 1 illustrates an embodiment of a language learning system.
FIG. 2 shows non-limiting examples of knowledge sources that can add data to the blackboard of the language learning system, modify data on the blackboard, and/or access data stored on the blackboard.
FIG. 3 shows a non-limiting example of a process for generating a vocabulary list using a text analyzer.
FIG. 4 shows a non-limiting example of a vocabulary diagram.
FIG. 5A shows a logic table for a negation operation in a four-valued logic system.
FIG. 5B illustrates a logic table for a conjunction operation in a four-valued logic system.
FIG. 5C illustrates a logic table for a disjunction operation in a four-valued logic system.
FIG. 6A illustrates a 3-element data triple in a semantic network.
FIG. 6B illustrates 2-element data triples in a semantic network.
FIG. 7 illustrates an example of encoding multiple truth values associated with a particular object or relationship using two parallel storage structures.
FIG. 8 illustrates a model of phrase rewrite rules.
FIG. 9 illustrates a method of evaluating the left portion of a phrase rewrite rule based on certain arguments to determine if the phrase rewrite rule should be applied.
FIG. 10 illustrates an exemplary embodiment of a list of phrase rewrite rules that may be used to generate sentences of a target language based on input parametric triples.
FIG. 11 shows an example of feature assignments in the cell to the right of various exemplary phrase rewrite rules.
FIG. 12 illustrates a process for modeling the state of an interactive story with a language learning system.
Detailed Description
Fig. 1 illustrates an embodiment of a language learning system 100. The language learning system 100 can present a dynamically generated interactive story to the user through audio and/or visual data, thereby teaching the user vocabulary.
The language learning system 100 can include one or more input components 102, one or more output components 104, and a processing system, wherein the processing system includes a blackboard 106, a control module 108, an audio processor 110, a graphics processor 112, and one or more knowledge sources 114. The blackboard 106 can be a database or other central memory location accessible by other components of the language learning system 100.
The input component 102 can be an element through which a user can input data into the blackboard 106, such as a camera, microphone, touch screen, keyboard, mouse, or any other element for inputting images, audio, or other data into the blackboard 106. As non-limiting examples, the language learning system 100 may include a camera that takes images and/or video of the user as the user interacts with the language learning system 100 and/or a microphone that may record audio of words spoken by the user.
The output component 104 may be an element through which the user perceives images and/or audio produced by the language learning system 100, such as a visual display or a speaker. In some embodiments, the output component 104 for the generated image may be a projector, such as a two-dimensional or stereoscopic 3D projector. In other embodiments, the output component 104 for the generated image may be a holographic display, a 3D television or monitor, a 2D television or monitor, an augmented and/or virtual reality headset, or any other type of display or connection to an external display.
In some embodiments, the visual material may be presented by an output component 104 that renders the visual material immersively to the user, which may increase the likelihood that the user engages with the interactive story and thus better learns the language being taught. While in some embodiments the immersive visual material may be presented by a virtual reality headset or other head-mounted display, in other embodiments a holographic display, a 3D television, or a 3D projector may present the immersive visual material to the user. As a non-limiting example, because some virtual reality headsets have age restrictions or guidelines, in some embodiments the language learning system may use a 3D projector to present interactive stories to toddlers through immersive visual material.
In embodiments where the output component 104 is a stereoscopic 3D projector, while one image may be projected to the right eye of the viewer, another image may be projected to the left eye of the viewer. The viewer may wear corresponding 3D glasses, such as stereoscopic 3D glasses, polarized 3D glasses, or shutter 3D glasses. The 3D glasses may block or filter the light such that the projected left eye image is seen by the left eye of the viewer and the projected right eye image is seen by the right eye of the viewer. When the left-eye image and the right-eye image show the same scene from different viewpoints spaced apart by the distance between the human eyes, the viewer may perceive the scene three-dimensionally.
The blackboard 106 can be a centralized memory location accessible to the other components of the language learning system 100 such that the other components can add data to the blackboard 106, access data stored on the blackboard 106, and/or modify data stored on the blackboard 106. As will be discussed further below, the blackboard 106 can store data representing the semantic network 116 and analyze or model the data in the semantic network 116 using a four-valued logic system 118 and/or phrase rewrite rules 120. By way of non-limiting example, the semantic network 116 may model the state of an interactive story presented by the language learning system 100 according to a four-valued logic system 118 and/or phrase rewrite rules 120, where the state of the interactive story changes over time.
The control module 108 may include one or more CPUs or other processors linked to the blackboard 106. As will be discussed further below, the control module 108 can perform operations on data stored in the blackboard 106 and/or activate other components to perform such operations or to assist the blackboard 106.
One or more audio processors 110 can be linked to the blackboard 106. In some embodiments, the audio processor 110 may be a dedicated audio card or processor, while in other embodiments the audio processor 110 may be part of the control module 108, another CPU, or other processor.
The audio processor 110 can process existing audio data stored in the blackboard 106. As a non-limiting example, the audio processor 110 can process input audio data captured by the microphone or other input component 102 that has been stored in the blackboard 106, such as processing the input audio data for speech recognition as described below.
The audio processor 110 may also (or alternatively) generate new audio data for the interactive story. As a non-limiting example, the audio processor 110 may generate audio for the speech of the story character and/or mix the generated audio with music, sound effects, or other audio played for the story. In some embodiments, audio generated by the audio processor 110 may be delivered to speakers via the blackboard 106 for playback. In an alternative embodiment, the audio processor 110 may be directly linked to a speaker so that the generated audio may be played directly by the speaker.
One or more graphics processors 112 may be linked to the blackboard 106. In some embodiments, the graphics processor 112 may be a dedicated graphics card or a Graphics Processing Unit (GPU), although in other implementations, the graphics processor 112 may be part of the control module 108, another GPU, or other processor.
The graphics processor 112 can process graphics, images, and/or other visual data stored in the blackboard 106. By way of non-limiting example, the graphics processor 112 may process input images captured by the camera or other input component 102, such as still images or video of the user.
Graphics processor 112 may also (or alternatively) generate new graphics, new images, and/or other new visual data for the interactive story. By way of non-limiting example, graphics processor 112 may generate visual material showing the current state of the story. In some embodiments, the graphics processor 112 may communicate the generated visual material to the projector, screen, or other output component 104 via the blackboard 106. In an alternative embodiment, the graphics processor 112 may be directly linked to the output component 104 such that the resulting image is displayed directly.
One or more knowledge sources 114 can be linked to the blackboard 106 such that each knowledge source 114 can independently add to, modify, and/or access data stored on the blackboard 106. In some embodiments, the input component 102, the output component 104, the control module 108, the audio processor 110, and/or the graphics processor 112 of the language learning system may act as a knowledge source 114 such that they can add data to the blackboard 106, modify data on the blackboard 106, or output data from the blackboard 106.
FIG. 2 shows a non-limiting example of a knowledge source 114 that can add data to the blackboard 106, modify data on the blackboard 106, and/or access data stored on the blackboard 106. Knowledge source 114 may include an audio resource 202, an audio generation module 204, a language identification module 206, a text analyzer 208, a visual resource 210, a visual material generation module 212, and/or a visual profile recognition module 214.
The audio resource 202 may be used by the audio processor 110 to generate audio for an interactive story. In some embodiments, the audio resource 202 may be a sound sample, sound effect, music, or other type of audio of a pre-recorded conversation. In other embodiments, the audio resource 202 may be an audio model or algorithm that can dynamically generate audio.
The audio generation module 204 may generate audio for the interactive story using the current state of the interactive story modeled in the semantic network 116 and the corresponding audio resources 202. The audio generation module 204 may generate music and/or sound effects for the current scene as well as narration or dialog spoken by the story characters. As will be discussed in more detail below, the audio generation module 204 may dynamically generate narration or dialog based on the phrase rewrite rules 120 and the vocabulary list 122 such that the words used in the story help the user learn new words of the target language. In some embodiments, the audio generation module 204 may be part of the audio processor 110, while in other embodiments the audio generation module 204 may inform a separate audio processor 110 through the blackboard 106 which sounds are to be generated and which audio resources 202 are to be used.
In some embodiments, text associated with the generated audio may be displayed via the output component in addition to, or instead of, playing the generated audio. By way of non-limiting example, text such as subtitles, speech bubbles, or other text may be displayed while an audible version of the text is played, or such text may be displayed without playing an audible version of the text at all.
The language identification module 206 may use an acoustic Markov model to identify words in the recorded audio data that are added to the blackboard 106 by the input component 102, such as a microphone. When a word is identified and added to the blackboard 106, other knowledge sources 114 may analyze the word to determine an appropriate response. As a non-limiting example, when a user asks a question about an interactive story, the language learning system 100 may pause the story to respond to the user's question. In some embodiments, a stochastic partition may be applied to reduce the number of bits per encoded word, which may improve the accuracy of the Markov model.
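As a rough illustration of how a Markov model can rank competing transcriptions of the same audio, the sketch below scores candidate word sequences with first-order transition probabilities. This is not the patent's acoustic model; the vocabulary, probabilities, and function names are invented for illustration:

```python
import math

# Illustrative first-order transition probabilities P(next | previous);
# '<s>' marks the start of an utterance. All values are made up.
TRANSITIONS = {
    ('<s>', 'the'): 0.5, ('<s>', 'a'): 0.5,
    ('the', 'puppy'): 0.6, ('the', 'story'): 0.4,
    ('a', 'puppy'): 0.3, ('a', 'story'): 0.7,
}

def log_score(words):
    """Log-probability of a word sequence under the Markov model."""
    score = 0.0
    for prev, nxt in zip(['<s>'] + words, words):
        p = TRANSITIONS.get((prev, nxt), 1e-6)  # small floor for unseen pairs
        score += math.log(p)
    return score

# Pick the more likely transcription of an ambiguous utterance.
best = max([['the', 'puppy'], ['a', 'puppy']], key=log_score)
```

Here `best` resolves to the sequence with the higher joint probability, mirroring how a recognizer prefers the word chain the model considers most plausible.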
Text analyzer 208 may generate phrase rewrite rules 120 and vocabulary list 122 based on one or more text source files. The generated phrase rewrite rules 120 and vocabulary list 122 may be used when presenting an interactive story with language learning system 100 (such as when language learning system 100 generates sentences for narration or conversation).
In some embodiments, the phrase rewrite rules 120 and vocabulary list 122 generated by the text analyzer 208 may be stored on the blackboard 106. As a non-limiting example, each generated phrase rewrite rule 120 and/or unique terms in the generated vocabulary list 122 may be stored in the semantic network 116 or other location in the blackboard 106. In alternative embodiments, the phrase rewrite rules 120 and/or vocabulary list 122 may be stored as a separate knowledge source 114.
FIG. 3 shows a non-limiting example of a process for generating phrase rewrite rules 120 and vocabulary list 122 using text analyzer 208.
At step 302, text analyzer 208 may load one or more source text files into memory. The source text file may be a book, an article, or any other type of source text. In some embodiments, the source text file may be selected based on content or volume, such that the grammars or terms derived from the source text file for the phrase rewrite rules 120 and vocabulary list 122 may be related to a particular educational topic or goal.
At step 304, the text analyzer 208 may generate a vocabulary diagram 400 from the source text file. The text analyzer 208 may generate a list of words found in the source text file and generate statistical calibration data for the source text using Good-Turing frequency estimation and stochastic partitioning, wherein the statistical calibration data takes into account different sample sizes of the text and the semantic content of the text. The text analyzer 208 may use this information to generate the vocabulary diagram 400. As shown in fig. 4, the vocabulary diagram 400 may model the number of words in the source text file relative to the length of the text source file, where the logarithm of the number of words is on a first axis and the logarithm of the text length is on a second axis. The diagram can then be used to select categories of new vocabulary items of comparable difficulty and to present the language to the learner in an optimal manner, using the Heaps-Herdan vocabulary growth laws.
At step 306, the text analyzer 208 may subdivide the vocabulary diagram 400 into target vocabulary regions 402 along the logarithmic axis of the text length. Subdividing the vocabulary diagram 400 along the logarithmic axis of the text length may associate new words represented on the vocabulary diagram 400 with the student's vocabulary level, thereby presenting the student with a mixture of words they already know and new words. Based on previous interactions with the language learning system 100 or by administering tests similar to the standardized tests used for calculating verbal SAT scores, the text analyzer 208 may estimate the student's overall knowledge level with respect to the target language. As shown in fig. 4, an estimate of the student's knowledge level may be used to establish a vertical knowledge level line 404 on the vocabulary diagram 400 along the logarithmic axis of the text length. The text analyzer 208 may also establish a vertical target line 406 on the diagram to the right of the knowledge level line 404, such that the text analyzer 208 may define a target vocabulary region 402 on the vocabulary diagram 400 below the diagram's curve and between the student's knowledge level line 404 and the target line 406. The words in the target vocabulary region 402 may be used in the vocabulary list 122.
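The region-selection step can be sketched as follows. This is a simplified stand-in for steps 304-306: it uses each word's first-occurrence position in the source text as its coordinate along the text-length axis, and plain numeric thresholds in place of the knowledge level line 404 and target line 406 (the actual system applies Good-Turing estimation and logarithmic axes; all names here are assumptions):

```python
def first_occurrence_positions(words):
    """Map each unique word to the text length at which it first appears."""
    seen = {}
    for pos, word in enumerate(words, start=1):
        seen.setdefault(word, pos)
    return seen

def target_vocabulary(words, knowledge_len, target_len):
    """Words first encountered between the knowledge-level line and the
    target line (both expressed here as raw text-length positions)."""
    first = first_occurrence_positions(words)
    return sorted(w for w, pos in first.items()
                  if knowledge_len < pos <= target_len)

# Toy source text: words first seen after position 4 fall in the target region.
words = "the cat sat on the mat the cat ran".split()
region = target_vocabulary(words, knowledge_len=4, target_len=9)
```

For this toy text, `region` contains only the words whose first occurrence lies beyond the student's estimated knowledge level, which is the set step 306 feeds into the vocabulary list 122.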
The text analyzer 208 may also use the target vocabulary region 402 to find examples of grammatical forms, and use those examples and accompanying related data as paraphrase input to generate or modify probabilistic phrase rewrite rules 120 that are likewise adjusted to the knowledge level of the student. As a non-limiting example, existing phrase rewrite rules 120 may be modified by re-weighting the probabilities of rules associated with grammatical forms based on the frequency of occurrence of new words in the target vocabulary region 402. In some embodiments, existing phrase rewrite rules 120 may be used when they cover grammatical forms found in the target vocabulary region 402, whereas new phrase rewrite rules 120 may be generated when new grammatical forms are found in the target vocabulary region 402.
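A minimal sketch of probabilistic phrase rewrite rules of this kind is shown below, assuming each rule maps a symbol to weighted right-hand sides and that re-weighting has already produced the probabilities shown. The grammar, weights, and function names are invented for illustration and are not the patent's rule format:

```python
import random

# Each nonterminal maps to a list of (probability, right-hand side) pairs.
# Symbols absent from the table are terminals (actual words).
rules = {
    'S':  [(0.7, ['NP', 'VP']), (0.3, ['VP'])],
    'NP': [(1.0, ['Bob'])],
    'VP': [(0.6, ['likes', 'NP']), (0.4, ['sleeps'])],
}

def expand(symbol, rng):
    """Recursively expand a symbol into a list of terminal words,
    choosing each right-hand side according to its probability."""
    if symbol not in rules:
        return [symbol]                      # terminal: emit the word itself
    weights, rhss = zip(*rules[symbol])
    rhs = rng.choices(rhss, weights=weights)[0]
    return [word for part in rhs for word in expand(part, rng)]

sentence = ' '.join(expand('S', random.Random(0)))
```

Re-weighting a rule to favor new target-region words would amount to adjusting the probabilities attached to the right-hand sides that introduce those words.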
At step 308, the text analyzer 208 may output the vocabulary list 122 generated from the text source file and the new or modified phrase rewrite rules 120 to the blackboard 106 or as a knowledge source 114.
In alternative embodiments, the phrase rewrite rules 120 and/or vocabulary list 122 may be generated or modified in any other manner, such as manually creating a list of rules or words, or identifying the most frequently occurring words in one or more text source files.
Visual resource 210 may be used by graphics processor 112 to render images for an interactive story. Visual resource 210 can be a two-dimensional image or 3D model of a character, item, setting, background, or other story element. The visual resource 210 may also be an animation file, font, or any other resource that may be used by the graphics processor 112 to produce an image.
The visual material generation module 212 may utilize the graphics processor 112 to render images for the interactive story using the current state of the interactive story modeled in the semantic network 116 and the corresponding visual resources 210. In some embodiments, the visual profile generation module 212 may be part of the graphics processor 112, while in other embodiments the visual profile generation module 212 may inform the individual graphics processor 112 through the blackboard 106 as to which images are to be rendered and/or which visual resources 210 are to be used.
The visual profile recognition module 214 can use visual data captured by the input component 102 to track the user's physical movements over time and recognize gestures made by the user. As a non-limiting example, the camera may capture 2D or stereoscopic still images, infrared data, or video frames of the user, and the visual profile recognition module 214 may update a 3D model of the user over time based on the captured visual data to recognize the user's gestures. The visual profile recognition module 214 can also use the generated story images stored on the blackboard 106, which may also be displayed via the output component 104, to associate recognized gestures with the image being viewed by the user. As a non-limiting example, the visual profile recognition module 214 may track the user's movements to recognize when the user makes a pointing gesture, track the direction of the gesture to identify the point of interest at which the user is pointing, and examine the generated story image to determine which story object is displayed at that point of interest, such that the visual profile recognition module 214 can identify the particular story object at which the user is pointing. In some embodiments, the visual profile recognition module 214 may be part of the graphics processor 112, while in other embodiments the visual profile recognition module 214 may use gesture data stored in the blackboard 106 that was recognized by the input component 102 and/or a separate graphics processor 112.
In some embodiments, the visual profile recognition module 214 may additionally or alternatively analyze the user's physical environment for visual cues when language recognition is performed by the language identification module 206. In some implementations, the visual profile recognition module 214 can identify objects near the user, which can help the language learning system 100 interpret the content of the user's statement or question.
In some embodiments, the language learning system 100 may be contained entirely within a single unit (such as a projector). As a non-limiting example, in some embodiments the components of the language learning system 100 shown in fig. 1 may be housed within the body of a 3D projector. In other embodiments, some components of the language learning system 100 may be in separate devices or housings and may be connected to an external display and/or speakers.
Fig. 5A-5C illustrate logic tables for logical operations used in the four-valued logic system 118. The language learning system 100 can use the four-valued logic system 118 to store and evaluate data within the blackboard 106 and semantic network 116, such as when implementing the phrase rewrite rules 120. In some embodiments, the four-valued logic system 118 may be used to evaluate propositional attributes when modeling interactive stories. By way of non-limiting example, while the number of propositional attributes used for realistic character simulation can range from thousands to millions, fewer or more propositional attributes can be used in some embodiments for the interactive story presented by the language learning system 100. The four-valued logic system 118 described herein is both complete for the first-order predicate calculus and ω-consistent in the second- and higher-order predicate calculus. The four-valued logic system 118 may also serve as both an intuitionistic theorem prover and a classical theorem prover.
The four-valued logic system 118 may be used to evaluate and operate on variables having one of four possible values: true (T), false (F), defined (D), and undefined (U). As a non-limiting example, the four-valued logic system 118 may be used during conditional sentence testing to designate variables as true, false, defined, or undefined propositional attributes. A variable specified to have a defined value must be either true or false. When a conditional sentence test of propositional attributes is performed, a variable specified to have an undefined value may take any one of the four truth values. As a non-limiting example, the undefined variables discussed below may take any one of the four truth values during transformations implemented by the phrase rewrite rules 120.
Fig. 5A shows a logic table for the negation operation (also referred to as the logical NOT (¬) operation) in the four-valued logic system 118. In the four-valued logic system 118: ¬F evaluates to T; ¬T evaluates to F; ¬U evaluates to D; and ¬D evaluates to U.
FIG. 5B shows a logic table for the conjunction operation (also known as the logical AND (∧) operation) in the four-valued logic system 118. In the four-valued logic system 118: F ∧ F evaluates to F; F ∧ T evaluates to F; F ∧ U evaluates to F; F ∧ D evaluates to F; T ∧ F evaluates to F; T ∧ T evaluates to T; T ∧ U evaluates to U; T ∧ D evaluates to D; U ∧ F evaluates to F; U ∧ T evaluates to U; U ∧ U evaluates to U; U ∧ D evaluates to F; D ∧ F evaluates to F; D ∧ T evaluates to D; D ∧ U evaluates to F; and D ∧ D evaluates to D.
FIG. 5C shows a logic table for the disjunction operation (also referred to as the logical OR (∨) operation) in the four-valued logic system 118. In the four-valued logic system 118: F ∨ F evaluates to F; F ∨ T evaluates to T; F ∨ U evaluates to U; F ∨ D evaluates to D; T ∨ F evaluates to T; T ∨ T evaluates to T; T ∨ U evaluates to T; T ∨ D evaluates to T; U ∨ F evaluates to U; U ∨ T evaluates to T; U ∨ U evaluates to U; U ∨ D evaluates to T; D ∨ F evaluates to D; D ∨ T evaluates to T; D ∨ U evaluates to T; and D ∨ D evaluates to D.
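Taken together, the three tables above can be sketched in code. The following is a minimal illustration (not from the patent; the string encoding and function names are assumptions), which also shows that the disjunction table follows from the conjunction and negation tables by De Morgan duality:

```python
# Four-valued logic per Figs. 5A-5C, with values as plain strings.

# Negation (Fig. 5A): T and F swap, and U and D swap.
NOT4 = {'F': 'T', 'T': 'F', 'U': 'D', 'D': 'U'}

# Conjunction (Fig. 5B), keyed by (left, right) operand.
AND4 = {
    ('F', 'F'): 'F', ('F', 'T'): 'F', ('F', 'U'): 'F', ('F', 'D'): 'F',
    ('T', 'F'): 'F', ('T', 'T'): 'T', ('T', 'U'): 'U', ('T', 'D'): 'D',
    ('U', 'F'): 'F', ('U', 'T'): 'U', ('U', 'U'): 'U', ('U', 'D'): 'F',
    ('D', 'F'): 'F', ('D', 'T'): 'D', ('D', 'U'): 'F', ('D', 'D'): 'D',
}

def or4(a, b):
    """Disjunction (Fig. 5C) via De Morgan: a OR b = NOT(NOT a AND NOT b)."""
    return NOT4[AND4[(NOT4[a], NOT4[b])]]
```

Deriving `or4` this way, rather than writing a second 16-entry table, doubles as a consistency check: every entry it produces matches the Fig. 5C table (for example, U ∨ D comes out T because D ∧ U is F).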
Fig. 6A and 6B illustrate an embodiment of a data triple 600 that may be formed from data stored in the semantic network 116. Semantic network 116 may include objects 602 and relationships 604. As shown in fig. 6A and 6B, a triple 600 may include two or three elements.
The objects 602 may be nodes in the semantic network 116. The object 602 may represent an entity such as a story character or item, a base element such as a numeric zero, a class data structure, or any other type of data. In some embodiments, an object 602 may point to another object 602 or triple 600 in the semantic network 116.
The relationship 604 may represent an attribute of the object 602, a function that may be applied to other objects 602 and/or relationships 604, or an association between two objects 602. As a non-limiting example, the relationship 604 may be a function that operates on truth values associated with the object 602 or other relationships 604, such as the logical operators of the four-valued logic system 118 described above with respect to fig. 5A-5C. However, in some embodiments, each relationship 604 may represent a basic primary function, and more complex functions may be constructed by linking smaller functions together.
Each object 602 or relationship 604 in the semantic network 116 may be associated with a plurality of truth values (such as truth values indicating the attributes of the object and/or relationships with other objects 602, or truth values indicating whether the function represented by a relationship 604 may be applied to an object 602). The truth values may be the true, false, defined, and undefined values used in the four-valued logic system 118.
FIG. 7 illustrates an example of encoding multiple truth values associated with a particular object 602 or relationship 604 using two parallel storage structures 700. A set of two parallel storage structures 700 may be defined for each object 602 and/or relationship 604, where each of the two storage structures 700 has multiple index positions, each index position being 1 bit in size. The storage structure 700 may be an array, a vector, a list, or any other similar data structure.
A particular truth value associated with an object 602 or a relationship 604 may be encoded using a bit in the first storage structure 700 and a bit at the same index position in the second storage structure 700. Since each of the two bits may be either 0 or 1, four possible values corresponding to the four truth values used in the four-valued logic system 118 may be encoded at a particular index position. As a non-limiting example, in some embodiments, a "0" in the first storage structure 700 paired with a "0" at the same index position in the second storage structure 700 may indicate the truth value T; a "1" paired with a "1" may indicate the truth value F; a "0" in the first storage structure 700 paired with a "1" in the second storage structure 700 may indicate the truth value D; and a "1" in the first storage structure 700 paired with a "0" in the second storage structure 700 may indicate the truth value U. The bit positions in the two storage structures 700 may also be combined to form scalar variables or bits for floating point calculations. In some embodiments, the size of the parallel storage structures 700 may be limited by the word size or memory limitations of the computer architecture, which may introduce a chunking factor.
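As a non-limiting illustration, the two-bit encoding described above may be sketched as follows. The class and function names here are assumptions for this example only, not elements of the disclosed system:

```python
# Map each (first-structure bit, second-structure bit) pair at an index
# position to one of the four truth values of the four-valued logic system.
TRUTH_CODES = {
    (0, 0): "T",  # true
    (1, 1): "F",  # false
    (0, 1): "D",  # defined
    (1, 0): "U",  # undefined
}

class BitPairStore:
    """Two parallel 1-bit storage structures; one truth value per index."""

    def __init__(self, size):
        self.first = [0] * size   # first storage structure 700
        self.second = [0] * size  # second storage structure 700

    def set_value(self, index, truth):
        # Invert TRUTH_CODES to find the bit pair for the given truth value.
        bits = {v: k for k, v in TRUTH_CODES.items()}[truth]
        self.first[index], self.second[index] = bits

    def get_value(self, index):
        return TRUTH_CODES[(self.first[index], self.second[index])]

store = BitPairStore(8)
store.set_value(3, "D")   # encode "defined" at index position 3
```

Note that with this choice of codes, freshly zeroed storage reads as T at every index; a real implementation could equally choose (0, 0) to mean U so that uninitialized state is "undefined."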
Returning to FIG. 6A, in some embodiments, a three-element triple 600 may include two objects 602 and a relationship 604. Thus, in some cases, a three-element triple 600 may represent a particular relationship between a subject object 602 and another object 602 in the semantic network 116 based on a relationship 604.
By way of non-limiting example, when a first object 602 of a triple represents a story character named "Bob," a second object 602 of the triple represents a puppy character in the story, and the relationship 604 of the triple indicates that the first object 602 "likes" the second object 602, the triple 600 indicates that Bob likes the puppy. Another three-element triple 600 in the semantic network 116 may have the two objects 602 reversed, with the same or a different relationship 604, so that the triple 600 may indicate a different relationship from the puppy's point of view. For example, while one three-element triple 600 may indicate that Bob likes the puppy, another three-element triple 600 may indicate that the puppy does not like Bob.
In other embodiments, a three-element triple 600 may include one object 602 and two relationships 604. In natural language processing, these types of three-element triples 600 may be considered secondary triples 600, whereas three-element triples 600 having two objects 602 and one relationship 604 may be considered primary triples 600. As a non-limiting example, a primary triple may represent "I saw a person," with objects 602 representing "I" and "person" and a relationship 604 representing "saw," whereas a secondary triple linked to the "saw" relationship 604 of the primary triple, with a relationship 604 representing "using" and an object 602 representing "telescope," may represent "saw with the telescope." In some embodiments, relationships 604 representing verbs (such as "using") may be modeled using primitive recursion, and in some embodiments, secondary triples 600 like this may also be limited to primitive recursion. As will be readily appreciated by one of ordinary skill in the art, the previously described triples 600 may be applicable to language structures other than English, such as languages employing prefix notation structures, verb-subject-object or suffix notation structures, and/or any other known, convenient, and/or later developed language structures.
Returning to FIG. 6B, a two-element triple 600 may include one object 602 and one relationship 604. Thus, in some cases, the relationship 604 of a two-element triple may identify a particular function that may be applied to the object 602 of the triple.
In some embodiments, truth values associated with an object 602 or relationship 604 may be encoded using the true, false, defined, or undefined truth values described above for the four-valued logic system 118. As a non-limiting example, a truth value T in a "likes" relationship 604 may indicate that the subject object 602 likes the other object 602 in the triple 600; a truth value F may indicate that the subject object 602 dislikes the other object 602; a truth value D may indicate that the subject object 602 knows the other object 602 and may either like or dislike it; and a truth value U may indicate that the subject object 602 does not know the other object 602, or that it is unknown whether the subject object 602 knows the other object 602.
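As a non-limiting illustration, the triples described above, with four-valued "likes" relationships, may be sketched as follows. The class and the story data are assumptions for this example only:

```python
class SemanticTriple:
    """A triple 600: a subject object, a relationship, and an optional
    second object (None for a two-element triple), plus a truth value."""

    def __init__(self, subject, relation, obj=None, truth="U"):
        self.subject = subject    # subject object 602
        self.relation = relation  # relationship 604
        self.obj = obj            # second object 602, or None
        self.truth = truth        # one of "T", "F", "D", "U"

# T: Bob likes the puppy.
bob_likes_puppy = SemanticTriple("Bob", "likes", "puppy", truth="T")
# F: the reversed triple indicates the puppy does not like Bob.
puppy_likes_bob = SemanticTriple("puppy", "likes", "Bob", truth="F")
# U: Bob does not know the stranger (or it is unknown whether he does).
stranger = SemanticTriple("Bob", "likes", "stranger", truth="U")
```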
As described above, a relationship 604 may represent a function that may operate on one or more objects 602. Since the relationship of a three-element triple 600 may treat the two objects 602 as operands, the relationship of a three-element triple 600 may be a primitive recursive operation or a general recursive operation. However, the relationship of a two-element triple 600 may be limited to primitive recursive operations that treat the single object 602 as an operand. As non-limiting examples, the relationship 604 in a two-element triple 600 may be a successor function that adds one to a value, or may be a phrase rewrite rule 120 that treats the single object 602 as an operand and checks whether the operand is correctly quantified.
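As a non-limiting illustration, a relationship 604 acting as a primitive recursive function over the single operand of a two-element triple may be sketched as follows. The function names are assumptions for this example only:

```python
def successor(n):
    """Primitive recursive successor function: S(n) = n + 1."""
    return n + 1

def apply_two_element_triple(relation, operand):
    # A two-element triple pairs one relationship (here, a function)
    # with one object, which the relationship treats as its operand.
    return relation(operand)

result = apply_two_element_triple(successor, 4)  # S(4)
```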
In some embodiments, the data structure of the triples 600 may represent a predicate calculus. The subset of the predicate calculus represented by the triples 600 may be a propositional calculus using the four-valued logic system 118 as described above. Because the system can use both classical and intuitionistic inference systems for comparing and contrasting grammars validated by informants across natural languages, the stored predicate calculus can be used when comparing classical and intuitionistic inference systems. By limiting the relationships of two-element triples 600 to primitive recursion and using the stratified predicate calculus, the four-valued logic system 118 can be used to validate characteristics of the second-order predicate calculus that are mathematically true but classically false.
Thus, propositions involving objects 602 and/or relationships 604 can be tested and evaluated using the four-valued logic system 118. As a non-limiting example, a phrase rewrite rule 120 may be evaluated together with particular input arguments using the four-valued logic system 118 to determine whether the phrase rewrite rule 120 should be applied to those arguments.
FIG. 8 illustrates a model of a phrase rewrite rule 120. A phrase rewrite rule 120 may have a left portion (LHS) and a right portion (RHS). The left portion may accept an argument (such as a 2-element triple 600, a 3-element triple 600, an object 602, or a relationship 604). The left portion may have a single cell that accepts the argument, whereas the right portion may have one or more cells. When the rule is applied, the single cell of the left portion may be replaced with the cells of the right portion. The cells of the replacement right portion may inherit elements of the input argument as their own arguments, such that each cell in the right portion may be evaluated against its argument using a different respective phrase rewrite rule 120. In some phrase rewrite rules 120, the cells of the replacement right portion may inherit elements of the left portion's argument so that features are inherited vertically in the grammar. As a non-limiting example, in the example phrase rewrite rules 120 shown in Figs. 10 and 11, cells prefixed with an asterisk in the right portion may indicate cells that inherit elements of the input argument.
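As a non-limiting illustration, a phrase rewrite rule with one left-portion cell and one or more right-portion cells may be sketched as follows. The field names, and the use of a boolean flag to stand in for the asterisk marking of Figs. 10 and 11, are assumptions for this example only:

```python
class RewriteRule:
    """One left-portion cell rewritten into one or more right-portion cells."""

    def __init__(self, lhs, rhs):
        self.lhs = lhs  # single left-portion syntactic unit, e.g. "S"
        # Each right-portion cell is (unit, inherited element, assigns_features),
        # where assigns_features models the asterisk prefix: the cell's
        # inherited features are assigned to the other right-portion cells.
        self.rhs = rhs

# Modeled on rule 1 of Fig. 10: S -> *NP VP, where NP inherits the
# object ("O"), VP inherits the relationship ("R"), and the asterisked
# NP assigns its features horizontally to VP.
rule1 = RewriteRule("S", [("NP", "O", True), ("VP", "R", False)])
```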
In some embodiments, the left portion of a phrase rewrite rule 120 may be stored as a relationship 604 in the semantic network 116, while the argument to be evaluated by the phrase rewrite rule 120 may be represented by an object 602 in the semantic network 116 (such as a standalone object 602 or an object 602 that points to a particular 2-element or 3-element triple 600). Thus, a 2-element triple 600 may express a potential application of a phrase rewrite rule 120, where the triple's relationship 604 indicates the phrase rewrite rule 120 and the triple's object 602 indicates the argument to be evaluated by the phrase rewrite rule 120. The four-valued logic system 118 may be used to evaluate the potential application of the phrase rewrite rule 120 to determine whether the phrase rewrite rule 120 should actually be applied. By implementing the phrase rewrite rules 120 with the four-valued logic system 118 in the semantic network 116, where truth values are stored in the parallel storage structures 700, the phrase rewrite rules 120 can constructively model the duality used in computing analogies.
Fig. 9 illustrates a method of evaluating the left portion of a phrase rewrite rule 120 against a particular argument to determine whether the phrase rewrite rule 120 should be applied. The evaluation of a rule may be performed in constant O(c) time according to the process of Fig. 9. In some embodiments, the process of Fig. 9 may be performed by determining whether bit values in the parallel storage structures 700 associated with the argument in the semantic network 116 are correctly quantified.
In step 900, an argument may be passed to the left portion of the phrase rewrite rule 120. The argument may be a 2-element triple 600, a 3-element triple 600, or an individual object or relationship. Some phrase rewrite rules 120 may expect certain types of arguments.
At step 902, the language learning system 100 may evaluate the argument to determine whether everything the left portion expects to be true is set to true in the argument's parallel storage structures 700. If so, the language learning system 100 can move to step 904. If not, the language learning system 100 may move to step 910 and determine that the phrase rewrite rule 120 will not be applied.
At step 904, the language learning system 100 may evaluate the argument to determine whether everything the left portion expects to be false is set to false in the argument's parallel storage structures 700. If so, the language learning system 100 can move to step 906. If not, the language learning system 100 may move to step 910 and determine that the phrase rewrite rule 120 will not be applied.
At step 906, the language learning system 100 can evaluate whether the horizontal features and other types of information expected by the left portion are correctly encoded in the classical propositional calculus. Such horizontal features and other information may be encoded as defined truth values in the four-valued logic system 118. If the expected features are correctly encoded, the language learning system 100 can move to step 908 and apply the phrase rewrite rule 120. If not, the language learning system 100 may move to step 910 and determine that the phrase rewrite rule 120 will not be applied.
At step 908, if the argument's truth values match the criteria expected by the left portion, the language learning system 100 may apply the phrase rewrite rule 120 by replacing the single cell of the left portion with the one or more cells of the right portion. The right-portion cells may inherit some or all of the elements of the argument originally passed to the left portion. As a non-limiting example, when the left portion accepts a 2-element triple 600 and the right portion has two cells, the argument triple 600 may be decomposed such that its object 602 is used as the argument of the first right-portion cell while its relationship 604 is used as the argument of the second right-portion cell. As another non-limiting example, when the left portion accepts a 3-element triple 600 and the right portion has two cells, the argument triple 600 may be decomposed such that the argument's first object 602 is used as the argument of the first right-portion cell while the argument's relationship 604 and second object 602 are used as the argument of the second right-portion cell. As will be described in more detail below, in some phrase rewrite rules, features of the argument inherited by a designated cell of the right portion may also be assigned to the arguments of other cells in the right portion.
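As a non-limiting illustration, the constant-time applicability test of steps 900-910 may be sketched as follows, with the argument's encoded truth values represented as a simple mapping from feature names to the codes "T", "F", "D", and "U". All names here are assumptions for this example only:

```python
def rule_applies(expected_true, expected_false, expected_features, argument):
    """Return True when the rule's left portion matches the argument.

    argument maps feature names to four-valued truth codes.
    """
    # Step 902: everything expected true must be "T" in the argument.
    if any(argument.get(f) != "T" for f in expected_true):
        return False  # step 910: rule is not applied
    # Step 904: everything expected false must be "F" in the argument.
    if any(argument.get(f) != "F" for f in expected_false):
        return False
    # Step 906: horizontal features must carry defined (non-"U") values.
    if any(argument.get(f) not in ("T", "F", "D") for f in expected_features):
        return False
    return True  # step 908: rule is applied

arg = {"noun": "T", "inanimate": "F", "plural": "D"}
```

Since each check inspects a fixed set of positions in the argument's encoding, the whole test runs in constant time for a bounded feature set, mirroring the O(c) evaluation described for Fig. 9.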
In some embodiments, the blackboard 106 may have a list of phrase rewrite rules 120. The language learning system 100 may evaluate propositions starting with the phrase rewrite rule 120 at the top of the list and then move to other phrase rewrite rules 120 as appropriate, based on whether an earlier phrase rewrite rule 120 was applied and/or whether the left-portion cell of an earlier applied rule was broken down into other right-portion cells.
Figs. 10 and 11 illustrate an exemplary embodiment of a list of phrase rewrite rules 120 that may be used to generate sentences of a target language based on input argument triples 600. This list is merely exemplary, as some embodiments of the language learning system 100 may use more phrase rewrite rules 120 and/or different phrase rewrite rules 120. Fig. 10 shows the replacement of the left portion with the right portion of each phrase rewrite rule 120, while Fig. 11 shows horizontal feature inheritance within the right portion of each phrase rewrite rule 120.
The phrase rewrite rules 120 may be used to generate sentences that may be expressed to students during an interactive story, such as sentences that describe the current state of objects 602, relationships 604, or triples 600 in the semantic network 116 as the interactive story is modeled. In some embodiments, the words used to generate a sentence when the phrase rewrite rules 120 are applied may be selected from the vocabulary list 122 generated by the text analyzer 208.
Fig. 10 expresses each phrase rewrite rule in terms of syntactic units. A syntactic unit may represent a start rule or specify a syntactic type. As a non-limiting example, "S" may indicate a start rule, "N" may indicate a noun or noun phrase, "V" may indicate a verb or verb phrase, "Prep" may indicate a preposition or prepositional phrase, and "Det" may indicate a determiner. As a non-limiting example, rule 1 shown in Fig. 10 is a start rule that rewrites into noun and verb units in the right portion. The right-portion noun and verb units each inherit a specified portion of the triple 600 that was passed as an argument to the left portion of the start rule. In Fig. 10, such inheritance is indicated next to the right portion of each rule after the "//" mark, with a line connecting each right-portion cell to the type of element it inherits from the argument. As a non-limiting example, in rule 1 of Fig. 10, the right-portion noun cell may inherit the object 602 ("O") from the input argument while the verb cell inherits the relationship 604 ("R") from the input argument. The noun unit and its argument may then be evaluated using other rules that have a noun as the left-portion unit, such as rule 2. Similarly, the verb unit and its argument may be evaluated using other rules that have a verb as the left-portion unit, such as rule 4.
In some embodiments, a cell in the right portion of a phrase rewrite rule 120 may indicate that the features of the argument inherited by that cell are to be assigned to the other cells in the right portion. As a non-limiting example, an asterisk preceding a syntactic unit in Fig. 10 indicates that the features of that syntactic unit's argument are to be assigned to the other arguments in the right portion. The features may be attributes of the input argument, such as indications that the input object 602 is singular or plural, is animate, is human, should be expressed with or without a determiner or specifier, or has any other attribute. Thus, when the features of the input argument triple 600 are inherited or assigned into the arguments of the right-portion cells of a rule, the words that express these syntactic units may be selected from the vocabulary list 122 such that the words are consistent with each other with respect to the assigned features. As a non-limiting example, when the initial argument comprises a plural object 602, the plural feature of the object 602 may be maintained throughout the process via inheritance and assignment of features, such that the final words used to express the state of the object 602 in the resulting sentence consistently indicate that the object 602 is plural.
FIG. 11 shows examples of horizontal feature assignment in the right-portion cells of various exemplary phrase rewrite rules 120. As a non-limiting example, as illustrated by rule 1, when the input argument of the starting "S" rule is a 2-element triple 600 whose object 602 is plural, the object 602 may be inherited by the "NP" unit in the right portion of the rule. The "VP" unit can inherit the argument's relationship 604, as shown by rule 1 in Fig. 10. However, since an asterisk precedes the "NP" unit, the features of the "NP" unit's inherited argument can also be assigned to the argument of the "VP" unit as shown in Fig. 11, marking the "VP" unit's relationship 604 with the plural feature. Thus, the language learning system 100 may consider this feature when selecting words for expressing the relationship 604, such that the word selected for the verb matches the plural nature of the associated object. Assigning such features after inheritance may allow for long-distance dependencies in the phrase structure grammar. The presence of such features may be ensured by testing against defined truth values when determining whether to apply a phrase rewrite rule.
In some embodiments, a subscript or another symbol associated with each syntax element in the phrase rewrite rules 120 may indicate a priority level. The language learning system 100 may first attempt to apply the higher priority phrase rewrite rules 120. If a phrase rewrite rule 120 is found to be inapplicable to its input arguments, language learning system 100 may move to a lower priority phrase rewrite rule 120 to determine if the lower priority phrase rewrite rule 120 is applicable to the arguments.
In some embodiments, the language learning system 100 can use a stack data structure in the blackboard 106 when evaluating the phrase rewrite rules 120 to produce sentences. As a non-limiting example, when generating a sentence for an input triple 600, the blackboard 106 may first push the "S" unit onto the stack. When an "S" rule is found to be applicable to the input triple 600, the "S" may be popped off the stack and the right-portion syntactic units (such as the "NP" and "VP" units in rule 1 shown in Fig. 10) may be pushed onto the stack. "NP" may be popped off the stack and similarly evaluated using rules that have "NP" in the left portion, with the replacement cells from the right portion of an applicable rule pushed onto the stack. When no additional phrase rewrite rules 120 can be applied to a popped cell, a word matching the grammatical type of the syntactic unit may be selected from the vocabulary list 122 and added to the output sentence. Features that have been inherited by the syntactic unit, such as singular, plural, animate, human, or other features, may be considered so that words fitting the inherited features are selected. The language learning system 100 may then move to the next unit on the stack. When the stack is empty, the resulting sentence is complete. The text of the generated sentence can be visually displayed to the user via the output component 104, and/or corresponding audio can be generated using the audio generation module 204 and the audio resources 202 to be played via the output component 104.
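As a non-limiting illustration, the stack-driven derivation described above may be sketched with a toy rule list and lexicon. The rules, words, and function names here are assumptions for this example only, not the rules of Figs. 10 and 11:

```python
# Toy rewrite rules (left-portion unit -> right-portion units) and a toy
# lexicon standing in for the vocabulary list 122.
RULES = {"S": ["NP", "VP"], "NP": ["Det", "N"]}
LEXICON = {"Det": "the", "N": "puppy", "VP": "barks"}

def generate_sentence(start="S"):
    stack, words = [start], []
    while stack:
        unit = stack.pop(0)  # take the next syntactic unit to expand
        if unit in RULES:
            # Replace the popped cell with the right-portion cells.
            stack = RULES[unit] + stack
        else:
            # No rewrite rule applies: select a word for this unit.
            words.append(LEXICON[unit])
    return " ".join(words)
```

Here the derivation runs S -> NP VP -> Det N VP, and the three terminal units are expressed as words, yielding "the puppy barks". A fuller sketch would also carry inherited features with each stacked unit so that word selection can respect singular/plural and similar agreements.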
Fig. 12 illustrates a process for modeling the state of an interactive story with language learning system 100. As mentioned above, the semantic network 116 can model the state of the interactive story, and as the story progresses, audio and/or visual material representing the current state of the story can be presented to the user via the output component 104.
At step 1202, the language learning system 100 can be initialized with data representing the four-valued logic system 118 and the phrase rewrite rules 120 in the blackboard 106 and/or the semantic network 116. Semantic network 116 may also be initialized with objects 602 and relationships 604 representing story characters, items, settings, associations, and/or other story elements.
In some embodiments, objects 602 and/or relationships 604 for story elements may be initialized according to a preset initial state or according to one of a plurality of possible preset initial states. In other embodiments, at least some aspects of the objects 602 and/or relationships 604 may be randomized, or at least some aspects of the objects 602 and/or relationships 604 may be dynamically selected. By way of non-limiting example, in some embodiments, character names, items, and other story elements may be randomly selected from a preset list and/or the vocabulary list 122 generated by the text analyzer 208 from a text source as shown in Fig. 3.
The semantic network 116 may also be initialized with probabilistic rules that define a probability of a state of one or more objects and/or relationships 604 changing or remaining intact given a known state in the semantic network 116. As a non-limiting example, rules may be defined in the semantic network 116 that indicate the following: when a particular character in a story simulation is deemed to be near and not holding an item in the story simulation, the particular character will have a 50% chance of picking up the item.
In step 1204, the language learning system 100 may begin the story simulation. Beginning with an initial story state, the language learning system 100 may evaluate the set of probabilistic rules to change the state of the story. As a non-limiting example, when a probabilistic rule as described above is evaluated and the state of the semantic network 116 indicates that a character's position in the simulation is near an item's position and the character is not already holding the item, the rule may be applied such that, with a 50% probability, the relationship 604 between the character object 602 and the item object 602 in a 3-element triple 600 changes to indicate that the character is holding the item.
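As a non-limiting illustration, the pick-up rule described above may be sketched as follows, with the story state reduced to a mapping of conditions to four-valued truth codes. The state keys, function names, and injectable random source are assumptions for this example only:

```python
import random

def apply_pickup_rule(state, rng=random.random):
    """With 50% probability, a character near an unheld item picks it up.

    rng is injectable so the probabilistic behavior can be tested
    deterministically.
    """
    near = state["character_near_item"] == "T"
    holding = state["character_holds_item"] == "T"
    if near and not holding and rng() < 0.5:
        # Update the relationship in the character/item 3-element triple.
        state["character_holds_item"] = "T"
    return state

story = {"character_near_item": "T", "character_holds_item": "F"}
```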
As the semantic network 116 is initialized and updated in the blackboard 106, the knowledge sources 114 can access the states of the objects 602 and relationships 604 so that output audio and/or visual material can be generated and presented to the user via the output component 104. By way of non-limiting example, the visual material generation module 212 can direct the graphics processor 112 to render images showing characters, items, or other story elements in the current state modeled by the semantic network 116 using suitable visual resources 210, and these images can be displayed via the 3D projector or other output component 104. Similarly, the audio generation module 204 may direct the audio processor 110 to use the appropriate audio resources 202 to generate audio that may be played via speakers. As a non-limiting example, as story characters in the story simulation interact with each other or with items, dialog or narration expressing the state of the story may be generated according to the phrase rewrite rules 120 as described above, such that a user reads and/or hears words in the target language corresponding to the story in progress.
In step 1206, the language learning system 100 can check the blackboard to determine if any user input has been received via the input component and has been added to the blackboard. If no user input is received, the language learning system 100 may move to step 1208 and continue to model stories according to probabilistic rules as described above, and thereafter return to step 1206 to check for new user input. If user input is detected in step 1206, the language learning system 100 can move to step 1210.
In step 1210, the language learning system 100 can interpret the new user input added to the blackboard 106. When the new user input is a voice recording of a user's question or statement, the control module 108 may activate the language recognition module 206 to interpret the recorded speech. The control module 108 may similarly activate the visual material identification module 214 to identify the location on the generated visual material at which the user was pointing in conjunction with the recorded statement or question.
At step 1212, the language learning system 100 may determine whether a response should be presented to the user in response to the new user input. When no response is required, language learning system 100 may move directly to step 1208 to continue modeling the story simulation. By way of non-limiting example, when the user input is a story-independent statement, the user input can be ignored and language learning system 100 continues to model the story simulation without responding. However, when the language learning system 100 determines that the user input is a question or statement to which the language learning system 100 can respond, the language learning system 100 can move to step 1214 to formulate and present a response. In some embodiments, when the language learning system 100 is unable to determine what the user's question or statement represents, the language learning system 100 may generate a question using the phrase rewrite rules 120 to ask the user more information.
When the user input is a question, at step 1214, the language learning system 100 may attempt to respond to the user's question. By way of non-limiting example, when the visual material showing the current state of the story shows a character smelling a flower and the input component captures the user asking "what is that?" while pointing at the flower, the language learning system 100 may identify the question and recognize that the user is pointing at a portion of the generated visual material representing a flower object 602 in the semantic network. The language learning system 100 may then use the phrase rewrite rules to generate a sentence indicating that the item is a flower, and play corresponding audio saying "that is a flower." Because objects 602 in the semantic network may be dynamically generated and/or named based on the vocabulary list 122 from the text source, users may learn new words in the target language from the vocabulary list 122 by interacting with the story and asking questions about what they see and hear. In some embodiments, the language learning system 100 may pause the story simulation and story presentation while a response is formulated and presented. After the response has been presented, the language learning system 100 can move to step 1208 to continue modeling the story simulation.
When the user input is a statement pertaining to the current state of the story, at step 1214, the language learning system 100 may, in some embodiments, note the statement but proceed to step 1208 to continue modeling the story simulation without directly responding to the user. As a non-limiting example, when the user points at a story item in the generated visual material and says "that is beautiful," the language learning system 100 may record the user's preference for the item in the associated object 602 in the semantic network 116. Such preferences may be used in conjunction with the probabilistic rules such that preferred objects 602 may be more likely to reappear during the interactive story or be treated favorably in the simulation relative to other objects 602.
While the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the art. Accordingly, the invention as described and claimed is intended to embrace all such alternatives, modifications and variations as fall within the spirit and broad scope of the appended claims.
Reference to the literature
The following references are incorporated herein by reference:
Herdan, G. The Advanced Theory of Language as Choice and Chance. New York: Springer-Verlag, 1966.
Herdan, G. Quantitative Linguistics. Belfast: Butterworth & Co., 1964.
Kleene, S. Metamathematics. New York: American Elsevier Publishing Co., 1974.
Kleene, S. Mathematical Logic. New York: Dover Publications, 2002.
Klein, S. "Automatic Paraphrasing in Essay Format". Mechanical Translation and Computational Linguistics. 8:68-83.
Klein, S. "FORWARD: The History of MESSY". In The Meta-symbolic Simulation System User Manual, M. A. Applebaum. UWCS Tech. Report #272, 169 pages.
Klein, S., Aeschliman, Applebaum, Balsisger, Curtis, Foster, Kalish, Kamin, Lee & Price. "Simulation d'hypotheses emises par Propp et Levi-Strauss en utilisant un systeme de simulation meta-symbolique". Informatique et Sciences Humaines, No. 28, pp. 63-133. Mars.
Klein, S. "The Analogical Foundations of Creativity in Language, Culture & the Arts: The Upper Paleolithic to 2100 CE." In Language, Vision & Music, edited by McKevitt, Mulvihill & Nuallin. Amsterdam: John Benjamins, pp. 347-371.
Steedman, M. "Categorial Grammar". University of Pennsylvania Department of Computer and Information Science Technical Report No. MS-CIS-92-52. USA, 1992.
Yngve, V. From Grammar to Science: New foundations for general linguistics. Amsterdam: John Benjamins, 1996.

Claims (16)

1. A language teaching process comprising:
initializing a semantic network modeling an interactive story using triples having one object and one relationship, two objects and one relationship, or one object and two relationships, wherein a truth value associated with each object and relationship is expressed using a four-valued logic system that allows true, false, defined, and undefined values;
defining a phrase rewriting rule set which accepts input triples as parameters;
applying a particular phrase rewrite rule to a particular argument when an element expected to be true by the particular phrase rewrite rule is set to true in the particular argument, an element expected to be false by the particular phrase rewrite rule is set to false in the particular argument, and an element associated with a horizontal feature is encoded as expected in the particular argument;
replacing the left portion of the particular phrase rewrite rule with the syntactic units of the right portion of the particular phrase rewrite rule when the particular phrase rewrite rule is applied, inheriting elements of the left portion's argument into the syntactic units of the right portion, and assigning features of the arguments inherited by designated syntactic units to other syntactic units in the right portion;
generating a sentence of a target language by selecting, from a vocabulary list, words expressing the syntactic units generated by the applied phrase rewriting rules; and
presenting the sentence to the user audibly and/or visually.
2. The language teaching process of claim 1, further comprising: displaying, via a 3D projector, visual material representing a state of the interactive story.
3. The language teaching process of claim 1, further comprising: updating the state of the objects and/or relationships in the semantic network over time by probabilistic rules.
4. A language teaching process according to claim 1, wherein the truth values associated with each object and/or relationship are encoded using two storage structures, wherein the values at corresponding bit positions in the two storage structures indicate an encoded true value, an encoded false value, an encoded defined value, or an encoded undefined value.
5. A language teaching process according to claim 4, wherein the determination of whether a phrase rewrite rule is to be applied is made by testing conditional statements in constant O(c) time against the truth values encoded in said two storage structures associated with elements of the argument.
6. The language teaching process of claim 1, further comprising:
generating a vocabulary graph based on one or more source texts using Good-Turing frequency estimation and stochastic partitioning;
subdividing the vocabulary graph into target vocabulary regions;
generating the vocabulary list based on words in the target vocabulary region; and
defining or modifying the phrase rewrite rule set based on grammatical forms in the target vocabulary region.
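The Good-Turing step named in claim 6 adjusts raw word frequencies using the frequency-of-frequencies statistic, r* = (r+1)·N(r+1)/N(r). The sketch below shows only that adjustment; the vocabulary-graph construction and partitioning around it are the patent's, and the fallback to the raw count when N(r+1) is zero is a simplification (full Good-Turing smooths the N(r) sequence first):

```python
from collections import Counter

# Rough sketch of Good-Turing frequency estimation as used in claim 6 to
# weight words from source texts: r* = (r + 1) * N_{r+1} / N_r.
def good_turing_counts(tokens):
    word_counts = Counter(tokens)
    # N_r = number of distinct words seen exactly r times
    freq_of_freqs = Counter(word_counts.values())
    adjusted = {}
    for word, r in word_counts.items():
        n_r, n_r1 = freq_of_freqs[r], freq_of_freqs.get(r + 1, 0)
        adjusted[word] = (r + 1) * n_r1 / n_r if n_r1 else r  # fall back to raw count
    return adjusted

tokens = "a a a b b c c d e".split()
est = good_turing_counts(tokens)
```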
7. The language teaching process of claim 1, further comprising:
receiving a voice recording of a user captured by a microphone;
identifying spoken words in the recorded user input according to a Markov model;
determining a state of the interactive story related to the identified spoken words;
generating a response to the spoken word using the phrase rewrite rule; and is
Presenting the response to the user.
8. The language teaching process of claim 7, further comprising:
displaying visual material representing a state of the interactive story;
receiving an image of a user captured by a camera;
identifying a gesture performed by a user in the image;
determining a designated area of the visual material pointed to by the user's gesture; and
using the visual material at the designated area to determine, in the semantic network, the objects referenced by the user's spoken words when generating responses to those words.
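The deictic resolution of claim 8 amounts to mapping the screen region a gesture lands on back to the story object rendered there, so a spoken reference can be grounded in the semantic network. The region table and names below are hypothetical:

```python
# Hypothetical sketch of claim 8's gesture grounding: map the screen region
# a user points at to the story object rendered in that region.
regions = {  # bounding boxes (x0, y0, x1, y1) -> object rendered in them
    (0, 0, 50, 50): "dragon",
    (50, 0, 100, 50): "castle",
}

def object_at(regions, x, y):
    """Return the story object whose rendered region contains the point."""
    for (x0, y0, x1, y1), obj in regions.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return obj
    return None

referent = object_at(regions, 20, 10)   # gesture landed in the dragon's region
```

The resolved object would then supply the subject of the triples consulted when generating a spoken response.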
9. A language learning system comprising:
a blackboard storage area for central control and commands, the blackboard storage area storing data representing:
a semantic network modeling the interactive story;
a four-valued logic system that allows true values, false values, defined values, and undefined values; and
a list of phrase rewrite rules;
an audio generation module linked to the blackboard storage area, the audio generation module configured to generate story audio based on the phrase rewrite rules matching the state of the interactive story and to add the story audio to the blackboard storage area;
a speaker linked to the blackboard storage area, the speaker configured to play the story audio from the blackboard storage area to a user;
a visual material generation module linked to the blackboard storage area, the visual material generation module configured to render story visual material representing a state of the interactive story and add the story visual material to the blackboard storage area;
a display component linked to the blackboard storage area, the display component configured to present the story visuals from the blackboard storage area to a user,
wherein the semantic network models the interactive story by objects and relationships, wherein associations between objects and relationships are represented using 2-element triples having one object and one relationship, 3-element triples having two objects and one relationship, or 3-element triples having one object and two relationships,
wherein the state of the interactive story is modeled over time by probabilistic rules, wherein the probabilistic rules change the state of the objects and/or the relationships, and
wherein whether to apply a particular phrase rewrite rule based on a particular input triplet is determined using the four-valued logic system.
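The semantic network of claim 9 can be sketched as a flat collection of the claimed 2-element and 3-element triples mixing objects and relationships. The triples and helper below are illustrative, not the patent's data:

```python
# Illustrative sketch (contents hypothetical) of claim 9's semantic network:
# story state held as 2-element and 3-element triples of objects/relationships.
network = [
    ("dragon", "sleeps"),                 # 2 elements: object + relationship
    ("knight", "holds", "sword"),         # 3 elements: object, relationship, object
    ("knight", "moves", "quickly"),       # 3 elements: object + two relationships
]

def facts_about(network, obj):
    """Return every triple in which the given object or relationship appears."""
    return [t for t in network if obj in t]

knight_facts = facts_about(network, "knight")
```

A matching phrase rewrite rule would then take one such triple as its input argument, subject to the four-valued test of the preceding claims.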
10. The language learning system of claim 9, wherein the display component is a 3D projector.
11. The language learning system of claim 9, wherein the truth value associated with each object and/or relationship is encoded using two storage structures, wherein the values at corresponding bit positions in the two storage structures indicate an encoded true value, an encoded false value, an encoded defined value, or an encoded undefined value.
12. The language learning system of claim 9, wherein one or more of the phrase rewrite rules are applied to the input triples to produce a sentence in a target language that expresses the state of the input triples in the semantic network.
13. The language learning system of claim 12, wherein the phrase rewrite rules and the words selected for the sentence are based on a target vocabulary region subdivided from a vocabulary graph generated from one or more source texts using Good-Turing frequency estimation and stochastic partitioning.
14. The language learning system of claim 9, further comprising:
a microphone linked to the blackboard storage area, the microphone configured to add recorded user input to the blackboard storage area; and
an audio recognition module linked to the blackboard storage area, the audio recognition module configured to recognize spoken words in the recorded user input according to a Markov model and add the recognized words to the blackboard storage area.
15. The language learning system of claim 14, wherein the audio generation module generates sentences using the phrase rewrite rules in response to spoken words identified by the audio recognition module in the recorded user input, and adds the generated sentences to the blackboard storage area for playback through the speaker.
16. The language learning system of claim 14, further comprising:
a camera linked to the blackboard storage area, the camera configured to add an image of a user to the blackboard storage area; and
a visual material recognition module linked to the blackboard storage area, the visual material recognition module configured to recognize a gesture directed at a portion of the story visual material displayed by the display component.
CN201780092872.7A 2017-05-05 2017-05-05 Interactive story system using four-value logic Active CN110832570B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2017/031410 WO2018203912A1 (en) 2017-05-05 2017-05-05 Interactive story system using four-valued logic

Publications (2)

Publication Number Publication Date
CN110832570A true CN110832570A (en) 2020-02-21
CN110832570B CN110832570B (en) 2022-01-25

Family

ID=64016546

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780092872.7A Active CN110832570B (en) 2017-05-05 2017-05-05 Interactive story system using four-value logic

Country Status (3)

Country Link
EP (1) EP3619700A4 (en)
CN (1) CN110832570B (en)
WO (1) WO2018203912A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116099202A (en) * 2023-04-11 2023-05-12 清华大学深圳国际研究生院 Interactive digital narrative creation tool system and interactive digital narrative creation method

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116057544A (en) * 2020-06-07 2023-05-02 罗杰·密德茂尔 Custom interactive language learning system using four-value logic
CN116108830B (en) * 2023-03-30 2023-07-07 山东大学 Syntax-controllable text rewriting method and device

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1083952A (en) * 1992-09-04 1994-03-16 履带拖拉机股份有限公司 Authoring and translation system ensemble
AU5637000A (en) * 1999-06-30 2001-01-31 Invention Machine Corporation, Inc. Semantic processor and method with knowledge analysis of and extraction from natural language documents
CN1474319A (en) * 2002-08-09 2004-02-11 无敌科技股份有限公司 Computer executable story editing and telling forein language teaching system and method thereof
US20090063375A1 (en) * 2004-11-08 2009-03-05 At&T Corp. System and method for compiling rules created by machine learning program
CN101595474A (en) * 2007-01-04 2009-12-02 思解私人有限公司 Language analysis
CN101635006A (en) * 2008-07-22 2010-01-27 中国科学院计算技术研究所 Mutual exclusion and semaphore cell block of multi-core processor satisfying SystemC syntax
US7827254B1 (en) * 2003-11-26 2010-11-02 Google Inc. Automatic generation of rewrite rules for URLs
US20130338997A1 (en) * 2007-03-29 2013-12-19 Microsoft Corporation Language translation of visual and audio input
CN104036780A (en) * 2013-03-05 2014-09-10 阿里巴巴集团控股有限公司 Man-machine identification method and system
CN105045784A (en) * 2014-12-12 2015-11-11 中国科学技术信息研究所 English expression access device method and device
US20160049094A1 (en) * 2014-08-13 2016-02-18 Pitchvantage Llc Public Speaking Trainer With 3-D Simulation and Real-Time Feedback
CN105706092A (en) * 2013-09-03 2016-06-22 罗杰·密德茂尔 Methods and systems of four-valued simulation
CN105706091A (en) * 2013-09-03 2016-06-22 罗杰·密德茂尔 Methods and systems of four valued analogical transformation operators used in natural language processing and other applications
CN105814598A (en) * 2013-10-11 2016-07-27 罗杰·密德茂尔 Methods and systems of four-valued monte carlo simulation for financial modeling
US9571617B2 (en) * 2005-07-26 2017-02-14 International Business Machines Corporation Controlling mute function on telephone

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
J. Ido, Y. Matsumoto, T. Ogasawara and R. Nisimura: "Humanoid with Interaction Ability Using Vision and Speech Information", 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems *

Also Published As

Publication number Publication date
CN110832570B (en) 2022-01-25
EP3619700A4 (en) 2020-10-14
WO2018203912A1 (en) 2018-11-08
EP3619700A1 (en) 2020-03-11

Similar Documents

Publication Publication Date Title
US20200302827A1 (en) Tailored Interactive Learning System Using A Four-Valued Logic
US10249207B2 (en) Educational teaching system and method utilizing interactive avatars with learning manager and authoring manager functions
Griol et al. An architecture to develop multimodal educative applications with chatbots
CN110832570B (en) Interactive story system using four-value logic
Kahn et al. AI programming by children
Su et al. A recursive dialogue game for personalized computer-aided pronunciation training
CN110991195A (en) Machine translation model training method, device and storage medium
Axelsson et al. Using knowledge graphs and behaviour trees for feedback-aware presentation agents
Dall’Acqua et al. Toward a linguistically grounded dialog model for chatbot design
Hofs et al. Natural interaction with a virtual guide in a virtual environment: A multimodal dialogue system
Vyas An Approach of Using Embodied Conversational Agent for Personalized Tutoring
WO2021251940A1 (en) Tailored interactive language learning system using a four-valued logic
Macías et al. New trends on human-computer interaction: Research, development, new tools and methods
Rodrigues et al. Studying natural user interfaces for smart video annotation towards ubiquitous environments
Dündar A robot system for personalized language education. implementation and evaluation of a language education system built on a robot
Magnusson et al. 2007: The Acoustic, the Digital and the Body: A Survey on Musical Instruments
Ruskin Cognitive influences on the evolution of new languages
RU2807436C1 (en) Interactive speech simulation system
TWM572553U (en) Dynamic story-oriented language digital teaching system
Alhosban et al. The effectiveness of aural instructions with visualisations in e-learning environments
KR20190106011A (en) Dialogue system and dialogue method, computer program for executing the method
Klüwer et al. Evaluation of the KomParse Conversational Non-Player Characters in a Commercial Virtual World.
Spierling Models for interactive narrative actions
ONAN et al. ENHANCING AUTOMATIC IMAGE CAPTIONING SYSTEM LSTM
Griol et al. Increasing the role of data analytics in m-learning conversational applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant