CN108536720B - Concept support device and concept support method - Google Patents


Info

Publication number: CN108536720B (grant); application number CN201711235125.2A; other version CN108536720A (Chinese)
Authority: CN (China)
Prior art keywords: relationship, knowledge, proposition, graph, document
Legal status: Active
Inventors: 三好利升, 雷淼媚
Assignee (original and current): Hitachi Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00: Handling natural language data
    • G06F40/20: Natural language analysis
    • G06F40/205: Parsing
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30: Information retrieval of unstructured textual data
    • G06F16/33: Querying
    • G06F16/332: Query formulation
    • G06F16/3329: Natural language query formulation or dialogue systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Machine Translation (AREA)

Abstract

A concept support apparatus and a concept support method are provided that support a user's ideation along the context of the user's composition or speech while the user is writing or speaking. The concept support apparatus holds the following information: information indicating the structure of a 1st graph that includes nodes representing the elements constituting propositions contained in a document defining knowledge, and arrow lines representing the relationships between those elements indicated by the propositions; and information indicating the order in which the relationships appear in the document. The apparatus accepts input of a proposition; extracts the elements contained in the proposition and the relationship between the elements expressed by the proposition; generates a 2nd graph that includes nodes representing the extracted elements and the relationships between them as arrow lines; and, when it determines that a relationship shown in the 2nd graph is included in the 1st graph, identifies a relationship that appears later than that relationship in the document.

Description

Concept support device and concept support method
Technical Field
The present invention relates to a concept supporting apparatus and a concept supporting method.
Background
As background art in this field, Japanese Patent Laid-Open Publication No. 2007-241901 (patent document 1) is known. This publication describes a system provided with: argument extracting means for extracting a plurality of points of argument associated with an input theme from a group of opinion texts on that theme; inherent-degree calculating means for calculating, for each point of argument, an inherent degree indicating the proportion of opinions whose position corresponds to that point among the opinion texts containing it; importance calculating means for calculating, for each point of argument, the importance of the opinions whose position matches; related-word extracting means for extracting related words associated with the points of argument; representative-opinion selecting means for selecting a plurality of representative opinions among the positive or negative opinions for each point of argument; and interface means for outputting the inherent degree and importance of each point of argument output from the respective means (see the abstract of the description).
Patent document 1: Japanese Patent Laid-Open Publication No. 2007-241901
Disclosure of Invention
For example, consider a user writing a composition or speaking about a given subject. The technique described in patent document 1 supports the user's composition or speech by extracting points of argument on the given subject in advance.
However, as the user develops the composition or speech, the points of argument extracted by the technique of patent document 1 are not necessarily suited to the context of the composition or conversation at that moment. It is therefore an object of the present invention to support a user's ideation along the context of the user's composition or speech while the user is writing or speaking.
In order to solve the above problem, one aspect of the present invention employs the following configuration. A concept support apparatus that supports a user's ideation includes a processor and a memory. The memory holds the following information: graph information indicating the structure of a 1st graph that includes nodes representing the elements constituting propositions contained in a document defining knowledge and 1st-type arrow lines representing the relationships between those elements indicated by the propositions; and context information indicating the order in which the relationships appear in the document. The processor receives input of a proposition; extracts the elements contained in the input proposition and the relationship between the elements represented by the input proposition; generates a 2nd graph that includes nodes representing the extracted elements and the relationships between the extracted elements as 1st-type arrow lines; when it determines, referring to the graph information, that a relationship shown in the 2nd graph is included in the 1st graph, identifies, referring to the context information, a relationship that appears later than that relationship in the document; and outputs information indicating the identified relationship.
Effects of the invention
One aspect of the present invention can support a user's ideation along the context of the user's composition or speech while the user is writing or speaking.
Problems, configurations, and effects other than those described above will become apparent from the following description of the embodiments.
Drawings
Fig. 1 is a block diagram showing a configuration example of the concept support apparatus according to embodiment 1.
Fig. 2 is a flowchart showing an example of processing performed by the concept support apparatus according to the present embodiment.
Fig. 3A is an example of a knowledge graph representing knowledge in a document of embodiment 1.
Fig. 3B is an example of a knowledge graph representing knowledge in a document of embodiment 1.
Fig. 3C is an example of a knowledge graph representing knowledge in a document of embodiment 1.
Fig. 3D is an example of the node table of embodiment 1.
Fig. 3E is an example of the arrow table of embodiment 1.
Fig. 4A is an example of a knowledge graph representing knowledge in a document of embodiment 1.
Fig. 4B is an example of a knowledge graph representing knowledge in a document of embodiment 1.
Fig. 4C is an example of the node table of embodiment 1.
Fig. 4D is an example of the arrow table of embodiment 1.
Fig. 5A is an example of a knowledge graph representing knowledge in a document of embodiment 1.
Fig. 5B is an example of a knowledge graph representing knowledge in a document of embodiment 1.
Fig. 5C is an example of a knowledge graph representing knowledge in a document of embodiment 1.
Fig. 5D is an example of a knowledge graph representing knowledge in a document of embodiment 1.
Fig. 5E is an example of a knowledge graph representing knowledge in a document of embodiment 1.
Fig. 6A is an example of a knowledge graph representing knowledge in a document of embodiment 1.
Fig. 6B is an example of the node table of embodiment 1.
Fig. 6C is an example of the arrow table of embodiment 1.
Fig. 7A is an example of a knowledge graph representing knowledge in a document of embodiment 1.
Fig. 7B is an example of the subsequent knowledge table of embodiment 1.
Fig. 8A is an example of a knowledge graph representing knowledge in a document of embodiment 1.
Fig. 8B is an example of the subsequent knowledge table of embodiment 1.
Fig. 9A is an example of a knowledge graph generated in the speech understanding processing of embodiment 1.
Fig. 9B is an example of a knowledge graph generated in the speech understanding processing of embodiment 1.
Fig. 9C is an example of the transition of screens displayed on the display device of embodiment 1.
Description of the reference symbols
103 document DB; 105 knowledge graph DB; 107 context DB; 201 concept support apparatus; 202 input device; 203 display device; 204 communication device; 205 arithmetic device (CPU); 206 memory; 207 auxiliary storage device; 211 knowledge graph generating unit; 212 context calculation unit; 213 speech understanding unit; 214 context recognition unit; 215 stage recognition unit.
Detailed Description
Embodiments of the present invention will be described below with reference to the drawings. It should be noted that the present embodiment is merely an example for implementing the present invention, and does not limit the technical scope of the present invention.
Composition training (for example, writing treatises) is carried out in schools and elsewhere to develop expressive and thinking skills. Unlike knowledge that can be acquired through problem sets and the like, composition is difficult to self-study, and individual instruction by a teacher is often required to improve a student's composition technique. Moreover, students unfamiliar with composition often do not know what to write first, so careful guidance from a teacher is often needed. The same applies to foreign-language conversation training, where learners often do not know what to say first.
Therefore, the present embodiment supports ideation for composition writing by presenting hints or comments on the user's composition. Similarly, the present embodiment presents hints for the next utterance in a foreign-language conversation to assist the conversation. This lets the user become familiar with foreign-language conversation, making it easier to go on to practice conversation with another person. The concept support apparatus according to the present embodiment supports the user's ideation by giving the user the next hint or comment during composition or conversation, and thereby supports the user's composition writing or conversation.
[Embodiment 1]
Fig. 1 is a block diagram showing a configuration example of the concept support apparatus of the present embodiment. The concept support apparatus of the present embodiment is, for example, a computer including an input device 202, a display device 203, a communication device 204, an arithmetic device (CPU) 205, a memory 206, and an auxiliary storage device 207.
The input device 202, for example a keyboard and a mouse, receives input such as commands from the user, including commands for controlling the programs executed by the arithmetic device (CPU) 205 and the devices connected to the concept support apparatus 201. The communication device 204, for example, transmits the processing results of the concept support apparatus 201 to an external device or receives information from an external device according to a predetermined protocol.
The arithmetic device (CPU) 205 includes a processor and executes the programs stored in the memory 206. The memory 206 includes a ROM, a nonvolatile memory element, and a RAM, a volatile memory element. The ROM holds invariant programs (e.g., a BIOS) and the like. The RAM is a high-speed, volatile memory element such as a DRAM (Dynamic Random Access Memory) and temporarily stores the programs executed by the processor and the data used in executing them.
The auxiliary storage device 207 is a large-capacity nonvolatile storage device such as a magnetic storage device (HDD) or flash memory (SSD), and stores the programs executed by the arithmetic device (CPU) 205 and the data used in executing them. That is, a program is read from the auxiliary storage device 207, loaded into the memory 206, and executed by the arithmetic device (CPU) 205.
A program executed by the arithmetic device (CPU) 205 is supplied to the concept support apparatus 201 via a removable medium (CD-ROM, flash memory, or the like) or a network, and is stored in the nonvolatile auxiliary storage device 207, a non-transitory storage medium. For this purpose, the concept support apparatus 201 preferably has an interface for reading data from removable media.
The concept support apparatus 201 is a computer system configured on one physical computer, or on a plurality of logically or physically configured computers; it may operate in separate threads on the same computer, or on virtual computers built on a plurality of physical computer resources.
The arithmetic device (CPU) 205 includes a knowledge graph generating unit 211, a context calculation unit 212, a speech understanding unit 213, a context recognition unit 214, and a stage recognition unit 215. For example, the arithmetic device (CPU) 205 functions as the knowledge graph generating unit 211 by operating in accordance with a knowledge graph generation program loaded into the memory 206, and functions as the context calculation unit 212 by operating in accordance with a context calculation program loaded into the memory 206. The same applies to the other units included in the arithmetic device (CPU) 205.
The knowledge graph generating unit 211 generates graphs (knowledge graphs) representing the knowledge expressed by the documents in the document DB 103, described later. The context calculation unit 212 calculates the order in which the knowledge represented by the knowledge graphs held in the knowledge graph DB 105, described later, appears in the documents of the document DB 103. The speech understanding unit 213 receives input of a composition or conversation from the user and creates text data of the composition or conversation.
The context recognition unit 214 calculates the order in which knowledge appears in the text data created by the speech understanding unit 213, and compares it with the order of appearance calculated by the context calculation unit 212. The stage recognition unit 215 recognizes the stage of the user's composition or conversation by determining whether content required in advance for the composition or conversation is missing from the text input by the user.
The auxiliary storage device 207 holds the document DB 103, the knowledge graph DB 105, and the context DB 107. The document DB 103 holds documents written in natural language, in particular treatises and the like. Since the document DB 103 holds the documents that define knowledge in the concept support apparatus 201, that is, the documents serving as knowledge sources, it preferably holds documents describing the subjects under discussion or content related to such discussions.
Web documents, news articles, reports within organizations, academic papers, white papers of government agencies and international institutions, survey reports of think tanks, and the like are examples of documents held in the document DB 103. The user's past speech history is another example. The document DB 103 may also hold bibliographic information on each document.
The knowledge graph DB105 stores information indicating the structure of the knowledge graph generated by the knowledge graph generation unit 211. The context DB107 stores information indicating the appearance order of the knowledge calculated by the context calculation unit 212.
In the present embodiment, the information used by the concept support apparatus 201 does not depend on the data structure, and may be represented by any data structure. For example, the information may be held by a data structure appropriately selected from a table, a list, a database, or a queue. In this embodiment, an example is shown in which each data held in the auxiliary storage device 207 is expressed by a table structure.
The concept support apparatus 201 may omit at least one of the input device 202, the display device 203, and the communication device 204. When the concept support apparatus 201 does not include the input device 202, for example, the communication device 204 receives input of commands and the like from an external device. When the concept support apparatus 201 does not include the display device 203, for example, the communication device 204 transmits the processing results generated by the concept support apparatus 201 to an external device.
Each processing unit may perform input and output to and from other processing units via the memory 206 or the auxiliary storage device 207. For example, when the knowledge map generating unit 211 outputs the processing result to the context calculating unit 212, the knowledge map generating unit 211 may actually store the processing result in the memory 206 or the auxiliary storage device 207, and the context calculating unit 212 may acquire the output result stored in the memory 206 or the auxiliary storage device 207 as an input.
Fig. 2 is a flowchart showing an example of processing performed by the concept support apparatus 201 according to the present embodiment. The processing executed by the concept support apparatus 201 is roughly divided into 2 parts, learning processing (S110) and operation processing (S120). The learning processing includes steps S111 to S112, and the operation processing includes steps S121 to S125. First, the learning processing (S110) will be described. To begin with, the knowledge graph generating unit 211 converts the document information stored in the document DB 103 into graph-form knowledge, thereby generating the knowledge graphs stored in the knowledge graph DB 105 (S111). The details of step S111 are described below.
Fig. 3A to 3C are examples of knowledge graphs representing knowledge in a document. Although the present embodiment uses documents written in Japanese, the same processing applies to other languages such as English. In the figures below, nodes drawn as single-line ellipses represent entities: real-world objects such as people, things, and organizations, or abstract concepts. The words written along the arrow lines between entities indicate the relationships between those entities.
The knowledge graph 301 is a graph generated by the knowledge graph generating unit 211 from the sentence "An expert stated that the adoption of uniforms in schools would improve discipline." in a document in the document DB 103.
In generating a knowledge graph, the knowledge graph generating unit 211 first extracts entities from the sentence, for example using machine learning. That is, a training document set in which the entities to be extracted are tagged is prepared in advance; the knowledge graph generating unit 211 creates a recognizer for extracting entities from it and extracts the entities in a sentence using the created recognizer. Assume that the entities in the sentence of fig. 3A are "expert", "uniform", and "discipline".
The knowledge graph generating unit 211 may also extract entities by referring to a word dictionary or an ontology prepared in advance. Since most entities are expressed by phrases corresponding to nouns or adjectives, the knowledge graph generating unit 211 may instead perform syntax analysis and extract nouns or adjectives as entities.
Next, the knowledge graph generating unit 211 computes the relationships between the entities. In the sentence "An expert stated that the adoption of uniforms in schools would improve discipline.", "uniform" and "discipline" are arguments of the predicate "improve" ("uniform" is the subject and "discipline" is the object). By syntax analysis, the knowledge graph generating unit 211 extracts predicates that take 2 entities as arguments as the relationships between those entities.
The knowledge graph generating unit 211 generates the knowledge graph 301 by connecting the 2 entities and the relationship between them with an arrow line: the node representing the subject entity is the start point and the node representing the object entity is the end point. The knowledge graph may also be an undirected graph.
Although the example here extracts a predicate taking entities (subject and object) as arguments as the relationship, other relationships between entities may be extracted. For example, the knowledge graph generating unit 211 may extract a modification relationship between entities, or the distance between entities in the text, as the relationship. It may also extract relationships using a method such as Semantic Role Labeling.
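As a concrete illustration of this extraction step, the following is a minimal sketch in Python. It is not the implementation of the present embodiment: it assumes an off-the-shelf dependency parser (spaCy is used here purely as an example of the syntax analysis mentioned above) and extracts one (subject, predicate, object) triple per verb.

```python
# Minimal sketch of predicate-argument extraction; spaCy and its English
# model are assumptions, the embodiment itself is parser-agnostic.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_triples(sentence: str):
    """Extract (subject, predicate, object) triples, one per verb."""
    doc = nlp(sentence)
    triples = []
    for token in doc:
        if token.pos_ == "VERB":
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "obj")]
            for subj in subjects:
                for obj in objects:
                    # The entities become nodes; the predicate becomes the arrow line.
                    triples.append((subj.lemma_, token.lemma_, obj.lemma_))
    return triples

print(extract_triples("Uniforms improve discipline."))
# [('uniform', 'improve', 'discipline')]
```

In practice the entity recognizer described above would run first, and only triples whose arguments are recognized entities would be kept.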
The knowledge graph 302 is generated by the knowledge graph generating unit 211 from the sentence "Uniforms in school hinder freedom." in a document in the document DB 103. In the same manner as in the example of fig. 3A, the knowledge graph generating unit 211 extracts the 2 entities "uniform" and "freedom" and the relationship "hinder" between them from the sentence, and generates the knowledge graph 302.
The knowledge graph generator 211 extracts entities from each sentence in the given document in this manner, and describes the relationships between the entities in the sentence using the knowledge graph.
Fig. 3C is an example of a knowledge graph obtained by merging 2 knowledge graphs. For example, when the same node exists in a plurality of knowledge graphs, the knowledge graph generating unit 211 may merge the plurality of knowledge graphs by regarding the node as the same node. The knowledge graph 303 is a knowledge graph generated by the knowledge graph generation unit 211 by regarding the "uniform" node in the knowledge graph 301 and the "uniform" node in the knowledge graph 302 as the same node.
The knowledge graph generating unit 211 regards, for example, nodes whose surface expressions are identical as the same node. When generating nodes using a word dictionary, an ontology, or the like, it may also treat expressions that are identical in the word dictionary or ontology as the same node; specifically, for example, it may regard nodes of synonyms as the same. Likewise, among the arrow lines connecting 2 nodes on the graph, the knowledge graph generating unit 211 may regard arrow lines representing the same relationship as the same.
The knowledge graph generation unit 211 represents the information of the generated knowledge graph by, for example, tables shown in fig. 3D and 3E, and stores the tables in the knowledge graph DB 105. Fig. 3D is an example of the node table stored in the knowledge graph DB 105. The node table 304 is a table holding information of nodes of the knowledge graph. The node table 304 of FIG. 3D holds information for the nodes of the knowledge graph 303.
The knowledge graph generation unit 211 assigns an ID for identifying a node on the generated knowledge graph to the node of the knowledge graph, and stores the assigned ID in the node table 304. Further, the knowledge graph generation unit 211 may include other information about the node in the node table 304. In the example of fig. 3D, the node table 304 includes information on the surface representation of each node. For example, the node table 304 may include bibliographic information of documents from which surface-layer expressions corresponding to the nodes are extracted.
Fig. 3E is an example of an arrow table stored in the knowledge graph DB 105. The arrow table 305 is a table containing information of arrows of the knowledge chart. In the example of fig. 3E, the arrow table 305 holds information of the arrows of the knowledge graph 303.
The knowledge graph generating unit 211 assigns to each arrow line of the generated knowledge graph an ID identifying it on the graph, and stores each arrow ID in the arrow table 305 in association with the IDs of its start node and end node. The knowledge graph generating unit 211 may include other information about the arrow lines in the arrow table 305; in the example of fig. 3E, the arrow table 305 includes the surface expression of each arrow line, and it may also include bibliographic information of the documents from which those surface expressions were extracted. The structure of a knowledge graph is thus defined by the node table 304 and the arrow table 305.
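To make the table layout concrete, the following sketch renders the node table 304 and the arrow table 305 as in-memory records. The field names and the concrete IDs are assumptions made for the sketch; only the meaning of the columns comes from the description above. The order column and the statement-arrow flag anticipate fig. 4C and fig. 4D below.

```python
# Illustrative in-memory form of the node table 304 and the arrow table 305.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    node_id: str            # e.g. "N1"
    surface: Optional[str]  # surface expression, e.g. "uniform"
    order: int = 1          # 1 for plain entity nodes (see fig. 4C)

@dataclass
class Arrow:
    arrow_id: str               # e.g. "E1"
    start: str                  # ID of the start node
    end: str                    # ID of the end node
    surface: Optional[str]      # e.g. "improve"; null for statement arrows
    is_statement: bool = False  # statement-arrow flag (see fig. 4D)

# Knowledge graph 303: uniform -improve-> discipline, uniform -hinder-> freedom
# (the IDs here are arbitrary for this sketch).
nodes = {
    "N1": Node("N1", "uniform"),
    "N2": Node("N2", "discipline"),
    "N3": Node("N3", "freedom"),
}
arrows = {
    "E1": Arrow("E1", "N1", "N2", "improve"),
    "E2": Arrow("E2", "N1", "N3", "hinder"),
}
```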
Another example of the knowledge graph generation processing is described below. Fig. 4A and 4B show examples of knowledge graphs. The sentence of fig. 4A is the same as the sentence of fig. 3A. The knowledge graph 301 represents the information "uniforms improve discipline" contained in this sentence. However, the knowledge graph 301 does not represent the information that an expert stated "uniforms improve discipline" (that is, the "state" relationship) contained in the sentence.
If a statement such as "uniforms improve discipline" is regarded as an entity, the knowledge graph generating unit 211 can generate the knowledge graph 402 in addition to the knowledge graph 301. A statement is a proposition representing a plurality of elements and the relationships between those elements. Each element may be an entity, or may itself be a statement.
Since the knowledge graph 301 and the knowledge graph 402 are independent knowledge graphs, the relationship between the 2 graphs expressed by the sentence of fig. 4A cannot be represented. Moreover, when many related pieces of knowledge are expressed by such independent knowledge graphs, the number of entities becomes enormous.
Therefore, to represent such information, the knowledge graph generating unit 211 generates the knowledge graph 403 shown in fig. 4B. An example of the process of generating the knowledge graph 403 follows. As described above, when at least one of the objects connected by a relationship in a sentence of the document is itself a statement, the knowledge graph generating unit 211 first generates a knowledge graph in which that statement is regarded as an entity, as in the knowledge graph 402.
When the knowledge graph generating unit 211 determines that a node exists that contains all the information of a statement represented by another knowledge graph generated from the same document (that is, that a nested structure exists), it generates a node representing that statement (hereinafter also referred to as a representative node) from the other knowledge graph.
In the example of fig. 4A, the "uniforms improve discipline" node of the knowledge graph 402 contains all the information of the statement represented by the knowledge graph 301 generated from the same document, so the knowledge graph generating unit 211 generates a representative node. For example, it converts the "improve" arrow line representing the relationship of the knowledge graph 301 into an "improve" node and takes that node as the representative node. The knowledge graph generating unit 211 then generates an arrow line connecting the "uniform" node (originally the start point of the "improve" arrow line) to the representative node, and an arrow line connecting the "discipline" node (originally the end point of the "improve" arrow line) to the representative node. This produces the portion of the knowledge graph 403 surrounded by the rectangular dotted line.
As described above, the "improve" node drawn with a double line in the knowledge graph 403 is a representative node representing 1 statement consisting of 3 nodes ("uniform", "improve", "discipline"). A representative node produced by converting relationships into representative nodes N times (N is an integer of 1 or more) is referred to as an (N+1)th-order node; the double-lined "improve" node of the knowledge graph 403 is therefore a 2nd-order node. A node representing an entity itself is referred to as a 1st-order node.
The arrow lines drawn with double lines in the knowledge graph 403 do not represent relationships between nodes; they indicate that a node represents a statement, here the statement consisting of the 3 nodes in the portion surrounded by the rectangular dotted line in the knowledge graph 403. Such an arrow line is hereinafter referred to as a statement arrow line.
As described above, in the knowledge graph 402, the "expert" node as start point and the "uniforms improve discipline" node as end point are connected by an arrow line representing the relationship "state". The knowledge graph generating unit 211 therefore generates the knowledge graph 403 by connecting the 2 nodes with an arrow line representing the "state" relationship, starting from the "expert" node and ending at the "improve" node that represents the statement "uniforms improve discipline".
In this way, the knowledge graph generating unit 211 can express both "uniforms improve discipline" and the fact that an expert stated it with the single knowledge graph 403. If a knowledge graph containing the statement itself as an entity remains after the graph merging processing described above, the knowledge graph generating unit 211 may delete that knowledge graph.
Fig. 4C is an example of the node table 304. The node table 304 may also hold the order of each node; that is, the knowledge graph generating unit 211 stores each node ID in the node table 304 in association with the node's order.
Fig. 4D shows an example of the arrow table 305. The arrow table 305 may also hold a flag indicating whether each arrow line is a statement arrow line; that is, the knowledge graph generating unit 211 stores each arrow ID in the arrow table 305 in association with such a flag. Since a statement arrow line has no surface expression, the arrow table 305 stores a null value in the surface expression column for statement arrow lines.
By generating knowledge graphs with a structure that includes 2nd-order nodes representing statements in this manner, the knowledge graph generating unit 211 can express not only simple relationships between entities but also more complex information in a knowledge graph.
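The conversion of a relationship arrow line into a representative node can be sketched as follows, reusing the Node and Arrow records from the earlier sketch. The order rule (one higher than the deepest endpoint) and the direction of the generated statement arrow lines are one consistent reading of the description above, not a verbatim transcription of the embodiment.

```python
import itertools

_ids = itertools.count(100)  # illustrative ID generator

def reify(arrow_id: str, nodes: dict, arrows: dict) -> str:
    """Replace the relation arrow `arrow_id` by a representative node,
    linked to the arrow's former endpoints by statement arrow lines."""
    rel = arrows.pop(arrow_id)
    rep_id = f"N{next(_ids)}"
    # Assumed order rule: one higher than the deepest endpoint, so reifying
    # "improve" over two 1st-order nodes yields a 2nd-order node.
    order = max(nodes[rel.start].order, nodes[rel.end].order) + 1
    nodes[rep_id] = Node(rep_id, rel.surface, order=order)
    for endpoint in (rel.start, rel.end):
        eid = f"E{next(_ids)}"
        # Statement arrow lines carry no surface expression (null value).
        arrows[eid] = Arrow(eid, endpoint, rep_id, None, is_statement=True)
    return rep_id

# Knowledge graph 403, "an expert stated that uniforms improve discipline":
rep = reify("E1", nodes, arrows)                # <improve> now represents the statement
nodes["N5"] = Node("N5", "expert")
arrows["E9"] = Arrow("E9", "N5", rep, "state")  # expert -state-> <improve>
```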
Fig. 5A to 5C show examples of knowledge graphs. The knowledge graph 501 is generated from the sentence "Uniforms improve discipline but hinder freedom." As described with reference to fig. 3C, the knowledge graph generating unit 211 first generates from this sentence the knowledge graph 303 representing "uniforms improve discipline" and "uniforms hinder freedom".
The knowledge graph generating unit 211 refers, for example, to a dictionary prepared in advance, and when it determines that the sentence contains a surface expression indicating an adversative ("but") construction, it generates representative nodes from the main clause "uniforms improve discipline" and the subordinate clause "uniforms hinder freedom". For example, in the same manner as described above, it generates a representative node "improve" from the main clause and a representative node "hinder" from the subordinate clause. Both representative nodes are 2nd-order nodes.
The knowledge graph generating unit 211 generates a statement arrow line connecting the "uniform" node (originally the start point of the "improve" arrow line) to the representative node "improve", and a statement arrow line connecting the "discipline" node (originally the end point of that arrow line) to the same representative node. Similarly, it generates a statement arrow line connecting the "uniform" node (originally the start point of the "hinder" arrow line) to the representative node "hinder", and a statement arrow line connecting the "freedom" node (originally the end point of that arrow line) to the same representative node. All of these generated arrow lines are statement arrow lines.
Further, the knowledge graph generating unit 211 generates an "adversative" arrow line representing the adversative relationship, starting from the representative node "improve" of the main clause and ending at the representative node "hinder" of the subordinate clause. Through this processing, the knowledge graph generating unit 211 generates the knowledge graph 501.
Next, an example will be described in which the knowledge graph generating unit 211 generates the knowledge graph 503 shown in fig. 5C from the sentence shown in fig. 5B, "According to a survey, uniforms improve discipline." First, the knowledge graph generating unit 211 generates the knowledge graph 301 in the same manner as in the example of fig. 4A. It further generates the knowledge graph 502, for example as follows.
When the knowledge graph generating unit 211, referring to a dictionary prepared in advance, determines that the sentence contains a word indicating an information source ("survey" in fig. 5B), it performs syntax analysis or the like to extract the information attributed to that source ("uniforms improve discipline" in fig. 5B).
The knowledge graph generating unit 211 regards the statement "uniforms improve discipline" as an entity and generates an "information source" arrow line from the "survey" node to the "uniforms improve discipline" node as the relationship between these 2 entities. In this way, the knowledge graph generating unit 211 can generate the knowledge graph 502 shown in fig. 5C.
Further, the knowledge graph generating unit 211 may generate the knowledge graph 503 by merging the knowledge graph 301 and the knowledge graph 502 in the same manner as described with reference to fig. 4A and 4B.
Next, an example will be described in which the knowledge graph generating unit 211 generates the knowledge graph 505 shown in fig. 5E from the sentence shown in fig. 5D, "An expert stated that uniforms improve discipline but hinder freedom." From the part "uniforms improve discipline but hinder freedom", the knowledge graph generating unit 211 generates the knowledge graph 501 using the method described with reference to fig. 5A. It then generates the knowledge graph 504 by regarding the statement "uniforms improve discipline but hinder freedom" as an entity.
Since the "uniforms improve discipline but hinder freedom" node of the knowledge graph 504 contains all the information of the statement represented by the knowledge graph 501 generated from the same sentence, the "adversative" arrow line representing the relationship of the knowledge graph 501 is converted into an "adversative" node, which becomes the representative node. The order of this "adversative" representative node is 3.
The knowledge graph generating unit 211 generates a statement arrow line having the "improve" node, the start point of the "adversative" arrow line, as its start point and the "adversative" node as its end point, and a statement arrow line having the "hinder" node, the end point of the "adversative" arrow line, as its start point and the "adversative" node as its end point.
Further, the knowledge graph generating unit 211 generates the knowledge graph 505 by connecting 2 nodes with an arrow line representing the "state" relationship, starting from the "expert" node and ending at the "adversative" node that represents "uniforms improve discipline but hinder freedom".
As described above, the knowledge graph generating unit 211 of the present embodiment expresses nested information in graph-form knowledge by using higher-order nodes and the special arrow lines (statement arrow lines) connected to them. It can thus represent not only simple information but also complex contexts with a small number of knowledge graphs. That is, the concept support apparatus 201 of the present embodiment can handle complex contexts while keeping the data volume of the knowledge graph DB 105 small, which in turn speeds up the knowledge graph matching in the speech understanding processing described later.
Returning to the description of fig. 2. Next, the context calculation unit 212 executes a context calculation process for calculating the order in which information appears in a sentence, using the knowledge graph held in the knowledge graph DB105 (S112).
As described above, the knowledge graph generating unit 211 generates the knowledge graphs shown in fig. 3C, 4B, 5A, 5C, 5D, and the like by converting individual pieces of information extracted from sentences into graphs and then merging nodes regarded as identical according to preset criteria (surface expression, word dictionary, position in the text, and the like).
Fig. 6A shows an example of a knowledge graph. The article 601 is an example of a treatise opposing the introduction of uniforms in schools. The article 601 first states an opinion against the introduction of uniforms, then describes the advantage that uniforms improve discipline, followed by the disadvantage that they hinder students' freedom. The article 601 further argues that adopting a specific uniform harms the fairness of the market.
Thus, in composition, conversation, discussion, and the like, there is a flow (context) in which information is presented. The context calculation unit 212 of the present embodiment calculates the context of the text in the document DB 103 and stores the calculation result in the context DB 107. It does so by calculating the order in which knowledge appears in the articles.
The concept support apparatus 201 of the present embodiment recognizes the context of the user's composition or conversation using the knowledge graph DB 105 and the context DB 107, and presents the user with appropriate hints, comments, questions, and the like that take the context into account. In this way, the concept support apparatus 201 can broaden the user's thinking and support ideation.
The details of the context calculation processing (S112) follow. Knowledge, in the context calculation processing, means knowledge represented by a relationship contained in a statement, that is, knowledge represented by a node of order 2 or higher or by an arrow line that is not a statement arrow line. The knowledge graph 602 of fig. 6A is an example of a knowledge graph generated from the article 601. Fig. 6B is an example of the node table 304 corresponding to the knowledge graph 602, and fig. 6C is an example of the arrow table 305 corresponding to it.
The context calculation unit 212 calculates, on the knowledge graph, the order in which knowledge appears in each document included in the document DB 103. For example, the context calculation unit 212 may express the knowledge "uniforms improve discipline" in the knowledge graph 602 as a vector of the IDs of the nodes and arrow lines involved, arranged in their order of appearance in the text: (N1, E1, N2, E2, N3). By this method, the knowledge "uniforms improve discipline but hinder freedom" can be expressed as (N1, E1, N2, E2, N3, E3, N4, E5, N1, E6, N6, E7, N7).
Preferably, however, the context calculation unit 212 expresses knowledge more compactly by the following method: it identifies each piece of knowledge by an arrow line that is not a statement arrow line, or by a node of order 2 or higher. In this way, the context calculation unit 212 can handle concisely even complex knowledge in the knowledge graph, such as "uniforms improve discipline but hinder freedom", involving compound sentences, coordinate clauses, conjunctions, and the like. A concrete example follows.
First, the knowledge graph 602 includes, as nodes of order 2 or higher, the 2nd-order nodes N2 and N6 and the 3rd-order node N4. The 2nd-order node N2 "improve" is linked to the node N1 "uniform" and the node N3 "discipline" by statement arrow lines, and represents the knowledge "uniforms improve discipline". Similarly, the 2nd-order node N6 "hinder" represents the knowledge "uniforms hinder freedom". The 3rd-order node N4 "adversative" is connected by statement arrow lines to N2, representing "uniforms improve discipline", and to N6, representing "uniforms hinder freedom", and thus represents the knowledge "uniforms improve discipline but hinder freedom".
Next, the knowledge graph 602 includes E4 and E8 as arrow lines other than statement arrow lines. The arrow line E4 links the node N5 "expert" with the node N4 "adversative" and represents the knowledge "an expert stated that uniforms improve discipline but hinder freedom". Likewise, the arrow line E8 represents the knowledge "uniforms harm the fairness of the market".
By this method, the context calculation unit 212 can express even complex knowledge such as "uniforms improve discipline but hinder freedom" simply as N4; that is, it expresses each piece of knowledge concisely by the ID of a non-statement arrow line or of a node of order 2 or higher.
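Under the assumptions of the earlier sketches, this compact representation reduces to a simple filter over the two tables:

```python
# Each piece of knowledge is identified either by a node of order 2 or
# higher or by an arrow line that is not a statement arrow line.
def knowledge_units(nodes: dict, arrows: dict) -> list:
    units = [nid for nid, node in nodes.items() if node.order >= 2]
    units += [eid for eid, arrow in arrows.items() if not arrow.is_statement]
    return units

# Applied to the knowledge graph 602 this would yield, for example,
# ["N2", "N6", "N4", "E4", "E8"] -- the five pieces of knowledge above.
```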
In the context calculation processing, the context calculation unit 212 calculates the order in which such knowledge appears in each document. An example of the calculation method is described with reference to fig. 7A and 7B. Fig. 7A shows an example of a knowledge graph. The knowledge graph 703 is generated from the articles 701 and 702 included in the document DB 103. When knowledge is expressed by non-statement arrow lines and nodes of order 2 or higher as described above, knowledge appears in the article 701 in the order N6, E8, N2, E9. Similarly, in the article 702, knowledge appears in the order N6, E9.
For each piece of knowledge appearing in the knowledge graph 703, the context calculation unit 212 counts the occurrences of the knowledge that appears after it in the document DB 103 (hereinafter also referred to as subsequent knowledge), and stores the counts in the subsequent knowledge table held in the context DB 107.
That is, the context calculation unit 212 identifies the knowledge corresponding to the nodes of order 2 or higher and the non-statement arrow lines appearing in the knowledge graph 703. For each identified piece of knowledge, it counts how many times each other such piece of knowledge appears after it in the document DB 103.
Fig. 7B shows an example of the subsequent knowledge table. The subsequent knowledge table 704 holds, in association, the node ID or arrow ID corresponding to each piece of knowledge, its subsequent knowledge, and the number of times that subsequent knowledge appears after it. For example, row 2 of the subsequent knowledge table 704 of fig. 7B shows that after the knowledge N6, the knowledge E8 appears 1 time, the knowledge N2 appears 1 time, and the knowledge E9 appears 2 times. Since E9 appears once after N6 in each of the articles 701 and 702, the occurrence count of E9 as subsequent knowledge of N6 is 2 in the subsequent knowledge table 704 of fig. 7B.
By storing, for each piece of knowledge, its subsequent knowledge together with the occurrence counts in the subsequent knowledge table 704 in this manner, the context DB 107 can hold information indicating the order and context of knowledge in the documents.
The context calculation unit 212 may also calculate, for each piece of knowledge, its distance in the document from each piece of subsequent knowledge and store it in the subsequent knowledge table 704. For subsequent knowledge whose occurrence count is 2 or more, the context calculation unit 212 stores, for example, the average or the minimum of the distances in the documents. This concludes the learning processing.
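The counting described above can be sketched as follows, assuming each document has already been reduced to its sequence of knowledge IDs in order of appearance. The distance measure is left open in the description; the position difference within the sequence is used here as a stand-in.

```python
from collections import defaultdict

def build_subsequent_table(docs):
    """docs: one list of knowledge IDs per document, in appearance order,
    e.g. article 701 -> ["N6", "E8", "N2", "E9"], article 702 -> ["N6", "E9"]."""
    counts = defaultdict(int)      # (knowledge, subsequent knowledge) -> occurrences
    distances = defaultdict(list)  # (knowledge, subsequent knowledge) -> distances
    for seq in docs:
        for i, know in enumerate(seq):
            for j in range(i + 1, len(seq)):
                counts[(know, seq[j])] += 1
                distances[(know, seq[j])].append(j - i)
    return counts, distances

counts, dists = build_subsequent_table([["N6", "E8", "N2", "E9"], ["N6", "E9"]])
print(counts[("N6", "E9")])  # 2, matching row 2 of the subsequent knowledge table 704
```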
Next, the operation processing (S120) will be described. In the operation processing, the concept support apparatus 201 uses the knowledge graphs and the subsequent knowledge created in the learning processing to understand the context of the user's composition or conversation, and presents appropriate hints during the composition or conversation, thereby stimulating the user's ideation.
First, the speech understanding unit 213 receives input of a composition or conversation consisting of text from the user and converts it into text data (S121). Specifically, the speech understanding unit 213 may receive text input from an input interface such as a keyboard, or may convert speech input from a microphone using speech recognition technology. It may also convert text written on paper by OCR or the like, or convert the user's motions into text by a method such as sign language recognition.
Next, the speech understanding unit 213 performs speech understanding processing for interpreting the text acquired in step S121 using the knowledge graphs (S122). Here, the speech understanding unit 213 understands the user's speech by matching the input text against the knowledge graphs. The details of the speech understanding processing are described below with reference to fig. 8A, 8B, 9A, 9B, and 9C.
The knowledge graph 801 of fig. 8A is a part of the knowledge graph represented by the node table 304 and the arrow table 305 stored in the knowledge graph DB 105. The table of fig. 8B is a part of the subsequent knowledge table 704 held in the context DB 107.
For example, suppose that the user is writing a treatise taking the position of agreeing with the introduction of uniforms in schools, on the subject of whether schools should introduce uniforms. Suppose further that the speech understanding unit 213 receives the input of the sentence "Schools should introduce uniforms." shown on the display screen 903 of fig. 9C. The speech understanding unit 213 generates the knowledge graph 901 of fig. 9A from this sentence by the same method as the knowledge graph generation processing of step S111.
Next, the speech understanding unit 213 compares the knowledge graph 901 with the knowledge graph 801 and searches the knowledge graph 801 for knowledge included in the knowledge represented by the knowledge graph 901. Using the method of expressing knowledge explained for the context calculation processing, the speech understanding unit 213 extracts E10, because the knowledge E10 represented by the knowledge graph 801 (that is, the knowledge linking the "school" node and the "uniform" node by the "introduce" relationship) is included in the knowledge represented by the knowledge graph 901. In this way, the speech understanding unit 213 extracts the knowledge contained in the user's input and understands that input by matching it against the knowledge graphs learned in advance.
Further, using the same expression method as the context calculation processing, the speech understanding unit 213 outputs to the context recognition unit 214 information indicating the node IDs and arrow IDs on the knowledge graph 801 of the knowledge expressed by the text input by the user, together with the order in which that knowledge appears in the text. When the sentence shown on the display screen 903 is input, the speech understanding unit 213 outputs, for example, E10 to the context recognition unit 214. When the article 702 of fig. 7A is input, it outputs the IDs in the order N6, E9.
The knowledge graph 901 matches a part of the knowledge graph 801 (the part consisting of N10, E10, and N1) exactly. In comparing knowledge graphs, however, the speech understanding unit 213 may regard a relationship or entity as identical to another even when they do not match exactly.
For example, suppose the speech understanding unit 213 generates the knowledge graph 902 of fig. 9B from text input by the user. Although the arrow line of the knowledge graph 902 is "adopt", the speech understanding unit 213 may regard "adopt" and the "introduce" of E10 in the knowledge graph 801 as synonymous and determine that the knowledge shown in the knowledge graph 902 matches the knowledge E10.
The speech understanding unit 213 calculates the similarity between entities or between relationships from, for example, their proximity in a word dictionary or ontology. By treating relationships or entities whose similarity is at or above a predetermined level as synonymous, the speech understanding unit 213 increases the amount of knowledge in the knowledge graph DB 105 that matches the knowledge extracted from the user's input, and thus the likelihood of giving the user an appropriate hint.
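The matching with near-synonyms can be sketched as follows; the similarity function is a stub standing in for the dictionary- or ontology-based measure mentioned above, and the threshold is an arbitrary illustrative value.

```python
def similarity(a: str, b: str) -> float:
    """Stub for the dictionary/ontology-based similarity measure."""
    synonyms = {frozenset({"adopt", "introduce"})}  # illustrative only
    if a == b:
        return 1.0
    return 1.0 if frozenset({a, b}) in synonyms else 0.0

def match_arrows(triple, nodes, arrows, threshold=0.8):
    """Return the IDs of learned (non-statement) arrow lines matching an
    input (subject, relation, object) triple, allowing near-synonyms."""
    subj, rel, obj = triple
    hits = []
    for eid, arrow in arrows.items():
        if arrow.is_statement:
            continue
        if (similarity(subj, nodes[arrow.start].surface or "") >= threshold
                and similarity(rel, arrow.surface or "") >= threshold
                and similarity(obj, nodes[arrow.end].surface or "") >= threshold):
            hits.append(eid)
    return hits

# ("school", "adopt", "uniform") would then match E10 ("introduce") in the
# knowledge graph 801.
```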
Returning to the description of fig. 2. Next, the context recognition unit 214 executes context recognition processing that compares the order of appearance of the knowledge in the user's input text, acquired in the speech understanding processing, with the subsequent knowledge table 704 (S123). For example, when the user inputs text corresponding to the knowledge graph 901, E10 is output in step S122 as described above. The context recognition unit 214 then outputs the information in row 2 of the subsequent knowledge table 704 of fig. 8B (the row whose knowledge value is E10) to the stage recognition unit 215.
Next, the stage recognition unit 215 executes stage recognition processing for recognizing the stage of the user's composition or conversation (S124). For a given subject (in the example of fig. 9A to 9C, whether schools should introduce uniforms), there is content that a treatise arguing for or against it is expected to contain.
Statements of approval or opposition, points of argument for or against, reasons for approval or opposition, and evidence supporting them are examples of content required of a treatise. The stage recognition unit 215 recognizes the stage of the user's composition or conversation by determining which of the content required in advance is still missing from the text input by the user.
Fig. 9C shows an example of the transition of the screens displayed on the display device 203. For example, the text displayed on the display screen 903 is at a stage where approval of introducing uniforms has been stated, but no point of argument has been given and no reason or evidence has been presented. The stage recognition unit 215 identifies, among the preset content items, those not yet covered by the text input by the user.
The content required of a composition or conversation is defined in advance, and the concept support apparatus 201 holds learning documents in advance, for example. Before the operation processing starts, for example, the concept support apparatus 201 creates labels corresponding to the content items and, by machine learning using these labels, creates a recognizer capable of identifying each content item. The stage recognition unit 215 uses this recognizer to determine whether each content item is included in the text input by the user. In the example of fig. 9C, it is assumed that the text is required to contain a statement of approval or opposition to introducing uniforms in schools and a point of argument for that position.
The concept support apparatus 201 may, for example, attach to the relevant places in the document DB 103 (for example, a document or passage containing a given content item) a comment indicating the corresponding content item.
When a knowledge graph is generated in the knowledge graph generation processing from a commented portion, the knowledge graph generating unit 211 may add, as supplementary information, information indicating the content item indicated by the comment. That is, for example, the knowledge graph generating unit 211 adds a column to the node table 304 and the arrow table 305 and stores the supplementary information there.
Returning to the description of fig. 2: next, the stage recognition unit 215 executes a hint presentation process (S125) for presenting a hint to the user based on the results of the context recognition process and the stage recognition process. The stage recognition unit 215 generates the hint to be presented based on the follow-up knowledge indicated by the next knowledge table 704.
As described above, when the sentence displayed on display screen 903 is input, the record of the next knowledge table 704 whose knowledge value is E10 is output in the context recognition process. That is, the stage recognition unit 215 has already acquired information indicating that the follow-up knowledge of knowledge E10 is N2 and E9.
The stage recognition unit 215 therefore determines the sentences represented by the follow-up knowledge N2 and the follow-up knowledge E9 as candidates for the hint to be presented to the user. The stage recognition unit 215 may output all of these candidates to the display device 203 as hints.
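A minimal sketch of this candidate derivation follows; sentence_of is an assumed mapping from a knowledge id to a presentable sentence, and the example sentences are reconstructions of the figures' wording.

    def hint_candidates(rows, sentence_of):
        # rows: (knowledge, next_knowledge, occurrences) tuples for the
        # records matched in the context recognition process.
        return [sentence_of[next_k] for _, next_k, _ in rows]

    rows = [("E10", "N2", 10), ("E10", "E9", 5)]
    sentence_of = {"N2": "uniforms improve discipline",
                   "E9": "uniforms improve the motivation to learn"}
    candidates = hint_candidates(rows, sentence_of)  # both candidate hints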
For example, when one of the candidates corresponds to knowledge representing a content determined in the stage recognition process to be missing from the sentence displayed on display screen 903, the stage recognition unit 215 may preferentially display that candidate on the display device 203 as a hint.
"Preferentially displaying" a candidate means, for example, displaying only that candidate on the display device 203, or highlighting that candidate when all candidates are displayed on the display device 203.
The stage recognition unit 215 may also present candidates with priority given to knowledge with a larger number of occurrences among the records of the next knowledge table 704 whose knowledge value is E10 (in the example of fig. 8B, N2 appears 10 times and E9 appears 5 times), for example by presenting a predetermined number of candidates in descending order of occurrence count, or by presenting all candidates whose occurrence count is at least a predetermined value. In this way, the stage recognition unit 215 can output as hints knowledge that is strongly related to the content input by the user.
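A minimal sketch of this occurrence-count selection follows; the tuple layout is the same assumption as above.

    def by_occurrences(rows, top_n=None, min_count=None):
        # rows: (knowledge, next_knowledge, occurrences) tuples.
        ordered = sorted(rows, key=lambda r: r[2], reverse=True)
        if min_count is not None:
            ordered = [r for r in ordered if r[2] >= min_count]
        return ordered if top_n is None else ordered[:top_n]

    # by_occurrences([("E10", "N2", 10), ("E10", "E9", 5)], top_n=1)
    # -> [("E10", "N2", 10)], so "uniforms improve discipline" is shown first.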
When the next knowledge table 704 also stores the distance on the document between a knowledge and its follow-up knowledge, the stage recognition unit 215 may present candidates with priority given to knowledge with a smaller distance among the records whose knowledge value is E10, for example by presenting a predetermined number of candidates in ascending order of distance, or by presenting all candidates whose distance is at most a predetermined value. In this way, too, the stage recognition unit 215 can output as hints knowledge that is strongly related to the content input by the user.
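The same selection over the distance column can be sketched analogously; a smaller distance on the document is taken to indicate a stronger relation, so ascending order is used.

    def by_distance(rows, top_n=None, max_distance=None):
        # rows: (knowledge, next_knowledge, distance) tuples.
        ordered = sorted(rows, key=lambda r: r[2])
        if max_distance is not None:
            ordered = [r for r in ordered if r[2] <= max_distance]
        return ordered if top_n is None else ordered[:top_n]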
For example, suppose that knowledge N2 and knowledge E9 each express grounds for approval or disapproval. In this case the stage recognition unit 215 gives priority to, for example, the occurrence count, and presents the content "uniforms improve discipline" as a hint, as shown on display screen 904 of fig. 9C. The stage recognition unit 215 displays the hint on the display device 203 when, for example, it estimates that the user needs a hint, or when it accepts a hint request from the user. Specifically, for example, the stage recognition unit 215 may present the hint when a certain time has elapsed since the user's last input with no further input, or when the user explicitly requests a hint, for example by pressing a button.
Here, the stage recognition unit 215 presents only one hint on display screen 904, but it may present a plurality of hints. In addition, since the stage recognition process serves only to narrow down the hints to be presented, it may be omitted.
Returning to the speech understanding process: as shown on display screen 905, the user continues inputting text with reference to the hint presented on display screen 904. The user may also input text unrelated to the presented hint. The speech understanding process and the context recognition process are then executed again for the newly input text. When the sentence displayed on display screen 905 is input, the outputs of the context recognition process are E10 and N2.
At this time, since the common follow-up knowledge of E10 and N2 in the next knowledge table 704 of fig. 8B is E9, the stage recognition unit 215 presents in the hint presentation process, for example, "uniforms improve the motivation to learn" as a hint, as shown on display screen 906.
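A minimal sketch of this combination step follows: when the context recognition process outputs several knowledge ids, the candidate is the follow-up knowledge they share in the next knowledge table. The pair representation is an assumption.

    def common_followups(table, recognized):
        # table: (knowledge, next_knowledge) pairs from the next knowledge table;
        # recognized: knowledge ids output by the context recognition process.
        sets = [{next_k for k, next_k in table if k == r} for r in recognized]
        return set.intersection(*sets) if sets else set()

    table = [("E10", "N2"), ("E10", "E9"), ("N2", "E9")]
    common_followups(table, ["E10", "N2"])  # -> {"E9"}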
Through the processing described above, the concept supporting apparatus 201 of the present embodiment can give an appropriate hint when the user gets stuck while inputting text, or when the user requests one, thereby supporting the user's conception and the user's next utterance.
In the hint presentation process, the stage recognition unit 215 may further perform the following processing using the knowledge graph. As described above, when the user inputs the sentence displayed on display screen 905, knowledge E10 and knowledge N2 are output in the context recognition process.
The knowledge graph 801 contains knowledge N6, which is a counterargument to the knowledge N2 output in the context recognition process. When the knowledge graph contains a counterargument to knowledge input from the context recognition unit 214, and that counterargument has not itself been input from the context recognition unit 214, the stage recognition unit 215 may present a hint corresponding to the counterargument. Specifically, for example, the stage recognition unit 215 may present as a hint, as shown on display screen 907, a message such as "Uniforms improve discipline, but there is an opinion that uniforms hinder freedom. What do you think?".
Through this processing, the user can learn of an opinion from the opposite standpoint, which supports the user's conception in a discussion, composition, or conversation. The above example describes the stage recognition unit 215 presenting, as a hint, a counterargument to the knowledge input from the context recognition unit 214; however, the stage recognition unit 215 may equally present, as a hint, knowledge that stands in a relationship other than counterargument to the input knowledge and that has not been input from the context recognition unit 214.
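A minimal sketch of this counterargument check follows; counter_of is an assumed map derived from the counterargument relationships in the knowledge graph (such as N2 to N6 in knowledge graph 801).

    def counterargument_hints(recognized, counter_of):
        # recognized: knowledge ids output by the context recognition process;
        # counter_of: maps a knowledge id to its opposing knowledge id, if any.
        expressed = set(recognized)
        return [counter_of[k] for k in recognized
                if k in counter_of and counter_of[k] not in expressed]

    counterargument_hints(["E10", "N2"], {"N2": "N6"})  # -> ["N6"]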
The present invention is not limited to the above-described embodiment, and various modifications are possible. For example, the above embodiment is described in detail in order to explain the present invention clearly, and the invention is not limited to embodiments having all of the described configurations. A part of the configuration of one embodiment may be replaced with the configuration of another embodiment, and the configuration of another embodiment may be added to the configuration of one embodiment. Further, for a part of the configuration of each embodiment, other configurations may be added, deleted, or substituted.
Further, a part or all of the above-described structures, functions, processing units, processing means, and the like may be realized in hardware, for example by designing them as an integrated circuit. The above-described structures, functions, and the like may also be realized in software by a processor interpreting and executing a program that implements each function. Information such as the programs, tables, and files that realize the respective functions may be stored in a memory, in a recording device such as a hard disk or an SSD (Solid State Drive), or on a recording medium such as an IC card, an SD card, or a DVD.
Note that only the control lines and information lines considered necessary for the description are shown; not all control lines and information lines in the product are necessarily shown. In practice, almost all components may be regarded as connected to each other.

Claims (10)

1. A concept supporting apparatus for supporting a concept of a user,
comprises a processor and a memory;
the above memory holds the following information:
graph information indicating a structure of a 1st graph, the 1st graph including nodes indicating elements constituting a proposition included in a document defining knowledge, and 1st type arrow lines indicating a relationship between the elements indicated by the proposition; and
context information indicating an order of appearance of the relationship in the document;
wherein the processor performs the following:
accepting input of a proposition;
extracting elements included in the proposition whose input is accepted and a relationship between the elements represented by that proposition;
generating a 2nd graph including nodes representing the extracted elements and including the relationship between the extracted elements as the 1st type arrow lines;
when determining, with reference to the graph information, that the relationship indicated in the 2nd graph is included in the 1st graph, specifying, with reference to the context information, a relationship appearing later than that relationship in the document; and
outputting information indicating the specified relationship.
2. The concept supporting apparatus according to claim 1, wherein
the document contains a plurality of propositions;
the nodes of the 1st graph include:
a 1st type node representing an entity included in the plurality of propositions; and
a 2nd type node indicating a relationship of a 1st proposition, the 1st proposition being a proposition that is included in the plurality of propositions and all of whose elements and relationships are included as one element of another proposition among the plurality of propositions;
the 1st graph includes a 2nd type arrow line connecting a node indicating an element constituting the 1st proposition with the 2nd type node;
the processor accepts input of a plurality of propositions; and
the processor, in generating the 2nd graph,
determines an entity among the extracted elements as a 1st type node,
determines, when the plurality of propositions whose input is accepted are determined to include the 1st proposition, the relationship of the 1st proposition as the 2nd type node, and
connects the node representing an element constituting the 1st proposition to the determined 2nd type node by the 2nd type arrow line.
3. The concept supporting apparatus according to claim 1, wherein
the context information indicates, for each relationship indicated by a proposition of the document, the number of occurrences of a relationship appearing later than that relationship in the document; and
the processor selects a relationship from among the specified relationships based on the number of occurrences, indicated by the context information, of each specified relationship, and outputs information indicating the selected relationship.
4. The concept supporting apparatus according to claim 1, wherein
the context information indicates, for each relationship indicated by a proposition of the document, a distance on the document between that relationship and a relationship appearing later than it in the document; and
the processor selects a relationship from among the specified relationships based on the distance, indicated by the context information, between the relationship indicated by the 2nd graph that is included in the 1st graph and each specified relationship, and outputs information indicating the selected relationship.
5. The concept supporting apparatus according to claim 1, wherein
the processor outputs information indicating a predetermined relationship when the predetermined relationship is included among the specified relationships.
6. A concept supporting method for supporting a concept of a user by a concept supporting apparatus,
the concept supporting apparatus holds the following information:
graph information indicating a structure of a 1st graph, the 1st graph including nodes indicating elements constituting a proposition included in a document defining knowledge, and 1st type arrow lines indicating a relationship between the elements indicated by the proposition; and
context information indicating an order of appearance of the relationship in the document;
in the concept supporting method, the concept supporting apparatus performs:
accepting input of propositions;
extracting elements included in the proposition whose input is accepted and a relationship between the elements represented by that proposition;
generating a 2nd graph including nodes representing the extracted elements and including the relationship between the extracted elements as the 1st type arrow lines;
when determining, with reference to the graph information, that the relationship indicated in the 2nd graph is included in the 1st graph, specifying, with reference to the context information, a relationship appearing later than that relationship in the document; and
outputting information indicating the specified relationship.
7. The concept supporting method according to claim 6, wherein
the document contains a plurality of propositions;
the nodes of the 1st graph include:
a 1st type node representing an entity included in the plurality of propositions; and
a 2nd type node indicating a relationship of a 1st proposition, the 1st proposition being a proposition that is included in the plurality of propositions and all of whose elements and relationships are included as one element of another proposition among the plurality of propositions; and
the 1st graph includes a 2nd type arrow line connecting a node indicating an element constituting the 1st proposition with the 2nd type node;
and wherein, in the concept supporting method, the concept supporting apparatus performs:
accepting input of a plurality of propositions; and,
in generating the 2nd graph,
determining an entity among the extracted elements as a 1st type node,
determining, when the plurality of propositions whose input is accepted are determined to include the 1st proposition, the relationship of the 1st proposition as the 2nd type node, and
connecting the node representing an element constituting the 1st proposition to the determined 2nd type node by the 2nd type arrow line.
8. The concept supporting method according to claim 6, wherein
the context information indicates, for each relationship indicated by a proposition of the document, the number of occurrences of a relationship appearing later than that relationship in the document; and
in the concept supporting method,
the concept supporting apparatus selects a relationship from among the specified relationships based on the number of occurrences, indicated by the context information, of each specified relationship, and outputs information indicating the selected relationship.
9. The concept supporting method according to claim 6, wherein
the context information indicates, for each relationship indicated by a proposition of the document, a distance on the document between that relationship and a relationship appearing later than it in the document; and
in the concept supporting method,
the concept supporting apparatus selects a relationship from among the specified relationships based on the distance, indicated by the context information, between the relationship indicated by the 2nd graph that is included in the 1st graph and each specified relationship, and outputs information indicating the selected relationship.
10. The concept supporting method according to claim 6, wherein
the concept supporting apparatus outputs information indicating a predetermined relationship when the predetermined relationship is included among the specified relationships.
CN201711235125.2A 2017-03-06 2017-11-30 Concept support device and concept support method Active CN108536720B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017041822A JP6622236B2 (en) 2017-03-06 2017-03-06 Idea support device and idea support method
JP2017-041822 2017-03-06

Publications (2)

Publication Number Publication Date
CN108536720A CN108536720A (en) 2018-09-14
CN108536720B true CN108536720B (en) 2021-09-21

Family

ID=63489748

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711235125.2A Active CN108536720B (en) 2017-03-06 2017-11-30 Concept support device and concept support method

Country Status (3)

Country Link
JP (1) JP6622236B2 (en)
KR (1) KR102039393B1 (en)
CN (1) CN108536720B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7343820B2 (en) * 2020-04-10 2023-09-13 日本電信電話株式会社 Text data analysis information generation device, text data analysis information generation method, and text data analysis information generation program using ontology
CN112148871B (en) * 2020-09-21 2024-04-12 北京百度网讯科技有限公司 Digest generation method, digest generation device, electronic equipment and storage medium
JP2022070523A (en) * 2020-10-27 2022-05-13 株式会社日立製作所 Semantic expression analysis system and semantic expression analysis method
EP4239535A4 (en) 2020-11-02 2023-12-20 Fujitsu Limited Machine learning program, inference program, device, and method
JP7474536B1 (en) 2023-08-07 2024-04-25 Linked Ideal合同会社 Information processing system and information processing method

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5841900A (en) * 1996-01-11 1998-11-24 Xerox Corporation Method for graph-based table recognition
AUPR295501A0 (en) * 2001-02-08 2001-03-08 Wong, Kwok Kay Proximity for computer represented graphs
JP2003006191A (en) * 2001-06-27 2003-01-10 Ricoh Co Ltd Device and method for supporting preparation of foreign language document and program recording medium
JP2004342064A (en) 2003-05-16 2004-12-02 Seki Tama Yonezawa Human inference engine
JP2006301967A (en) * 2005-04-20 2006-11-02 Sony Computer Entertainment Inc Conversation support device
US20070156720A1 (en) * 2005-08-31 2007-07-05 Eagleforce Associates System for hypothesis generation
WO2007049800A1 (en) * 2005-10-27 2007-05-03 Raytec Co., Ltd. Document creation support device
JPWO2007060780A1 (en) * 2005-11-22 2009-05-07 日本電気株式会社 Idea support device, idea support method, and idea support program
JP4677563B2 (en) 2006-03-10 2011-04-27 国立大学法人 筑波大学 Decision support system and decision support method
US8375061B2 (en) * 2010-06-08 2013-02-12 International Business Machines Corporation Graphical models for representing text documents for computer analysis
US20120078062A1 (en) * 2010-09-24 2012-03-29 International Business Machines Corporation Decision-support application and system for medical differential-diagnosis and treatment using a question-answering system
JP5426710B2 (en) * 2012-03-19 2014-02-26 株式会社東芝 Search support device, search support method and program
KR101353521B1 (en) * 2012-05-10 2014-01-23 경북대학교 산학협력단 A method and an apparatus of keyword extraction and a communication assist device
CN104699758B (en) * 2015-02-04 2017-10-27 中国人民解放军装甲兵工程学院 The commanding document intelligent generating system and method for a kind of graphics and text library association

Also Published As

Publication number Publication date
CN108536720A (en) 2018-09-14
KR102039393B1 (en) 2019-11-04
JP6622236B2 (en) 2019-12-18
JP2018147238A (en) 2018-09-20
KR20180101991A (en) 2018-09-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant