WO2021131013A1 - Inference device, setting method, and setting program - Google Patents

Inference device, setting method, and setting program Download PDF

Info

Publication number
WO2021131013A1
Authority
WO
WIPO (PCT)
Prior art keywords
node
edge
transition probability
transition
setting
Prior art date
Application number
PCT/JP2019/051380
Other languages
French (fr)
Japanese (ja)
Inventor
克希 小林
悠介 小路
進也 田口
Original Assignee
三菱電機株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 三菱電機株式会社 filed Critical 三菱電機株式会社
Priority to JP2021558772A priority Critical patent/JP7012916B2/en
Priority to PCT/JP2019/051380 priority patent/WO2021131013A1/en
Publication of WO2021131013A1 publication Critical patent/WO2021131013A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models

Definitions

  • This disclosure relates to an inference device, a setting method, and a setting program.
  • Devices equipped with an interactive HMI (Human Machine Interface) are widespread.
  • the device is a car navigation system, a home appliance, a smart speaker, or the like.
  • the dialogue with the device is performed according to a scenario designed based on a state chart (state transition diagram) showing the transition of the processing state such as waiting for input or searching, or a flowchart showing the processing procedure.
  • a system using a state transition graph has been proposed (see Patent Document 1).
  • a complicated scenario can be created by integrating sensor information, information obtained from the Web, information indicating a user's preference, and the like.
  • a knowledge graph is a knowledge representation that graphically represents the attributes of things, the relationships between things, and the causal relationships.
  • the inference device derives the result of inference by using the importance of each node in the knowledge graph, starting from the node representing an observed fact. Then, a response based on the result of the inference is output. For example, the knowledge "eat cold food" is inferred from the observed fact "hot" obtained from sensor information and the observed fact "noon". Then, the response "It is hot and it is lunchtime, so do you want to eat cold food?" is output. In this way, by using the knowledge graph, it is not necessary to create a complicated scenario.
  • the structure of the knowledge graph has a great influence on the accuracy of inference. Therefore, the policy for constructing the structure of the knowledge graph is important.
  • the knowledge graph is highly flexible in expression. Therefore, if designers construct knowledge graphs freely, consistency may be lost. For this reason, in constructing a knowledge graph, a knowledge representation based on common rules is used. Ontology is one such knowledge representation.
  • in an ontology, a concept or term related to a domain is clearly defined by the concept itself, the term itself, or the relationship between concepts or terms (for example, superordinate concept, subordinate concept, part-of concept, attribute, etc.).
  • various domains such as device functions, sensor information, user's intentions, behaviors, and preferences are systematically represented.
  • there are also open datasets such as DBpedia.
  • By unifying the knowledge representation with an ontology, it becomes possible to take in external knowledge as needed and to construct an expandable knowledge graph. For this reason, the use of an ontology is effective in constructing the knowledge graph.
  • the purpose of this disclosure is to suppress the output of inference results that involve a logical leap.
  • the inference device includes an acquisition unit that acquires a knowledge graph including an ontology, the ontology including an upper node and a plurality of lower nodes connected to the upper node via a plurality of first edges, and a setting unit that sets, for at least one of the plurality of first edges, a transition probability of a value that does not cause a transition to the upper node.
  • FIG. 1 is a diagram showing the hardware configuration of the inference device of Embodiment 1.
  • FIG. 2 is a functional block diagram of the inference device of Embodiment 1.
  • FIG. 3 is a diagram showing a specific example of the format of the knowledge graph of Embodiment 1.
  • FIG. 4 is a flowchart showing an example of the process executed by the inference device of Embodiment 1.
  • FIG. 5 is a diagram showing an example of the rule table of Embodiment 1.
  • FIG. 6 is a diagram showing a specific example of updating the knowledge graph of Embodiment 1.
  • FIG. 7 is a diagram showing a specific example of the transition probability of Embodiment 1.
  • FIG. 12 is a functional block diagram of the inference device of Embodiment 2.
  • FIG. 13 is a diagram showing a specific example (No. 1) of setting the transition probability of Embodiment 2.
  • FIG. 14 is a diagram showing a specific example (No. 2) of setting the transition probability of Embodiment 2.
  • FIG. 1 is a diagram showing a hardware configuration of the inference device of the first embodiment.
  • the inference device 100 may be called a setting device or an information processing device.
  • the inference device 100 is a device that executes the setting method.
  • the inference device 100 is a device provided with an interactive HMI.
  • the inference device 100 is a car navigation device, a home appliance, or a smart speaker.
  • the inference device 100 includes a processor 101, a volatile storage device 102, a non-volatile storage device 103, an input device 104, and an output device 105.
  • the processor 101 controls the entire inference device 100.
  • the processor 101 is a CPU (Central Processing Unit), an FPGA (Field Programmable Gate Array), or the like.
  • the processor 101 may be a multiprocessor.
  • the inference device 100 may be realized by a processing circuit, or may be realized by software, firmware, or a combination thereof.
  • the processing circuit may be a single circuit or a composite circuit.
  • the volatile storage device 102 is the main storage device of the inference device 100.
  • the volatile storage device 102 is a RAM (Random Access Memory).
  • the non-volatile storage device 103 is an auxiliary storage device of the inference device 100.
  • the non-volatile storage device 103 is an HDD (Hard Disk Drive) or an SSD (Solid State Drive).
  • the input device 104 is, for example, a device that accepts user operations, or a microphone.
  • the output device 105 is a display or a speaker. The input device 104 and the output device 105 may exist outside the inference device 100.
  • FIG. 2 is a functional block diagram of the inference device of the first embodiment.
  • the inference device 100 includes a storage unit 110, an acquisition unit 120, an update unit 130, a setting unit 140, an importance calculation unit 150, a detection unit 160, and an output unit 170.
  • the storage unit 110 is realized as a storage area reserved in the volatile storage device 102 or the non-volatile storage device 103.
  • a part or all of the acquisition unit 120, the update unit 130, the setting unit 140, the importance calculation unit 150, the detection unit 160, and the output unit 170 may be realized by the processor 101.
  • a part or all of the acquisition unit 120, the update unit 130, the setting unit 140, the importance calculation unit 150, the detection unit 160, and the output unit 170 may be realized as modules of a program executed by the processor 101.
  • the program executed by the processor 101 is also called a setting program.
  • the setting program is recorded on a recording medium.
  • the storage unit 110 stores the rule table 111 and the knowledge graph 112.
  • the rule table 111 will be described later.
  • the format of the knowledge graph 112 will be specifically described.
  • FIG. 3 is a diagram showing a specific example of the format of the knowledge graph of the first embodiment.
  • RDF (Resource Description Framework) is used as the format of the knowledge graph 112.
  • data is represented by triplets (three sets) of subject, predicate, and object.
  • Non-Patent Document 2 describes a predicate "rdfs:subClassOf" representing a subclass.
  • the triplet of subject A, predicate B, and object C is expressed as (A, B, C). Therefore, the triplet of the subject "ramen", the predicate "rdfs:subClassOf", and the object "food" is expressed as (ramen, rdfs:subClassOf, food).
  • in the RDF expression, the subject and the object are represented by nodes, and the predicate is represented by an edge. Then, in the RDF expression, the relationship between the subject and the object is expressed by a directed graph.
  • the RDF expression (ramen, rdfs:subClassOf, food) is represented graphically.
  • the knowledge graph is constructed by stitching together RDF representations.
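The triplet-based construction described above can be sketched in Python. This is an illustrative sketch, not the patent's implementation; the triplet contents come from the examples in this text.

```python
# Minimal sketch (not the patent's implementation): RDF-style triplets
# (subject, predicate, object) stitched together into a directed graph.
from collections import defaultdict

triplets = [
    ("ramen", "rdfs:subClassOf", "food"),
    ("hot", "action", "eat cold food"),
    ("hot", "action", "go to a cool place"),
]

# Adjacency list: subject node -> list of (edge label, object node).
graph = defaultdict(list)
for subj, pred, obj in triplets:
    graph[subj].append((pred, obj))

# Each subject/object becomes a node; each predicate becomes a labeled,
# directed edge from subject to object.
print(graph["hot"])
```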
  • in addition to the ontology, common sense, that is, general human knowledge, may be used. For example, it is possible to express causal relationships between situations, intentions, and actions, such as "eating cold food when it is hot" and "going to a cool place when it is hot". For example, such triplets are described as (hot, action, eat cold food) and (hot, action, go to a cool place).
  • the acquisition unit 120 acquires the input information.
  • the input information is information input to the inference device 100.
  • the input information will be described in detail later.
  • the acquisition unit 120 acquires the knowledge graph 112.
  • the acquisition unit 120 acquires the knowledge graph 112 from the storage unit 110.
  • the knowledge graph 112 may be stored in an external device.
  • the external device is a cloud server.
  • the acquisition unit 120 acquires the knowledge graph 112 from the external device.
  • the knowledge graph includes an ontology.
  • the ontology is a hierarchical structure.
  • the ontology includes a plurality of nodes and a plurality of edges to which transition probabilities are associated.
  • the transition probability is the probability of transitioning to a node connected to an edge.
  • the update unit 130 updates the knowledge graph.
  • the setting unit 140 adjusts the transition probability.
  • the setting unit 140 generates transition probability information.
  • the transition probability information indicates the transition probability associated with a plurality of edges.
  • the importance calculation unit 150 calculates the importance corresponding to a plurality of nodes based on the transition probability information.
  • the detection unit 160 detects the node of the response to the input information as the inference result by using the importance corresponding to the plurality of nodes.
  • the output unit 170 outputs a response based on the detected node.
  • the response based on the detected node is information obtained by processing the character string of the detected node. For example, if the detected node is the node "eat cold food", the response based on the detected node is "Do you eat cold food?".
  • the output unit 170 outputs a response based on the detected node to the output device 105.
  • when the output device 105 is a speaker, the output device 105 outputs the response based on the detected node by voice.
  • when the output device 105 is a display, the output device 105 displays the response based on the detected node.
  • FIG. 4 is a flowchart showing an example of processing executed by the inference device of the first embodiment.
  • the acquisition unit 120 acquires the input information.
  • the acquisition unit 120 acquires input information from the input device 104, a sensor, a car navigation device, the Internet, or a database.
  • the input information acquired from the sensor is a physical quantity such as temperature, humidity, camera image, and biological signal.
  • the input information acquired from the sensor may be classification information based on a physical quantity indicating the facial expression of the user, a physical quantity indicating the degree of opening of the eyes or mouth, or a score value based on these physical quantities.
  • the input information acquired from the car navigation device is the operation information performed when the user handles the car navigation device.
  • the operation information may include voice recognition information, touch operation, steering wheel operation, and the like.
  • the input information acquired from the Internet is information indicating weather, traffic information, news, geographic information, store information, and the like.
  • traffic information includes traffic jams, closed roads, and the like.
  • the store information includes business hours, genres, word-of-mouth information, and the like.
  • the input information acquired from the database is a user's preference, a driving history, an application operation history, and the like.
  • Step S12 The update unit 130 updates the knowledge graph using the input information and the rule table 111.
  • the rule table 111 will be described.
  • FIG. 5 is a diagram showing an example of the rule table of the first embodiment.
  • the rule table 111 is stored in the storage unit 110.
  • the rule table 111 has items of a rule ID (identifier), a condition, and a process.
  • the update unit 130 adds a process to the knowledge graph when a rule whose condition is satisfied exists in the rule table 111. For example, when input information indicating that the temperature is 35 degrees is acquired, the update unit 130 refers to the rule table 111 and detects that the condition of rule 1 is satisfied. The update unit 130 then adds (current, value, hot) to the knowledge graph.
  • the update unit 130 may also add a process to the knowledge graph by using a triplet in the knowledge graph and the rule table 111. For example, suppose (temperature, value, 35.5 degrees) exists in the knowledge graph.
  • the update unit 130 refers to the rule table 111 and detects that the condition of rule 1 is satisfied. The update unit 130 then adds (current, value, hot) to the knowledge graph.
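The rule-table update described above can be sketched as follows. This is a hypothetical illustration: the rule conditions, thresholds, and function names are assumptions for the sake of the example, not taken from the patent.

```python
# Hypothetical sketch of the rule-table update (all names and thresholds
# are assumptions): each rule has a condition on the input information
# and a triplet ("process") to add to the knowledge graph.
rule_table = [
    # (rule ID, condition, triplet to add)
    (1, lambda info: info.get("temperature", 0) >= 30, ("current", "value", "hot")),
    (2, lambda info: "11:00" <= info.get("time", "") <= "13:00", ("current", "value", "noon")),
]

def update_knowledge_graph(knowledge_graph, input_info):
    """Add the triplet of every rule whose condition is satisfied."""
    for rule_id, condition, triplet in rule_table:
        if condition(input_info) and triplet not in knowledge_graph:
            knowledge_graph.append(triplet)
    return knowledge_graph

kg = []
update_knowledge_graph(kg, {"temperature": 35, "time": "11:55"})
print(kg)  # conditions of both rule 1 and rule 2 are satisfied
```

With input information "temperature 35 degrees, time 11:55", both rules fire and the triplets (current, value, hot) and (current, value, noon) are added, matching the FIG. 6 example.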
  • FIG. 6 is a diagram showing a specific example of updating the knowledge graph of the first embodiment.
  • the upper part of FIG. 6 shows the knowledge graph before the update.
  • the knowledge graph includes an ontology of the time domain (hereinafter, time domain ontology) and an ontology of the meteorological domain (hereinafter, meteorological domain ontology).
  • the acquisition unit 120 acquires input information indicating that the time is 11:55 and the temperature is 35 degrees.
  • the update unit 130 refers to the rule table 111 and detects that the conditions of rule 1 and rule 2 are satisfied.
  • the update unit 130 adds (current, value, noon) to the knowledge graph. This connects the node "noon" and the node "current" via an edge.
  • the update unit 130 adds (current, value, hot) to the knowledge graph. This connects the node "hot" and the node "current" via an edge.
  • FIG. 7 is a diagram showing a specific example of the transition probability of the first embodiment. Node "A" is connected to node "B" and node "C" via edges. FIG. 7 shows that the transition probability of the edge between node "A" and node "B" is 0.5. The probability of transitioning from node "A" to node "B" or to node "C" is equal. Therefore, each transition probability is 0.5.
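The uniform transition probability of FIG. 7 can be sketched as follows (illustrative only): before any adjustment, a random walk leaves a node along each of its outgoing edges with equal probability.

```python
# Sketch of the FIG. 7 situation: node "A" has two outgoing edges, so
# each edge gets a transition probability of 1/2 = 0.5.
edges = {"A": ["B", "C"]}

def uniform_transition_probabilities(edges, node):
    targets = edges[node]
    p = 1.0 / len(targets)
    return {t: p for t in targets}

print(uniform_transition_probabilities(edges, "A"))  # {'B': 0.5, 'C': 0.5}
```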
  • a transition probability is not yet associated with an edge that connects to an added node. As will be described later, the transition probability of such an edge is set by the setting unit 140.
  • Step S13 The setting unit 140 adjusts the transition probability. In other words, the setting unit 140 controls the transition probability.
  • FIG. 8 is a diagram showing a specific example when the transition probability is not adjusted.
  • a graph search is performed.
  • a random walk is performed from the node “current”.
  • the node "hot" is connected to the node "current" via an edge. Therefore, the node "eat cold food" is expected as the inference result.
  • however, the node "hot" and the node "cold" are connected via the node "temperature" in the ontology. Therefore, the node "eat warm food" may become the inference result.
  • the setting unit 140 performs the following processing.
  • the setting unit 140 analyzes the structure of the knowledge graph. For example, the setting unit 140 analyzes the edge type, the ontology hierarchy, and the like. For example, the types of edges are action, value, and subClassOf.
  • the setting unit 140 adjusts the transition probability.
  • FIG. 9 is a diagram showing a specific example (No. 1) in the case of adjusting the transition probability of the first embodiment.
  • the node "hot" and the node "temperature" are connected via "subClassOf".
  • the node "cold" and the node "temperature" are connected via "subClassOf".
  • the node "temperature" is also referred to as an upper node.
  • the node "hot" and the node "cold" are also called lower nodes. That is, the node "hot" and the node "cold" are a plurality of lower nodes.
  • the edge connecting the node "hot" and the node "temperature" and the edge connecting the node "cold" and the node "temperature" are also called first edges. That is, these two edges are a plurality of first edges.
  • the setting unit 140 sets, for at least one of the plurality of first edges, a transition probability of a value that does not cause a transition to the upper node. For example, the setting unit 140 sets, for at least one of the plurality of first edges, a transition probability of a value smaller than a preset value.
  • of the node "hot" and the node "cold", the node "hot" is connected via an edge to the node "current", which indicates the present based on the input information. Therefore, the setting unit 140 sets, at the edge between the node "hot" and the node "temperature", a transition probability of a value that does not cause a transition to the node "temperature". That is, the setting unit 140 sets the transition probability corresponding to the edge between the node "hot" and the node "temperature" low.
  • the node indicating the present based on the input information may be expressed as a node added by acquiring the input information.
  • the adjustment of the transition probability exploits the fact that even if the propositional logic from a lower node to an upper node holds, the propositional logic from the upper node to the lower node does not always hold. For example, suppose the upper node is "animal" and the lower node is "human". The proposition that a human is an animal holds. However, the proposition that an animal is a human does not hold.
  • the setting unit 140 may perform the following processing. The process will be described with reference to the figure.
  • FIG. 10 is a diagram showing a specific example (No. 2) in the case of adjusting the transition probability of the first embodiment.
  • the setting unit 140 sets the transition probability lower for an edge connected to a node in an upper layer, and sets the transition probability higher for an edge connected to a node in a lower layer. That is, the setting unit 140 strengthens the transition constraint between layers close to the root node at the top of the ontology, and weakens the transition constraint between layers far from the root node.
  • the node "shoyu ramen" and the node "miso ramen" are connected to the node "ramen".
  • the node "shoyu ramen" and the node "miso ramen" exist in a hierarchy far from the root node. Therefore, the setting unit 140 weakens the transition constraint between the node "shoyu ramen" and the node "ramen".
  • similarly, the setting unit 140 weakens the transition constraint between the node "miso ramen" and the node "ramen". Further, the setting unit 140 may impose no constraint at all.
  • the setting unit 140 strengthens the transition constraint between the node “destination setting” and the node “function”.
  • the setting unit 140 strengthens the transition constraint between the node “air conditioner” and the node “function”. Further, the setting unit 140 may set the transition probability so as not to make a transition.
  • the setting unit 140 sets the transition constraint between the node "meal" and the node "facility" to a strength between the above two examples.
  • the setting unit 140 adjusts the transition probability.
  • a node near the end (leaf) of the ontology is close to a concrete concept. Therefore, even if a transition is performed between nodes near the leaves of the ontology, there is no problem in the context of the dialogue. That is, it is not a leap of reasoning. For example, it is not unnatural to recommend miso ramen to a user who likes shoyu ramen.
  • a node close to the root node represents an abstract concept. If a transition is made at a node close to the root node, the context of the dialogue changes. Therefore, the setting unit 140 strengthens the constraint there. The processing of the setting unit 140 thus exploits the structural properties of the ontology.
  • the setting unit 140 sets the transition probability of the edge according to the above rule.
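One possible way to realize the layer-dependent rule above is to scale edge weights by the depth of the upper node from the root and then normalize per node. This formula is an assumption for illustration, not the patent's method.

```python
# One possible realization of the layer-dependent rule (an assumption,
# not the patent's formula): subClassOf edges close to the root get a
# small weight (strong transition constraint); edges near the leaves
# get a weight close to 1 (weak constraint).
def edge_weight(depth_of_upper_node, max_depth):
    # depth 0 = root node at the top of the ontology.
    return (depth_of_upper_node + 1) / (max_depth + 1)

max_depth = 3
# "function" sits directly under the root; "ramen" is near the leaves.
print(edge_weight(0, max_depth))  # 0.25: strong constraint, e.g. "destination setting" -> "function"
print(edge_weight(2, max_depth))  # 0.75: weak constraint, e.g. "shoyu ramen" -> "ramen"
```

The weights would then be normalized over each node's outgoing edges to obtain transition probabilities, so edges near the root receive proportionally less random-walk traffic.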
  • Step S14 The setting unit 140 generates transition probability information.
  • the setting unit 140 generates transition probability information indicating the correspondence between the edge and the transition probability.
  • Step S15 The importance calculation unit 150 calculates the importance corresponding to a plurality of nodes based on the transition probability information. Here, an example of the method of calculating the importance is shown.
  • FIG. 11 is a diagram showing a specific example of the importance calculation process of the first embodiment.
  • FIG. 11 shows the nodes “D”, “E”, and “F”.
  • FIG. 11 describes a method of calculating the importance of the node “E”.
  • the importance calculation unit 150 calculates the importance of the node “E” using the equation (1).
  • the importance calculation unit 150 repeats the above calculation over the entire knowledge graph using a random walk. The importance calculation unit 150 may repeat the calculation until the amount of change in the importance of each node between iterations becomes sufficiently small. Alternatively, the importance calculation unit 150 may repeat the above calculation a fixed number of times.
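Since equation (1) itself is not reproduced in this text, the iteration can be sketched with a standard PageRank-style random-walk update that is consistent with the description (importance propagated along transition probabilities until the per-node change is small). This is an assumption, not the patent's exact formula.

```python
# PageRank-style sketch of the importance calculation (the patent's
# equation (1) is not reproduced here; this is a standard random-walk
# iteration consistent with the description).
def compute_importance(trans_prob, nodes, damping=0.85, tol=1e-8, max_iter=100):
    """trans_prob[(u, v)] = probability of transitioning from u to v."""
    n = len(nodes)
    imp = {v: 1.0 / n for v in nodes}  # start from a uniform importance
    for _ in range(max_iter):
        new = {}
        for v in nodes:
            # Importance flowing into v from every node u.
            flow = sum(imp[u] * trans_prob.get((u, v), 0.0) for u in nodes)
            new[v] = (1 - damping) / n + damping * flow
        # Stop when the change in each node's importance is small enough.
        if max(abs(new[v] - imp[v]) for v in nodes) < tol:
            imp = new
            break
        imp = new
    return imp

# Nodes "D", "E", "F" as in FIG. 11: "E" receives flow from both sides.
nodes = ["D", "E", "F"]
trans = {("D", "E"): 1.0, ("F", "E"): 1.0, ("E", "D"): 0.5, ("E", "F"): 0.5}
imp = compute_importance(trans, nodes)
print(max(imp, key=imp.get))  # "E"
```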
  • Step S16 The detection unit 160 detects the node of the response to the input information as the inference result by using the importance corresponding to the plurality of nodes.
  • the process will be described in detail.
  • the search for node candidates that will be the inference result will be described.
  • a query is used as the search method. A triplet containing a variable serves as the query.
  • the query may be a query based on the input information.
  • the detection unit 160 uses the query to search the knowledge graph for triplets of patterns that match the query.
  • the search process will be explained using a specific example.
  • the query is (hot, action,? X).
  • a character string starting with the character "?" represents a variable.
  • the detection unit 160 searches the knowledge graph for triplets of patterns that match the query.
  • the detection unit 160 sorts the detected nodes based on the importance associated with each detected node.
  • the detection unit 160 detects the node having the highest importance as the inference result. For example, the importance of the node "eat cold food" is 0.1, and the importance of the node "go to a cool place" is 0.05.
  • therefore, the detection unit 160 detects the node "eat cold food" as the inference result.
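The query matching and importance-based detection described above can be sketched as follows. The matching logic is an illustrative assumption; only the query pattern (hot, action, ?x) and the importance values come from the text.

```python
# Sketch of the query-based detection (an assumption about the matching
# logic, not the patent's code): "?x" is a variable; triplets matching
# the pattern are collected and the binding with the highest importance
# is returned as the inference result.
def search(knowledge_graph, query):
    bindings = []
    for triplet in knowledge_graph:
        bound = {}
        for q, t in zip(query, triplet):
            if q.startswith("?"):
                bound[q] = t       # variable: bind it to this element
            elif q != t:
                break              # constant mismatch: not a match
        else:
            bindings.append(bound)
    return bindings

kg = [("hot", "action", "eat cold food"), ("hot", "action", "go to a cool place")]
importance = {"eat cold food": 0.1, "go to a cool place": 0.05}

candidates = [b["?x"] for b in search(kg, ("hot", "action", "?x"))]
result = max(candidates, key=importance.get)
print(result)  # "eat cold food"
```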
  • Step S17 The output unit 170 outputs a response based on the inference result. For example, the output unit 170 outputs the voice data "Do you eat cold food?" to the output device 105. As a result, the output device 105 outputs the voice "Do you eat cold food?".
  • the inference device 100 sets the transition probability corresponding to the edge between the upper node and a lower node low. As a result, the inference device 100 can suppress the output of inference results that involve a logical leap.
  • the inference device 100 may adjust the transition probability at any time. For example, when the inference device 100 receives an instruction from an external device, the inference device 100 adjusts the transition probability.
  • Embodiment 2 Next, the second embodiment will be described. In the second embodiment, matters different from the first embodiment will be mainly described. Then, in the second embodiment, the description of the matters common to the first embodiment will be omitted. In the description of the second embodiment, FIGS. 1 to 11 are referred to.
  • FIG. 12 is a functional block diagram of the inference device of the second embodiment.
  • components in FIG. 12 that are the same as the configuration shown in FIG. 2 are given the same reference numerals as in FIG. 2.
  • the inference device 100a has a storage unit 110a, an acquisition unit 120a, and a setting unit 140a.
  • the storage unit 110a stores the setting table 113.
  • the setting table 113 is also referred to as setting information.
  • the setting table 113 will be described in detail later.
  • the acquisition unit 120a acquires the setting table 113.
  • the acquisition unit 120a acquires the setting table 113 from the storage unit 110a.
  • the setting table 113 may be stored in an external device.
  • the acquisition unit 120a acquires the setting table 113 from the external device. The processing of the setting unit 140a will be described later.
  • FIG. 13 is a diagram showing a specific example (No. 1) when setting the transition probability of the second embodiment.
  • the edge connecting the node "hot" and the node "eat cold food" is called edge 1.
  • the edge connecting the node "cold" and the node "eat warm food" is called edge 2.
  • FIG. 13 shows an example of the setting table 113.
  • the setting table 113 indicates the relationship between the input information and information indicating whether to set, for a target edge (an edge included in the ontology), a transition probability of a value higher than a preset value or a transition probability of a value lower than the preset value.
  • the edge 1 and the edge 2 are also referred to as a target edge.
  • the acquisition unit 120a acquires input information indicating that the air conditioner is currently operating.
  • the update unit 130 refers to the rule table 111 and adds (current, active, cooling) to the knowledge graph.
  • the setting unit 140a, based on the setting table 113, sets a transition probability of a value higher than the preset value for a target edge, or sets a transition probability of a value lower than the preset value for a target edge. Specifically, the setting unit 140a sets the transition probability of edge 1 high based on the setting table 113. Further, the setting unit 140a sets the transition probability of edge 2 low based on the setting table 113. As a result, when the acquisition unit 120a acquires input information indicating "hot", "Do you want to eat cold food?" is output.
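The setting-table adjustment above can be sketched as follows. The table contents, preset value, and delta are hypothetical names and values chosen for illustration, not values from the patent.

```python
# Hypothetical sketch of the Embodiment 2 setting table (names and
# values are assumptions): given input information, raise or lower the
# transition probability of specific target edges relative to a preset.
PRESET = 0.5

setting_table = {
    # input information -> {target edge: "high" or "low"}
    "air conditioner operating (cooling)": {"edge 1": "high", "edge 2": "low"},
}

def apply_setting(edge_probs, input_info, delta=0.3):
    for edge, direction in setting_table.get(input_info, {}).items():
        edge_probs[edge] = PRESET + delta if direction == "high" else PRESET - delta
    return edge_probs

probs = apply_setting({}, "air conditioner operating (cooling)")
print(probs)  # edge 1 ("hot" -> "eat cold food") raised, edge 2 lowered
```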
  • FIG. 14 is a diagram showing a specific example (No. 2) when setting the transition probability of the second embodiment.
  • the user says “hot”.
  • the inference device 100a outputs "Do you eat cold food?”
  • the inference device 100a sets the transition probability according to the environment. Therefore, the inference device 100a can give an appropriate response to the user.
  • Embodiments 1 and 2 show the case where the knowledge graph uses the RDF format. However, the format is not limited to RDF.
  • 100, 100a inference device, 101 processor, 102 volatile storage device, 103 non-volatile storage device, 104 input device, 105 output device, 110, 110a storage unit, 111 rule table, 112 knowledge graph, 113 setting table, 120, 120a acquisition unit, 130 update unit, 140, 140a setting unit, 150 importance calculation unit, 160 detection unit, 170 output unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An inference device (100) has: an acquisition unit (120); and a setting unit (140). The acquisition unit (120) acquires a knowledge graph that includes an ontology including an upper node and a plurality of lower nodes connected to a plurality of first edges connected to the upper node. The ontology includes a plurality of nodes, including the upper node and the plurality of lower nodes. The setting unit (140) sets, for at least one edge of the plurality of first edges, a transition probability of a value that does not cause a transition to the upper node.

Description

推論装置、設定方法、及び設定プログラムInference device, setting method, and setting program
 本開示は、推論装置、設定方法、及び設定プログラムに関する。 This disclosure relates to an inference device, a setting method, and a setting program.
 対話型のHMI(Human Machine Interface)を備えた装置が普及している。例えば、装置は、カーナビ、家電、スマートスピーカーなどである。装置との対話は、入力待ち、検索中などの処理状態の遷移を表すステートチャート(状態遷移図)、又は処理手続きを示したフローチャートなどに基づいて設計されたシナリオに沿って、行われる。ここで、状態遷移グラフを用いるシステムが提案されている(特許文献1を参照)。 Devices equipped with an interactive HMI (Human Machine Interface) are widespread. For example, the device is a car navigation system, a home appliance, a smart speaker, or the like. The dialogue with the device is performed according to a scenario designed based on a state chart (state transition diagram) showing the transition of the processing state such as waiting for input or searching, or a flowchart showing the processing procedure. Here, a system using a state transition graph has been proposed (see Patent Document 1).
 また、AI(Artificial Intelligence)及びIoT(Internet of Things)技術を普及させるために、複雑なシナリオを作成することが求められている。例えば、複雑なシナリオは、センサ情報、Webから得られる情報、ユーザの嗜好を示す情報などを統合することで作成できる。しかし、複雑なシナリオを作成することは、難しい。 In addition, in order to popularize AI (Artificial Intelligence) and IoT (Internet of Things) technologies, it is required to create complicated scenarios. For example, a complicated scenario can be created by integrating sensor information, information obtained from the Web, information indicating a user's preference, and the like. However, it is difficult to create complex scenarios.
 そこで、知識グラフに基づく推論によって対話を行う推論装置が提案されている(特許文献2を参照)。知識グラフは、物事の属性、物事の関連性、因果関係などをグラフで表した知識表現である。推論装置は、観測した事実を表すノードを起点として、知識グラフ内の各ノードの重要度を用いて、推論の結果を導く。そして、推論の結果に基づく応答が出力される。例えば、センサ情報から得られる“暑い”という観測の事実と、“昼時”という観測の事実とから、“冷たい物を食べる”という知識が推論させる。そして、“暑いし、お昼なので冷たい物を食べますか?”という応答が出力される。このように、知識グラフを用いることで、複雑なシナリオを作成しなくて済む。 Therefore, an inference device that performs dialogue by inference based on a knowledge graph has been proposed (see Patent Document 2). A knowledge graph is a knowledge representation that graphically represents the attributes of things, the relationships between things, and the causal relationships. The inference device derives the result of inference by using the importance of each node in the knowledge graph, starting from the node representing the observed fact. Then, a response based on the result of inference is output. For example, the knowledge of "eating cold food" is inferred from the fact of the observation of "hot" and the fact of the observation of "daytime" obtained from the sensor information. Then, the response "Do you eat cold food because it is hot and lunch?" Is output. In this way, by using the knowledge graph, it is not necessary to create a complicated scenario.
　ところで、知識グラフの構造が、推論の精度に大きな影響を与える。そのため、知識グラフの構造の構築方針は、重要である。知識グラフは、表現の柔軟性が高い。そのため、設計者が自由に知識グラフを構築することは、一貫性を失う可能性がある。そこで、知識グラフの構築では、共通化された規則に基づいた知識表現が用いられる。知識表現の一つにオントロジーがある。 Incidentally, the structure of the knowledge graph has a great influence on the accuracy of inference. Therefore, the policy for constructing the structure of the knowledge graph is important. A knowledge graph is highly flexible in expression. Therefore, if designers construct knowledge graphs freely, consistency may be lost. For this reason, in constructing a knowledge graph, a knowledge representation based on common rules is used. Ontology is one such knowledge representation.
　オントロジーでは、ドメインに関する概念又は用語が、概念自体、用語自体、又は概念間あるいは用語間の関係（例えば、上位概念、下位概念、部分概念、属性など）で明確に定義される。オントロジーを用いることで、機器機能、センサ情報、ユーザの意図、行動、嗜好などの様々なドメインが、体系的に表される。また、DBpediaなどのオープンなデータセットがある。知識表現をオントロジーで統一することで、必要に応じて外部の知識を取り込めること及び拡張性のある知識グラフを構築することが可能になる。このような理由から、知識グラフの構築には、オントロジーの活用が有効である。 In an ontology, concepts or terms related to a domain are clearly defined by the concepts themselves, the terms themselves, or the relationships between concepts or between terms (for example, superordinate concepts, subordinate concepts, partial concepts, attributes, etc.). By using an ontology, various domains such as device functions, sensor information, and users' intentions, behaviors, and preferences are systematically represented. There are also open datasets such as DBpedia. By unifying the knowledge representation with an ontology, it becomes possible to incorporate external knowledge as needed and to construct an extensible knowledge graph. For these reasons, the use of an ontology is effective in constructing a knowledge graph.
国際公開第2006/031609号 International Publication No. WO 2006/031609
特許第6567218号公報 Japanese Patent No. 6567218
 特許文献2の推論では、起点からエッジを辿って到達する確率に基づいて、重要度が算出される。ここで、オントロジーでは、相反する概念のノードであっても上位概念が共通である場合、相反する概念のノードは、近傍に配置される。このため、オントロジーを導入した知識グラフを用いて、特許文献2の推論が行われた場合、相反する概念を辿ることで、論理的に飛躍した推論が導かれる。例えば、“暑い”というノードと、“寒い”というノードは、共に上位概念の“気温”に繋がっている。そのため、“暑い”という観測の事実が得られた場合、“気温”を経由して、“寒い”が推論される可能性がある。“暑い”と“寒い”は、相反する概念である。そのため、“寒い”に関連する知識が推論された場合、論理的に飛躍した推論が結果として出力される。
 このように、論理的に飛躍した推論の結果が出力されることは、問題である。
In the inference of Patent Document 2, the importance is calculated based on the probability of reaching a node by following edges from the starting point. Here, in an ontology, nodes of mutually contradictory concepts are arranged near each other when they share a superordinate concept. Therefore, when the inference of Patent Document 2 is performed using a knowledge graph into which an ontology has been introduced, following contradictory concepts can derive an inference containing a logical leap. For example, the node "hot" and the node "cold" are both connected to the superordinate concept "temperature". Therefore, if the observed fact "hot" is obtained, "cold" may be inferred via "temperature". "Hot" and "cold" are contradictory concepts. Therefore, when knowledge related to "cold" is inferred, an inference result containing a logical leap is output.
The output of such inference results containing a logical leap is a problem.
　本開示の目的は、論理的に飛躍した推論の結果が出力されることを抑制することである。 An object of the present disclosure is to suppress the output of inference results containing a logical leap.
　本開示の一態様に係る推論装置が提供される。推論装置は、上位ノードと、前記上位ノードに結合されている複数の第1のエッジに結合する複数の下位ノードとを含むオントロジーが含まれている知識グラフを取得する取得部と、前記複数の第1のエッジのうちの少なくとも1つのエッジに、前記上位ノードに遷移させないような値の遷移確率を設定する設定部と、を有する。 An inference device according to one aspect of the present disclosure is provided. The inference device includes: an acquisition unit that acquires a knowledge graph containing an ontology that includes an upper node and a plurality of lower nodes connected to a plurality of first edges connected to the upper node; and a setting unit that sets, on at least one edge of the plurality of first edges, a transition probability of a value that prevents transition to the upper node.
　本開示によれば、論理的に飛躍した推論の結果が出力されることを抑制できる。 According to the present disclosure, the output of inference results containing a logical leap can be suppressed.
実施の形態1の推論装置が有するハードウェアの構成を示す図である。 A diagram showing the hardware configuration of the inference device of the first embodiment.
実施の形態1の推論装置が有する機能ブロック図である。 A functional block diagram of the inference device of the first embodiment.
実施の形態1の知識グラフのフォーマットの具体例を示す図である。 A diagram showing a specific example of the format of the knowledge graph of the first embodiment.
実施の形態1の推論装置が実行する処理の例を示すフローチャートである。 A flowchart showing an example of processing executed by the inference device of the first embodiment.
実施の形態1のルールテーブルの例を示す図である。 A diagram showing an example of the rule table of the first embodiment.
実施の形態1の知識グラフの更新の具体例を示す図である。 A diagram showing a specific example of updating the knowledge graph of the first embodiment.
実施の形態1の遷移確率の具体例を示す図である。 A diagram showing a specific example of the transition probability of the first embodiment.
遷移確率を調整しない場合の具体例を示す図である。 A diagram showing a specific example in which the transition probability is not adjusted.
実施の形態1の遷移確率を調整する場合の具体例（その1）を示す図である。 A diagram showing a specific example (No. 1) of adjusting the transition probability of the first embodiment.
実施の形態1の遷移確率を調整する場合の具体例（その2）を示す図である。 A diagram showing a specific example (No. 2) of adjusting the transition probability of the first embodiment.
実施の形態1の重要度の算出処理の具体例を示す図である。 A diagram showing a specific example of the importance calculation process of the first embodiment.
実施の形態2の推論装置が有する機能ブロック図である。 A functional block diagram of the inference device of the second embodiment.
実施の形態2の遷移確率を設定する場合の具体例（その1）を示す図である。 A diagram showing a specific example (No. 1) of setting the transition probability of the second embodiment.
実施の形態2の遷移確率を設定する場合の具体例（その2）を示す図である。 A diagram showing a specific example (No. 2) of setting the transition probability of the second embodiment.
 以下、図面を参照しながら実施の形態を説明する。以下の実施の形態は、例にすぎず、本開示の範囲内で種々の変更が可能である。 Hereinafter, embodiments will be described with reference to the drawings. The following embodiments are merely examples, and various modifications can be made within the scope of the present disclosure.
実施の形態1.
 図1は、実施の形態1の推論装置が有するハードウェアの構成を示す図である。推論装置100は、設定装置又は情報処理装置と呼んでもよい。推論装置100は、設定方法を実行する装置である。推論装置100は、対話型のHMIを備えた装置である。例えば、推論装置100は、カーナビゲーション装置、家電、又はスマートスピーカーである。
 推論装置100は、プロセッサ101、揮発性記憶装置102、不揮発性記憶装置103、入力装置104、及び出力装置105を有する。
Embodiment 1.
FIG. 1 is a diagram showing a hardware configuration of the inference device of the first embodiment. The inference device 100 may be called a setting device or an information processing device. The inference device 100 is a device that executes the setting method. The inference device 100 is a device provided with an interactive HMI. For example, the inference device 100 is a car navigation device, a home appliance, or a smart speaker.
The inference device 100 includes a processor 101, a volatile storage device 102, a non-volatile storage device 103, an input device 104, and an output device 105.
 プロセッサ101は、推論装置100全体を制御する。例えば、プロセッサ101は、CPU(Central Processing Unit)、FPGA(Field Programmable Gate Array)などである。プロセッサ101は、マルチプロセッサでもよい。推論装置100は、処理回路によって実現されてもよく、又は、ソフトウェア、ファームウェア若しくはそれらの組み合わせによって実現されてもよい。なお、処理回路は、単一回路又は複合回路でもよい。 The processor 101 controls the entire inference device 100. For example, the processor 101 is a CPU (Central Processing Unit), an FPGA (Field Programmable Gate Array), or the like. The processor 101 may be a multiprocessor. The inference device 100 may be realized by a processing circuit, or may be realized by software, firmware, or a combination thereof. The processing circuit may be a single circuit or a composite circuit.
 揮発性記憶装置102は、推論装置100の主記憶装置である。例えば、揮発性記憶装置102は、RAM(Random Access Memory)である。不揮発性記憶装置103は、推論装置100の補助記憶装置である。例えば、不揮発性記憶装置103は、HDD(Hard Disk Drive)又はSSD(Solid State Drive)である。
 例えば、入力装置104は、ユーザの操作を受け付ける装置又はマイクである。例えば、出力装置105は、ディスプレイ又はスピーカである。
 入力装置104及び出力装置105は、推論装置100の外部に存在してもよい。
The volatile storage device 102 is the main storage device of the inference device 100. For example, the volatile storage device 102 is a RAM (Random Access Memory). The non-volatile storage device 103 is an auxiliary storage device of the inference device 100. For example, the non-volatile storage device 103 is an HDD (Hard Disk Drive) or an SSD (Solid State Drive).
For example, the input device 104 is a device that accepts user operations, or a microphone. For example, the output device 105 is a display or a speaker.
The input device 104 and the output device 105 may exist outside the inference device 100.
 図2は、実施の形態1の推論装置が有する機能ブロック図である。推論装置100は、記憶部110、取得部120、更新部130、設定部140、重要度算出部150、検出部160、及び出力部170を有する。
 記憶部110は、揮発性記憶装置102又は不揮発性記憶装置103に確保した記憶領域として実現される。
FIG. 2 is a functional block diagram of the inference device of the first embodiment. The inference device 100 includes a storage unit 110, an acquisition unit 120, an update unit 130, a setting unit 140, an importance calculation unit 150, a detection unit 160, and an output unit 170.
The storage unit 110 is realized as a storage area reserved in the volatile storage device 102 or the non-volatile storage device 103.
 取得部120、更新部130、設定部140、重要度算出部150、検出部160、及び出力部170の一部又は全部は、プロセッサ101によって実現してもよい。取得部120、更新部130、設定部140、重要度算出部150、検出部160、及び出力部170の一部又は全部は、プロセッサ101が実行するプログラムのモジュールとして実現してもよい。例えば、プロセッサ101が実行するプログラムは、設定プログラムとも言う。例えば、設定プログラムは、記録媒体に記録されている。 A part or all of the acquisition unit 120, the update unit 130, the setting unit 140, the importance calculation unit 150, the detection unit 160, and the output unit 170 may be realized by the processor 101. A part or all of the acquisition unit 120, the update unit 130, the setting unit 140, the importance calculation unit 150, the detection unit 160, and the output unit 170 may be realized as modules of a program executed by the processor 101. For example, the program executed by the processor 101 is also called a setting program. For example, the setting program is recorded on a recording medium.
 記憶部110は、ルールテーブル111と知識グラフ112とを記憶する。ルールテーブル111については、後で説明する。知識グラフ112のフォーマットを具体的に説明する。 The storage unit 110 stores the rule table 111 and the knowledge graph 112. The rule table 111 will be described later. The format of the knowledge graph 112 will be specifically described.
 図3は、実施の形態1の知識グラフのフォーマットの具体例を示す図である。例えば、フォーマットは、非特許文献1に記載のRDF(Resource Description Framework)を採用する。RDFでは、主語、述語、目的語のトリプレット(三組)でデータが表現される。また、非特許文献2には、サブクラスを表す述語“rdfs:subClassOf”が記載されている。サブクラスが用いられた場合、主語“ラーメン”と目的語“食べ物”とによって、上下の関係が表現できる。以下、特に断りのない限り、主語A、述語B、目的語Cのトリプレットは、(A,B,C)と表現する。よって、主語“ラーメン”、述語“rdfs:subClassOf”、目的語“食べ物”のトリプレットは、(ラーメン,rdfs:subClassOf,食べ物)と表現される。 FIG. 3 is a diagram showing a specific example of the format of the knowledge graph of the first embodiment. For example, as the format, RDF (Resource Description Framework) described in Non-Patent Document 1 is adopted. In RDF, data is represented by triplets (three sets) of subject, predicate, and object. Further, Non-Patent Document 2 describes a predicate "rdfs: subClassOf" representing a subclass. When subclasses are used, the upper and lower relationships can be expressed by the subject "ramen" and the object "food". Hereinafter, unless otherwise specified, the triplet of the subject A, the predicate B, and the object C is expressed as (A, B, C). Therefore, the triplet of the subject "ramen", the predicate "rdfs: subClassOf", and the object "food" is expressed as (ramen, rdfs: subClassOf, food).
 RDF表現では、主語と目的語がノード、述語がエッジで表現される。そして、RDF表現では、主語と目的語との関係が、有向グラフで表現される。図3では、(ラーメン,rdfs:subClassOf,食べ物)というRDF表現がグラフで表されている。知識グラフは、RDF表現をつなぎ合わせることで構成される。 In RDF expression, the subject and object are represented by nodes, and the predicate is represented by edges. Then, in the RDF expression, the relationship between the subject and the object is expressed by a directed graph. In FIG. 3, the RDF expression (ramen, rdfs: subClassOf, food) is represented graphically. The knowledge graph is constructed by stitching together RDF representations.
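The triplet-based graph construction above can be sketched in a few lines. This is an illustrative example independent of the patent; the node labels and the adjacency-list representation are assumptions for explanation, not the disclosed implementation.

```python
# Sketch: holding RDF-style triplets (subject, predicate, object) as a
# directed graph, where subjects/objects become nodes and predicates edges.
from collections import defaultdict

triples = [
    ("ramen", "rdfs:subClassOf", "food"),
    ("hot", "action", "eat cold food"),
    ("hot", "action", "go to a cool place"),
]

# adjacency list: subject -> list of (predicate, object)
graph = defaultdict(list)
for s, p, o in triples:
    graph[s].append((p, o))
```

A knowledge graph is then simply the union of many such triplets; following `graph[s]` walks the outgoing edges of node `s`.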
 知識グラフの構築では、オントロジーの他に常識(すなわち、コモンセンス)、人の一般的な感覚が用いられてもよい。例えば、“暑いときは冷たい物を食べる”、“暑いときは涼しい場所に行く”などの状況、意図、行動の因果関係が表現できる。例えば、トリプレットは、(暑い,action,冷たい物を食べる)、(暑い,action,涼しい場所に行く)のように表現される。 In constructing the knowledge graph, common sense (that is, common sense) and general human senses may be used in addition to ontology. For example, it is possible to express a causal relationship between situations, intentions, and actions such as "eating cold food when it is hot" and "going to a cool place when it is hot". For example, triplets are described as (hot, action, eating cold food), (hot, action, going to a cool place).
 取得部120は、入力情報を取得する。入力情報は、推論装置100に入力される情報である。入力情報については、後で詳細に説明する。
 また、取得部120は、知識グラフ112を取得する。例えば、取得部120は、知識グラフ112を記憶部110から取得する。ここで、知識グラフ112は、外部装置に格納されてもよい。例えば、外部装置は、クラウドサーバである。知識グラフ112が外部装置に格納されている場合、取得部120は、知識グラフ112を外部装置から取得する。
The acquisition unit 120 acquires the input information. The input information is information input to the inference device 100. The input information will be described in detail later.
In addition, the acquisition unit 120 acquires the knowledge graph 112. For example, the acquisition unit 120 acquires the knowledge graph 112 from the storage unit 110. Here, the knowledge graph 112 may be stored in an external device. For example, the external device is a cloud server. When the knowledge graph 112 is stored in the external device, the acquisition unit 120 acquires the knowledge graph 112 from the external device.
 ここで、知識グラフには、オントロジーが含まれている。オントロジーは、階層構造である。オントロジーは、複数のノードと、遷移確率が対応付けられている複数のエッジとを含む。遷移確率は、エッジに結合しているノードに遷移する確率である。 Here, the knowledge graph includes an ontology. The ontology is a hierarchical structure. The ontology includes a plurality of nodes and a plurality of edges to which transition probabilities are associated. The transition probability is the probability of transitioning to a node connected to an edge.
 更新部130は、知識グラフを更新する。設定部140は、遷移確率を調整する。また、設定部140は、遷移確率情報を生成する。遷移確率情報は、複数のエッジに対応付けられている遷移確率を示す。
 重要度算出部150は、遷移確率情報に基づいて、複数のノードに対応する重要度を算出する。検出部160は、複数のノードに対応する重要度を用いて、入力情報に対する応答のノードを、推論結果として、検出する。
The update unit 130 updates the knowledge graph. The setting unit 140 adjusts the transition probability. In addition, the setting unit 140 generates transition probability information. The transition probability information indicates the transition probability associated with a plurality of edges.
The importance calculation unit 150 calculates the importance corresponding to a plurality of nodes based on the transition probability information. The detection unit 160 detects the node of the response to the input information as the inference result by using the importance corresponding to the plurality of nodes.
　出力部170は、検出されたノードに基づく応答を出力する。なお、例えば、検出されたノードに基づく応答とは、検出されたノードの文字列を加工した情報である。例えば、検出されたノードが、ノード“冷たい物を食べる”である場合、検出されたノードに基づく応答は、“冷たい物を食べますか？”である。 The output unit 170 outputs a response based on the detected node. Here, for example, the response based on the detected node is information obtained by processing the character string of the detected node. For example, if the detected node is the node "eat something cold", the response based on the detected node is "Would you like to eat something cold?".
 また、出力部170は、検出されたノードに基づく応答を出力装置105に出力する。例えば、出力装置105がスピーカである場合、出力装置105は、検出されたノードに基づく応答を音声で出力する。例えば、出力装置105がディスプレイである場合、出力装置105は、検出されたノードに基づく応答を表示する。 Further, the output unit 170 outputs a response based on the detected node to the output device 105. For example, when the output device 105 is a speaker, the output device 105 outputs a response based on the detected node by voice. For example, if the output device 105 is a display, the output device 105 displays a response based on the detected node.
 次に、推論装置100が実行する処理を、フローチャートを用いて説明する。
 図4は、実施の形態1の推論装置が実行する処理の例を示すフローチャートである。
 (ステップS11)取得部120は、入力情報を取得する。例えば、取得部120は、入力装置104、センサ、カーナビゲーション装置、インターネット、又はデータベースから入力情報を取得する。
Next, the process executed by the inference device 100 will be described with reference to a flowchart.
FIG. 4 is a flowchart showing an example of processing executed by the inference device of the first embodiment.
(Step S11) The acquisition unit 120 acquires the input information. For example, the acquisition unit 120 acquires input information from the input device 104, a sensor, a car navigation device, the Internet, or a database.
 例えば、センサから取得される入力情報は、気温、湿度、カメラ映像、生体信号などの物理量である。また、例えば、センサから取得される入力情報は、ユーザの表情を示す物理量、目又は口の開き度合を示す物理量などに基づく分類情報でもよいし、これらの物理量に基づくスコア値でもよい。 For example, the input information acquired from the sensor is a physical quantity such as temperature, humidity, camera image, and biological signal. Further, for example, the input information acquired from the sensor may be classification information based on a physical quantity indicating the facial expression of the user, a physical quantity indicating the degree of opening of the eyes or mouth, or a score value based on these physical quantities.
 例えば、カーナビゲーション装置から取得される入力情報は、ユーザがカーナビゲーション装置を扱う際に行う操作情報である。操作情報には、音声認識情報、タッチ操作、ハンドル操作などが含まれてもよい。 For example, the input information acquired from the car navigation device is the operation information performed when the user handles the car navigation device. The operation information may include voice recognition information, touch operation, steering wheel operation, and the like.
 例えば、インターネットから取得される入力情報は、天気、交通情報、ニュース、地理情報、店舗の情報などを示す情報である。なお、例えば、交通情報は、渋滞、通行止めなどである。また、例えば、店舗の情報は、営業時間、ジャンル、口コミ情報などである。
 例えば、データベースから取得される入力情報は、ユーザの嗜好、運転履歴、アプリケーションの操作履歴などである。
For example, the input information acquired from the Internet is information indicating weather, traffic information, news, geographic information, store information, and the like. For example, traffic information includes traffic jams, closed roads, and the like. Further, for example, the store information includes business hours, genres, word-of-mouth information, and the like.
For example, the input information acquired from the database is a user's preference, a driving history, an application operation history, and the like.
 (ステップS12)更新部130は、入力情報とルールテーブル111とを用いて、知識グラフを更新する。ここで、ルールテーブル111を説明する。 (Step S12) The update unit 130 updates the knowledge graph using the input information and the rule table 111. Here, the rule table 111 will be described.
 図5は、実施の形態1のルールテーブルの例を示す図である。ルールテーブル111は、記憶部110に格納されている。ルールテーブル111は、ルールID(identifier)、条件、及び処理の項目を有する。
 更新部130は、条件を満たすルールがルールテーブル111に存在する場合、処理を知識グラフに追加する。例えば、気温が35度であることを示す入力情報が取得された場合、更新部130は、ルールテーブル111を参照し、ルール1の条件を満たすことを検出する。更新部130は、(現在,value,暑い)を知識グラフに追加する。
FIG. 5 is a diagram showing an example of the rule table of the first embodiment. The rule table 111 is stored in the storage unit 110. The rule table 111 has items of a rule ID (identifier), a condition, and a process.
The update unit 130 adds a process to the knowledge graph when a rule satisfying the condition exists in the rule table 111. For example, when input information indicating that the temperature is 35 degrees is acquired, the update unit 130 refers to the rule table 111 and detects that the condition of rule 1 is satisfied. The update unit 130 then adds (current, value, hot) to the knowledge graph.
　また、更新部130は、知識グラフ内のトリプレットとルールテーブル111とを用いて、処理を知識グラフに追加してもよい。例えば、知識グラフ内に(気温,value,35.5度)が存在している。更新部130は、ルールテーブル111を参照し、ルール1の条件を満たすことを検出する。更新部130は、(現在,value,暑い)を知識グラフに追加する。 Further, the update unit 130 may add a process to the knowledge graph by using a triplet in the knowledge graph and the rule table 111. For example, suppose (temperature, value, 35.5 degrees) exists in the knowledge graph. The update unit 130 refers to the rule table 111 and detects that the condition of rule 1 is satisfied. The update unit 130 then adds (current, value, hot) to the knowledge graph.
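The rule-table update of step S12 can be sketched as follows. The concrete thresholds (35 degrees counting as "hot", 11:00 to 13:00 counting as "noon") are illustrative assumptions read from the examples; the actual rule table of the patent may define them differently.

```python
# Hedged sketch of step S12: if a rule's condition holds for the input
# information, the rule's triplet is appended to the knowledge graph.
rules = [
    # (rule id, condition on the input dict, triplet to add)
    (1, lambda x: x.get("temperature", 0) >= 30, ("current", "value", "hot")),
    (2, lambda x: 11 <= x.get("hour", -1) < 13, ("current", "value", "noon")),
]

def update_graph(graph, input_info):
    """Append the triplet of every rule whose condition is satisfied."""
    for _rule_id, cond, triple in rules:
        if cond(input_info) and triple not in graph:
            graph.append(triple)
    return graph

kg = []  # the knowledge graph as a plain list of triplets
update_graph(kg, {"temperature": 35, "hour": 11})  # 11:55, 35 degrees
```

With the example input (time 11:55, temperature 35 degrees), both rule 1 and rule 2 fire, connecting "hot" and "noon" to the node "current" as in FIG. 6.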
 次に、知識グラフの更新を、具体例を用いて説明する。
 図6は、実施の形態1の知識グラフの更新の具体例を示す図である。図6の上段は、更新前の知識グラフを示している。知識グラフには、時間ドメインのオントロジー(以下、時間ドメインオントロジー)と気象ドメインのオントロジー(以下、気象ドメインオントロジー)が含まれている。なお、オントロジー内の各ノードは、述語“subClassOf”で結合されている。しかし、図6は、述語“subClassOf”を省略している。
Next, the update of the knowledge graph will be described with reference to a specific example.
FIG. 6 is a diagram showing a specific example of updating the knowledge graph of the first embodiment. The upper part of FIG. 6 shows the knowledge graph before the update. The knowledge graph includes a time domain ontology (hereinafter, time domain ontology) and a meteorological domain ontology (hereinafter, meteorological domain ontology). Each node in the ontology is connected by the predicate "subClassOf". However, FIG. 6 omits the predicate "subClassOf".
 取得部120は、時刻が11時55分であること、及び気温が35度であることを示す入力情報を取得する。更新部130は、ルールテーブル111を参照し、ルール1とルール2の条件を満たすことを検出する。更新部130は、(現在,value,昼)を知識グラフに追加する。これにより、ノード“昼”とノード“現在”が、エッジを介して結合される。また、更新部130は、(現在,value,暑い)を知識グラフに追加する。これにより、ノード“暑い”とノード“現在”が、エッジを介して結合される。
 知識グラフが更新されることで、様々なドメインの知識が統合される。そして、知識グラフを用いて、推論が可能になる。
The acquisition unit 120 acquires input information indicating that the time is 11:55 and the temperature is 35 degrees. The update unit 130 refers to the rule table 111 and detects that the conditions of rule 1 and rule 2 are satisfied. The update unit 130 adds (current, value, noon) to the knowledge graph. As a result, the node "noon" and the node "current" are connected via an edge. The update unit 130 also adds (current, value, hot) to the knowledge graph. As a result, the node "hot" and the node "current" are connected via an edge.
By updating the knowledge graph, knowledge of various domains will be integrated. Then, inference becomes possible using the knowledge graph.
 上述したように、エッジには、遷移確率が対応付けられている。遷移確率の具体例を説明する。
 図7は、実施の形態1の遷移確率の具体例を示す図である。ノード“A”は、エッジを介して、ノード“B”とノード“C”に結合する。図7は、ノード“A”とノード“B”との間のエッジの遷移確率が0.5であることを示している。ノード“A”からノード“B”又はノード“C”に遷移する確率は、0.5である。そのため、当該遷移確率は、0.5である。
As described above, the edges are associated with transition probabilities. A specific example of the transition probability will be described.
FIG. 7 is a diagram showing a specific example of the transition probability of the first embodiment. Node "A" is connected to node "B" and node "C" via edges. FIG. 7 shows that the transition probability of the edge between node "A" and node "B" is 0.5. The probability of transitioning from node "A" to node "B" or to node "C" is 0.5 each. Therefore, the transition probability of that edge is 0.5.
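Before any adjustment, the example of FIG. 7 amounts to a uniform distribution over a node's outgoing edges. A minimal sketch, under that reading of the figure:

```python
# Sketch: with no adjustment, each edge leaving a node gets a uniform
# transition probability — node "A" with edges to "B" and "C" gives 0.5 each.
edges = {"A": ["B", "C"], "B": [], "C": []}

def uniform_transition_probs(edges):
    probs = {}
    for src, dsts in edges.items():
        for dst in dsts:
            probs[(src, dst)] = 1.0 / len(dsts)
    return probs

probs = uniform_transition_probs(edges)
```

The setting unit 140 then departs from this uniform baseline, as described in step S13.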
 また、知識グラフが更新された場合、追加されたノードに結合するエッジには、遷移確率は、対応付けられていない。後述するように、当該エッジの遷移確率は、設定部140によって、設定される。 Also, when the knowledge graph is updated, the transition probability is not associated with the edge that connects to the added node. As will be described later, the transition probability of the edge is set by the setting unit 140.
 (ステップS13)設定部140は、遷移確率を調整する。言い換えれば、設定部140は、遷移確率を制御する。 (Step S13) The setting unit 140 adjusts the transition probability. In other words, the setting unit 140 controls the transition probability.
 ここで、遷移確率を調整しない場合を説明する。
 図8は、遷移確率を調整しない場合の具体例を示す図である。図8では、グラフ探索が行われる。図8では、ノード“現在”からランダムウォークが行われる。ノード“暑い”は、ノード“現在”とエッジを介して結合している。そのため、ノード“冷たい物を食べる”が、推論結果として、期待される。しかし、ノード“暑い”とノード“寒い”は、オントロジー上でノード“気温”を介して繋がっている。そのため、ノード“温かい物を食べる”が、推論結果になる可能性がある。
Here, the case where the transition probability is not adjusted will be described.
FIG. 8 is a diagram showing a specific example when the transition probability is not adjusted. In FIG. 8, a graph search is performed. In FIG. 8, a random walk is performed from the node “current”. The node "hot" is connected to the node "current" via an edge. Therefore, the node "eat cold food" is expected as an inference result. However, the node "hot" and the node "cold" are connected via the node "temperature" on the ontology. Therefore, the node "eat warm food" may be the inference result.
 そこで、設定部140は、次の処理を行う。
 設定部140は、知識グラフの構造を解析する。例えば、設定部140は、エッジの種別、オントロジーの階層などを解析する。なお、例えば、エッジの種別は、action、value、subClassOfである。設定部140は、遷移確率を調整する。
Therefore, the setting unit 140 performs the following processing.
The setting unit 140 analyzes the structure of the knowledge graph. For example, the setting unit 140 analyzes the edge type, the ontology hierarchy, and the like. For example, the types of edges are action, value, and subClassOf. The setting unit 140 adjusts the transition probability.
 図9は、実施の形態1の遷移確率を調整する場合の具体例(その1)を示す図である。気象ドメインオントロジーにおいて、ノード“暑い”とノード“気温”とが、“subClassOf”を介して結合している。また、気象ドメインオントロジーにおいて、ノード“寒い”とノード“気温”とが、“subClassOf”を介して結合している。
 ここで、ノード“気温”は、上位ノードとも言う。ノード“暑い”とノード“寒い”とは、下位ノードとも言う。すなわち、ノード“暑い”とノード“寒い”とは、複数の下位ノードである。ノード“暑い”とノード“気温”とが結合しているエッジとノード“寒い”とノード“気温”とが結合しているエッジとは、第1のエッジとも言う。すなわち、ノード“暑い”とノード“気温”とが結合しているエッジとノード“寒い”とノード“気温”とが結合しているエッジとは、複数の第1のエッジである。
FIG. 9 is a diagram showing a specific example (No. 1) in the case of adjusting the transition probability of the first embodiment. In the meteorological domain ontology, the node "hot" and the node "temperature" are connected via "subClassOf". Further, in the meteorological domain ontology, the node "cold" and the node "temperature" are connected via "subClassOf".
Here, the node "temperature" is also referred to as an upper node. The node "hot" and the node "cold" are also referred to as lower nodes. That is, the node "hot" and the node "cold" are a plurality of lower nodes. The edge connecting the node "hot" and the node "temperature" and the edge connecting the node "cold" and the node "temperature" are each also referred to as a first edge. That is, the edge connecting the node "hot" and the node "temperature" and the edge connecting the node "cold" and the node "temperature" are a plurality of first edges.
 設定部140は、複数の第1のエッジのうちの少なくとも1つのエッジに、上位ノードに遷移させないような値の遷移確率を設定する。例えば、設定部140は、複数の第1のエッジのうちの少なくとも1つのエッジに、予め設定された値よりも小さい値の遷移確率を設定する。 The setting unit 140 sets a transition probability of a value that does not cause a transition to a higher node at at least one edge of the plurality of first edges. For example, the setting unit 140 sets a transition probability of a value smaller than a preset value at at least one edge of the plurality of first edges.
 具体的には、設定部140は、ノード“暑い”とノード“寒い”のうちの入力情報に基づく現在を示すノード“現在”とエッジを介して結合しているノード“暑い”と、ノード“気温”との間のエッジに、ノード“気温”に遷移させないような値の遷移確率を設定する。すなわち、設定部140は、ノード“暑い”とノード“気温”との間のエッジに対応する遷移確率を低く設定する。
 なお、入力情報に基づく現在を示すノードは、入力情報が取得されたことで追加されたノードと表現してもよい。
Specifically, of the node "hot" and the node "cold", the node "hot" is the one connected via an edge to the node "current", which represents the present based on the input information. The setting unit 140 sets, on the edge between the node "hot" and the node "temperature", a transition probability of a value that prevents transition to the node "temperature". That is, the setting unit 140 sets a low transition probability for the edge between the node "hot" and the node "temperature".
The node indicating the present based on the input information may be expressed as a node added by acquiring the input information.
　遷移確率の調整は、下位ノードから上位ノードへの命題論理が成立しても、上位ノードから下位ノードへの命題論理が必ずしも成立しないことを応用している。例えば、上位ノードが動物であり、下位ノードが人間であるとする。人間が動物であるという命題論理は、成立する。しかし、動物が人間であるという命題論理は、成立しない。 The adjustment of the transition probability exploits the fact that even if the propositional logic from a lower node to an upper node holds, the propositional logic from the upper node to the lower node does not necessarily hold. For example, suppose the upper node is "animal" and the lower node is "human". The proposition that a human is an animal holds. However, the proposition that an animal is a human does not hold.
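The adjustment of FIG. 9 can be sketched as follows. The near-zero value `EPS` and the renormalization step are illustrative assumptions; the patent only requires that the lower-to-upper edge receive a value that prevents transition to the upper node.

```python
# Hedged sketch of FIG. 9: for the lower node tied to the observed fact
# ("hot" is linked to "current"), the edge toward its upper node
# ("temperature") gets a near-zero transition probability, so a random walk
# does not hop via "temperature" to the contradictory node "cold".
EPS = 1e-6  # illustrative "do not transition" value

probs = {
    ("current", "hot"): 1.0,
    ("hot", "temperature"): 0.5,
    ("hot", "eat cold food"): 0.5,
    ("temperature", "cold"): 0.5,
    ("temperature", "hot"): 0.5,
}

def suppress_upward(probs, lower, upper, eps=EPS):
    """Set the lower->upper edge to eps and renormalize edges leaving lower."""
    probs = dict(probs)
    probs[(lower, upper)] = eps
    out = [(s, d) for (s, d) in probs if s == lower]
    total = sum(probs[e] for e in out)
    for e in out:
        probs[e] /= total
    return probs

adjusted = suppress_upward(probs, "hot", "temperature")
```

After the adjustment, nearly all probability mass leaving "hot" goes to "eat cold food", which is the expected inference result.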
 設定部140は、次の処理を行ってもよい。図を用いて、処理を説明する。
 図10は、実施の形態1の遷移確率を調整する場合の具体例(その2)を示す図である。設定部140は、上位の階層のノードに結合するエッジほど遷移確率を低く設定し、下位の階層のノードに結合するエッジほど遷移確率を高く設定する。すなわち、設定部140は、オントロジーの最上階に存在するルートノードに近い階層間の遷移制約を強める。設定部140は、ルートノードから遠い階層間の遷移制約を弱める。
The setting unit 140 may perform the following processing. The process will be described with reference to the figure.
FIG. 10 is a diagram showing a specific example (No. 2) in the case of adjusting the transition probability of the first embodiment. The setting unit 140 sets the transition probability lower as the edge is connected to the node in the upper layer, and sets the transition probability higher as the edge is connected to the node in the lower layer. That is, the setting unit 140 strengthens the transition constraint between the layers close to the root node existing on the top floor of the ontology. The setting unit 140 weakens the transition constraint between the layers far from the root node.
　例えば、ノード“醤油ラーメン”とノード“味噌ラーメン”とは、ノード“ラーメン”に繋がっている。ノード“醤油ラーメン”とノード“味噌ラーメン”とは、ルートノードから遠い階層に存在する。そのため、設定部140は、ノード“醤油ラーメン”とノード“ラーメン”との間の遷移制約を弱める。設定部140は、ノード“味噌ラーメン”とノード“ラーメン”との間の遷移制約を弱める。また、設定部140は、制約しなくてもよい。 For example, the node "soy sauce ramen" and the node "miso ramen" are connected to the node "ramen". The node "soy sauce ramen" and the node "miso ramen" exist in a hierarchy far from the root node. Therefore, the setting unit 140 weakens the transition constraint between the node "soy sauce ramen" and the node "ramen", and weakens the transition constraint between the node "miso ramen" and the node "ramen". Alternatively, the setting unit 140 need not impose any constraint at all.
 一方、ノード“目的地設定”とノード“エアコン”とは、ノード“機能”に繋がっている。ノード“機能”は、ルートノードである。そのため、設定部140は、ノード“目的地設定”とノード“機能”との間の遷移制約を強める。設定部140は、ノード“エアコン”とノード“機能”との間の遷移制約を強める。また、設定部140は、遷移しないように、遷移確率を設定してもよい。 On the other hand, the node "destination setting" and the node "air conditioner" are connected to the node "function". A node "function" is a root node. Therefore, the setting unit 140 strengthens the transition constraint between the node "destination setting" and the node "function". The setting unit 140 strengthens the transition constraint between the node "air conditioner" and the node "function". Further, the setting unit 140 may set the transition probability so as not to make a transition.
 また、ノード“食事”とノード“買物”とは、中間層に存在する。そのため、設定部140は、ノード“食事”とノード“施設”との間の遷移制約を上記の2つ例の中間に設定する。 Also, the node "meal" and the node "shopping" exist in the middle layer. Therefore, the setting unit 140 sets the transition constraint between the node "meal" and the node "facility" between the above two examples.
 このように、設定部140は、遷移確率を調整する。ここで、オントロジーの末端に近いノードは、具体的な概念に近い。そのため、オントロジーの末端に近いノード間で遷移が行われても、対話の文脈に支障はない。すなわち、飛躍した推論ではない。例えば、醤油ラーメンが好きなユーザに、味噌ラーメンを推薦することは、不自然でない。一方、ルートノードに近いノードは、抽象的な概念である。ルートノードに近いノードで遷移が行われた場合、対話の文脈が変化する。そのため、設定部140は、制約を強める。設定部140の処理では、オントロジーの構造上の性質が利用される。 In this way, the setting unit 140 adjusts the transition probability. Here, the node near the end of the ontology is close to the concrete concept. Therefore, even if the transition is performed between the nodes near the end of the ontology, there is no problem in the context of the dialogue. That is, it is not a leap of reasoning. For example, it is not unnatural to recommend miso ramen to users who like soy sauce ramen. On the other hand, a node close to the root node is an abstract concept. If the transition is made at a node close to the root node, the context of the dialogue changes. Therefore, the setting unit 140 strengthens the restriction. In the processing of the setting unit 140, the structural property of the ontology is utilized.
 また、知識グラフが更新された場合、追加されたノードに結合するエッジには、遷移確率は、対応付けられていない。そのため、設定部140は、当該エッジの遷移確率を上記の規則に従って設定する。 Also, when the knowledge graph is updated, the transition probability is not associated with the edge that connects to the added node. Therefore, the setting unit 140 sets the transition probability of the edge according to the above rule.
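The depth-dependent policy of FIG. 10 can be sketched as below. The linear mapping from depth to probability is purely an illustrative assumption; the patent states only that edges toward nodes near the root are constrained strongly and edges near the leaves weakly.

```python
# Hedged sketch of FIG. 10: the shallower (closer to the root) an edge's
# upper endpoint, the lower the transition probability of the upward edge.
depth = {
    "function": 0,          # root node: abstract concept
    "facility": 1,
    "meal": 2,
    "ramen": 3,
    "soy sauce ramen": 4,   # leaf: concrete concept
}

def upward_prob(child, parent, max_depth=4):
    """Transition probability of the child->parent (subClassOf) edge.

    Depth 0 (root) yields 0.0 (never transition); deeper parents yield
    progressively larger probabilities (weaker constraint).
    """
    return depth[parent] / max_depth

p_root = upward_prob("facility", "function")       # strong constraint
p_leaf = upward_prob("soy sauce ramen", "ramen")   # weak constraint
```

Under this sketch, a walk may move between "soy sauce ramen" and "miso ramen" via "ramen" (a natural recommendation), but never jumps across the abstract root "function".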
 (ステップS14)設定部140は、遷移確率情報を生成する。言い換えれば、設定部140は、エッジと遷移確率との対応関係を示す遷移確率情報を生成する。
 (ステップS15)重要度算出部150は、遷移確率情報に基づいて、複数のノードに対応する重要度を算出する。ここで、重要度の算出方法の一例を示す。
(Step S14) The setting unit 140 generates transition probability information. In other words, the setting unit 140 generates transition probability information indicating the correspondence between the edge and the transition probability.
(Step S15) The importance calculation unit 150 calculates the importance corresponding to a plurality of nodes based on the transition probability information. Here, an example of the method of calculating the importance is shown.
 図11は、実施の形態1の重要度の算出処理の具体例を示す図である。図11は、ノード“D”,“E”,“F”を示している。図11は、ノード“E”の重要度を算出する方法を説明する。重要度算出部150は、ノード“E”の重要度を、式(1)を用いて算出する。 FIG. 11 is a diagram showing a specific example of the importance calculation process of the first embodiment. FIG. 11 shows the nodes “D”, “E”, and “F”. FIG. 11 describes a method of calculating the importance of the node “E”. The importance calculation unit 150 calculates the importance of the node “E” using the equation (1).
 0.125 = 0.1 × 0.25 + 0.2 × 0.5 … (1)
 Using a random walk, the importance calculation unit 150 repeats this calculation over the entire knowledge graph. The importance calculation unit 150 may repeat the calculation until the change in each node's importance between successive iterations becomes sufficiently small, or it may repeat the calculation a fixed number of times.
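Equation (1) can be read as one step of a random-walk update: the importance of a node is the sum, over its incoming edges, of each source node's importance multiplied by the transition probability of that edge. A minimal sketch follows, using the values of FIG. 11 (the variable names and the undamped update rule are assumptions for illustration).

```python
# Current importances of the source nodes "D" and "F" (from FIG. 11).
importance = {"D": 0.1, "F": 0.2}

# Incoming edges of node "E": (source node, transition probability).
incoming = {"E": [("D", 0.25), ("F", 0.5)]}

def update_importance(node):
    # One step of the update performed by the importance calculation
    # unit 150: sum of source importance x edge transition probability.
    return sum(importance[src] * p for src, p in incoming[node])

print(update_importance("E"))  # 0.125, matching equation (1)
```

In the patent this update is repeated over the whole knowledge graph until the importances stop changing appreciably, or for a fixed number of iterations.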
 (Step S16) The detection unit 160 uses the importances of the plurality of nodes to detect, as the inference result, the node that responds to the input information. The process is described in detail below.
 First, the search for candidate nodes for the inference result is described. For example, a query is used as the search method: a triplet containing a variable serves as the query, and the query may be based on the input information. The detection unit 160 uses the query to search the knowledge graph for triplets whose pattern matches the query.
 The search process is explained with a specific example: searching for a node of an action or intention inferred from “hot”. The query is (hot, action, ?x), where a string beginning with the character “?” represents a variable. The detection unit 160 searches the knowledge graph for triplets whose pattern matches the query. Here, the knowledge graph contains the triplets (hot, action, eat cold food) and (hot, action, go to a cool place), so the detection unit 160 detects both with the query. The detection unit 160 then detects the node “eat cold food” and the node “go to a cool place” as “?x”. In this way, the candidate nodes for the inference result are detected.
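The variable-containing query can be sketched as a simple pattern match over triplets. This is a minimal illustration under assumed data structures (plain tuples for triplets); a real system might instead use a query language such as SPARQL over an RDF store.

```python
def match(query, triples):
    """Return variable bindings for every triple matching the query.

    Terms starting with '?' in the query are variables and bind to
    the corresponding term of the triple; all other terms must match
    exactly.
    """
    results = []
    for triple in triples:
        binding = {}
        for q, t in zip(query, triple):
            if q.startswith("?"):
                binding[q] = t
            elif q != t:
                break  # constant term mismatch: not a match
        else:
            results.append(binding)
    return results

knowledge_graph = [
    ("hot", "action", "eat cold food"),
    ("hot", "action", "go to a cool place"),
    ("cold", "action", "eat warm food"),
]

# The query (hot, action, ?x) detects the two candidate nodes.
print(match(("hot", "action", "?x"), knowledge_graph))
# [{'?x': 'eat cold food'}, {'?x': 'go to a cool place'}]
```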
 The detection unit 160 sorts the detected nodes by their associated importances and detects the node with the highest importance as the inference result. For example, if the importance of the node “eat cold food” is 0.1 and the importance of the node “go to a cool place” is 0.05, the detection unit 160 detects the node “eat cold food” as the inference result.
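The selection step reduces to sorting the candidates by importance and taking the top one, as in this sketch (the importance values are the ones given in the example above):

```python
# Candidate nodes and their importances from the example above.
candidates = {"eat cold food": 0.1, "go to a cool place": 0.05}

# Sort in descending order of importance; the first entry is the
# inference result detected by the detection unit 160.
ranked = sorted(candidates, key=candidates.get, reverse=True)
print(ranked[0])  # eat cold food
```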
 (Step S17) The output unit 170 outputs a response based on the inference result. For example, the output unit 170 outputs the voice data “Do you want to eat cold food?” to the output device 105. The output device 105 then outputs the voice “Do you want to eat cold food?”.
 According to the first embodiment, the inference device 100 sets a low transition probability on the edges between an upper node and its lower nodes. The inference device 100 can thereby suppress the output of inference results that are logical leaps.
 The above description covered the case where the transition probabilities are adjusted when the input information is acquired. However, the inference device 100 may adjust the transition probabilities at any time; for example, it adjusts them when it receives an instruction from an external device.
Embodiment 2.
 Next, the second embodiment is described, focusing mainly on the matters that differ from the first embodiment; the description of matters common to the first embodiment is omitted. FIGS. 1 to 11 are referred to in the description of the second embodiment.
 FIG. 12 is a functional block diagram of the inference device of the second embodiment. Components in FIG. 12 that are the same as those in FIG. 2 bear the same reference numerals.
 The inference device 100a has a storage unit 110a, an acquisition unit 120a, and a setting unit 140a. The storage unit 110a stores the setting table 113, which is also referred to as setting information and is described in detail later.
 The acquisition unit 120a acquires the setting table 113, for example from the storage unit 110. The setting table 113 may instead be stored in an external device, in which case the acquisition unit 120a acquires it from that external device. The processing of the setting unit 140a is described later.
 FIG. 13 is a diagram showing a specific example (part 1) of setting the transition probabilities in the second embodiment. The edge joining the node “hot” and the node “eat cold food” is called edge 1, and the edge joining the node “cold” and the node “eat warm food” is called edge 2.
 FIG. 13 also shows an example of the setting table 113. The setting table 113 indicates, for each item of input information, whether a transition probability higher than a preset value or lower than that preset value should be set on a target edge, that is, an edge included in the ontology. Here, edge 1 and edge 2 are both target edges.
 The acquisition unit 120a acquires input information indicating that the air conditioning is currently running. The update unit 130 refers to the rule table 111 and adds (now, active, cooling) to the knowledge graph.
 When the input information is acquired, the setting unit 140a sets, on each target edge and based on the setting table 113, either a transition probability higher than the preset value or a transition probability lower than the preset value. Specifically, based on the setting table 113, the setting unit 140a sets a high transition probability on edge 1 and a low transition probability on edge 2.
 As a result, when the acquisition unit 120a acquires input information indicating “hot”, “Do you want to eat cold food?” is output.
 FIG. 14 is a diagram showing a specific example (part 2) of setting the transition probabilities in the second embodiment. The user says “hot”, and the inference device 100a outputs “Do you want to eat cold food?”.
 According to the second embodiment, the inference device 100a sets the transition probabilities according to the environment, and can therefore respond appropriately to the user.
 The first and second embodiments described the case where the knowledge graph uses the RDF format. However, the format is not limited to RDF.
 The features of the embodiments described above can be combined with one another as appropriate.
 100, 100a inference device, 101 processor, 102 volatile storage device, 103 non-volatile storage device, 104 input device, 105 output device, 110, 110a storage unit, 111 rule table, 112 knowledge graph, 113 setting table, 120, 120a acquisition unit, 130 update unit, 140, 140a setting unit, 150 importance calculation unit, 160 detection unit, 170 output unit.

Claims (7)

  1.  An inference device comprising:
     an acquisition unit to acquire a knowledge graph containing an ontology that includes an upper node and a plurality of lower nodes joined to a plurality of first edges joined to the upper node; and
     a setting unit to set, on at least one of the plurality of first edges, a transition probability of a value that prevents a transition to the upper node.
  2.  The inference device according to claim 1, wherein
     the acquisition unit acquires input information, and
     the setting unit sets a transition probability of a value that prevents a transition to the upper node on the edge between the upper node and the lower node that is joined, via an edge, to the node indicating the present based on the input information among the plurality of lower nodes.
  3.  The inference device according to claim 2, further comprising:
     an importance calculation unit;
     a detection unit; and
     an output unit, wherein
     the ontology includes a plurality of nodes, including the upper node and the plurality of lower nodes, and a plurality of edges with which transition probabilities are associated,
     the importance calculation unit calculates importances corresponding to the plurality of nodes based on transition probability information indicating the transition probabilities associated with the plurality of edges,
     the detection unit detects a node of a response to the input information by using the importances corresponding to the plurality of nodes, and
     the output unit outputs the response based on the detected node.
  4.  The inference device according to claim 1, wherein
     the ontology has a hierarchical structure and includes a plurality of nodes, including the upper node and the plurality of lower nodes, and a plurality of edges, and
     the setting unit sets a lower transition probability on edges joined to nodes in higher layers and a higher transition probability on edges joined to nodes in lower layers.
  5.  The inference device according to claim 1, wherein
     the acquisition unit acquires input information and setting information, the setting information indicating a relationship between the input information and information indicating whether a transition probability higher than a preset value or a transition probability lower than the preset value is to be set on a target edge, which is an edge included in the ontology, and
     when the input information is acquired, the setting unit sets, on the target edge and based on the setting information, a transition probability of a value higher than the preset value or a transition probability of a value lower than the preset value.
  6.  A setting method in which an inference device:
     acquires a knowledge graph containing an ontology that includes an upper node and a plurality of lower nodes joined to a plurality of first edges joined to the upper node; and
     sets, on at least one of the plurality of first edges, a transition probability of a value that prevents a transition to the upper node.
  7.  A setting program that causes an inference device to execute a process of:
     acquiring a knowledge graph containing an ontology that includes an upper node and a plurality of lower nodes joined to a plurality of first edges joined to the upper node; and
     setting, on at least one of the plurality of first edges, a transition probability of a value that prevents a transition to the upper node.
PCT/JP2019/051380 2019-12-27 2019-12-27 Inference device, setting method, and setting program WO2021131013A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021558772A JP7012916B2 (en) 2019-12-27 2019-12-27 Inference device, setting method, and setting program
PCT/JP2019/051380 WO2021131013A1 (en) 2019-12-27 2019-12-27 Inference device, setting method, and setting program

Publications (1)

Publication Number Publication Date
WO2021131013A1 true WO2021131013A1 (en) 2021-07-01

Family

ID=76572945

Country Status (2)

Country Link
JP (1) JP7012916B2 (en)
WO (1) WO2021131013A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007272485A (en) * 2006-03-31 2007-10-18 Kddi Corp Associative retrieval device and computer program
JP2009500746A (en) * 2005-07-08 2009-01-08 本田技研工業株式会社 Building housework plans from distributed knowledge
JP6567218B1 (en) * 2018-09-28 2019-08-28 三菱電機株式会社 Inference apparatus, inference method, and inference program

Also Published As

Publication number Publication date
JP7012916B2 (en) 2022-01-28
JPWO2021131013A1 (en) 2021-07-01

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application — Ref document number: 19958032; Country of ref document: EP; Kind code of ref document: A1
ENP Entry into the national phase — Ref document number: 2021558772; Country of ref document: JP; Kind code of ref document: A
NENP Non-entry into the national phase — Ref country code: DE
122 Ep: pct application non-entry in european phase — Ref document number: 19958032; Country of ref document: EP; Kind code of ref document: A1