JP6145562B2 - Information structuring system and information structuring method - Google Patents

Information structuring system and information structuring method

Info

Publication number
JP6145562B2
Authority
JP
Japan
Prior art keywords
node
candidate
relationship
identification
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2016503804A
Other languages
Japanese (ja)
Other versions
JPWO2015125209A1 (en)
Inventor
利彦 柳瀬
修 今一
真 岩山
直之 神田
Original Assignee
株式会社日立製作所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立製作所
Priority to PCT/JP2014/053763 (WO2015125209A1)
Publication of JPWO2015125209A1
Application granted
Publication of JP6145562B2
Legal status: Active (Current)
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/31 - Indexing; Data structures therefor; Storage structures
    • G06F16/313 - Selection or weighting of terms for indexing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor

Description

  The present invention relates to a computer-based information structuring system for natural language documents.
  Recently, large amounts of electronic data (big data) have come into use. With the advent of open-source software such as Apache Hadoop, techniques for distributed parallel computation on general-purpose PC servers have become widespread, and as a result the cost of the computing resources required to process large amounts of data in a short time has been greatly reduced.
  Big data processing includes counting large amounts of numerical data and automatically extracting, by computer, patterns useful to the user from electronic document data.
  In document data, specific expressions such as the names of persons and organizations are particularly important because they bridge the contents of a document and the real world. Specific expression extraction technology can automatically extract information such as person names, organization names, and place names from natural language text.
  Hereinafter, in this specification, a real-world object denoted by a specific expression is referred to as an entity, and a character string denoting an entity is referred to as an entity notation or specific expression.
  On the other hand, sources such as Wikipedia compile real-world information in the form of electronic data, and there is a movement to create knowledge graphs from these information sources. DBpedia, YAGO, and BabelNet are known as representative knowledge graphs.
  These knowledge graphs are described in RDF (Resource Description Framework) and express relationships between entities. If each entity is regarded as a node and each relationship as an edge, the relationships between entities can be treated as a graph; this graph is the knowledge graph.
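  As an illustration only (not part of the patent text), the following sketch shows how RDF-style triples can be treated as such a graph; the entity names and predicates are hypothetical examples.

# Minimal sketch: RDF-style triples viewed as a graph of entity nodes and relationship edges.
# The triples below are hypothetical and only illustrate the data shape.
from collections import defaultdict

triples = [
    ("Nagano_Prefecture", "locatedIn", "Japan"),
    ("Tanaka_AA_(tennis)", "participatedIn", "US_Open_(tennis)"),
    ("Tanaka_AA_(politician)", "memberOf", "NN_Party"),
]

graph = defaultdict(list)
for subject, predicate, obj in triples:
    graph[subject].append((predicate, obj))
    graph[obj].append((predicate, subject))  # traverse relationships in both directions

print(dict(graph))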
  By selecting the knowledge graph as a name identification destination, multipurpose name identification (entity identification) can be expected.
  As background art in this technical field, there are JP-A-2004-185515 (Patent Document 1) and JP-A-2011-191982 (Patent Document 2).
  Patent Document 1 discloses a text data evaluation device comprising: a word information input unit having means for inputting word information constituting text data; a text data relevance matrix calculation unit having means for calculating, for each pair of text data items included in the text data, a relevance with directionality using the word information constituting the pair, and means for generating a square matrix whose element values are the calculated relevances; an eigenvalue decomposition unit having means for performing eigenvalue decomposition on the calculated text data relevance matrix and calculating eigenvalues and eigenvectors; a text data evaluation value calculation unit having means for calculating an evaluation value of each text data item based on the eigenvector of the largest calculated eigenvalue; and a text data evaluation value output unit having means for outputting the calculated text data evaluation values.
  Patent Document 2 discloses a store name ambiguity resolving apparatus in which a store name candidate extraction unit extracts, from a preprocessed input sentence, words whose notation matches a store name in a store name list as store name candidates; a store name determination unit determines, using a store-likeness DB together with the notation of each word in the preprocessed input sentence, whether each store name candidate is actually a store name; and, for only the candidates determined to be store names, an ambiguity resolution unit determines, using the store DB and a feature word DB, which record in the store DB each determined store name in the preprocessed input sentence corresponds to, based on constraint words or feature words appearing in the vicinity that correspond to store attribute values in the store DB, and outputs at least the record ID of the corresponding record in the store DB together with the store name.
JP 2004-185515 A
JP 2011-191982 A
  If a large-scale knowledge graph is not used for entity identification, commonalities behind the document cannot be grasped, and inconsistencies may occur in the identification results. On the other hand, general knowledge graphs are created for general purposes and are not specialized for entity identification, so a method for selecting the information suitable for entity identification is necessary. For this reason, the known techniques described above cannot use the background knowledge in a knowledge graph to improve the consistency of entity identification.
  For this reason, it is necessary to analyze documents while taking into account the link structure of the knowledge graph and rule definitions over that structure.
  A typical example of the invention disclosed in the present application is as follows: an information structuring system for analyzing the structure of a document, comprising a processor that executes a program and a memory that stores the program executed by the processor; a database that stores nodes that are nouns to which identification information is assigned; an extraction unit that extracts a noun from a document and associates the extracted noun with a node stored in the database; a candidate enumeration unit that, when a plurality of node candidates are associated with the extracted noun, searches for relay nodes that connect the nouns whose identification information has been determined with the node candidates; a calculation unit that calculates a first relationship between a searched relay node and the nouns whose identification information has been determined, and a second relationship between the searched relay node and the node candidates; a suppression unit that determines a relay node for which the first relationship is large and the second relationship is small; and a determination unit that determines the node corresponding to the extracted noun using the node candidates associated with the determined relay node.
  According to a typical embodiment of the present invention, a partial structure effective for entity identification can be extracted from a general knowledge graph and used for consistent identification by narrowing down appropriate candidate nodes. Moreover, since a general knowledge graph is created for general purposes, the identification results can be applied to a wide range of uses. Problems, configurations, and effects other than those described above will become apparent from the description of the following embodiments.
FIG. 1 is a block diagram of the computer constituting the information structuring system of the embodiment of this invention.
FIG. 2 is a logical block diagram of the computer constituting the information structuring system of the embodiment.
FIG. 3A is a diagram illustrating the structure of the document database.
FIG. 3B is a diagram illustrating the structure of the annotation database.
FIG. 3C is a diagram illustrating the structure of the knowledge graph database.
FIG. 4 is a functional block diagram of the computer constituting the information structuring system of the embodiment.
FIG. 5 is a flowchart of the information extraction process by the information structuring system of the embodiment.
FIG. 6 is a diagram illustrating an example of a document from which entities have been extracted.
FIG. 7 is a diagram illustrating an example of enumerated identification candidates.
FIG. 8A is a diagram illustrating an example of attribute information of an identification candidate.
FIG. 8B is a diagram illustrating an example of attribute information of an identification candidate.
FIG. 9 is a diagram illustrating relationships between entities.
FIG. 10 is a diagram illustrating relationships between entities.
FIG. 11 is a flowchart of the identification score calculation process by the information structuring system of the embodiment.
FIG. 12 is a diagram illustrating the concept of determining the local relation score using a threshold value.
  Hereinafter, an embodiment of the present invention will be described with reference to the accompanying drawings.
  In the following embodiments, when the number of elements and the like are mentioned, the invention is not limited to the specific number, and the number may be greater or less, unless otherwise specified or clearly limited to the specific number in principle.
  Further, in the following embodiments, the constituent elements are not necessarily essential unless otherwise specified or clearly considered essential in principle. Similarly, when the shapes, positional relationships, and the like of the constituent elements are mentioned, shapes and the like that are substantially approximate or similar are included, unless otherwise specified or clearly excluded in principle. The same applies to the numerical values and ranges above.
<First embodiment>
FIG. 1 is a block diagram of a computer 100 constituting the information structuring system according to the embodiment of this invention.
  A computer 100 configuring the information structuring system of this embodiment is a general-purpose computer as shown in FIG. 1, and can be specifically configured by a PC server. The computer 100 includes a central processing unit (CPU) 110, a memory 120, a local file system 130, an input device 140, an output device 150, a network device 160, and a bus 170.
  Central processing unit 110 executes a program stored in memory 120. The memory 120 is a high-speed and volatile storage element such as a DRAM (Dynamic Random Access Memory), and temporarily stores a program executed by the central processing unit 110 and data used when the program is executed.
  The local file system 130 is a rewritable storage area built into the computer 100, and is composed of, for example, a large-capacity nonvolatile storage device such as a magnetic storage device (HDD), a flash memory (SSD), or a RAM disk. The storage device on which the local file system is configured may instead be a storage device connected to the computer 100 externally.
  In addition to the local file system 130, the storage device stores programs executed by the central processing unit 110 and data used when the programs are executed. Data stored in the storage device includes a global relation score table 265, a document database 220, an annotation database 225, and a knowledge graph database 230 described below. In addition, a program for implementing each unit described below is read from the storage device, loaded into the memory 120, and executed by the central processing unit 110.
  The input device 140 is an interface that receives input from the user, such as a keyboard and a mouse. The output device 150 is an interface such as a display device and a printer that outputs the execution result of the program in a format that can be viewed by the user. Note that when the computer 100 is remotely operated by a terminal connected via a network, the computer 100 may not have the input device 140 and the output device 150.
  The network device 160 is a network interface device that controls communication with other devices according to a predetermined protocol. A bus 170 connects the devices 110 to 160.
  The program executed by the central processing unit 110 is provided to a computer via a removable medium (CD-ROM, flash memory, etc.) or a network, and is stored in a storage device that is a non-temporary storage medium. For this reason, the computer may have an interface for reading data from the removable medium.
  The information structuring system of the present embodiment is a computer system configured on one physical computer or on a plurality of logically or physically configured computers. It may operate on separate threads on the same computer, or may operate on a virtual machine constructed over a plurality of physical computer resources.
  FIG. 2 is a logical block diagram of the computer 100 constituting the information structuring system of this embodiment.
  The computer 100 includes an initialization unit 235, an entity extraction unit 240, an identification candidate listing unit 245, a global relationship score calculation unit 250, a hub suppression unit 255, an identification score calculation unit 260, a global relationship score table 265, and an ID determination unit 270.
  The initialization unit 235 initializes each unit of the information structuring system of this embodiment. The entity extraction unit 240 extracts an entity from a document and gives an annotation to the extracted entity. The global relationship score calculation unit 250 performs scoring according to whether an entity in the knowledge graph database 230 contributes to identification. The identification candidate enumeration unit 245 enumerates entities in the knowledge graph corresponding to the entities. The hub suppressing unit 255 uses the global relationship score table 265 to select information on the knowledge graph used for the relationship graph. The identification score calculation unit 260 calculates a score representing the likelihood of identification.
  The global relationship score table 265 holds the result of scoring the entities in the knowledge graph. Specifically, the global relationship score table 265 records the number of other entities connected to the entity serving as the relay node. For example, since people all over the world are connected to the relay node “Person”, the global relation score is the world population (about 7 billion). The global relation score of the relay node “NN Party” is the number of members of the NN Party. At this time, comparing the relay node “Person” and the relay node “NN Party”, the relay node “Person” is more general as a hub. For this reason, in this embodiment, the global relationship score table 265 is used by the hub suppression unit 255 to select a relay node with low generality as a hub.
  The ID determination unit 270 uniquely determines an entity identifier based on the identification score.
  In the present invention, an entity means the real-world object denoted by a notation of a proper noun. For example, the notation “Hitachi” may mean “Hitachi City” as a place name or “Hitachi Ltd.” as a company name. In this case, “Hitachi” is the notation, and “Hitachi City” and “Hitachi Ltd.” are the entities. In addition, people with the same surname and given name are different entities even though they have the same notation.
  The time information recognition unit 246 and the geographic information recognition unit 247 included in the identification candidate enumeration unit 245, and the learning unit 271 included in the ID determination unit 270, are configurations required for the second, third, and fourth embodiments, respectively, and are not necessary in the present embodiment.
  The computer 100 is connected to the document database 220, the annotation database 225, and the knowledge graph database 230 via the LAN 210.
  The document database 220 is a database that stores documents to be processed. The configuration of the document database 220 will be described later with reference to FIG. 3A. The annotation database 225 is a database that manages annotations given to documents. The configuration of the annotation database 225 will be described later with reference to FIG. 3B. The knowledge graph database 230 manages information incidental to the entity. The configuration of the knowledge graph database 230 will be described later with reference to FIG. 3C.
  Each database can use existing data management software running on a computer.
  FIG. 3A is a diagram illustrating the configuration of the document database 220. The document database 220 is a database for managing documents; specifically, it manages an identifier (document ID) for identifying each document and the content of the document. The content of a document is its text (character information). Specifically, an RDB (Relational Database), a full-text search engine, an associative search engine, or the like can be used for the document database 220.
  FIG. 3B is a diagram illustrating the configuration of the annotation database 225. The annotation database 225 is a database that manages annotations given to documents. Specifically, it manages annotations in association with an identifier (label ID) for identifying each label and an identifier (document ID) for identifying the document to which the label is given. An annotation includes label position information (for example, start character position and end character position) and label identification result information (for example, the entity ID in the knowledge graph database). Specifically, an RDB or a KVS (Key-Value Store) can be used for the annotation database 225.
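  As an illustration only (the field names below are assumptions, not the patent's schema), an annotation record of this kind could be represented as follows.

# Hypothetical annotation record: label ID, document ID, character span, and the entity ID
# in the knowledge graph database once the identification destination has been determined.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Annotation:
    label_id: str
    document_id: str
    start_char: int
    end_char: int
    entity_id: Optional[str] = None  # set after identification

annotation = Annotation(label_id="L001", document_id="D001", start_char=10, end_char=18)
print(annotation)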
  FIG. 3C is a diagram illustrating the configuration of the knowledge graph database 230. The knowledge graph database 230 is a database that manages information attached to entities. The information attached to an entity includes attribute information of the entity itself, such as its name and other designations, and information on relationships between entities, such as “Nagano Prefecture is an administrative region located in Japan”. Specifically, the data stored in the knowledge graph database 230 is described in RDF, and an RDF store (Apache Jena, Sesame, etc.) can be used as the data store of the knowledge graph database.
  FIG. 4 is a functional block diagram of the computer 100 constituting the information structuring system of the present embodiment, and FIG. 5 is a flowchart of information extraction processing by the information structuring system of the present embodiment.
  First, the initialization unit 235 activates each unit of the information structuring system of the present embodiment, connects to each database, and prepares for processing. A document to be identified is acquired from the document database 220 (step 400). Thereafter, the initialization unit 235 activates the global relationship score calculation unit 250.
  The global relationship score calculation unit 250 acquires entities in the knowledge graph from the knowledge graph database 230, scores the acquired entities according to whether they contribute to identification, and stores the scoring results in the global relationship score table 265 (step 410). The global relationship score represents the generality of an entity, and is defined so that a larger value indicates a greater contribution to identification (lower generality).
  For example, the reciprocal of the number of links possessed by an entity can be used as the global relationship score. Information that an entity is a person has no meaning when identifying persons; relationships that many entities have in common are less important for identification. For this reason, using the reciprocal of the number of links as the global relationship score is effective. Alternatively, log(number of entities / number of links) can be used as the global relationship score, analogous to the inverse document frequency (IDF) of a term in a document collection.
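  As a minimal sketch only (the link counts and graph size below are assumed), the two forms of the global relationship score described above can be computed as follows.

import math

# Hypothetical link counts for relay-node candidates (number of connected entities).
link_counts = {"Person": 7_000_000_000, "NN_Party": 120_000, "US_Open_(tennis)": 800}
total_entities = 8_000_000_000  # assumed total number of entities in the knowledge graph

def global_score_reciprocal(node):
    # Reciprocal of the number of links: generic hubs such as "Person" receive a low score.
    return 1.0 / link_counts[node]

def global_score_idf(node):
    # IDF-like variant: log(number of entities / number of links).
    return math.log(total_entities / link_counts[node])

for node in link_counts:
    print(node, global_score_reciprocal(node), global_score_idf(node))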
  Next, the entity extraction unit 240 acquires a document from the document database 220, extracts an entity included in the acquired document, adds an annotation to the extracted entity, and stores the added annotation in the annotation database 225 ( Step 420).
  In order to annotate an entity, the above-described specific expression extraction technique can be used. The specific expression extraction technique is a technique for automatically extracting a specific expression such as a person name or an organization name based on a predetermined rule. By using this technique, it is possible to add an annotation representing the type of specific expression such as “person name” or “organization name” to a location corresponding to the specific expression in the document.
  Alternatively, a specific expression extraction technique based on machine learning may be used. Using correct data called a tagged corpus, this technique has a computer learn the patterns in which specific expressions appear in documents, and extracts specific expressions using the learned patterns (rules).
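  As a minimal illustration only (the dictionary and expression types are hypothetical; in practice a rule-based or machine-learned extractor as described above would be used), annotating specific expressions with character offsets could look like the following.

import re

# Hypothetical gazetteer mapping surface strings to specific-expression types.
gazetteer = {"Tanaka AA": "person name", "Tokyo Open": "event name", "GG Company Cup": "event name"}

def annotate(text):
    annotations = []
    for surface, expression_type in gazetteer.items():
        for match in re.finditer(re.escape(surface), text):
            # Record start/end character positions, as stored in the annotation database.
            annotations.append({"start": match.start(), "end": match.end(),
                                "type": expression_type, "surface": surface})
    return annotations

print(annotate("Tanaka AA won the Tokyo Open and the GG Company Cup."))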
  However, at this stage, the entity extracted from the document may not be correctly identified, such as having a plurality of identification candidates. For this reason, in this embodiment, a reliable identification destination of the extracted entity (that is, the ID of the entity) is determined.
  FIG. 6 shows an example of a document from which an entity is extracted. A document 600 illustrated in FIG. 6 describes the result of a tennis match. A bold and underlined portion in the document 600 is a portion that is determined to be entity notation and extracted, and an annotation is given to each of the extracted portions.
  Next, the identification candidate listing unit 245 is activated. The identification candidate listing unit 245 extracts the identification candidates corresponding to the entities extracted from the document from the knowledge graph database 230 and lists them (step 430). Known identification candidate enumeration techniques can be used to enumerate the candidate entities. For example, in the simplest method, the similarity between the character string of the specific expression included in the annotation and the notation of each entity in the knowledge graph database is calculated, and entities with high similarity are selected as identification candidates.
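  The simplest method mentioned above can be sketched as follows; this is an illustration only, with a generic string-similarity measure and hypothetical entity notations standing in for the knowledge graph.

from difflib import SequenceMatcher

# Hypothetical knowledge-graph entries: entity ID -> notation.
kg_notations = {
    "Q1": "Tanaka AA (politician)",
    "Q2": "Tanaka AA (tennis)",
    "Q3": "Yamada XX",
}

def enumerate_candidates(surface, threshold=0.5):
    # Keep every entity whose notation is sufficiently similar to the surface string.
    candidates = []
    for entity_id, notation in kg_notations.items():
        similarity = SequenceMatcher(None, surface, notation).ratio()
        if similarity >= threshold:
            candidates.append((entity_id, similarity))
    return sorted(candidates, key=lambda c: c[1], reverse=True)

print(enumerate_candidates("Tanaka AA"))  # both Tanaka AA entries remain as candidates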
  As an extension of the identification candidate enumeration technique described above, alternative readings may be added to the specific expression using a thesaurus (synonym dictionary), the similarity between the added readings and the entity notations calculated, and those with high similarity selected as identification candidates.
  Further, by referring to the annotation database 225, documents in which an entity appears can be extracted, the inter-document distance to the document currently being processed can be calculated, and entities can be selected as identification candidates in order of the calculated distance.
  FIG. 7 shows examples of the listed identification candidates. As a result of collating the entity notations with the entries in the knowledge graph, the identification candidate enumeration unit 245 uniquely determined the identification destinations of “Yamada XX”, “Tokyo Open”, “Roger YY”, “GG Company Cup”, and “Sato ZZ”. On the other hand, “Tanaka AA” has two candidates, the politician “Tanaka # AA # (politician)” and the tennis player “Tanaka # AA # (tennis)”, so its identification destination could not be determined uniquely.
  FIGS. 8A and 8B show examples of identification candidate attribute information. This attribute information is obtained from the knowledge graph, not from the document of FIG. 6.
  FIG. 8A shows the attribute information 800 of the entity Tanaka # AA # (politician). It describes that the politician Tanaka AA is a person, belongs to the NN Party, and is from MM Prefecture.
  FIG. 8B shows the attribute information 810 of the entity Tanaka # AA # (tennis). It describes that Tanaka # AA # (tennis) is a person and participated in the events US # Open # (tennis) and FF # Cup # (tennis).
  After the identification candidates are selected, the identification score calculation unit 260 is activated. The identification score calculation unit 260 calculates an identification score representing the probability of the identification. The identification score is calculated for each set obtained by choosing one candidate for each specific expression included in the sentence. In this example, the sets are (Yamada # XX, Tokyo # Open # (tennis), Roger # YY, GG # CUP # (tennis), Sato # ZZ, Tanaka # AA # (politician)) and (Yamada # XX, Tokyo # Open # (tennis), Roger # YY, GG # CUP # (tennis), Sato # ZZ, Tanaka # AA # (tennis)). The larger the identification score, the more likely the identification.
  Specifically, the identification score calculation unit 260 acquires the listed identification destination candidates and activates the hub suppression unit 255. The hub suppression unit 255 obtains global relation scores from the global relation score table 265, obtains local relation scores based on the candidate set of identification destinations, and selects the relationships in the knowledge graph that are useful for the relation graph (step 440). For example, the hub suppression unit 255 can obtain a relation score by combining the global relation score and the local relation score, and select relay node candidates sequentially in descending order of the relation score. At this time, as the number of relay nodes on a path between nodes increases, the relationship becomes weaker, so an upper limit can be set on the number of relays. Alternatively, a relation score may be given to all relay nodes. In this case, the identification score calculation unit 260 creates a partial graph composed of the identification candidate nodes, the relay nodes, and the edges connecting them, assigns the relation scores to the edges as weights, and takes the sum of the partial identification scores, calculated by the method described later, as the identification score.
  After that, the identification score calculation unit 260 calculates the likelihood of the identification target candidate set using the useful relationship selected by the hub suppression unit 255 (step 450). A specific example of the method for calculating the identification score will be described later with reference to FIG. In the present embodiment, the hub suppression unit 255 limits the nodes that can be used for relaying.
  The processing by the identification score calculation unit 260 ends when, for example, identification scores have been calculated for all combinations of identification destination candidates. The processing may also be terminated when the identification score of a candidate set of identification destinations falls below a certain threshold.
  After the processing by the identification score calculation unit 260 is completed, the ID determination unit 270 is activated. The ID determination unit 270 uniquely determines an entity identifier based on the identification score (step 460). For example, the ID determination unit 270 may select a candidate having the maximum identification score.
  After the entity identifier is determined, a relationship graph is output (step 470).
  FIGS. 9 and 10 are relational graphs created by the information structuring system of the present embodiment, and show a state in which the identification destination has not yet been determined.
  FIG. 9 shows the relationships between entities when the condition of participation in US # Open # (tennis) (second line in FIG. 8B) is selected as a relay node. US # Open # (tennis) connects with Tanaka # AA # (tennis), Yamada # XX, and Roger # YY. These persons have some kind of relationship, such as having participated in the US Open tennis tournament. Thus, Tanaka # AA # (tennis) and Tanaka # AA # (politician) can be separated by selecting US # Open # (tennis) as a relay node.
  FIG. 10 shows the relationship between entities when the attribute “Person” (first line in FIG. 8B) is selected as a relay node. Since Person is an attribute of all persons, all person entities are connected. With this, the two candidates Tanaka # AA # (tennis) and Tanaka # AA # (politician) cannot be distinguished. This is a situation that can occur when finding the shortest path between entities. Therefore, in the present invention, the hub suppressing unit 255 selects relay nodes based on the relation score.
In this embodiment, relay nodes are selected by exploiting the property that terms appearing in the same document often have related meanings, so that terms appearing in the same sentence are connected to each other while the terms that should be distinguished remain distinguishable. By adopting the configuration of the embodiment as described above, the following two effects can be achieved.
(1) Since entity identification can be performed using a general large-scale knowledge graph created outside, the identification result can be used for multiple purposes.
(2) The consistency of identification results can be improved.
  FIG. 11 is a flowchart of the identification score calculation process by the information structuring system of this embodiment. The identification score calculation process is executed by the identification score calculation unit 260 and the hub suppression unit 255.
  First, the selected identification candidates are separated into those whose identification destination is uniquely determined and those that have a plurality of candidates (1100). Next, the entities having a plurality of identification candidates are listed (1110), and the entity properties are listed for each identification candidate (1120). As the entity properties, pairs of attribute type and value as shown in FIGS. 8A and 8B can be used.
  Thereafter, the global relationship score of each property is obtained by referring to the global relationship score table 265 (1130). These properties become relay node candidates. Next, the relay node candidate properties and their global relation scores are sent to the hub suppression unit 255 (1140).
  The hub suppressing unit 255 calculates a local relationship score (1145) and calculates a relation score by integrating the local relationship score and the global relationship score, specifically by taking a weighted sum of the two scores or by taking their product. Using this relation score, relay nodes are selected and sent to the identification score calculation unit 260 (1150). Specifically, relay nodes can be selected in descending order of the relation score.
  For this reason, a local relation score is calculated using Formula (1). That is, the local relation score can be calculated as the sum of the connection scores (Sdi) with the nodes whose IDs have already been determined and the sign-inverted connection scores (Scj) with the nodes that are identification candidates. The sign of Scj is inverted so that the local relation score becomes larger as Scj becomes smaller; instead of inverting the sign, the reciprocal of Scj or the value log(number of identification candidate nodes / Scj) may be used. In Formula (1), w is a weighting coefficient whose value is 0 or more and 1 or less; adopting a small value for w improves the ability to distinguish the identification candidates.
  Here, the connection score is a score that takes a large value when two nodes are connected via a small number of hops and via edges having large weights. For example, the connection score between nodes connected via k hops is defined as Σ_n g^n · αn, where g (0 < g ≦ 1) is the route coefficient and αn is the weight at the n-th hop (1 ≦ n ≦ k). As the weight αn, a constant, the global relation score, the reciprocal of the number of connections with identification candidate nodes, or log(number of identification candidate nodes / number of connections with identification candidate nodes) can be used.
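  A minimal sketch only, assuming Formula (1) takes the form w * ΣSdi - (1 - w) * ΣScj (the exact formula is given in the patent figure and is not reproduced here); the relay nodes and connection-score inputs below are hypothetical.

# Connection score: sum over hops n = 1..k of g**n * alpha_n, where g is the route coefficient.
def connection_score(alphas, g=0.5):
    return sum((g ** n) * alpha for n, alpha in enumerate(alphas, start=1))

# Assumed form of the local relation score: weighted connection scores to ID-determined
# nodes (Sdi) minus weighted connection scores to identification-candidate nodes (Scj).
def local_relation_score(sd_scores, sc_scores, w=0.3):
    return w * sum(sd_scores) - (1 - w) * sum(sc_scores)

# "US_Open_(tennis)" is strongly tied to determined nodes but to only one candidate,
# whereas "Person" is tied to every candidate, so its local relation score is lower.
relays = {
    "US_Open_(tennis)": {"sd": [connection_score([1.0]), connection_score([1.0])],
                         "sc": [connection_score([1.0])]},
    "Person": {"sd": [connection_score([1.0]), connection_score([1.0])],
               "sc": [connection_score([1.0]), connection_score([1.0])]},
}

for name, scores in relays.items():
    print(name, local_relation_score(scores["sd"], scores["sc"]))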
  After that, the identification score calculation unit 260 calculates a partial identification score for each pair consisting of an identification candidate and a determined node (or another undecided node), based on the path through the relay nodes, and sums them to calculate the identification score (1160). Specifically, the partial identification score is obtained by referring to a graph in which each edge between two nodes belonging to the set is weighted by the relation score, and tracing the path between the nodes while adding up the relation scores; the resulting sum is the partial identification score. The use of addition when combining the relation scores is only an example, and it can be replaced by operations such as multiplication. The identification score may also be calculated by obtaining the flow between nodes: the larger the flow, the deeper the relationship, and, for example, the sum or the maximum value of the flow can be used as the identification score.
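  A minimal sketch only of the additive variant described above; the partial graph, relation-score weights, and node names are hypothetical.

# Hypothetical partial graph: edges between a candidate node and determined nodes,
# weighted by the relation score of the relay node that connects them.
edges = {
    ("Tanaka_AA_(tennis)", "Yamada_XX"): 0.8,      # via US_Open_(tennis)
    ("Tanaka_AA_(tennis)", "Roger_YY"): 0.7,       # via US_Open_(tennis)
    ("Tanaka_AA_(politician)", "Yamada_XX"): 0.1,  # via Person only
    ("Tanaka_AA_(politician)", "Roger_YY"): 0.1,   # via Person only
}

determined_nodes = ["Yamada_XX", "Roger_YY"]

def identification_score(candidate):
    # Partial identification score for each (candidate, determined node) pair, then the sum.
    return sum(edges.get((candidate, node), 0.0) for node in determined_nodes)

for candidate in ["Tanaka_AA_(tennis)", "Tanaka_AA_(politician)"]:
    print(candidate, identification_score(candidate))  # the larger score is the more probable identification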
  By determining the identification score in this way, it can be said that identification is more likely when the identification score is larger.
  When calculating the identification score in order between the nodes, if the partial identification score is small, the subsequent calculation can be canceled and the number of candidates to be calculated can be limited. Specifically, the threshold is set for the rank based on the partial identification score, or the threshold is set for the partial identification score.
  Thereafter, it is determined whether the termination condition is satisfied (1170). For example, the calculation can be determined to be complete when it has been performed for all combinations of identification candidates. The processing may also be determined to be finished when an identification score exceeds a predetermined threshold; in this case, a value that can reliably distinguish the objects to be distinguished is adopted as the predetermined threshold.
  FIG. 12 shows the concept of determining the local relation score using a threshold value. In a two-dimensional space in which the horizontal axis is the distance Sdi to the determined nodes and the vertical axis is the distance Scj to the identification candidate nodes, the determination threshold can be represented by a straight line rising to the right, as shown. The lower right is the region in which relay nodes suitable for identification exist.
  Thereafter, the identification score calculation unit 260 transmits the calculated identification score to the ID determination unit 270 (1180).
  In this way, since the hub suppression unit 255 selects a relay node that contributes to identification and calculates an identification score using the selected relay node, more probable identification can be performed.
  As described above, according to the first embodiment of the present invention, the identification score calculation unit 260 calculates the first relationship between a relay node and the nodes whose IDs have been determined and the second relationship between the relay node and the candidate nodes, the hub suppression unit 255 selects relay nodes suitable for identification based on these relationships, and the ID determination unit 270 determines the entity (ID) corresponding to a specific expression using the node candidates associated with the selected relay nodes; therefore, appropriate candidate nodes can be narrowed down.
  In addition, the hub suppression unit 255 calculates a first value that is the sum of the distances (Sdi) to the ID-determined nodes associated with a relay node and a second value that is the sum of the distances (Scj) to the candidate nodes associated with the relay node, and determines relay nodes for which the first value (ΣSdi) is large and the second value (ΣScj) is small. Therefore, the identification score calculation unit 260 and the ID determination unit 270 can narrow down appropriate candidate nodes with a simple calculation.
<Second embodiment>
Next, a second embodiment of the present invention will be described.
  As shown in FIG. 2, in the information structuring system of the second embodiment, the identification candidate listing unit 245 includes a time information recognition unit 246.
  For this reason, the identification candidate enumeration unit 245 of the second embodiment enumerates entities from the knowledge graph corresponding to the extracted entities while taking time information into consideration. The time information in a document includes meta information such as the document creation date and modification date (for example, the news transmission date and time or the newspaper issue date), and date information appearing in the document content (for example, the date and time of an incident). For example, in a news article, the news transmission date (newspaper issue date) is document metadata, and the incident occurrence date is content date information.
  In the second embodiment, the knowledge graph database 230 also includes time information.
  Furthermore, as shown in FIGS. 8A and 8B, the identification candidate attribute information of the second embodiment includes the date of birth (BirthDate). The time information may be, for example, a person's death date, company establishment date, or listing date.
  The identification candidate enumeration unit 245 can narrow down the identification candidates by deleting from them nodes whose time information is inconsistent with the document (for example, a person who was not alive at the time of publication, or a person who did not hold the position at that time).
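  A minimal sketch only of this time-based narrowing; the attribute fields (birth and death dates) and the document date below are assumptions for illustration.

from datetime import date

# Hypothetical candidates with time attributes taken from the knowledge graph.
candidates = [
    {"id": "Tanaka_AA_(tennis)", "birth": date(1990, 4, 1), "death": None},
    {"id": "Tanaka_AA_(politician)", "birth": date(1900, 1, 1), "death": date(1970, 6, 30)},
]

document_date = date(2014, 2, 18)  # e.g., the news transmission date of the document

def alive_at(candidate, when):
    # Keep only candidates whose lifespan is consistent with the document date.
    born_before = candidate["birth"] is None or candidate["birth"] <= when
    not_dead = candidate["death"] is None or candidate["death"] >= when
    return born_before and not_dead

narrowed = [c for c in candidates if alive_at(c, document_date)]
print([c["id"] for c in narrowed])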
  Since the configuration of the second embodiment other than the above is the same as that of the first embodiment, its description is omitted.
  Thus, the attribute information of the identification candidates may be used to narrow down the identification candidates, as in the second embodiment (and the third embodiment described later), or it may be used for the relay nodes as described in the first embodiment.
  Further, instead of the identification candidate listing unit 245, the ID determination unit 270 may include a time information recognition unit. In this case, the ID determination unit 270 can give a low score to a candidate node that is temporally separated from the identified node.
  As described above, according to the second embodiment of the present invention, identification can be made more reliably by using temporal relationships. In addition, identification candidates can be narrowed down before the identification score is calculated.
<Third embodiment>
Next, a third embodiment of the present invention will be described.
  As shown in FIG. 2, in the information structuring system of the third embodiment, the identification candidate listing unit 245 includes a geographic information recognition unit 247.
  For this reason, the identification candidate enumeration unit 245 of the third embodiment enumerates entities from the knowledge graph corresponding to the extracted entities while taking geographic information into consideration. The geographic information in a document includes meta information such as a place name added as a document category (for example, the country of creation), and place names, country names, and region names appearing in the contents of the document. For example, the target area is added as a category name to local news, and the location of an event and the position information of the people involved (for example, their residences) are described as geographic information in the content.
  In the third embodiment, the knowledge graph database 230 also includes geographic information. For example, the attribute information of a person entity may include geographic information such as nationality and residence, and the attribute information of a company entity may include the location of its head office or sales offices.
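  A minimal sketch only of geographic narrowing of this kind; the place attributes and the document's region below are assumptions for illustration.

# Hypothetical candidates with geographic attributes taken from the knowledge graph.
candidates = [
    {"id": "Hitachi_City", "regions": {"Ibaraki", "Japan"}},
    {"id": "Hitachi_Ltd", "regions": {"Tokyo", "Japan"}},
]

# Geographic information recognized in the document (category or content place names).
document_regions = {"Ibaraki"}

def geographically_consistent(candidate, regions):
    # Keep candidates that share at least one region with the document.
    return bool(candidate["regions"] & regions)

narrowed = [c for c in candidates if geographically_consistent(c, document_regions)]
print([c["id"] for c in narrowed])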
  Further, the identification candidate attribute information of the third embodiment includes geographic information (BirthPlace, etc.) as shown in FIG. 8A.
  Since the configuration of the third embodiment other than the above is the same as that of the first embodiment, description thereof will be omitted.
  In the third embodiment, by using these pieces of information, identification candidates can be narrowed down before the identification score is calculated.
  Further, instead of the identification candidate listing unit 245, the ID determination unit 270 may include a geographic information recognition unit. In this case, the ID determination unit 270 can give a low score to a candidate node that is geographically distant from the identified node.
  As described above, according to the third embodiment of the present invention, identification can be performed more reliably by utilizing the geographical relationship. In addition, identification candidates can be narrowed down before the identification score is calculated.
<Fourth embodiment>
Next, a fourth embodiment of the present invention will be described. The fourth embodiment is different from the first embodiment in that the ID determination unit 270 includes a learning unit 271.
  The learning unit 271 performs machine learning, in particular supervised learning. In supervised learning, a computer learns patterns using data created by humans as teacher data. For example, the pattern for determining the identification candidate can be learned by multivariate regression analysis with a function that takes the identification score, the time score, and the geographic score as variables. This allows the computer to substitute for human intelligent processing.
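  A minimal sketch only of such a regression, assuming a small hand-labeled training set; an ordinary least-squares fit stands in for the multivariate regression analysis mentioned above, and the scores and labels are illustrative.

import numpy as np

# Hypothetical training data: [identification score, time score, geographic score]
# with a human-provided label (1 = correct identification destination, 0 = incorrect).
X = np.array([
    [0.9, 1.0, 1.0],
    [0.2, 0.0, 1.0],
    [0.8, 1.0, 0.0],
    [0.1, 0.0, 0.0],
], dtype=float)
y = np.array([1.0, 0.0, 1.0, 0.0])

# Fit a linear model y ~ w0 + w1*identification + w2*time + w3*geographic by least squares.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coefficients, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(scores):
    # A higher value indicates a candidate more likely to be the correct identification destination.
    return float(coefficients[0] + np.dot(coefficients[1:], scores))

print(predict([0.85, 1.0, 1.0]))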
  As described above, according to the fourth embodiment of the present invention, the result of past identification is scored and learned, so that identification can be performed more reliably.
<Fifth embodiment>
Next, a fifth embodiment of the present invention will be described. The fifth embodiment is different from the first embodiment in that the hub suppressing unit 255 has a relation score calculating unit.
  For example, the number of nodes related to a relay node need not be stored in advance; it may be calculated when necessary. In this configuration, the system does not have the global relation score table 265; instead, the hub suppression unit 255 has a relation score calculation unit and calculates the score each time it is needed.
  As described above, according to the fifth embodiment of the present invention, it is possible to reliably identify even a system with a small storage capacity.
  The embodiments of the present invention have been described above taking the information structuring of electronic document data as an example. However, the present invention is not limited to this, and can be widely applied to data processing in general, such as matching processing between knowledge graphs and knowledge at hand.
  Moreover, although the embodiments of the present invention have been described using the identification of person names, the present invention is also applicable to the identification of other proper nouns, such as company names.
  The present invention is not limited to the above-described embodiments, and includes various modifications and equivalent configurations within the scope of the appended claims. For example, the above-described embodiments have been described in detail for easy understanding of the present invention, and the present invention is not necessarily limited to those having all the configurations described. A part of the configuration of one embodiment may be replaced with the configuration of another embodiment, and the configuration of another embodiment may be added to the configuration of a given embodiment. In addition, for a part of the configuration of each embodiment, another configuration may be added, deleted, or substituted.
  Each of the configurations, functions, processing units, processing means, and the like described above may be realized in hardware by designing part or all of them with, for example, an integrated circuit, or may be realized in software by a processor interpreting and executing programs that implement the respective functions.
  Information such as programs, tables, and files that realize each function can be stored in a storage device such as a memory, a hard disk, or an SSD (Solid State Drive), or a recording medium such as an IC card, an SD card, or a DVD.
  The control lines and information lines shown are those considered necessary for the explanation, and not all control lines and information lines required for implementation are necessarily shown. In practice, almost all components may be considered to be connected to each other.

Claims (10)

  1. An information structuring system that analyzes the structure of a document,
    A processor for executing the program, and a memory for storing the program executed by the processor;
    A database that stores nodes that are nouns with identification information;
    An extraction unit that extracts a noun from a document and, by collating the extracted noun with the nodes stored in the database, associates a node with the extracted noun;
    When a plurality of node candidates are associated with the extracted nouns, a candidate enumeration unit that searches for relay nodes that connect the nouns whose identification information is specified and the node candidates;
    A calculation unit for calculating a first relationship between the searched relay node and the noun for which the identification information is specified, and a second relationship between the searched relay node and the node candidate;
    A suppressor that determines a relay node that has a large first relationship and a small second relationship;
    An information structuring system comprising: a determination unit that determines a node corresponding to the extracted noun using a node candidate associated with the determined relay node.
  2. The information structuring system according to claim 1,
    The calculation unit calculates a first value that is the number of nouns for which the identification information is specified that are associated with the searched relay node, and a second value that is the number of node candidates associated with the relay node, and
    The information structuring system, wherein the suppression unit determines a relay node having a large sum of the first values and a small sum of the second values.
  3. The information structuring system according to claim 1,
    The candidate enumeration unit determines a candidate for the node using a temporal relationship between the searched relay node and the node.
  4. The information structuring system according to claim 1,
    The candidate enumeration unit determines a candidate for the node by using a geographical relationship between the searched relay node and the node.
  5. The information structuring system according to claim 3 or 4,
    The determination unit
    Using the first relationship, the second relationship, the temporal relationship between the searched relay node and the node, and the geographical relationship between the searched relay node and the node Calculate the score of the candidate node,
    Find a regression equation that learned the determination result of the node using the calculated score,
    An information structuring system, wherein a node is determined using the obtained regression equation.
  6. An information structuring method using a computer,
    The computer has a processor that executes a program, a memory that stores a program executed by the processor, and a database that stores nodes that are nouns to which identification information is assigned,
    The method
    Extracting a noun from a document, and associating the extracted noun with a node stored in the database, thereby associating a node with the extracted noun;
    When a plurality of node candidates are associated with the extracted noun, a candidate enumeration step for searching for a relay node that connects the noun for which identification information is specified and the node candidate;
    A calculation step of calculating a first relationship between the searched relay node and the noun for which the identification information is specified, and a second relationship between the searched relay node and the candidate node;
    A suppressing step of determining a relay node having a large first relationship and a small second relationship;
    A determination step of determining a node corresponding to the extracted noun using a candidate node associated with the determined relay node.
  7. The information structuring method according to claim 6,
    In the calculating step, a first value that is the number of nouns for which the identification information is specified that are associated with the searched relay node, and a second value that is the number of candidates for the node associated with the relay node, are calculated, and
    In the suppressing step, an information structuring method is characterized in that a relay node having a large sum of the first values and a small sum of the second values is determined.
  8. The information structuring method according to claim 6,
    In the candidate listing step, the node candidate is determined using a temporal relationship between the searched relay node and the node.
  9. The information structuring method according to claim 6,
    In the candidate enumeration step, a candidate for the node is determined using a geographical relationship between the searched relay node and the node.
  10. The information structuring method according to claim 8 or 9,
    In the determination step,
    Using the first relationship, the second relationship, the temporal relationship between the searched relay node and the node, and the geographical relationship between the searched relay node and the node Calculate the score of the candidate node,
    Find a regression equation that learned the determination result of the node using the calculated score,
    An information structuring method, comprising: determining a node using the obtained regression equation.
JP2016503804A 2014-02-18 2014-02-18 Information structuring system and information structuring method Active JP6145562B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2014/053763 WO2015125209A1 (en) 2014-02-18 2014-02-18 Information structuring system and information structuring method

Publications (2)

Publication Number Publication Date
JPWO2015125209A1 JPWO2015125209A1 (en) 2017-03-30
JP6145562B2 true JP6145562B2 (en) 2017-06-14

Family

ID=53877750

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2016503804A Active JP6145562B2 (en) 2014-02-18 2014-02-18 Information structuring system and information structuring method

Country Status (2)

Country Link
JP (1) JP6145562B2 (en)
WO (1) WO2015125209A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106649550B (en) * 2016-10-28 2019-07-05 浙江大学 A kind of joint knowledge embedding grammar based on cost sensitive learning
CN112016312A (en) * 2020-09-08 2020-12-01 平安科技(深圳)有限公司 Data relation extraction method and device, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4010058B2 (en) * 1998-08-06 2007-11-21 富士ゼロックス株式会社 Document association apparatus, document browsing apparatus, computer-readable recording medium recording a document association program, and computer-readable recording medium recording a document browsing program
US7698626B2 (en) * 2004-06-30 2010-04-13 Google Inc. Enhanced document browsing with automatically generated links to relevant information
JP5582540B2 (en) * 2011-06-13 2014-09-03 日本電信電話株式会社 Method for extracting frequent partial structure from data having graph structure, apparatus and program thereof
JP2013054602A (en) * 2011-09-05 2013-03-21 Nippon Telegr & Teleph Corp <Ntt> Graph pattern matching system and graph pattern matching method

Also Published As

Publication number Publication date
JPWO2015125209A1 (en) 2017-03-30
WO2015125209A1 (en) 2015-08-27

Similar Documents

Publication Publication Date Title
US9239875B2 (en) Method for disambiguated features in unstructured text
JP5078173B2 (en) Ambiguity Resolution Method and System
KR20130056207A (en) Relational information expansion device, relational information expansion method and program
Li et al. A probabilistic topic-based ranking framework for location-sensitive domain information retrieval
JP5710581B2 (en) Question answering apparatus, method, and program
US10198497B2 (en) Search term clustering
US10725836B2 (en) Intent-based organisation of APIs
JP5250009B2 (en) Suggestion query extraction apparatus and method, and program
JP6145562B2 (en) Information structuring system and information structuring method
Prasanth et al. Effective big data retrieval using deep learning modified neural networks
JP2009295052A (en) Compound word break estimating device, method, and program for estimating break position of compound word
JP5362807B2 (en) Document ranking method and apparatus
Lu et al. Improving web search relevance with semantic features
JP5869948B2 (en) Passage dividing method, apparatus, and program
JP6663826B2 (en) Computer and response generation method
TWI656450B (en) Method and system for extracting knowledge from Chinese corpus
US11113607B2 (en) Computer and response generation method
KR102059743B1 (en) Method and system for providing biomedical passage retrieval using deep-learning based knowledge structure construction
Plum et al. Toponym detection in the bio-medical domain: A hybrid approach with deep learning
KR20180129001A (en) Method and System for Entity summarization based on multilingual projected entity space
JP2019148933A (en) Summary evaluation device, method, program, and storage medium
Shi et al. Story Disambiguation: Tracking Evolving News Stories across News and Social Streams
Aleyasen Entity recognition for multi-modal socio-technical systems
JP2007011891A (en) Information retrieval method and device, program, and storage medium storing program
WO2017138057A1 (en) Text generation system and text generation method

Legal Events

Date Code Title Description
TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20170425

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20170515

R150 Certificate of patent or registration of utility model

Ref document number: 6145562

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150