WO2013137903A1 - Systems and methods for semantic inference and reasoning - Google Patents

Systems and methods for semantic inference and reasoning

Info

Publication number
WO2013137903A1
Authority
WO
WIPO (PCT)
Prior art keywords
semantic
inference
data artifacts
entities
relationships
Prior art date
Application number
PCT/US2012/029395
Other languages
English (en)
Inventor
Sameer Joshi
Todd Pehle
Larry Crochet
Original Assignee
Orbis Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Orbis Technologies, Inc. filed Critical Orbis Technologies, Inc.
Priority to PCT/US2012/029395 priority Critical patent/WO2013137903A1/fr
Publication of WO2013137903A1 publication Critical patent/WO2013137903A1/fr


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • G06F16/355Class or cluster creation or modification

Definitions

  • the present invention relates to systems and methods for searching a large corpus of data to identify contextually relevant search results.
  • search engines typically operate by receiving a textual query of terms and/or phrases (the “search terms”), comparing the search terms against a body of searchable content (the “corpus”), and returning the data items in the corpus that are most relevant to the keywords (the “results").
  • the classic example of a search engine is an Internet search engine, which indexes web pages and returns the most relevant ones in response to search term queries.
  • Semantic reasoners overcome problems with traditional keyword searching by applying inference algorithms to the content and/or metadata of corpus documents to infer logical consequences.
  • semantic reasoners can expose connections that are invisible to traditional search engines and thereby allow users to find more relevant content.
  • a semantic reasoner may be able to identify relevant documents in the corpus that do not contain the given search terms but are nevertheless semantically related, to disambiguate entities in the data that have the same or different textual names, reduce the number of results to which a user is exposed, preserve linguistic flexibility in search terms, and enable accurate ranking of query results by trust in source, etc.
  • although semantic reasoners provide powerful advantages over traditional search engines, such reasoners have remained impractical for large corpuses such as Internet content, intelligence report databases, corporate document databases, and the like.
  • the inference algorithms applied by traditional semantic reasoners can significantly inflate the already large volume of corpus data, which requires prohibitive storage and/or computing resources.
  • inference algorithms are often brittle and lose accuracy when applied to large volumes of documents that span beyond a single narrow domain (i.e., "the frame problem").
  • a semantic reasoning method and system is designed to overcome the shortcomings of traditional reasoners by employing a novel multi-stage approach.
  • a method for analyzing a corpus of data artifacts comprises obtaining, by a computer, a semantic representation of the data artifacts, where the semantic representation indicates (1) entities identified in the data artifacts, and (2) semantic relationships among the entities as indicated by the data artifacts.
  • the method further comprises clustering the data artifacts into clusters of semantically related data artifacts based on the semantic representation and inferring additional semantic relationships between pairs of the entities.
  • the inferring comprises applying, on a cluster-by-cluster basis, a multi-tiered network of inference engines to a portion of the semantic representation corresponding to the cluster, where the multi-tiered network of inference engines includes a domain-independent inference tier and a domain-specific inference tier.
  • obtaining the semantic representation may comprise applying natural language processing techniques to extract the entities and relationships from natural language content contained in the data artifacts.
  • obtaining the semantic representation comprises determining that the same entity is identified in the artifacts using different identifiers and disambiguating the entity by replacing one or more of the different identifiers with a common identifier for the entity.
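The identifier replacement described above can be sketched in a few lines of Python (the disclosure specifies no implementation language; the function name and alias-map shape are illustrative assumptions, not the patented method):

```python
def disambiguate(triples, aliases):
    """Rewrite subject/object identifiers using an alias -> canonical map.

    triples: iterable of (subject, predicate, object) tuples
    aliases: dict mapping alternate identifiers to a common identifier
    """
    canon = lambda ident: aliases.get(ident, ident)
    return [(canon(s), p, canon(o)) for s, p, o in triples]
```

In practice the alias map would itself be produced by an entity-resolution step; here it is taken as given.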
  • clustering the data artifacts may comprise performing a semantic analysis to determine semantic interrelatedness of the data artifacts based on respective ones of the entities and relationships in the data artifacts and/or performing a syntactic analysis to determine syntactic interrelatedness of the data artifacts based on syntactic overlap of respective content of the data artifacts.
  • applying the multi-tiered network of inference engines may comprise applying two or more inference engines sequentially, in parallel, or iteratively according to a static or dynamic schedule, which may be defined in one or more runtime configuration files.
  • applying the multi-tiered network of inference engines may comprise applying a plurality of domain-independent inference engines in the domain-independent tier and subsequently applying a plurality of inference engines in the domain-specific tier.
  • the clustering and inferring may be implemented using a parallel programming and execution model, such as a MapReduce framework.
  • a system for analyzing a corpus of data artifacts comprises a parallel processing facility comprising a plurality of computer processing cores, and one or more memories coupled to the computer processing cores and storing program instructions executable by the processing cores to implement a semantic inference and reasoning engine.
  • the semantic inference and reasoning engine may be configured to analyze a corpus of data artifacts by: (1) obtaining a semantic representation of the data artifacts, where the semantic representation indicates (a) entities identified in the data artifacts, and (b) semantic relationships among the entities as indicated by the data artifacts; (2) clustering the data artifacts into clusters of semantically related data artifacts based on the semantic representation; and (3) inferring additional semantic relationships between pairs of the entities.
  • the inferring may comprise applying, on a cluster-by-cluster basis, a multi-tiered network of inference engines to a portion of the semantic representation corresponding to the cluster, where the multi-tiered network of inference engines includes a domain-independent inference tier and a domain-specific inference tier.
  • the system may include a distributed storage facility coupled to the parallel processing facility and storing the corpus of data artifacts.
  • the storage facility may comprise a distributed file system.
  • the parallel processing facility may comprise at least one of: a compute cluster, a superscalar supercomputer, a desktop grid, or a compute cloud.
  • the memories may further store program instructions executable to implement a parallel computation scheduling framework for executing the semantic inference and reasoning engine on the parallel processing facility using a MapReduce pattern.
  • the patent or application file contains at least one drawing executed in color.
  • FIG. 1 is a schematic diagram illustrating the hardware environment of a semantic inference and reasoning engine (SIRE), according to some embodiments of the present invention.
  • FIG. 2 is a block diagram illustrating the software components of a semantic inference and reasoning engine, according to some embodiments of the present invention.
  • FIG. 3 is a block diagram illustrating components of an ingestion tier of an inference engine, according to some embodiments of the present invention.
  • FIG. 4 is a block diagram illustrating components of a document-based resolution tier ("document tier") of an inference engine, according to some embodiments of the present invention.
  • FIG. 5 is a block diagram illustrating components of an entity-based resolution tier ("entity-tier") of an inference engine, according to some embodiments of the present invention.
  • FIG. 6 is a block diagram illustrating components of a domain-based resolution tier ("domain tier") of an inference engine, according to some embodiments of the present invention.
  • FIG. 7 is a flow diagram illustrating a method for analyzing a corpus of documents using a semantic inference and reasoning engine, according to some embodiments of the present invention.
  • FIG. 8 is a flow diagram illustrating a method for ingesting documents into the semantic database, according to some embodiments of the present invention.
  • FIG. 9 is a flow diagram illustrating a method for clustering documents using document-based resolution, according to some embodiments of the present invention.
  • FIG. 10 is a flow diagram for inferring new relationships in RDF data, according to some embodiments of the present invention.
  • FIG. 11 is a flow diagram illustrating a method for querying a semantic database, according to some embodiments of the present invention.
  • FIG. 12a illustrates a visualization of document clusters related to queried entities, according to some embodiments of the present invention.
  • FIG. 12b illustrates a visualization of a single document cluster, according to some embodiments of the present invention.
  • FIG. 13 illustrates a possible implementation for at least some components of a computer, according to some embodiments of the present invention.
  • a semantic reasoning method and system is designed to overcome the shortcomings of traditional reasoners by employing a novel multi-stage approach.
  • a corpus of data artifacts (e.g., natural language documents) may first be converted into a semantic representation, such as the Resource Description Framework (RDF).
  • natural language documents and RDF are used as examples throughout this disclosure.
  • the input may be any data artifacts (e.g., text, semantic document, etc.) and the semantic representation may be described in RDF or in any other suitable semantic representation language.
  • the input documents may be analyzed to extract entities and their semantic inter-relationships, which may be added to the RDF representation.
  • a corpus of natural language intelligence reports may be analyzed via natural language processing to produce an RDF document that identifies people, places, activities, and/or other entities discussed in the document and the semantic relationships among those entities.
  • the RDF may then be analyzed to identify clusters of semantically-related documents.
  • the clustering may be based on the entities and semantic inter-relationships in the RDF.
  • documents that are more semantically related to one another are grouped into the same cluster.
  • Semantic relatedness for the purpose of clustering may be measured in various dimensions such that, in some instances, a data artifact (e.g., document) may be part of multiple clusters.
  • the system may infer additional semantic relationships by executing various inference algorithms on each document cluster (i.e., on the semantic data corresponding to the documents in the cluster).
  • the system may apply any number of domain-independent and domain-dependent inference techniques sequentially, in parallel, and/or iteratively to infer new relationships between entities.
  • the system may add the new inferences to the semantic representation of the data.
  • the system may store the data for later query.
  • the semantic data may be stored as RDF, in a relational database, and/or in any other format.
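As a rough illustration of such a store, the following is a minimal in-memory triple store with wildcard pattern matching. This is a sketch only; as the bullet above notes, a real deployment might store the data as RDF files or in a relational database, and the class and method names here are invented:

```python
class TripleStore:
    """Minimal in-memory semantic store holding RDF-like (s, p, o) triples."""

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        """Return triples matching the pattern; None acts as a wildcard."""
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]
```

Identified and inferred relationships alike can be added with `add` and retrieved later by pattern at query time.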
  • the semantic data store may be referred to generally herein as the "semantic database" without limitation to a particular implementation.
  • the system may later respond to a query for data relating to one or more entities by identifying one or more document- clusters that are related to the queried entities based on the identified and/or inferred semantic relationships.
  • the system and techniques described herein overcome the shortcomings of traditional reasoners by solving the data explosion problem and the frame problem.
  • the system may mitigate both problems by actively managing data volume through clustering and by applying a network of inference techniques to smaller, semantically related document clusters.
  • the inference process can be made less brittle and users can be given additional control over the inference workflow.
  • the system can produce inferences with higher confidence and accuracy by organizing components and inference algorithms into layers that work on different levels of granularity.
  • FIG. 1 is a schematic diagram illustrating the hardware environment of a semantic inference and reasoning engine (SIRE), according to some embodiments of the present invention.
  • system 100 implements a client-server model where a plurality of clients 105a-105c connect to one or more servers 115 via network 110.
  • client hardware may correspond to any computing device, such as mobile device 105a, desktop computer 105b, or laptop computer 105c.
  • Each of clients 105 may include software for accessing servers 115 via network 110.
  • the particular software necessary may depend on the protocols of network 110 and/or on the interface of servers 115.
  • clients 105 may utilize web browsers, browser plugins (e.g., widgets), and/or standalone applications to access web pages or web services provided by servers 115.
  • Software executing on clients 105 may permit clients to form requests for data from the corpus, to receive the requested data, and/or to view the data (e.g., as visualizations, etc.).
  • network 110 may be implemented by the Internet or any other combination of one or more electronic communication networks, including local area networks (LAN) and/or wide area networks (WAN).
  • the networks may use various protocols, including wireless networking protocols (e.g., WiFi), wired networking protocols (e.g., Ethernet), radio networking protocols (e.g., GSM, LTE), etc., and can be arranged in any configuration (e.g., point-to-point, broadcast, etc.).
  • Servers 115 may comprise any number of physical and/or virtual machines capable of executing one or more software servers.
  • servers 115 may be configured to execute software web servers that host one or more web applications and/or one or more web services (e.g., RESTful).
  • the web servers may make such applications and/or services accessible by clients 105 via network 110.
  • servers 115 may expose a web application with a browser-accessible interface that can be delivered to web browsers on clients 105.
  • servers 115 may host web services accessible by widgets and/or standalone applications executing on clients 105.
  • System 100 further includes SIRE compute cluster 120.
  • Compute cluster 120 provides storage and computing resources for creating and maintaining the semantic database that stores and analyzes the corpus. Although a cluster is illustrated, the necessary computational and storage resources may be provided by various other architectures for parallel computation and/or storage, such as one or more supercomputers, desktop grids, distributed clusters, and/or other systems.
  • Cluster 120 may be configured as a commodity cluster, which includes a set of commodity computers networked via an interconnect.
  • the cluster may be controlled by scheduling software, such as by a MapReduce framework (e.g., Hadoop), to perform parallel computations necessary for ingestion and inference computations described herein.
  • the cluster may also be controlled by distributed database and/or file system software, such as Hadoop Distributed File System, to implement a distributed file system on which the semantic data (i.e., semantic database) may be stored.
  • although servers 115 are illustrated as separate from compute cluster 120, in various embodiments, computers on compute cluster 120 may be used to implement web server functionality.
  • FIG. 2 is a block diagram illustrating the software components of a semantic inference and reasoning engine, according to some embodiments of the present invention.
  • the components of FIG. 2 may correspond to logical software components configured to execute on compute cluster 120 and/or servers 115.
  • system 200 includes one or more web servers 205, which are configured to host various web applications and/or services 207.
  • Web servers 205 and web applications/services 207 may execute on servers 115 and/or on computer cluster 120 of FIG. 1.
  • web servers 205 may be configured to receive queries regarding various entities and in response, to query the semantic database for data that is most relevant to the queries.
  • System 200 further includes job control flow / scheduling framework 210.
  • Scheduling framework 210 may be executed on one or more nodes of cluster 120 and may be configured to control how workflow and/or parallel job execution on the cluster is handled.
  • scheduling framework 210 may be implemented by the Hadoop MapReduce framework.
  • Such a framework may handle the splitting of jobs into smaller jobs and executing the smaller jobs in parallel across the nodes of the cluster. For instance, in MapReduce, jobs may be split into smaller jobs that are executed in parallel across the different nodes of the cluster to produce intermediate results (i.e., the "map" step), and the intermediate results are then redistributed by key and consolidated according to a given function (i.e., the "reduce" step).
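The map/shuffle/reduce flow described above can be sketched as a toy in-process model (Python; a production system would use a framework such as Hadoop, and the function names here are illustrative):

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Toy in-process MapReduce.

    Each record is mapped to (key, value) pairs; intermediate values are
    grouped ("shuffled") by key; each group is then consolidated by the
    reducer, mirroring the map and reduce steps described in the text.
    """
    intermediate = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            intermediate[key].append(value)  # redistribute by key
    return {key: reducer(key, values) for key, values in intermediate.items()}
```

For example, counting entity mentions across documents: map each document to `(entity, 1)` pairs and reduce each group with a sum.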
  • System 200 includes inference engine 220, which may be executed on one or more nodes of cluster 120.
  • Inference engine 220 may be configured to ingest and process documents into an RDF representation.
  • inference engine 220 includes ingestion tier 222 for receiving and/or retrieving documents (e.g., intelligence reports, emails, etc.), document-based resolution tier 224 for identifying clusters of semantically relevant documents, entity-based resolution tier 226 for inferring additional relationships based on entity relationships, and domain-based resolution tier 228 for inferring additional relationships based on domain-specific knowledge, such as by applying expert systems.
  • System 200 includes a data access layer 230 for storing semantic data, ingested documents, intermediate data (e.g., data produced during inference activities), and/or other types of data.
  • the tiers of inference engine 220 may communicate input and output data via data access layer 230.
  • data access layer 230 may be implemented by storage devices associated with and/or coupled to cluster 120.
  • the storage provided by data access layer 230 may be provided in whole or in part by the individual hard drives or solid-state storage of the computers in the cluster, by a separate storage cluster, by cloud storage, and/or by any special-purpose storage devices, such as tape backup, large-scale magnetic storage, large-scale solid state storage, etc.
  • data access layer 230 may implement a distributed file system, such as a Hadoop Distributed File System (HDFS), which may facilitate fast access to semantic database.
  • the semantic database stored in data access layer 230 may be a managed database, including a query engine for facilitating query of and access to the semantic data.
  • the semantic database may be implemented as files on the distributed file system.
  • FIG. 3 is a block diagram illustrating components of an ingestion tier of an inference engine, such as ingestion tier 222 of inference engine 220, according to some embodiments of the present invention.
  • Ingestion tier 300 may correspond to any combination of software and/or hardware configured to acquire documents for the corpus and ingest those documents into the semantic database.
  • ingestion tier 300 includes a document gathering module 305.
  • Document gathering module 305 may be configured to receive documents in any format, such as natural language or a structured format (e.g., XML).
  • the document gathering module 305 may be configured to actively search for and pull documents from remote sources, such as by crawling the web or searching through an email database, a company file system, a backup storage facility and/or any other type of document repository for ingestible documents.
  • the document gathering module may be configured to periodically scan an email repository for new intelligence reports and to ingest those reports into the semantic database.
  • the document gathering module 305 may be configured to passively receive documents from another component through a programmatic interface.
  • the interface may be invoked by one or more other components to add documents to the semantic database.
  • the document gathering module 305 may expose an interface for ingesting email messages and an email system may be configured to invoke the interface each time an email message is received.
  • Ingestion tier 300 further includes a document cleansing module 310, which may be configured to normalize document content.
  • the cleansing module 310 may strip extra white space, extraneous formatting, and/or other superfluous data from a document being ingested. For example, if document gathering module 305 ingests a document encoded in HTML and another in Word™ format, the document cleansing module 310 may normalize the two documents to use a common encoding without extraneous formatting or other metadata. The particular normalized formatting may depend on the particular implementation.
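A minimal sketch of such a cleansing step, assuming simple tag-stripping and whitespace normalization stand in for the module's format-specific handling:

```python
import re

def cleanse(text):
    """Normalize document content: drop simple HTML-style markup remnants
    and collapse runs of white space into single spaces."""
    text = re.sub(r"<[^>]+>", " ", text)  # strip extraneous markup
    text = re.sub(r"\s+", " ", text)      # collapse extra white space
    return text.strip()
```

A real cleansing module would dispatch on the source encoding (HTML, Word, etc.) rather than rely on one regular expression.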
  • Ingestion tier 300 also includes a corpus pattern analyzer (CPA 315) for extracting entities and relationships from ingested documents.
  • the CPA may employ complex natural language processing techniques to identify entities described in each document and to determine semantic relationships between those entities.
  • the extracted entities may correspond to any real-world entities, such as people, places, companies, organizations, and the like.
  • the extracted relationships may correspond to any semantic relationship between two or more of the entities. For example, suppose the ingested document is a memorandum reporting that Steve, who is a banker at UBC, was seen meeting with Terry at the Blue Parrot Inn.
  • the CPA may extract the entities “Steve,” “UBC,” “Terry,” and “Blue Parrot Inn” from the memorandum.
  • the CPA may then identify relationships between the entities.
  • the CPA may create a unidirectional "works at" relationship between Steve and UBC, a bidirectional "met with" relationship between Steve and Terry, and respective unidirectional "met at" relationships between Steve and the Blue Parrot Inn and between Terry and the Blue Parrot Inn.
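Under one assumed representation, the relationships extracted from the memorandum example might be recorded as (subject, predicate, object) triples, with a bidirectional relationship stored once per direction (predicate names are invented for illustration, not the patent's vocabulary):

```python
# Triples the corpus pattern analyzer might emit for the example memorandum.
extracted = [
    ("Steve", "works_at", "UBC"),             # unidirectional
    ("Steve", "met_with", "Terry"),           # bidirectional: one triple
    ("Terry", "met_with", "Steve"),           #   per direction
    ("Steve", "met_at", "Blue Parrot Inn"),   # unidirectional
    ("Terry", "met_at", "Blue Parrot Inn"),   # unidirectional
]
```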
  • CPA 315 may be configured to create a structured representation of the extracted entities and relations.
  • the structured representation may include indications of the ingested documents, particular sentences within the ingested documents, metadata for the documents, and/or the extracted entities and relationships.
  • the structured representation may be stored in the data access layer (e.g., data access layer 230) and thus passed to the document-based resolution tier for further processing.
  • FIG. 4 is a block diagram illustrating components of a document-based resolution tier ("document tier") of an inference engine, such as document-based resolution tier 224 of inference engine 220, according to some embodiments of the present invention.
  • Document tier 400 may be implemented by any combination of software and/or hardware configured to cluster ingested documents into semantically related groups.
  • document tier 400 includes format mapping services (FMS) 405.
  • FMS 405 may be configured to convert the structured representation output by the ingestion tier into RDF data and to normalize that data for analysis by the semantic inference and reasoning engine.
  • the FMS may read the structured representation output by the ingestion tier and convert it to RDF.
  • the FMS may then normalize the RDF by repackaging it for MapReduce (e.g., by creating a sequence file of n-tuples) and optimizing the representation by disambiguating key values.
  • Document tier 400 includes statistics module 410, which may be configured to gather statistics about the ingested documents and thereby make meta-inferences.
  • statistics module 410 may be configured to count the number of entities and/or relationships defined in each ingested document in order to determine their relative importance to the corpus.
  • a document concerning many entities and relationships may be more important to the corpus than one that concerns very few. Accordingly, more important documents may warrant extra processing, grouping into multiple groups, and/or other unique treatment.
  • Document tier 400 also includes syntactic analyzer 415 and semantic analyzer 420.
  • Analyzers 415 and 420 may be configured to determine semantically-related clusters of documents based on the entity/relationship data previously identified. Documents that appear to be highly semantically interrelated may be grouped together as a single cluster. For example, if the ingested documents include emails and other documents concerning three different events planned by a wedding planning company, the syntactic and semantic analyzers 415 and 420 may group the documents (based on their entities and relationships) into three groups: one for each event.
  • the semantic analyzer 420 and syntactic analyzer 415 may be executed sequentially or in parallel.
  • Syntactic analyzer 415 may be configured to identify related documents based on the particular text of the documents. For example, if syntactic analyzer 415 discovers that a group of documents contain some number of sentences or phrases in common, the analyzer may conclude that the documents are syntactically related to one another, and therefore, likely semantically related. By further analyzing metadata (e.g., date of document creation), the syntactic analyzer may identify the nature of particular relationships between the documents (e.g., a sentence was copied from an earlier document to a subsequently created document).
  • Semantic analyzer 420 may be configured to identify related documents based on the particular entities and semantic relationships represented in the document. For example, if semantic analyzer 420 discovers that the entities and/or relationships mentioned in a group of documents overlap significantly, the semantic analyzer 420 may conclude that those documents are semantically related. In various embodiments, the semantic analyzer may employ various data mining and/or machine learning techniques, such as named entity recognition and/or linguistic extraction.
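One simple way to realize the entity-overlap measure described above is Jaccard similarity over per-document entity sets, fed into a greedy grouping pass. This is a sketch under assumed names, not the patented clustering method:

```python
def semantic_overlap(entities_a, entities_b):
    """Jaccard similarity between two documents' entity sets -- one way to
    score how significantly their mentioned entities overlap."""
    a, b = set(entities_a), set(entities_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

def cluster(docs, threshold=0.5):
    """Greedy single-pass clustering: assign each document to the first
    cluster whose seed document it sufficiently overlaps, else start a
    new cluster.  docs maps document name -> set of entities."""
    clusters = []
    for name, entities in docs.items():
        for members in clusters:
            seed = members[0]
            if semantic_overlap(entities, docs[seed]) >= threshold:
                members.append(name)
                break
        else:
            clusters.append([name])
    return clusters
```

The data mining and machine learning techniques mentioned in the text would replace this toy similarity score in practice.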
  • Document tier 400 also includes high-level analytics module 425, which may be configured to analyze each cluster of documents to produce cluster-level metadata.
  • Analytics module 425 may analyze a cluster to determine various metadata, such as the number of entities and/or relationships within the cluster, number of documents within the cluster, level of entity interconnectedness within a cluster, the most important entities/relationships within the cluster, and/or various other cluster-level metadata.
  • the cluster-level metadata may be used for analysis by subsequent tiers and/or for query and visualization of query results.
  • FIG. 5 is a block diagram illustrating components of an entity-based resolution tier ("entity-tier" 500) of an inference engine, such as entity-based resolution tier 226 of inference engine 220, according to some embodiments of the present invention.
  • Entity tier 500 may be implemented by any combination of software and/or hardware configured to infer new semantic relationships from existing semantic relationships based on domain-independent logic.
  • Entity tier 500 may be applied to each cluster of documents separately. By limiting the inference activity to a single group of semantically related documents at a time, the technique enables the system to manage data size and framing problems described above.
  • entity tier 500 may comprise an ecosystem of different, domain-independent inference components, each configured to infer new semantic relationships between entities in a cluster based on the existing semantic relationships.
  • the ecosystem may comprise various inference engines known in the art, such as rules engines, description logic engines, and/or first-order logic (FO-logic) engines.
  • entity tier 500 includes a rules engine 505, which applies a rules based system described in rule store 510 (e.g., a rules database).
  • entity tier 500 further includes a description logic engine 515, which is based on T-Box reasoning 520 and A-Box reasoning 525.
  • Entity tier 500 further includes an FO-logic engine 530 based on ontology 535.
  • entity tier 500 may include fewer and/or additional types of domain-independent inference algorithms, such as forward chaining techniques, backward chaining techniques, Bayesian reasoning, and/or others.
  • the entity tier 500 may employ the various inference engines (e.g., 505, 515, 530), each according to respective tuning parameters. Such parameters may be set in one or more configuration files, which may be read by the system at runtime.
  • entity tier 500 may employ the inference engines in any order (i.e., according to any static or dynamic schedule), including sequentially, in parallel, and/or iteratively.
  • a "static" schedule may refer to a schedule in which the individual inference engines are applied in a pre-defined order.
  • a “dynamic” schedule may refer to a schedule where the decision of which inference engine to apply next is made based on runtime conditions, such as the results of one or more previous inference engine executions.
  • FIG. 6 is a block diagram illustrating components of a domain-based resolution tier ("domain tier" 600) of an inference engine, such as domain-based resolution tier 228 of inference engine 220, according to some embodiments of the present invention.
  • Domain tier 600 may be implemented by any combination of software configured to infer new semantic relationships from existing semantic relationships based on domain-dependent logic.
  • domain tier 600 may be applied to each cluster of documents separately. Accordingly, by limiting the inference activity to a single group of semantically related documents at a time, the technique enables the system to manage data size and framing problems described above.
  • domain tier 600 may comprise an ecosystem of different, domain-specific inference components, each configured to infer new semantic relationships between entities in a cluster based on the existing semantic relationships.
  • the ecosystem of reasoners in domain tier 600 may comprise various reasoners known in the art, such as domain-specific logic engines (e.g., expert systems) and/or graph-based reasoning engines.
  • domain tier 600 includes domain-specific logic engine 605, which is configured to infer new relationships based on heuristics in heuristic store 610 (e.g., database, configuration file(s), etc.).
  • a domain-specific logic engine may correspond to an "expert system" configured to deduce new relationships based on domain-specific rules (e.g., if Steve is an employee of company C and Steve has been spotted entering building B every weekday at 8am and leaving at 5pm, then company C has an office in building B).
  • the particular heuristics in heuristic store 610 may be domain-specific and determined by domain experts.
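The building/office heuristic from the example above might be encoded roughly as follows. The names, data shapes, and the sighting threshold are illustrative assumptions:

```python
# Sketch of one domain-specific heuristic: if an employee of a company is
# routinely sighted at a building, infer the company has an office there.
# The threshold and all names are illustrative assumptions.

WEEKDAY_THRESHOLD = 5  # sightings needed before the rule fires (assumed)

def infer_offices(employments, sightings):
    """employments: {person: company}; sightings: [(person, building)]."""
    counts = {}
    for person, building in sightings:
        counts[(person, building)] = counts.get((person, building), 0) + 1
    offices = set()
    for (person, building), n in counts.items():
        if n >= WEEKDAY_THRESHOLD and person in employments:
            offices.add((employments[person], "has_office_in", building))
    return offices

print(infer_offices({"Steve": "CompanyC"}, [("Steve", "BuildingB")] * 5))
```

In the described system, such heuristics would live in heuristic store 610 and be authored by domain experts rather than hard-coded.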
  • Domain tier 600 also includes graph-based reasoning engine 615, which may be configured to infer new relationships based on probabilistic graph matching techniques, such as sub-graph isomorphism matching to determine if two differently organized graphs are indeed referring to the same real world entity.
  • the domain tier 600 may apply its inference engines according to different input parameters and in any order, including sequentially, in parallel, and/or iteratively.
  • Such an order may be static (e.g., according to a predefined script) and/or dynamic (iterative based on the results of previous inferences).
  • a dynamic schedule may choose which one or more inference engines (if any) to execute next based on the results of previous inference engine runs (e.g., do not rerun an inference engine if no new relationships have been inferred since the previous time the engine was run).
  • the inference activities of entity tier 500 and domain tier 600 may be executed in any order (e.g., parallel, sequential, iterative) and according to any static or dynamic schedule.
  • Runtime parameters and the static or dynamic schedule may be set by the system administrator using one or more system configuration files.
  • FIG. 7 is a flow diagram illustrating a method for analyzing a corpus of documents using a semantic inference and reasoning engine, according to some embodiments of the present invention.
  • method 700 may be executed by the hardware of system 100 (FIG. 1), some of which may be executing the logical components of system 200 (FIG. 2).
  • method 700 first receives documents, as in 705.
  • the receiving of 705 may be performed by active retrieval (e.g., crawling a document repository) and/or by passive receiving (e.g., receiving email reports).
  • Method 700 next comprises extracting entities and relationships from the received documents, as in 710.
  • the ingestion tier may receive the documents and extract entities and relationships into a structured representation using CPA.
  • Method 700 next comprises creating a semantic representation (e.g., RDF) of the ingested documents based on the extracted entities and relationships, as in 715.
  • the document tier may create the RDF and ensure that it is normalized for use with the execution framework (e.g., MapReduce).
  • the documents are clustered into semantically-related document clusters based on the extracted entities and relationships.
  • the clustering step of 720 may be executed by the document tier (e.g., 400), as described above.
  • the inference step of 730 may comprise applying entity-based inference algorithms (as in 732) and/or domain-based inference algorithms (as in 734) to infer new relationships.
  • entity-based and domain-based algorithms may be executed by entity tier 500 and/or by domain tier 600 according to any execution parameters and/or in any order (e.g., according to a static and/or dynamic schedule).
  • the parameters and/or schedules may be specified by the system administrator in configuration files.
  • an administrator may use configuration files to specify which inference engines to use, the static and/or dynamic workflow for those engines, and the parameters for those engines.
  • the configuration parameters for each inference engine may be provided in a manner corresponding to a static schedule.
  • the parameters for the inference engines in a static schedule may be provided as a vector, where the i-th element in the vector corresponds to a set of parameters for the i-th inference engine in the static schedule.
  • Components scheduled to be executed in parallel may be sorted by secondary criteria, such as lexicographically by engine name.
  • a pre-tested schedule and set of configuration parameters may be provided.
  • users may modify the configuration parameters and/or schedule to suit particular data sets or deployments.
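The schedule and parameter vector described above might be expressed in a configuration file along the following lines. The JSON layout, engine names, and parameter keys are illustrative assumptions:

```python
# Sketch of a static schedule with a per-engine parameter vector, as
# described above: the i-th element of "parameters" configures the i-th
# engine in the schedule. Keys and file contents are illustrative.
import json

config_text = """
{
  "schedule": "static",
  "engines": ["rules", "description_logic", "fo_logic"],
  "parameters": [
    {"rule_store": "rules.db"},
    {"tbox": "schema.owl", "abox": "instances.owl"},
    {"ontology": "domain.owl"}
  ]
}
"""

config = json.loads(config_text)
for engine, params in zip(config["engines"], config["parameters"]):
    print(engine, params)
```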
  • the new relationships inferred in 730 may be added to the RDF representation created in 715.
  • the entity relationships indicated in the RDF database may become richer each time an inference algorithm is applied in 730.
  • the method may comprise iteratively executing the inference algorithms of 730 according to a static and/or dynamic schedule, as indicated by the feedback loop from 740 to 730.
  • the RDF database is ready to be queried.
  • the ingestion and inference steps may be executed regularly as a background process.
  • documents may be regularly ingested and incorporated into the RDF database.
  • the execution may be structured as a MapReduce execution and may be controlled by the MapReduce scheduling software, such as scheduler 201 of FIG. 2.
  • the system may be configured to optimize the RDF database for query, such as by creating various indices around entities, relationships, documents, etc.
  • the system receives a data request.
  • the data request may come from a client (e.g., clients 105 of FIG. 1) and be received by a web server interface, such as by server 205 of FIG. 2.
  • the data request may come from one of the servers (e.g., 205) in response to a client request and be received by a query engine coupled with the semantic database.
  • a server 205 may be configured to respond to client requests for data by formulating a query in a structured query language (e.g., SPARQL, a structured query language for RDF).
  • different semantic representations and query languages may be used.
  • this aspect of the invention may be implemented by a NoSQL query and database.
  • the data request of 745 may specify one or more entities, relationships, documents, and/or any other items in the RDF.
  • the request may be for all documents relevant to a given person.
  • the request of 745 may be for all entities with a given relationship (e.g., "resides in") to another particular entity (e.g., "Chicago").
  • the system queries the semantic database (i.e., semantic representation) for semantically relevant data, as in 750. Because the semantic database includes both semantic relationships that were indicated by the ingested document and those that were inferred from those documents, the query may be satisfied by both indicated and/or inferred relationships.
  • results of the query are returned.
  • the returned results may depend on the document clustering performed in 720.
  • results may be visualized and/or otherwise presented according to the document clusters. Examples of such visualizations are shown in FIG. 12.
  • FIG. 8 is a flow diagram illustrating a method for ingesting documents into the semantic database, according to some embodiments of the present invention.
  • Method 800 of FIG. 8 may be executed by an ingestion tier, such as 222 or 300.
  • Method 800 begins in 805 by receiving documents to be ingested.
  • the receiving may be performed as an active retrieving step (e.g., crawling a document repository) or a passive step (e.g., receiving a document via an invocation of an exposed ingest interface).
  • the received documents may be in natural language, text, structured language, and/or in any other format.
  • the documents are cleaned.
  • the cleaning may involve stripping formatting, stripping extraneous characters and/or spaces, converting one encoding to another, etc.
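The cleaning step might look roughly like the following; the exact operations are deployment-specific, and this sketch only shows the general shape (strip markup, normalize the encoding, collapse whitespace):

```python
# Sketch of document cleaning: strip HTML-style formatting, normalize
# the character encoding, and collapse extraneous whitespace.
import re
import unicodedata

def clean_document(raw: str) -> str:
    text = re.sub(r"<[^>]+>", " ", raw)          # strip markup
    text = unicodedata.normalize("NFC", text)    # normalize encoding
    text = re.sub(r"\s+", " ", text).strip()     # collapse whitespace
    return text

print(clean_document("<p>Steve   works in\nChicago.</p>"))
```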
  • entities and relationships are extracted from the documents.
  • the entities and relationships may be extracted by the ingestion tier using CPA techniques and/or other natural language processing.
  • the extracted entities and relationships may include syntactic structures, such as sentences.
  • a structured representation of the documents, entities, relationships, etc. is created.
  • the structured representation may be annotated with various attributes and/or metadata.
  • a document may be annotated with various document metadata, such as word counts, date of creation, file name, author, and so forth.
  • the structured representation is stored in the data access layer.
  • a result of the ingestion method 800 is a structured representation of the ingested documents, including content and metadata.
  • the ingestion method may ingest structured data (e.g., in RDF) directly, which may obviate many of the steps described in FIG. 8 for such documents.
  • a document ingested in RDF may be directly passed to the data access layer for processing by subsequent tiers.
  • Documents ingested directly in RDF may include schemas and/or ontologies that may also be ingested to enable more in-depth processing by subsequent analysis tiers.
  • FIG. 9 is a flow diagram illustrating a method for clustering documents using document-based resolution, according to some embodiments of the present invention.
  • Method 900 of FIG. 9 may be executed by a document-based resolution tier, such as 224 and 400.
  • Method 900 begins in 905 where the document tier accesses the data access layer to read the structured representation created by the ingestion tier.
  • the document tier converts the structured representation to RDF.
  • the system may use a semantic language other than RDF.
  • the RDF is normalized for processing. Normalization 920 may entail manipulating the RDF into a format that can be used by the processing framework (e.g., MapReduce) to identify clusters and/or infer new semantic relationships. For example, in the illustrated embodiment, the RDF is normalized for execution in a MapReduce framework.
  • the method of normalizing may include steps such as creating n-tuples representing the data (as in 922), disambiguating entities by asserting equivalence relationships between entities in the RDF such that two textual names for the same entity are consolidated into one (as in 924), filtering the entities to remove sentences that are known to be non-indicative of semantic relationships (e.g., headers, footers, boilerplate language, other jargon, etc.) (as in 926), and packaging the RDF into a sequence file (as in 928), which is a format that can be input into a MapReduce program.
  • the particular steps of normalizing the RDF for processing may vary when other types of computational frameworks are used.
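The disambiguation and filtering steps of 924 and 926 can be sketched as a single pass over the triples. The alias table and boilerplate markers here are illustrative assumptions:

```python
# Sketch of normalization: consolidate two textual names for the same
# entity (924) and drop triples from non-indicative text (926).
# The alias table and boilerplate markers are illustrative.

ALIASES = {"Bob Smith": "Robert Smith"}   # assumed equivalence assertions
BOILERPLATE = {"Confidential", "Page"}    # assumed non-indicative markers

def normalize(triples):
    out = []
    for s, p, o in triples:
        s = ALIASES.get(s, s)
        o = ALIASES.get(o, o)
        if s in BOILERPLATE or o in BOILERPLATE:
            continue  # filter headers, footers, boilerplate, etc.
        out.append((s, p, o))
    return out

triples = [("Bob Smith", "works_in", "Chicago"),
           ("Confidential", "appears_in", "footer")]
print(normalize(triples))
```

A production pipeline would additionally serialize the result as n-tuples and package it into a sequence file for MapReduce input, as described above.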
  • In step 930, the RDF is analyzed to evaluate relative document importance.
  • the results of step 930 may be used in subsequent steps to direct analysis to particularly important documents.
  • clusters are identified via semantic analysis (e.g., 942) and/or syntactic analysis (e.g., 944).
  • a document tier such as 400 may utilize semantic analyzer 420 to group entities and/or documents based on the semantic relationships in the RDF.
  • the same document tier may utilize syntactic analyzer 415 to group entities and/or documents based on the syntactic (e.g., sentence overlap) relationships between documents in the RDF.
  • the semantic analysis 942 and syntactic analysis 944 may be performed sequentially, in parallel, and/or iteratively.
  • each type of analysis may be implemented by a MapReduce program and executed on a compute cluster, such as cluster 120.
  • the particular workflow used to create the grouping may be parameterized (e.g., using a configuration file).
  • an administrator may be able to tweak the method of identifying clusters for different domains and/or datasets.
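One simple way the grouping of 940 might be realized, offered as an illustrative sketch rather than the specified algorithm, is to connect documents that mention a common entity and take connected components:

```python
# Sketch of entity-based document clustering: documents sharing an
# entity end up in the same cluster (union-find over shared entities).
# The clustering criterion is an illustrative assumption.
from collections import defaultdict

def cluster_documents(doc_entities):
    """doc_entities: {doc_id: set of entity names} -> sorted clusters."""
    entity_docs = defaultdict(set)
    for doc, ents in doc_entities.items():
        for e in ents:
            entity_docs[e].add(doc)
    parent = {d: d for d in doc_entities}
    def find(d):
        while parent[d] != d:
            parent[d] = parent[parent[d]]  # path halving
            d = parent[d]
        return d
    for group in entity_docs.values():
        group = sorted(group)
        for d in group[1:]:
            parent[find(d)] = find(group[0])
    clusters = defaultdict(set)
    for d in doc_entities:
        clusters[find(d)].add(d)
    return sorted(sorted(c) for c in clusters.values())

docs = {"d1": {"Steve", "Chicago"}, "d2": {"Chicago"}, "d3": {"Paris"}}
print(cluster_documents(docs))
```

d1 and d2 share the entity "Chicago" and so fall into one cluster, while d3 stands alone.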
  • each cluster is analyzed to calculate per-cluster analytics.
  • the analytics may identify important entities, relationships, and/or documents.
  • the analytics may provide summaries of the cluster and/or the documents within the cluster. Such summaries may be presented to a user to facilitate speedy understanding of the cluster and its elements.
  • the per-cluster analytics may be calculated in parallel, such as by a MapReduce program.
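As an illustration of one such per-cluster analytic, entities might be ranked by how many documents in the cluster mention them; the ranking criterion here is an assumption, not the specified method:

```python
# Sketch of a per-cluster analytic: rank entities by the number of
# documents in the cluster that mention them, as a proxy for importance.
from collections import Counter

def important_entities(cluster_docs, top_n=2):
    """cluster_docs: {doc_id: set of entities} for one cluster."""
    counts = Counter()
    for ents in cluster_docs.values():
        counts.update(ents)
    return [e for e, _ in counts.most_common(top_n)]

cluster = {"d1": {"Chicago", "Steve"}, "d2": {"Chicago"}, "d3": {"Chicago", "Terry"}}
print(important_entities(cluster))
```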
  • a result of document-based resolution method 900 is a semantic database (e.g., in RDF) where documents have been grouped into semantically relevant clusters.
  • the grouping may comprise converting the ingested documents to analyzable RDF and analyzing the RDF through parallel execution (e.g., using MapReduce programs).
  • FIG. 10 is a flow diagram for inferring new relationships in RDF data, according to some embodiments of the present invention.
  • Method 1000 of FIG. 10 may be executed by an entity-based resolution tier (e.g., 226 or 500) and/or by a domain-based resolution tier (e.g., 228 or 600).
  • the inference process may be implemented through parallel execution, such as by using any number of MapReduce programs executing on a compute cluster, such as 120.
  • The particular workflow of the individual inference engines, as well as the engines themselves, may be calibrated by a system administrator in one or more configuration files.
  • method 1000 begins by receiving the RDF of document clusters, as in 1005.
  • the RDF may be retrieved from the data access layer, where it was stored by the document tier.
  • the system decides whether it should apply another inference algorithm in an attempt to infer new relationships in the RDF.
  • the decision may be informed by a static or dynamic schedule, which may be specified in the one or more configuration files.
  • a static schedule may dictate a particular workflow of inference engines and/or an order in which they are applied.
  • a dynamic schedule may provide parameters for deciding which inference engine to apply next and/or the runtime parameters of those inference engines. Such decisions may be based on which inference engines were executed previously and what inferences those previous executions added. For example, if no new inferences have been added since the previous execution of a given inference engine, then it may not be productive to re-execute the same inference engine with the same parameters.
  • the choice of which inference engine(s) to apply next and/or what parameters to use for each engine may be a product of a static or dynamic schedule defined by the system configuration.
  • administrators may fine tune the system for particular domains.
  • the system may decide to execute multiple inference engines in parallel.
  • the particular inference engines to be applied may be domain-independent (e.g., entity-based) and/or domain-dependent.
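A dynamic schedule of the kind described above can be sketched as a work queue: an engine that produced no new relationships stays off the queue until some other engine adds new facts. The two stub engines and their rule are illustrative assumptions:

```python
# Sketch of a dynamic inference schedule: re-queue engines only when new
# facts appear, so an unproductive engine is skipped until it might fire
# again. Engines and entity names are illustrative stubs.

def run_schedule(engines, initial):
    """engines: callables mapping a fact set to candidate new triples."""
    facts = set(initial)
    pending = list(engines)
    while pending:
        engine = pending.pop(0)
        new = engine(facts) - facts
        if new:
            facts |= new
            # New facts may make previously exhausted engines productive
            # again, so re-queue every engine (the productive one last).
            pending = [e for e in engines if e is not engine] + [engine]
    return facts

def symmetric_works_with(facts):
    return {(o, "works_with", s) for (s, p, o) in facts if p == "works_with"}

def colleague_city(facts):
    # Assumed rule: if A works with B and B works in C, infer A works in C.
    works_in = {(s, o) for (s, p, o) in facts if p == "works_in"}
    works_with = {(s, o) for (s, p, o) in facts if p == "works_with"}
    return {(a, "works_in", city) for (a, b) in works_with
            for (person, city) in works_in if person == b}

facts = run_schedule(
    [colleague_city, symmetric_works_with],
    {("Ann", "works_with", "Bea"), ("Ann", "works_in", "Paris")},
)
```

Note that colleague_city is unproductive on its first run and only fires after symmetric_works_with adds the reversed "works_with" triple, which is exactly the feedback the dynamic schedule exploits.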
  • the inferred relationships are added to the RDF in 1025.
  • the relationship addition step of 1025 is illustrated separately from execution step 1020, it should be understood that, in various embodiments, the relationships may be added to the RDF as part of execution step 1020.
  • Inferred relationships may correspond to those that were not explicit in the ingested documents, but which, nevertheless, could be inferred from the ingested documents. For example, if an ingested document includes the sentence "Steve works in Chicago", the semantic "works in” relationship between the entities "Steve” and “Chicago” is said to be “explicit.” Explicit relationships may be identified and incorporated into the RDF without an inference engine.
  • based on other relationships indicated in the ingested documents, an inference engine (e.g., an expert system) may create an inferred "works in" relationship between the entity "Terry" and the entity "Chicago." Such relationships are said to be "inferred."
  • a result of inference method 1000 is that the RDF database includes some number of inferred relationships in addition to the explicit ones. Queries to the RDF may therefore rely on explicit and inferred relationships to provide richer, more relevant results and deeper analysis than typical keyword searches.
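A toy sketch of that property: a store holding the explicit triple from the example above alongside an inferred one, where a pattern query matches both without distinguishing their origin (the store layout is an illustrative assumption):

```python
# Sketch: explicit and inferred triples live side by side in the store,
# and a query over the relationship matches both. Layout is illustrative.

store = [
    ("Steve", "works_in", "Chicago", "explicit"),
    ("Terry", "works_in", "Chicago", "inferred"),
]

def query(store, predicate, obj):
    """Return all subjects having the given relationship to obj."""
    return sorted(s for (s, p, o, _src) in store if p == predicate and o == obj)

print(query(store, "works_in", "Chicago"))
```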
  • the ingestion, clustering, and inference methods may be executed periodically to maintain the semantic database.
  • the method may be executed on SIRE compute cluster 120 nightly or on some other schedule.
  • a system administrator may initiate processing of the semantic database (i.e., ingestion, clustering, and/or inference) on demand.
  • FIG. 11 is a flow diagram illustrating a method for querying a semantic database, according to some embodiments of the present invention. As indicated by the dotted boxes, different portions of query method 1100 may be executed by different client-side and/or server-side components.
  • query method 1100 begins with the client (e.g., 105) receiving a request for data, as in 1105.
  • the request may be to search for all ingested documents that are relevant to a particular person.
  • the request may be specified by a user using a graphical user interface.
  • the graphical user interface may be provided as part of a web application (e.g., via a browser), as part of a stand-alone application, or through some other means.
  • the client may invoke a semantic search web service, as in 1110.
  • a server receives the web service invocation sent by the client in 1110.
  • the server prepares a query for data in the semantic database that matches the search request, as in 1120.
  • the query may be articulated in a structured query language, such as SPARQL (an RDF query language).
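A sketch of how a server might assemble such a SPARQL query from a client request; the namespace prefix and property names are placeholders, not taken from the specification:

```python
# Sketch of step 1120: translate a client search request (relationship +
# entity) into a SPARQL SELECT over the RDF store. The "ex:" namespace
# and property naming are illustrative assumptions.

def build_query(predicate, obj):
    return f"""
PREFIX ex: <http://example.org/ontology#>
SELECT ?entity
WHERE {{
  ?entity ex:{predicate} ex:{obj} .
}}
"""

query = build_query("residesIn", "Chicago")
print(query)
```

The resulting text would then be handed to the data access layer for execution against the semantic database, per steps 1125–1135.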
  • the query is then sent to a data access layer (e.g., 230) for execution, as in 1125.
  • the data access layer receives the query and executes it in 1135.
  • the data access layer then returns the results to the server in 1140.
  • the server may subsequently return the results to the client.
  • query execution in 1135 may be handled by a query execution engine.
  • the results obtained from the semantic database may be further processed at any point in the chain of return, from the data access layer, to the server, to the client. Such processing may be necessary to repackage the results in a format acceptable to the recipient.
  • FIG. 12a and FIG. 12b illustrate two example visualizations that client software may be configured to present to a user in response to a query.
  • FIG. 12a illustrates a visualization of document clusters related to queried entities, according to some embodiments of the present invention.
  • Enterprise view 1200 visualizes entities in various clusters (i.e., cluster inclusion of entities 1210).
  • Cluster inclusions 1210 show that three entities have been queried and that each is related to a respective set of document clusters.
  • Visualization 1210 shows that all three entities are semantically related to at least one cluster of documents and that two of the entities are both semantically related to a set of three document clusters.
  • Semantic relations are indicated by edges between nodes of the graph. Edges may be annotated to denote the character, strength, or other characteristic of the relationship.
  • FIG. 12b illustrates a visualization of a single document cluster, according to some embodiments of the present invention.
  • Cluster view 1250 visualizes document cluster 1255, including the individual documents in the cluster and their interrelationships.
  • edges may be used to represent relationships and may be annotated to denote the nature (e.g., strength) of those relationships.
  • FIG. 13 illustrates a possible implementation for at least some components of a computer, according to some embodiments of the present invention.
  • the computer 1300 may correspond to any computing component illustrated in FIG. 1, including clients 105, servers 115, and/or nodes of cluster 120.
  • computer 1300 may include a data processing system 1335.
  • data processing system 1335 may include any number of computer processors, any number of which may include one or more processing cores.
  • any of the processing cores may be physical or logical. For example, a single core may be used to implement multiple logical cores using symmetric multithreading.
  • Computer 1300 also includes network interface 1340 for receiving messages (e.g., messages transmitted from clients 105) and transmitting messages over network 110, and a data storage system 1305, which may include one or more computer-readable mediums.
  • the computer-readable mediums may include any number of persistent storage devices (e.g., magnetic disk drives, solid state storage, etc.) and/or transient memory devices (e.g., Random Access Memory).
  • data processing system 1335 includes a microprocessor
  • a semantic inference and reasoning computer program product may be provided.
  • Such a computer program product may include computer readable program code 1330, which implements a computer program, stored on a computer readable medium 1320.
  • Computer readable medium 1320 may include magnetic media (e.g., a hard disk), optical media (e.g., a DVD), memory devices (e.g., random access memory), etc.
  • computer readable program code 1330 is configured such that, when executed by data processing system 1335, code 1330 causes the processing system to perform steps described above.
  • computer 1300 may be configured to perform steps described above without the need for code 1330.
  • data processing system 1335 may consist merely of specialized hardware, such as one or more application-specific integrated circuits (ASICs).
  • the features of the present invention described above may be implemented in hardware and/or software.
  • the functional tiers described above may be implemented by data processing system 1335 executing computer instructions 1330, by data processing system 1335 operating independent of any computer instructions 1330, or by any suitable combination of hardware and/or software.
  • the semantic inference and reasoning engine (SIRE) described above offers many novel advantages over traditional systems. SIRE is able to manage the problems of scale in the reasoning process by applying a novel, multi-tier approach.
  • the multitier approach may include clustering semantically related data artifacts and applying a network of inference engines to each cluster according to a static or dynamic schedule that manages data size, maximizes inferences, and minimizes error.
  • the disclosed system is therefore uniquely positioned to address the search result problems that arise in cloud and/or Internet-scale data spaces, such as cloud-scale reasoning, semantic disambiguation of entities, search engine enhancement, semantic alerts based on incoming data, multi-granular semantic pattern detection, trend analysis, intelligence data mining, medical diagnosis, industrial competitive intelligence analysis, social network analysis, and other uses.
  • semantic representation may refer to any format that indicates entities and relationships. Although many examples are described herein using RDF, other semantic representations are possible in different embodiments.
  • the term "data artifact” may be used to refer to any unit of content that can be included in and analyzed as part of the corpus, regardless of its particular form.
  • the data artifact may be a natural language document (e.g., email, report) or itself a semantic representation.
  • MapReduce may be used to refer to a family of software frameworks for supporting distributed computing on large datasets according to the MapReduce pattern (i.e., distributing and performing work in parallel according to the "map” and “reduce” functions known in functional programming).
  • MapReduce may refer to any software framework for implementing a MapReduce system, such as the open-source Hadoop package. Examples of implementing semantic reasoning using a MapReduce framework can be found in co-pending U.S. application 13/097,662, which is incorporated in its entirety herein by reference.
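The map/shuffle/reduce pattern referenced throughout can be illustrated in miniature (pure Python, no framework) by counting entity mentions across documents; a real deployment would distribute the same phases across a cluster via a framework such as Hadoop:

```python
# Miniature MapReduce: map emits (key, 1) pairs, shuffle groups by key,
# reduce sums each group — here counting entity mentions per document set.
from collections import defaultdict

def map_phase(doc):
    return [(entity, 1) for entity in doc]

def shuffle(mapped):
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

docs = [["Steve", "Chicago"], ["Chicago"], ["Terry", "Chicago"]]
mapped = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(mapped))
print(counts)  # "Chicago" appears in all three documents
```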

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention concerns a method and system for analyzing a corpus of data artifacts. The method comprises obtaining, by a computer, a semantic representation of the data artifacts, the semantic representation indicating (1) entities identified in the data artifacts, and (2) the semantic relationships among the entities as indicated by the data artifacts. The method further comprises grouping the data artifacts into clusters of semantically related data artifacts based on the semantic representation, and inferring additional semantic relationships between pairs of the entities. The inferring comprises applying, on a cluster-by-cluster basis, a multi-tier network of inference engines to a portion of the semantic representation corresponding to the cluster, the multi-tier network of inference engines comprising a domain-independent inference tier and a domain-specific inference tier.
PCT/US2012/029395 2012-03-16 2012-03-16 Systèmes et procédés d'inférence et de raisonnement sémantiques WO2013137903A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2012/029395 WO2013137903A1 (fr) 2012-03-16 2012-03-16 Systèmes et procédés d'inférence et de raisonnement sémantiques


Publications (1)

Publication Number Publication Date
WO2013137903A1 true WO2013137903A1 (fr) 2013-09-19

Family

ID=49161631

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2012/029395 WO2013137903A1 (fr) 2012-03-16 2012-03-16 Systèmes et procédés d'inférence et de raisonnement sémantiques

Country Status (1)

Country Link
WO (1) WO2013137903A1 (fr)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070208719A1 (en) * 2004-03-18 2007-09-06 Bao Tran Systems and methods for analyzing semantic documents over a network
US20090089277A1 (en) * 2007-10-01 2009-04-02 Cheslow Robert D System and method for semantic search
US20110270606A1 (en) * 2010-04-30 2011-11-03 Orbis Technologies, Inc. Systems and methods for semantic search, content correlation and visualization


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10817467B2 (en) 2013-10-31 2020-10-27 Oxford University Innovation Limited Parallel materialisation of a set of logical rules on a logical database
US11409698B2 (en) 2013-10-31 2022-08-09 Oxford University Innovation Limited Parallel materialisation of a set of logical rules on a logical database
EP3567492A1 (fr) * 2018-05-11 2019-11-13 Kabushiki Kaisha Toshiba Procédé de traitement d'informations, support d'enregistrement non transitoire et dispositif de traitement d'informations
CN110471373A (zh) * 2018-05-11 2019-11-19 株式会社东芝 信息处理方法、程序和信息处理装置
CN110471373B (zh) * 2018-05-11 2023-03-07 株式会社东芝 信息处理方法、程序和信息处理装置
US11934963B2 (en) 2018-05-11 2024-03-19 Kabushiki Kaisha Toshiba Information processing method, non-transitory storage medium and information processing device
WO2024099069A1 (fr) * 2022-11-09 2024-05-16 Huawei Technologies Co., Ltd. Systèmes, procédés et dispositifs de stockage lisibles par ordinateur non transitoires pour détecter des clones de données dans des ensembles de données tabulaires

Similar Documents

Publication Publication Date Title
US11763175B2 (en) Systems and methods for semantic inference and reasoning
JP7344327B2 (ja) アプリケーションプログラミングインターフェイスのメタデータ駆動型外部インターフェイス生成ためのシステムおよび方法
US11500865B1 (en) Multiple stage filtering for natural language query processing pipelines
WO2013137903A1 (fr) Systèmes et procédés d'inférence et de raisonnement sémantiques
Qiu et al. Efficient Regular Path Query Evaluation with Structural Path Constraints

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12871242

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12871242

Country of ref document: EP

Kind code of ref document: A1