US20070198449A1 - Method and apparatus for safe ontology reasoning - Google Patents
- Publication number
- US20070198449A1 (U.S. application Ser. No. 11/361,293)
- Authority
- US
- United States
- Prior art keywords
- ontology
- elements
- intersection
- sensitive element
- subset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
Abstract
The present invention is a method and apparatus for safe ontology reasoning. In one embodiment, a method for building a safe sub-ontology that includes one or more elements of a given ontology includes designating at least one of the elements as a sensitive element, where a sensitive element is an element not to be revealed. The method then designates a safe sub-ontology such that the safe sub-ontology does not include any elements that, alone or in combination, allow inference of a sensitive element, in accordance with one or more given inference rules. In another embodiment, a method for building a potential sub-ontology includes designating at least one of the elements as a sensitive element and including a maximal number of the elements in the potential sub-ontology, wherein the maximal number includes the greatest number of elements that can be revealed, cumulatively, without allowing inference of a sensitive element, in accordance with one or more given inference rules.
Description
- The invention relates generally to ontology processing, and relates more particularly to ontology security.
- A central issue under consideration by the World Wide Web Consortium is ontology security and privacy. In particular, as ontologies proliferate and automatic reasoners become more powerful, it becomes more difficult to protect sensitive information. That is, as facts can be inferred from other facts, it becomes increasingly likely that information included in an ontology, while not sensitive itself, may nevertheless enable inference of information that is deemed sensitive.
- A competing concern, on the other hand, is the ability to provide an adequate or useful amount of information for ontology processing applications such as querying, navigating and reasoning. This concern is often at odds with the desire to limit or prevent access to information that may contribute to the inference of sensitive information.
- Thus, there is a need for a method and apparatus for safe ontology reasoning.
- The present invention is a method and apparatus for safe ontology reasoning, where the “safety” of an ontology encompasses both privacy concerns and security concerns. In one embodiment, a method for building a safe sub-ontology that includes one or more elements of a given ontology includes designating at least one of the elements as a sensitive element, where a sensitive element is an element not to be revealed. The method then designates a safe sub-ontology such that the safe sub-ontology does not include any elements that, alone or in combination, allow inference of a sensitive element, in accordance with one or more given inference rules. In another embodiment, a method for building a potential sub-ontology includes designating at least one of the elements as a sensitive element and including a maximal number of the elements in the potential sub-ontology, wherein the maximal number includes the greatest number of elements that can be revealed, cumulatively, without allowing inference of a sensitive element, in accordance with one or more given inference rules.
- So that the manner in which the above recited embodiments of the invention are attained and can be understood in detail, a more particular description of the invention, briefly summarized above, may be obtained by reference to the embodiments thereof which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
- FIG. 1 is a flow diagram illustrating one embodiment of a method for testing a subset of an ontology for “safeness”, according to the present invention;
- FIG. 2 is a flow diagram illustrating one embodiment of a method for determining a “best” safe ontology, according to the present invention;
- FIG. 3 is a flow diagram illustrating one embodiment of a method for reducing a multi-matroid problem to a three-matroid problem; and
- FIG. 4 is a high level block diagram of the present ontology testing method that is implemented using a general purpose computing device.
- To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures.
- In one embodiment, the present invention is a method and apparatus for safe ontology reasoning. Within the context of the present invention, the “safety” of an ontology refers to the ontology's ability to address both privacy concerns and security concerns. Embodiments of the present invention preserve the integrity of sensitive information in an ontology framework by verifying the safety of a sub-ontology made available for querying, navigating, reasoning and other ontology processing applications. In particular, the present invention verifies not only that sensitive information is not included in the sub-ontologies, but also that information from which the sensitive information can be inferred is not included in the sub-ontologies. This substantially reduces the likelihood of sensitive information being even inadvertently revealed. Moreover, the present invention maximizes the amount of information that is provided in the safe ontology, so that the ontology can provide as much useful information as possible while still remaining “safe” with respect to the sensitive information.
- Within the context of the present invention, a “safe” or “secure” ontology (or sub-ontology) is defined as one that does not contain any information that may be used to derive sensitive facts, given a collection of inference rules.
- Embodiments of the present invention define an ontology, O, as a tuple {I, R, M} comprising a finite set of concepts, where I is a finite set of individuals, R is a finite set of relationships and M is a finite set of metadata (which may include characteristics of relations, such as symmetry or transitivity, or constraints on relationships, such as restrictions of the number of relationships of a given type that can exist between individuals).
- A relationship, r, in the set R is expressed as a set of triples in the form:
- (subject, property, object)
where “subject” is an individual (e.g., i in the set I), “property” is a specific type of relationship, and “object” is an expression composed of individuals and the logical operators AND, OR and NOT. For example, the relationships (Jim isMemberOf man), (man isEquivalentTo (person AND male)) and (American isSubsetOf person) are all expressed as triples of this form.
- Pieces, m, of metadata in M are also expressed as triples. Specifically, a piece, m, of metadata is expressed as:
- (property, constraint, value)
where “property” corresponds to the specific type of relationship (e.g., the middle member of a relationship triple, such as isMemberOf or isEquivalentTo), “value” is a property or constant, and “constraint” is a member of {<, =, >, inverseOf, subPropertyOf, disjointFrom, is}. For example, the pieces of metadata (isSubsetOf is transitive), (name=1), (spouse<2) and (parentOf inverseOf childOf) are all expressed as triples of this form.
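The triple encodings above map directly onto simple data structures. The following Python sketch (an illustrative representation, not part of the specification) stores relationships and metadata as tuples and extracts the properties that the metadata declares transitive:

```python
# Hypothetical in-memory representation of the ontology components R and M
# described above; the variable names are illustrative, not from the patent.
relationships = {
    ("Jim", "isMemberOf", "man"),
    ("man", "isEquivalentTo", ("person", "AND", "male")),
    ("American", "isSubsetOf", "person"),
}

metadata = {
    ("isSubsetOf", "is", "transitive"),
    ("parentOf", "inverseOf", "childOf"),
    ("spouse", "<", 2),
}

# Collect the properties that the metadata declares transitive; these are
# the properties that give rise to transitivity-based inference rules.
transitive = {prop for (prop, c, v) in metadata if (c, v) == ("is", "transitive")}
print(transitive)  # {'isSubsetOf'}
```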
- Types of metadata give rise to inference rules. For instance, the piece of metadata (ancestorOf is transitive)—i.e., the property “ancestorOf” is transitive—allows one to infer that if (Adam ancestorOf Bob) and (Bob ancestorOf Carl), then (Adam ancestorOf Carl).
- In one embodiment, the present invention extends the definition of an ontology to include restricted relations of the form: FOR_ALL individuals, i, in class C, there exists an individual, j, in class D such that (i property j); and FOR_ALL individuals, i, in class C, if there exists an individual, j, such that (i property j), then j is a member of class D.
- The closure, F(R), of a set of relations, R, is defined as the total set of relations or facts that can be inferred from the given set of relations, R, and the inference rules implied by the set of metadata, M. If the set of metadata, M, is relatively simple, the closure, F(R), of the set of relations, R, is also simple to determine. For example, if the set of metadata, M, only contains: (isSubsetOf is transitive), (isEquivalentTo is transitive) and (isEquivalentTo is symmetric), then, given a set of relations, R of the form: (x isSubsetOf y), (w isEquivalentTo z) and (i isA C), the closure, F(R), of the set of relations, R, can be computed by considering a graph, G, with edge set R (i.e., the sets of triples in the set of relations, R, define the set of edges of the graph, G, and the endpoints of the edges define the set of nodes). That is, where the only available inference mechanism is transitivity, facts may be inferred from other individual facts. In this case, the only inferences that can be made are membership inferences (i.e., one can infer whether a set is equivalent to or is a subset of another set, or whether an individual is a member of a set). The problem of determining the closure, F(R), of the set of relations, R, thus involves simply identifying the “reachability” set of each node, n, in the graph, G (i.e., determining for which set of nodes, s, a path exists from n to s). This can be easily computed, for example, by using breadth first search.
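The closure computation described above can be sketched as follows. This minimal Python version (illustrative only) handles transitivity-based inference one property type at a time, using breadth-first search over the graph G whose edges are the relationship triples:

```python
from collections import defaultdict, deque

def closure(relations, transitive_props):
    """Compute F(R) when the only inference mechanism is transitivity.

    Each transitive property defines its own graph; the reachability set of
    every node is found by breadth-first search, as described in the text.
    Paths are kept within a single property type.
    """
    succ = defaultdict(lambda: defaultdict(set))
    for s, p, o in relations:
        succ[p][s].add(o)

    inferred = set(relations)
    for p in transitive_props:
        for start in list(succ[p]):
            seen, queue = set(), deque(succ[p][start])
            while queue:
                node = queue.popleft()
                if node in seen:
                    continue
                seen.add(node)
                inferred.add((start, p, node))   # start reaches node
                queue.extend(succ[p].get(node, ()))
    return inferred

R = {("A", "isSubsetOf", "B"), ("B", "isSubsetOf", "C")}
print(("A", "isSubsetOf", "C") in closure(R, {"isSubsetOf"}))  # True
```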
- In a more general case, other transitive relations may exist, such as “isPartOf”. For example: (USA isPartOf NorthAmerica), (State Pennsylvania isPartOf USA) or (City Philadelphia isPartOf State Pennsylvania). Membership, in this case, can still be determined using a simple search algorithm; however, the search must be sensitive to the fact that paths must comprise properties of the same type. This can also be extended to the case where different types of properties interact to form paths by declaring all such groups of properties as sub-properties of a single transitive property.
FIG. 1 is a flow diagram illustrating one embodiment of a method 100 for testing a subset of an ontology, O (where O={I, R, M}), for “safeness”, according to the present invention. - The
method 100 is initialized at step 102 and proceeds to step 104, where the method 100 defines a first subset, Rs, of the set of relationships, R, in the given ontology, O. Specifically, the first subset, Rs, contains all sensitive relationships (facts) in the set of relationships R. For example, the first subset, Rs, may contain the triple: T1=(A isSubsetOf E). In one embodiment, the first subset, Rs, may be defined for the method 100 by an external source (e.g., a human operator). - In
step 106, the method 100 defines a second subset, Q, of the set of relationships, R. The second subset, Q, contains a test subset of relationships from the set of relationships, R. That is, the second subset, Q, is to be tested to determine its safety. For example, the second subset, Q, may contain the triples: T2=(A isEquivalentTo (B AND C)), T3=(A isSubsetOf D), and T4=(E isEquivalentTo (B AND (C AND D))). In one embodiment, the second subset, Q, may be defined for the method 100 by an external source (e.g., a human operator). - In
step 108, the method 100 builds the closure, F(Q), of the second subset, Q, e.g., as described above. In step 110, the method 100 determines whether the closure, F(Q), of the second subset, Q, intersects with the first subset, Rs. In general, given one or more sets of relations, Msi k, for each rsi in the first subset, Rs, where rsi can be inferred from Msi k, but cannot be inferred from any subset of the set of relations Msi k, a sub-ontology containing all of the relationships in Msi k is not considered safe with respect to the first subset, Rs. However, a safe sub-ontology with respect to the first subset, Rs, may be defined as any set of relations that does not contain all of the members of Msi k. In one embodiment, Msi k is provided or derived in accordance with an ontology defined by Horn clauses. - Accordingly, if the
method 100 determines in step 110 that the closure, F(Q), of the second subset, Q, intersects with the first subset, Rs, the method 100 proceeds to step 112 and concludes that the second subset, Q, is not safe (i.e., that information contained in the first subset, Rs, can be inferred from the information contained in the second subset, Q). Alternatively, if the method 100 determines in step 110 that the closure, F(Q), of the second subset, Q, does not intersect with the first subset, Rs, the method 100 proceeds to step 114 and concludes that the second subset, Q, is safe (i.e., that information contained in the first subset, Rs, cannot be inferred from the information contained in the second subset, Q). - Thus, for example, based on the triples T1 through T4 discussed above, the second subset, Q, would not be considered safe with respect to the first subset, Rs, because the sensitive triple T1 can be inferred from the sub-ontology (T2, T3, T4). However, if the second subset, Q, contained only (T2, T3), only (T2, T4) or only (T3, T4), then the second subset, Q, would be considered safe with respect to the first subset, Rs. Once a conclusion has been reached as to the safety of the second subset, Q, the
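Steps 108 through 114 (building the closure F(Q) and testing whether it intersects the sensitive set Rs) can be sketched as follows. This hypothetical Python version infers only by transitivity of a single property, not the full equivalence-and-conjunction reasoning of the example above; all names are illustrative:

```python
def transitive_closure(relations, prop="isSubsetOf"):
    # Minimal fixpoint closure over a single transitive property.
    facts = set(relations)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(facts):
            for (c, p2, d) in list(facts):
                if p1 == p2 == prop and b == c and (a, prop, d) not in facts:
                    facts.add((a, prop, d))
                    changed = True
    return facts

def is_safe(Q, Rs):
    # Steps 108-110 of the method 100: build F(Q), then test whether the
    # closure intersects the sensitive subset Rs.
    return not (transitive_closure(Q) & set(Rs))

Rs = {("A", "isSubsetOf", "C")}                   # sensitive fact
Q = {("A", "isSubsetOf", "B"), ("B", "isSubsetOf", "C")}
print(is_safe(Q, Rs))                             # False: Rs is inferable from Q
print(is_safe({("A", "isSubsetOf", "B")}, Rs))    # True
```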
method 100 terminates in step 116. - The present invention therefore preserves the integrity of sensitive information in an ontology framework by verifying the safety of a sub-ontology made available for querying, navigating, reasoning and other ontology processing applications. That is, the present invention verifies not only that sensitive information is not included in the sub-ontologies, but also that information from which the sensitive information can be inferred is not included in the sub-ontologies. This substantially reduces the likelihood of sensitive information being even inadvertently revealed.
FIG. 2 is a flow diagram illustrating one embodiment of a method 200 for determining a “best” safe ontology, e.g., for use in querying, navigating, reasoning and other ontology processing applications, according to the present invention. In particular, the method 200 optimizes the safe ontology, with respect to some function of the safe ontology. In the instance of the method 200, the function is a counting function. That is, the method 200 builds a safe ontology that retains as many relationships as possible (without revealing or allowing the inference of any information deemed sensitive). - The
method 200 is initialized at step 202 and proceeds to step 204, where the method 200 defines a first subset, Rs, of the set of relationships, R, in the given ontology, O. Specifically, the first subset, Rs, contains all sensitive relationships (facts) in the set of relationships R. In one embodiment, the first subset, Rs, is defined for the method 200 by an external source (e.g., a human operator). - In
step 206, the method 200 defines, for each relationship, rsi, in the first subset, Rs, the minimal set of relationships, Msi k, required to infer the given relationship, rsi. In one embodiment, the minimal set of relationships, Msi k, is defined for the method 200 by an external source (e.g., a human operator or another application). The goal of the method 200 thus becomes to find a maximum cardinality set of relationships, R*, such that R* does not include all of the relationships in any of the minimal sets of relationships, Msi k. - Thus, in
step 208, the method 200 associates a matroid with each of the minimal sets of relationships, Msi k. A matroid M(E, F) is defined by a set of elements, E, and a family, F, of independent sets, F′, of the elements, E, where the independent sets, F′, have the following properties: (1) every subset of an independent set, F′, is also independent; (2) if there are two independent sets Fk′ and Fk+1′, of cardinalities k and k+1, respectively, then there exists an element, ei, in the set of elements, E, that is a member of Fk+1′, but not a member of Fk′, and such that Fk′ ∪ {ei} is an independent set. In one embodiment, the set of elements, E, is finite. The set of elements, E, may contain concepts, relationships, and/or individuals in the given ontology, O. Having established the matroids, the goal is to find a single set of relationships that are simultaneously independent in all of the minimal sets of relationships, Msi k (i.e., an independent set in an intersection of the matroids defined in step 208). - In
step 210, the method 200 defines the intersection of the matroids. Formally, given k matroids (i.e., M1, M2, . . . , Mk), all defined over the same set of elements, E, the intersection of the matroids is defined as MI=(E, FI), where a subset, F, of the set of elements, E, is a member of FI if and only if the subset, F, is independent in all of the individual matroids. - In
step 212, the method 200 reduces the intersection problem to a fewer-matroid problem. In one embodiment, the intersection problem is reduced to a three-matroid problem (i.e., first matroid M1**, second matroid M2** and third matroid M3**). One embodiment of a method for reducing a multi-matroid problem to a three-matroid problem is described with reference to FIG. 3. For the purposes of simplicity, the remainder of the discussion of the method 200 will assume that the intersection has been reduced to a three-matroid problem. - As described above, having reduced the number of matroids (e.g., to first matroid M1**, second matroid M2** and third matroid M3**), the goal becomes to identify an independent set in an intersection of the matroids in the reduced set. In one embodiment, a polynomial-bounded algorithm to find an independent set of maximum cardinality in the intersection of two matroids relies on the concept of an alternating chain and is an extension of an algorithm for finding maximum cardinality independent sets in a single matroid (i.e., find elements that are independent of already selected elements, with the assurance that no element, once selected, will prevent the finding of an independent set of higher cardinality). The algorithm for finding an independent set of maximum cardinality in the intersection of two matroids first selects elements one at a time, maintaining independence in both matroids, until no further elements can be selected. However, it is not necessarily guaranteed that one can find a maximum cardinality intersection in this manner, and even though the algorithm may be adapted by means of an augmenting path, this process becomes complicated for problems involving the intersection of large numbers of matroids. Accordingly, an alternate embodiment of a method for finding the independent set in the intersection of the reduced set of matroids is described below with respect to steps 214-224.
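The greedy first phase described above (selecting elements one at a time while maintaining independence in every matroid) can be sketched with independence oracles. As the text cautions, this phase alone yields a maximal, but not necessarily maximum, common independent set; the augmenting-path machinery is omitted, and the toy matroids are illustrative:

```python
def greedy_common_independent(elements, oracles):
    """Greedily grow a set that stays independent in every matroid.

    Each matroid is given as an independence oracle: a predicate that
    accepts a set and reports whether it is independent. The result is
    maximal (no element can be added) but not guaranteed to be maximum.
    """
    chosen = set()
    for e in elements:
        candidate = chosen | {e}
        if all(indep(candidate) for indep in oracles):
            chosen = candidate
    return chosen

# Toy example: two partition matroids over {a, b, c, d}.
# M1: at most one of {a, b} and at most one of {c, d}.
# M2: at most one of {a, c} and at most one of {b, d}.
m1 = lambda s: len(s & {"a", "b"}) <= 1 and len(s & {"c", "d"}) <= 1
m2 = lambda s: len(s & {"a", "c"}) <= 1 and len(s & {"b", "d"}) <= 1
print(sorted(greedy_common_independent("abcd", [m1, m2])))  # ['a', 'd']
```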
- Once the number of matroids for which an intersection must be found has been reduced (e.g., in step 212), the
method 200 proceeds to step 214 and initializes an intersection, Xk, where k=0. Thus, the intersection, Xk, is currently an empty intersection. - In
step 216, the method 200 forms a border graph, B, based on the current intersection, Xk. The border graph, B, is a bipartite graph whose node set is the base set of elements, E, for the reduced set of matroids (e.g., the first, second and third matroids, M1**, M2** and M3**). - In
step 218, the method 200 determines whether an augmenting tree, Tk, exists in the border graph, B. An augmenting tree is a sequence of elements, ej, which can be added and/or removed from a set of elements that are independent in a given number of matroids, in order to create a larger set of elements that are independent in the matroids. The augmenting tree, Tk, is rooted at a starting element (node), e1, that has no incoming paths; is terminated at an ending element (node), ex, that has no outgoing paths; and is comprised of additional intermediate elements (nodes), ej, having, at most, one path leading therein. In one embodiment, the elements, ej, in the border graph, B, have multiple labels that are each defined as a tuple: (S, W), where S is the set of elements, ej, in the path from the starting element, e1, and W is the total weight of all elements, ej, in the path (if the “best” subset of elements is defined as a subset of maximum weight, where each potential element in the subset is associated with an individual weight). An augmenting tree, Tk, rooted at the starting element, e1, is thus found by labeling elements, ej, from previously labeled elements, ek. All paths in the augmenting tree must terminate in elements ek with degree zero. This resolves all cycles formed while performing augmentation. - In one embodiment, one or more paths in the border graph, B, corresponds to an augmenting tree or sequence from a first intersection, Xp, to a second intersection, Xp+1. The nodes of the border graph, B, are partitioned into the sets Xp and E−Xp. For ei in Xp and ej in E−Xp, there exists a directed edge (ej, ei) in the border graph, B, if ej, when added to Xp, forms a cycle Cj(1) in the first matroid M1** and if ei is in Cj(1). A cycle, such as the cycle Cj(1), is a set that becomes independent with respect to given inference rules by removing an element from the set.
Similarly, there exists a directed edge (ei, ej) in the border graph, B, if ei, when added to Xp, forms a cycle Ci(2) in the second matroid M2** and if ej is in Ci(2), or if ei, when added to Xp, forms a cycle Ci(3) in the third matroid M3** and if ej is in Ci(3). Edges of the border graph, B, that are based on a cycle in the first matroid, M1**, are referred to as type-1 edges, while edges generally based on cycles in a matroid, Mk**, are referred to as type-k edges.
- In the simplest case, the starting element, e1, has neither incoming nor outgoing edges, in which case the starting element, e1, forms no cycles with Xp in any of the matroids in the reduced set (e.g., first, second and third matroids M1**, M2** and M3**). In this case, the starting element, e1, is an augmenting tree by itself (i.e., it can be added to Xp to form Xp+1).
- The next most simple case would be where the starting element, e1, has no incoming edges (i.e., does not form a cycle in the first matroid, M1**, when added to Xp), but does form a cycle in the second matroid M2**. In this case, if the starting element e1 is added to Xp, some other element, ej (where ej is connected to the starting element, e1, via a type-2 edge in the border graph, B), must be removed from the cycle that the starting element, e1, forms in the second matroid M2**. Thus, an edge must be found from ej to some node ek in Xp, where ej is part of the cycle formed by ek in the first matroid, M1**. It is also possible that the starting element, e1, has no incoming edges, but forms cycles in both the second and third matroids, M2** and M3**. If there is a single element, ej, that is present in both of these cycles, the starting element, e1, can be added; ej can be removed; and a third node, ek, which includes ej in the cycle that ek forms with Xp in the first matroid, M1**, can be added. It should be noted that these cases are only exemplary, and an augmenting path may contain more or fewer than three elements.
- If the
method 200 determines in step 218 that an augmenting tree, Tk, does not exist in the border graph, B, then the method 200 concludes in step 220 that the current intersection, Xk, is of maximum cardinality before terminating in step 224. This maximum cardinality intersection, Xk, of the first, second and third matroids M1**, M2** and M3**, represents the “optimal” sub-ontology (i.e., the sub-ontology that retains the most relationships out of all of the available safe sub-ontologies). - Alternatively, if the
method 200 determines in step 218 that an augmenting tree, Tk, does exist in the border graph, B, then the method 200 proceeds to step 222 and augments the current intersection, Xk, in accordance with the augmenting tree, Tk. That is, the method 200 adds to the current intersection, Xk, all elements ej of the augmenting tree, Tk, that are not already in Xk. The method 200 then returns to step 216 and proceeds as described above, first by forming a new border graph, B, based on the current intersection, Xk, which has been newly augmented.
FIG. 3 is a flow diagram illustrating one embodiment of a method 300 for reducing a multi-matroid problem to a three-matroid problem. That is, the method 300 reduces a set of k matroids to a set of three matroids. The method 300 may be implemented, for example, in accordance with step 212 of the method 200. - The
method 300 is initialized at step 302 and proceeds to step 304, where the method 300 makes one copy of each element, e, in the given set of elements, E, for each minimal set of relationships, Msi k. - In
step 306, the method 300 finds independent sets in each of the matroids separately. In step 308, the method 300 determines whether a copy, j, of an element, ei, was used in the independent set from a given matroid, Mj. If the method 300 concludes in step 308 that a copy of the element, ei, was used in the independent set from the given matroid, Mj, then the method 300 proceeds to step 310 and uses ei in the independent sets for all other matroids. This transforms the k-intersection problem in a matroid, M, with m elements into a problem of finding a maximum cardinality independent set in a new matroid M* with km elements, but also with an additional condition (a “parity condition”) that all copies of a given element, e, be included in any solution. - In
step 312, the method 300 removes the parity condition. Notably, if the method 300 concludes in step 308 that a copy of the element, ei, was not used in the independent set from the given matroid, Mj, then the method 300 proceeds directly to step 312 without applying the copy of the element, ei, in the independent sets for all other matroids. - In one embodiment, the parity condition is removed by defining three additional matroids on the elements of the new matroid M*. This is done by first defining a new element, aij, corresponding to each element, eij, in the new matroid, M*. This creates a first matroid, M1**, where M1**=(E**, F1**), E**={eij} ∪ {aij} and F is in F1** if all elements, e, in F ∩ Ej (the jth copy of the set of elements, E) are independent in Mj. Thus, M1** enforces the constraints in the original matroids.
- Secondly, to enforce the parity rule, one defines second and third matroids, respectively:
M2** = (E**, F2**)
M3** = (E**, F3**)
where F is in F2** if, for all i and j (j=1, 2, . . . , k), F does not include both eij and aij; and F is in F3** if, for all i and j, F does not include both eij and ai,j+1 for j<k and also does not include both eik and ai,1. - The goal of the constraints in F2** and F3** is to allow a full set of eij's for a given intersection or a full set of aij's for that given intersection, but not both. Now, one only has to solve the problem of finding a maximum cardinality independent set in the intersection of three matroids.
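The parity constraints on F2** and F3** can be expressed as simple independence predicates. In the following sketch, an element eij is encoded as the tuple ("e", i, j) and aij as ("a", i, j); this encoding is an illustrative assumption, not taken from the specification:

```python
def indep_M2(F):
    # Independent in M2** if F never contains both e_ij and a_ij.
    return all(("a", i, j) not in F for (kind, i, j) in F if kind == "e")

def indep_M3(F, k):
    # Independent in M3** if F never contains both e_ij and a_{i,j+1},
    # with the index j+1 taken cyclically (a_{i,1} follows e_ik).
    return all(("a", i, j % k + 1) not in F for (kind, i, j) in F if kind == "e")

F_all_e = {("e", 1, 1), ("e", 1, 2)}   # a full set of e_1j's: allowed
F_mixed = {("e", 1, 1), ("a", 1, 2)}   # e_11 with a_12: violates M3**
print(indep_M2(F_all_e), indep_M3(F_all_e, 2))  # True True
print(indep_M3(F_mixed, 2))                     # False
```

Together the two predicates admit a full set of eij's or a full set of aij's for a given element i, but never a mixture, which is exactly the parity behavior described above.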
- Once the three new matroids have been defined, the
method 300 terminates in step 314. -
FIG. 4 is a high level block diagram of the present ontology testing method that is implemented using a general purpose computing device 400. In one embodiment, a general purpose computing device 400 comprises a processor 402, a memory 404, an ontology testing module 405 and various input/output (I/O) devices 406 such as a display, a keyboard, a mouse, a modem, and the like. In one embodiment, at least one I/O device is a storage device (e.g., a disk drive, an optical disk drive, a floppy disk drive). It should be understood that the ontology testing module 405 can be implemented as a physical device or subsystem that is coupled to a processor through a communication channel. - Alternatively, the
ontology testing module 405 can be represented by one or more software applications (or even a combination of software and hardware, e.g., using Application Specific Integrated Circuits (ASIC)), where the software is loaded from a storage medium (e.g., I/O devices 406) and operated by the processor 402 in the memory 404 of the general purpose computing device 400. Thus, in one embodiment, the ontology testing module 405 for testing ontologies for safeness described herein with reference to the preceding Figures can be stored on a computer readable medium or carrier (e.g., RAM, magnetic or optical drive or diskette, and the like). - Thus, the present invention represents a significant advancement in the field of ontology processing. A method is provided that preserves the integrity of sensitive information in an ontology framework by verifying the safety of a sub-ontology made available for querying, navigating, reasoning and other ontology processing applications. That is, the present invention verifies not only that sensitive information is not included in the sub-ontologies, but also that information from which the sensitive information can be inferred is not included in the sub-ontologies. This substantially reduces the likelihood of sensitive information being even inadvertently revealed. Moreover, the present invention maximizes the amount of information that is provided in the safe ontology, so that the ontology can provide as much useful information as possible while still remaining “safe” with respect to the sensitive information.
- While the foregoing is directed to the preferred embodiment of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
Claims (20)
1. A method for building a safe sub-ontology comprising one or more elements of a given ontology, the elements comprising one or more individuals, one or more relationships defined between the one or more individuals and one or more pieces of metadata relating to the one or more individuals and the one or more relationships, the method comprising:
designating at least one of said one or more elements as a sensitive element, where a sensitive element is an element not to be revealed;
designating at least one of said one or more elements as a potential safe sub-ontology such that said potential safe sub-ontology does not include any elements that, alone or in combination, allow inference of a sensitive element, in accordance with one or more given inference rules;
verifying that said potential safe sub-ontology does not include any elements that, alone or in combination, allow inference of a sensitive element, in accordance with said one or more given inference rules; and
storing said potential safe sub-ontology as the safe sub-ontology, if said verifying concludes that said potential safe sub-ontology does not include any elements that, alone or in combination, allow inference of a sensitive element, in accordance with said one or more given inference rules.
2. (canceled)
3. The method of claim 1 , wherein said verifying comprises:
defining a first subset of said one or more elements, said first subset comprising said at least one of said one or more elements that is designated as a sensitive element;
defining a second subset of said one or more elements, where said second subset comprises a set of elements to be tested as said potential safe sub-ontology;
building a closure of said second subset; and
determining whether said closure intersects with said first subset.
4. The method of claim 3 , further comprising:
concluding that said potential safe sub-ontology is safe if said closure does not intersect with said first subset; and
concluding that said potential safe sub-ontology is not safe if said closure intersects with said first subset.
5. The method of claim 1 , wherein said ontology comprises:
a finite set of concepts;
a finite set of individuals;
a finite set of relationships; and
a finite set of metadata.
6. The method of claim 1 , further comprising the step of:
optimizing said potential safe sub-ontology with respect to a function of said potential safe sub-ontology, while maintaining said safe status.
7. The method of claim 6 , wherein said function is a counting function.
8. The method of claim 6 , wherein said optimizing comprises:
defining, for each sensitive element, a minimal set of said one or more elements that is necessary to infer said sensitive element in accordance with said one or more given inference rules;
associating each minimal set with a matroid; and
identifying a single subset of said one or more elements, where each element in said single subset is independent in each minimal set.
9. The method of claim 8 , wherein said identifying comprises:
defining a first intersection of all of said matroids associated with a minimal set;
reducing said first intersection to a second intersection of a subset of said matroids;
initiating a third intersection, said third intersection being an empty intersection;
building a border graph in accordance with said third intersection; and
selecting said maximal number in accordance with said border graph.
10. The method of claim 9 , wherein said selecting comprises:
determining whether an augmenting tree exists in said border graph;
augmenting said third intersection with said augmenting tree, if said augmenting tree is determined to exist in said border graph; and
selecting said third intersection as said maximal number, if no augmenting tree is determined to exist in said border graph.
11. The method of claim 9 , wherein said second intersection is an intersection of three matroids.
12. A computer readable medium containing an executable program for building a safe sub-ontology comprising one or more elements of a given ontology, the elements comprising one or more individuals, one or more relationships defined between the one or more individuals and one or more pieces of metadata relating to the one or more individuals and the one or more relationships, where the program performs the steps of:
designating at least one of said one or more elements as a sensitive element, where a sensitive element is an element not to be revealed;
designating at least one of said one or more elements as a potential safe sub-ontology such that said potential safe sub-ontology does not include any elements that, alone or in combination, allow inference of a sensitive element, in accordance with one or more given inference rules;
verifying that said potential safe sub-ontology does not include any elements that, alone or in combination, allow inference of a sensitive element, in accordance with said one or more given inference rules; and
storing said potential safe sub-ontology as the safe sub-ontology, if said verifying concludes that said potential safe sub-ontology does not include any elements that, alone or in combination, allow inference of a sensitive element, in accordance with said one or more given inference rules.
13. (canceled)
14. The computer readable medium of claim 12 , wherein said verifying comprises:
defining a first subset of said one or more elements, said first subset comprising said at least one of said one or more elements that is designated as a sensitive element;
defining a second subset of said one or more elements, where said second subset comprises a set of elements to be tested as said potential safe sub-ontology;
building a closure of said second subset; and
determining whether said closure intersects with said first subset.
15. Apparatus for building a safe sub-ontology comprising one or more elements of a given ontology, the elements comprising one or more individuals, one or more relationships defined between the one or more individuals and one or more pieces of metadata relating to the one or more individuals and the one or more relationships, said apparatus comprising:
means for designating at least one of said one or more elements as a sensitive element, where a sensitive element is an element not to be revealed;
means for designating at least one of said one or more elements as a potential safe sub-ontology such that said potential safe sub-ontology does not include any elements that, alone or in combination, allow inference of a sensitive element, in accordance with one or more given inference rules;
means for verifying that said potential safe sub-ontology does not include any elements that, alone or in combination, allow inference of a sensitive element, in accordance with said one or more given inference rules; and
means for storing said potential safe sub-ontology as the safe sub-ontology, if said verifying concludes that said potential safe sub-ontology does not include any elements that, alone or in combination, allow inference of a sensitive element, in accordance with said one or more given inference rules.
16. A method for building a potential sub-ontology comprising one or more elements of a given ontology, the elements comprising one or more individuals, one or more relationships defined between the one or more individuals and one or more pieces of metadata relating to the one or more individuals and the one or more relationships, the method comprising:
designating at least one of said one or more elements as a sensitive element, where a sensitive element is an element not to be revealed;
including a maximal number of said one or more elements in said potential sub-ontology, wherein said maximal number is a greatest number of elements that can be revealed, cumulatively, without allowing inference of a sensitive element, in accordance with one or more given inference rules;
verifying that said potential safe sub-ontology does not include any elements that, alone or in combination, allow inference of a sensitive element, in accordance with said one or more given inference rules; and
storing said potential safe sub-ontology as a safe sub-ontology, if said verifying concludes that said potential safe sub-ontology does not include any elements that, alone or in combination, allow inference of a sensitive element, in accordance with said one or more given inference rules.
17. The method of claim 16 , wherein said including comprises:
defining, for each sensitive element, a minimal set of said one or more elements that is necessary to infer said sensitive element in accordance with said one or more given inference rules;
associating each minimal set with a matroid; and
identifying a single subset of said one or more elements, where each element in said single subset is independent in each minimal set.
18. The method of claim 17 , wherein said identifying comprises:
defining a first intersection of all of said matroids associated with a minimal set;
reducing said first intersection to a second intersection of a subset of said matroids;
initiating a third intersection, said third intersection being an empty intersection;
building a border graph in accordance with said third intersection; and
selecting said maximal number in accordance with said border graph.
19. The method of claim 18 , wherein said selecting comprises:
determining whether an augmenting tree exists in said border graph;
augmenting said third intersection with said augmenting tree, if said augmenting tree is determined to exist in said border graph; and
selecting said third intersection as said maximal number, if no augmenting tree is determined to exist in said border graph.
20. The method of claim 18 , wherein said second intersection is an intersection of three matroids.
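The idea behind claims 16-17, revealing as many elements as possible while never completing any minimal set that would infer a sensitive element, can be illustrated with a simple sketch. Note this is a greedy heuristic with hypothetical names, not the matroid-intersection procedure with border graphs and augmenting trees that claims 18-19 actually recite (that procedure is what guarantees a maximum, rather than merely maximal, result):

```python
def greedy_safe_subset(elements, minimal_sets):
    """Greedily add elements, skipping any element whose addition would
    complete one of the minimal sets that infer a sensitive element."""
    revealed = set()
    for e in elements:
        tentative = revealed | {e}
        if not any(ms <= tentative for ms in minimal_sets):
            revealed = tentative
    return revealed


elements = ["a", "b", "c", "d"]
minimal_sets = [{"a", "b"}, {"c", "d"}]  # either pair infers a secret

safe = greedy_safe_subset(elements, minimal_sets)
assert not any(ms <= safe for ms in minimal_sets)  # no secret inferable
assert len(safe) == 2                              # here: {'a', 'c'}
```

The greedy pass can get stuck at a locally maximal answer on harder instances; the claimed method instead models each minimal set as a matroid and searches a border graph for augmenting trees, growing the revealed set until no augmentation exists.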
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/361,293 US20070198449A1 (en) | 2006-02-23 | 2006-02-23 | Method and apparatus for safe ontology reasoning |
US11/931,601 US7860816B2 (en) | 2006-02-23 | 2007-10-31 | Method and apparatus for safe ontology reasoning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/361,293 US20070198449A1 (en) | 2006-02-23 | 2006-02-23 | Method and apparatus for safe ontology reasoning |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/931,601 Continuation US7860816B2 (en) | 2006-02-23 | 2007-10-31 | Method and apparatus for safe ontology reasoning |
Publications (1)
Publication Number | Publication Date |
---|---|
US20070198449A1 true US20070198449A1 (en) | 2007-08-23 |
Family
ID=38429538
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/361,293 Abandoned US20070198449A1 (en) | 2006-02-23 | 2006-02-23 | Method and apparatus for safe ontology reasoning |
US11/931,601 Active 2027-05-16 US7860816B2 (en) | 2006-02-23 | 2007-10-31 | Method and apparatus for safe ontology reasoning |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/931,601 Active 2027-05-16 US7860816B2 (en) | 2006-02-23 | 2007-10-31 | Method and apparatus for safe ontology reasoning |
Country Status (1)
Country | Link |
---|---|
US (2) | US20070198449A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7512576B1 (en) * | 2008-01-16 | 2009-03-31 | International Business Machines Corporation | Automatically generated ontology by combining structured and/or semi-structured knowledge sources |
US20100070420A1 (en) * | 2008-09-17 | 2010-03-18 | Microsoft Corporation | Online pricing and buyback |
US20140372481A1 (en) * | 2013-06-17 | 2014-12-18 | Microsoft Corporation | Cross-model filtering |
US20160140203A1 (en) * | 2014-11-19 | 2016-05-19 | Empire Technology Development Llc | Ontology decomposer |
Families Citing this family (55)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110264482A1 (en) * | 2010-04-22 | 2011-10-27 | Maher Rahmouni | Resource matching |
US8355905B2 (en) | 2010-05-14 | 2013-01-15 | International Business Machines Corporation | Mapping of relationship entities between ontologies |
US9037615B2 (en) | 2010-05-14 | 2015-05-19 | International Business Machines Corporation | Querying and integrating structured and unstructured data |
US20220164840A1 (en) | 2016-04-01 | 2022-05-26 | OneTrust, LLC | Data processing systems and methods for integrating privacy information management systems with data loss prevention tools or other tools for privacy design |
US11366909B2 (en) | 2016-06-10 | 2022-06-21 | OneTrust, LLC | Data processing and scanning systems for assessing vendor risk |
US11438386B2 (en) | 2016-06-10 | 2022-09-06 | OneTrust, LLC | Data processing systems for data-transfer risk identification, cross-border visualization generation, and related methods |
US11416590B2 (en) | 2016-06-10 | 2022-08-16 | OneTrust, LLC | Data processing and scanning systems for assessing vendor risk |
US11222142B2 (en) | 2016-06-10 | 2022-01-11 | OneTrust, LLC | Data processing systems for validating authorization for personal data collection, storage, and processing |
US11188862B2 (en) | 2016-06-10 | 2021-11-30 | OneTrust, LLC | Privacy management systems and methods |
US10284604B2 (en) | 2016-06-10 | 2019-05-07 | OneTrust, LLC | Data processing and scanning systems for generating and populating a data inventory |
US11403377B2 (en) | 2016-06-10 | 2022-08-02 | OneTrust, LLC | Privacy management systems and methods |
US11416109B2 (en) | 2016-06-10 | 2022-08-16 | OneTrust, LLC | Automated data processing systems and methods for automatically processing data subject access requests using a chatbot |
US11520928B2 (en) | 2016-06-10 | 2022-12-06 | OneTrust, LLC | Data processing systems for generating personal data receipts and related methods |
US11636171B2 (en) | 2016-06-10 | 2023-04-25 | OneTrust, LLC | Data processing user interface monitoring systems and related methods |
US11651106B2 (en) | 2016-06-10 | 2023-05-16 | OneTrust, LLC | Data processing systems for fulfilling data subject access requests and related methods |
US10997318B2 (en) | 2016-06-10 | 2021-05-04 | OneTrust, LLC | Data processing systems for generating and populating a data inventory for processing data access requests |
US11418492B2 (en) | 2016-06-10 | 2022-08-16 | OneTrust, LLC | Data processing systems and methods for using a data model to select a target data asset in a data migration |
US11651104B2 (en) | 2016-06-10 | 2023-05-16 | OneTrust, LLC | Consent receipt management systems and related methods |
US10846433B2 (en) | 2016-06-10 | 2020-11-24 | OneTrust, LLC | Data processing consent management systems and related methods |
US11392720B2 (en) | 2016-06-10 | 2022-07-19 | OneTrust, LLC | Data processing systems for verification of consent and notice processing and related methods |
US11294939B2 (en) | 2016-06-10 | 2022-04-05 | OneTrust, LLC | Data processing systems and methods for automatically detecting and documenting privacy-related aspects of computer software |
US11586700B2 (en) | 2016-06-10 | 2023-02-21 | OneTrust, LLC | Data processing systems and methods for automatically blocking the use of tracking tools |
US11461500B2 (en) | 2016-06-10 | 2022-10-04 | OneTrust, LLC | Data processing systems for cookie compliance testing with website scanning and related methods |
US11134086B2 (en) | 2016-06-10 | 2021-09-28 | OneTrust, LLC | Consent conversion optimization systems and related methods |
US11227247B2 (en) | 2016-06-10 | 2022-01-18 | OneTrust, LLC | Data processing systems and methods for bundled privacy policies |
US11416589B2 (en) | 2016-06-10 | 2022-08-16 | OneTrust, LLC | Data processing and scanning systems for assessing vendor risk |
US11354435B2 (en) | 2016-06-10 | 2022-06-07 | OneTrust, LLC | Data processing systems for data testing to confirm data deletion and related methods |
US11675929B2 (en) | 2016-06-10 | 2023-06-13 | OneTrust, LLC | Data processing consent sharing systems and related methods |
US11544667B2 (en) | 2016-06-10 | 2023-01-03 | OneTrust, LLC | Data processing systems for generating and populating a data inventory |
US10318761B2 (en) | 2016-06-10 | 2019-06-11 | OneTrust, LLC | Data processing systems and methods for auditing data request compliance |
US11188615B2 (en) | 2016-06-10 | 2021-11-30 | OneTrust, LLC | Data processing consent capture systems and related methods |
US11354434B2 (en) | 2016-06-10 | 2022-06-07 | OneTrust, LLC | Data processing systems for verification of consent and notice processing and related methods |
US11727141B2 (en) | 2016-06-10 | 2023-08-15 | OneTrust, LLC | Data processing systems and methods for synching privacy-related user consent across multiple computing devices |
US11562097B2 (en) | 2016-06-10 | 2023-01-24 | OneTrust, LLC | Data processing systems for central consent repository and related methods |
US11475136B2 (en) | 2016-06-10 | 2022-10-18 | OneTrust, LLC | Data processing systems for data transfer risk identification and related methods |
US11625502B2 (en) | 2016-06-10 | 2023-04-11 | OneTrust, LLC | Data processing systems for identifying and modifying processes that are subject to data subject access requests |
US11222139B2 (en) | 2016-06-10 | 2022-01-11 | OneTrust, LLC | Data processing systems and methods for automatic discovery and assessment of mobile software development kits |
US11481710B2 (en) | 2016-06-10 | 2022-10-25 | OneTrust, LLC | Privacy management systems and methods |
US10013577B1 (en) | 2017-06-16 | 2018-07-03 | OneTrust, LLC | Data processing systems for identifying whether cookies contain personally identifying information |
US10803202B2 (en) | 2018-09-07 | 2020-10-13 | OneTrust, LLC | Data processing systems for orphaned data identification and deletion and related methods |
US11544409B2 (en) | 2018-09-07 | 2023-01-03 | OneTrust, LLC | Data processing systems and methods for automatically protecting sensitive data within privacy management systems |
US11797528B2 (en) | 2020-07-08 | 2023-10-24 | OneTrust, LLC | Systems and methods for targeted data discovery |
EP4189569A1 (en) | 2020-07-28 | 2023-06-07 | OneTrust LLC | Systems and methods for automatically blocking the use of tracking tools |
US11475165B2 (en) | 2020-08-06 | 2022-10-18 | OneTrust, LLC | Data processing systems and methods for automatically redacting unstructured data from a data subject access request |
US11436373B2 (en) | 2020-09-15 | 2022-09-06 | OneTrust, LLC | Data processing systems and methods for detecting tools for the automatic blocking of consent requests |
US20230334158A1 (en) | 2020-09-21 | 2023-10-19 | OneTrust, LLC | Data processing systems and methods for automatically detecting target data transfers and target data processing |
US11397819B2 (en) | 2020-11-06 | 2022-07-26 | OneTrust, LLC | Systems and methods for identifying data processing activities based on data discovery results |
US11687528B2 (en) | 2021-01-25 | 2023-06-27 | OneTrust, LLC | Systems and methods for discovery, classification, and indexing of data in a native computing system |
US11442906B2 (en) | 2021-02-04 | 2022-09-13 | OneTrust, LLC | Managing custom attributes for domain objects defined within microservices |
US11601464B2 (en) | 2021-02-10 | 2023-03-07 | OneTrust, LLC | Systems and methods for mitigating risks of third-party computing system functionality integration into a first-party computing system |
WO2022178089A1 (en) | 2021-02-17 | 2022-08-25 | OneTrust, LLC | Managing custom workflows for domain objects defined within microservices |
US11546661B2 (en) | 2021-02-18 | 2023-01-03 | OneTrust, LLC | Selective redaction of media content |
US11533315B2 (en) | 2021-03-08 | 2022-12-20 | OneTrust, LLC | Data transfer discovery and analysis systems and related methods |
US11562078B2 (en) | 2021-04-16 | 2023-01-24 | OneTrust, LLC | Assessing and managing computational risk involved with integrating third party computing functionality within a computing system |
US11620142B1 (en) | 2022-06-03 | 2023-04-04 | OneTrust, LLC | Generating and customizing user interfaces for demonstrating functions of interactive user environments |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7093200B2 (en) * | 2001-05-25 | 2006-08-15 | Zvi Schreiber | Instance browser for ontology |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5187480A (en) * | 1988-09-05 | 1993-02-16 | Allan Garnham | Symbol definition apparatus |
- 2006-02-23: US 11/361,293 filed; published as US20070198449A1 (abandoned)
- 2007-10-31: US 11/931,601 filed; published as US7860816B2 (active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7093200B2 (en) * | 2001-05-25 | 2006-08-15 | Zvi Schreiber | Instance browser for ontology |
US7099885B2 (en) * | 2001-05-25 | 2006-08-29 | Unicorn Solutions | Method and system for collaborative ontology modeling |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7512576B1 (en) * | 2008-01-16 | 2009-03-31 | International Business Machines Corporation | Automatically generated ontology by combining structured and/or semi-structured knowledge sources |
US20100070420A1 (en) * | 2008-09-17 | 2010-03-18 | Microsoft Corporation | Online pricing and buyback |
US8260724B2 (en) * | 2008-09-17 | 2012-09-04 | Microsoft Corporation | Online pricing and buyback |
US20140372481A1 (en) * | 2013-06-17 | 2014-12-18 | Microsoft Corporation | Cross-model filtering |
US9720972B2 (en) * | 2013-06-17 | 2017-08-01 | Microsoft Technology Licensing, Llc | Cross-model filtering |
US10606842B2 (en) | 2013-06-17 | 2020-03-31 | Microsoft Technology Licensing, Llc | Cross-model filtering |
US20160140203A1 (en) * | 2014-11-19 | 2016-05-19 | Empire Technology Development Llc | Ontology decomposer |
US9740763B2 (en) * | 2014-11-19 | 2017-08-22 | Empire Technology Development Llc | Ontology decomposer |
Also Published As
Publication number | Publication date |
---|---|
US7860816B2 (en) | 2010-12-28 |
US20080065578A1 (en) | 2008-03-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7860816B2 (en) | Method and apparatus for safe ontology reasoning | |
Horrocks et al. | Ontology reasoning in the SHOQ (D) description logic | |
Baader et al. | Axiom pinpointing in general tableaux | |
Ghilardi et al. | Did I damage my ontology | |
Calvanese et al. | Reasoning about explanations for negative query answers in DL-Lite | |
CN107040585A (en) | A kind of method and device of business verification | |
Ahmetaj et al. | Polynomial Datalog Rewritings for Expressive Description Logics with Closed Predicates. | |
US9613162B1 (en) | Semantic database driven form validation | |
ten Cate et al. | The product homomorphism problem and applications | |
Rashmanlou et al. | New concepts of interval-valued intuitionistic (S, T)-fuzzy graphs | |
Baumgartner et al. | Model evolution with equality—revised and implemented | |
Ceylan et al. | Probabilistic Query Answering in the Bayesian Description Logic | |
Wakaki | Assumption‐Based Argumentation Equipped with Preferences and its Application to Decision Making, Practical Reasoning, and Epistemic Reasoning | |
Tiwari et al. | On minimal realization for a fuzzy language and Brzozowski’s algorithm | |
US9361579B2 (en) | Large scale probabilistic ontology reasoning | |
Boustia et al. | A dynamic access control model | |
Nehi et al. | TOPSIS and Choquet integral hybrid technique for solving MAGDM problems with interval type-2 fuzzy numbers | |
Xiao et al. | The DReW system for nonmonotonic dl-programs | |
US9412069B1 (en) | Information infrastructure enabling mind supportable by universal computing devices | |
Calvanese et al. | View-Based Query Answering over Description Logic Ontologies. | |
Latte et al. | Definability by weakly deterministic regular expressions with counters is decidable | |
Sattler et al. | How does a reasoner work? | |
Link | Sound approximate reasoning about saturated conditional probabilistic independence under controlled uncertainty | |
Järvisalo | On the relative efficiency of DPLL and obdds with axiom and join | |
Merz et al. | Reasoning in with Fuzzy Concrete Domains |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FOKOUE-NKOUTCHE, ACHILLE;GRABARNIK, GENADY;HALIM, NAGUI;AND OTHERS;REEL/FRAME:017594/0890;SIGNING DATES FROM 20060222 TO 20060505 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE |