AU2003200039B1 - Forward-chaining inferencing - Google Patents

Forward-chaining inferencing

Info

Publication number
AU2003200039B1
Authority
AU
Australia
Prior art keywords
facts
fact
rules
rulebase
inferencing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired
Application number
AU2003200039A
Inventor
Andrew Barry
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oracle International Corp
Original Assignee
Oracle International Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2002952648A external-priority patent/AU2002952648A0/en
Application filed by Oracle International Corp filed Critical Oracle International Corp
Priority to AU2003200039A priority Critical patent/AU2003200039B1/en
Publication of AU2003200039B1 publication Critical patent/AU2003200039B1/en
Assigned to RULEBURST LIMITED reassignment RULEBURST LIMITED Amend patent request/document other than specification (104) Assignors: SOFTLAW CORPORATION LIMITED
Assigned to ORACLE INTERNATIONAL CORPORATION reassignment ORACLE INTERNATIONAL CORPORATION Request for Assignment Assignors: RULEBURST LIMITED
Anticipated expiration legal-status Critical
Expired legal-status Critical Current

Description

Regulation 3.2
AUSTRALIA
Patents Act 1990 COMPLETE SPECIFICATION STANDARD PATENT
APPLICANT: SOFTLAW CORPORATION LIMITED

Invention Title: FORWARD-CHAINING INFERENCING

The following statement is a full description of this invention, including the best method of performing it known to me:

"FORWARD-CHAINING INFERENCING"

Technical field

This invention relates to systems for and methods of forward-chaining inferencing.
Forward-chaining inferencing can be described as a process or method by which new facts are discovered given a rulebase (ie. a set of rules) and a set of input facts. It is used by and in so-called expert systems which can be described as computers or computer programs that use symbolic knowledge and inference to reach conclusions.
By way of example, an expert system could apply a set of input facts describing an individual's personal circumstances to a rulebase that models a national Social Security Act or the like to determine the social security benefits to which the individual is entitled.
This process is referred to as forward-chaining because it is essentially a chain of inferences that start from the input facts and end with the required determinations.
The invention has particular but not exclusive application to the use of forward-chaining inferencing in expert systems and for illustrative purposes reference will be made throughout this specification to such use. However it will be realised that the present invention may be utilised in other applications where computers are used to infer outcomes (or new facts) from a set of given inputs (or facts) in accordance with a set of rules (ie a number of operational or governing criteria).
Background of Invention

Expert systems are well known. They have been described as follows:

expert system A computer program that uses symbolic knowledge and inference to reach conclusions. It derives most of its power from its knowledge. The key components of an expert system are an inference engine and a knowledge base. The separation of control (the inference engine) from knowledge (the knowledge base) is a hallmark of an expert system. Other components of an expert system include a user interface, a knowledge-acquisition module, and an explanatory interface.
An expert system derives most of its power from its knowledge rather than its inferencing ability. Expert systems are applied to the class of problems in which no simple algorithmic solution is known. To qualify as an expert system it must attain levels of performance roughly equivalent to a human expert. Most expert systems are able to explain their reasoning. Expert systems are generally able to reason about their own inference processes. Other advantages of expert systems are that they do not forget, they consider all details, they don't overlook remote possibilities and they do not jump to conclusions...
In contrast with ordinary computer programs, expert systems can be incrementally modified with little difficulty, at least as compared to conventional programs. The knowledge in an expert system is more available to scrutiny than it is in a conventional program, where knowledge may be intertwined with procedure... Expert systems are more robust than conventional programs; they are more likely to be able to handle unexpected situations...
There are a number of criteria for the use of expert systems: One is the existence of expertise in the area. The task should be a complex problem with multiple interacting subtasks where there appears to be no fixed order of problem solution. It is useful when the solution needs to be explained, when what-if analysis is desirable, or when it is known that the system will be frequently revised.
Mercadal, D. 1990. Dictionary of Artificial Intelligence. pp. 96-97. NY: Van Nostrand Reinhold.

It should be noted that the term "rulebase" as used herein is synonymous with the expression "knowledge base" above.
The standard method used by expert systems for forward-chaining inferencing is known as the Rete algorithm and aims to minimise the amount of effort required for an inference cycle whenever input facts change. The Rete algorithm will be explained in more detail when describing the preferred embodiment of the present invention.
The Rete algorithm was invented in 1979, a bygone era of computing. Since then, the application of expert systems, including the environment that they work within, has changed dramatically:

- Systems must now provide high levels of scalability to support thousands of concurrent users, particularly through the use of stateless application development
- Today's Internet technologies mean that systems are largely transactional by nature
- Modern user interfaces are better at collecting many items of data per screen (or transaction)
- Today's processors are much faster with large onboard caches
- Expert systems that perform batch processing and provide engine-based services are now a common requirement
- Integration of expert systems with corporate databases is a standard requirement

The forward-chaining inferencing system and method of the present invention allows expert systems to better deal with these significant changes.
Summary of Invention

The present invention aims to provide an alternative to known systems and methods of forward-chaining inferencing.
This invention in one aspect resides broadly in a method of forward-chaining inferencing in a rulebased system having a rulebase and a set of input facts, wherein new facts are inferred in accordance with variations to the rules or the input facts, the method including:-

developing a computerised database containing a fact dependency tree for indicating which facts are used to produce other facts in accordance with respective rules in the rulebase;

sequentially ordering the facts in the fact dependency tree to produce a serialised fact dependency tree wherein for any given fact in the sequence, all facts which are used to produce that fact are facts which are earlier in the sequence than is the given fact, and

ordering the rules in the rulebase in accordance with the facts produced thereby to produce a serialised rulebase wherein the rules are in the same sequential order as the facts in the serialised fact dependency tree.
As used herein the expression "rulebase" is to be given a broad meaning.
Rulebased systems and methods are ones which are developed and implemented, and which operate, in accordance with a set of rules. The rules are preferably declarative, ie they explain rather than pronounce.
It is preferred that the method also includes:-

setting in working memory all known input facts;

sequentially evaluating each of the ordered rules in the rulebase, and

updating the working memory in accordance with any changes to the facts in accordance with the evaluating of a rule.
In another aspect this invention resides broadly in a system for forward-chaining inferencing in a rulebased system having a rulebase and a set of input facts, wherein new facts are inferred in accordance with variations to the rules or the input facts, the system including:-

a computerised database containing a fact dependency tree for indicating which facts are used to produce other facts in accordance with respective rules in the rulebase, and

program means for sequentially ordering the facts in the fact dependency tree to produce a serialised fact dependency tree wherein for any given fact in the sequence, all facts which are used to produce that fact are facts which are earlier in the sequence than is the given fact, and for ordering the rules in the rulebase in accordance with the facts produced thereby to produce a serialised rulebase wherein the rules are in the same sequential order as the facts they produce in the serialised fact dependency tree.
It is preferred that the system also includes program means for:-

setting in working memory all known input facts;

sequentially evaluating each of the ordered rules in the rulebase, and

updating the working memory in accordance with any changes to the facts in accordance with the evaluating of a rule.
It is preferred that the development of the computerised database containing a fact dependency tree includes:-

generating a graph in which each of the facts relevant to the set of rules in the rulebase is identified without any indication of the sequential relationship of the facts, and

for each rule in the rulebase, providing an arc between the facts associated with that rule, the linkage being directed from the fact(s) which produce other fact(s) toward the other fact(s).
As used herein the expression "graph" refers to a graphical illustration of the facts in a rulebase, for example a set of nodes each representing a fact can be referred to as a graph. As used herein the expression "arc" in relation to graphs refers to a connecting one-way arrow which joins two facts, for example a directional linkage between nodes/facts can be referred to as an arc.
It is also preferred that only those rules which are relevant in a given situation are evaluated whereby the new facts are inferred incrementally. Accordingly, the method may include:-

maintaining a lookup table for recording for each fact in the rulebase which rules are reliant thereon for evaluation, and

maintaining a flag for each rule in the rulebase, the flag indicating for any given fact or setting of a fact value between or during inferences, whether the rule is relevant or irrelevant.
The method and system of forward-chaining inferencing may also take into account cyclic rule dependencies. Accordingly the method may include:-

identifying loops in the fact dependency tree, the loops being generated by cyclically dependant rules;

for each said loop, identifying a snarl containing the loop;

for each said snarl, ignoring the facts in the snarl and any fact dependencies within the snarl and treating the snarl as an indivisible node, when sequentially ordering the facts in the fact dependency tree, and

marking the start and end of each snarl in the serialised rulebase.
As used herein the expression "snarl" refers to the smallest set of facts in the fact dependency tree which contains a loop or loops generated by cyclically dependent rules.
In this embodiment it is also preferred that the method includes, when inferencing:-

repeatedly evaluating the rules in each snarl in cycles, and

stopping evaluating the rules in a snarl when a steady state is reached.
The method and system of forward-chaining inferencing may also take multiple object instances into account. In this embodiment facts representing attributes of object instances are stored in working memory object instance tables for storing multiple sets of facts, and the rules proving these facts are evaluated once for each object instance, the sequential evaluation order of the rules being preserved.
As used herein the expression "object instance" refers to a specific instance of a real-world entity and the expression "attribute" refers to a quality associated with an object instance. Thus by way of non-limiting example, a child called Julie is an object instance, as is a second child called Barney and object instances of the same type, (eg. Julie and Barney), have the same type of attributes, (eg. their sex).
The method and system of forward-chaining inferencing may also accommodate batch processing. Accordingly the steps of sequentially evaluating the ordered rules and updating the working memory can be conducted simultaneously across multiple working memories to facilitate batch processing for enhancing the average level of system performance.
Description of Drawings

In order that this invention may be more easily understood and put into practical effect, reference will now be made to the accompanying drawings which illustrate a preferred embodiment of the invention, wherein:-

FIGS 1 to 4 illustrate exemplary networks in accordance with forward-chaining inferencing by the known method utilising the Rete algorithm;

FIG 5 illustrates a fact dependency tree in accordance with the linear inferencing process of the present invention;

FIG 6 illustrates a serialised fact dependency tree in accordance with the linear inferencing process of the present invention;

FIG 7 illustrates a serialised rulebase in accordance with the linear inferencing process of the present invention;

FIG 8 is a schematic block diagram of the linear inferencing process of the present invention;

FIG 9 is a flowchart illustrating the linear inferencing process of the present invention;

FIGS 10 to 12 relate to the treatment of cyclic dependencies, with FIG 10 illustrating a fact dependency loop, FIG 11 illustrating a serialised fact dependency tree with a snarl, and FIG 12 illustrating a serialised rulebase with a snarl, and

FIGS 13 and 14 relate to the treatment of multiple object instances, with FIG 13 illustrating an example of related objects and their attributes and FIG 14 illustrating tables which store object instance data.
Description of Preferred Embodiment of Invention

By way of illustrative example of forward-chaining inferencing and to enable a better understanding of the methodology of the present invention, a simplified rulebase will be outlined by way of example and this rulebase used to exemplify forward-chaining inferencing, first with reference to the known Rete algorithm, and second with reference to the system and method of the present invention.
Let it be assumed by way of example that there are 10 facts: f1, f2, ..., f10. Facts f1 to f6 are input facts, ie. they are provided to the rulebase. Facts f7 to f10 are inferred by the rulebase. All the facts take Boolean logical values.
The rulebase consists of 6 rules, r1 to r6:

r1: f1 ⊕ f2 ⊕ f5 → f10
r2: f7 ⊕ f10 → f9
r3: f4 → f5
r4: f6 ⊕ f1 → f8
r5: f3 ⊕ f8 → f9
r6: f1 ⊕ f2 → f7

For the purposes of this specification the specific method of evaluating the rules is not relevant. The ⊕ operator is some combining operator and the → means "produces", eg. according to r1, if we have values for f1, f2 and f5 then they can be combined to produce a value for f10. However, to better assist with an understanding of the examples to follow, it is assumed that the ⊕ operator performs a logical AND, ie. a ⊕ b produces:

1. true, if both a and b are true; or
2. false, if either a or b is false; or
3. unknown, otherwise.
Now, given the following input facts: f1 is true; f2 is true; f6 is false; and all other input facts are unknown, an expert system uses forward-chaining inferencing to determine that: f7 is true (by applying r6); f8 is false (by applying r4); f9 is false (by applying r4 and r5); and f10 is unknown.
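For concreteness, the following minimal Python sketch (the specification itself contains no source code; the helper name and3 and the dictionary layout are illustrative assumptions) implements the three-valued AND described above and replays the worked example.

    def and3(a, b):
        # three-valued AND: None stands for "unknown"
        if a is False or b is False:
            return False
        if a is True and b is True:
            return True
        return None

    facts = {1: True, 2: True, 3: None, 4: None, 5: None, 6: False}
    facts[7] = and3(facts[1], facts[2])                   # r6 -> True
    facts[8] = and3(facts[6], facts[1])                   # r4 -> False
    facts[9] = and3(facts[3], facts[8])                   # r5 -> False (f8 is false)
    facts[10] = and3(and3(facts[1], facts[2]), facts[5])  # r1 -> unknown (f5 unknown)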
As indicated above, the Rete algorithm is the industry standard for forward-chaining inferencing and aims to minimise the amount of effort required for an inference cycle whenever input facts change. It relies on the following basic assumptions: 1. working memory generally changes slowly; and 2. the left-hand sides of the rules in a rulebase contain many similar expressions.
The Rete algorithm is implemented using a tree-based network, where the nodes of the tree are either: 1. leaves, representing the input facts; 2. combining operators that take two values and combine them to produce a result; or 3. inferred facts.
The network also contains the working memory: between inferences, fact nodes store their values and combining operators store their inputs.
The network for the rulebase exemplified above is illustrated in FIG 1. Arcs (represented by arrows in the illustration) are located between nodes (represented in the illustration as either small circles indicating a fact, or small squares indicating a combining operator). The arcs between the nodes are used to propagate values through the network during inferencing.
It should be noted that the evaluation of f1 ⊕ f2 is used twice in the network (to evaluate f7 using r6 and to evaluate f10 using r1). This is how the algorithm deals with repeated patterns in the left hand side of rules, in accordance with the 2nd of the algorithm's assumptions.
The leaf nodes, fl to f6 in FIG 1 are the inputs of the network. When an input fact changes, the value is fed into the network via the node's output arcs. When a value flows into an inferred fact node it is stored by that node and then emitted through its output arcs (if any). When a value flows into a combining operator it is stored by the combining operator as an input. The combined value is then emitted through the node's output arcs.
Implementing a data structure to represent the Rete network is relatively straightforward. The inferencing process itself can be described as walking the paths from the input fact that has changed value (eg. f5 in FIG 1) until the paths come to an end (eg. by hitting f9) or until a combining operator is hit which does not emit a new value.
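The following toy Python sketch suggests how such a network can be represented, reusing the and3 helper from the earlier sketch. It is an illustrative assumption, not the full Rete match/join machinery: the class names and the change-propagation rule are simplifications of the behaviour described above.

    class FactNode:
        def __init__(self, name):
            self.name = name
            self.value = None          # stored between inferences
            self.outputs = []          # arcs to combining operators
        def feed(self, value):
            if value != self.value:    # emit only when the value changes
                self.value = value
                for op in self.outputs:
                    op.feed_from(self, value)

    class AndNode:
        def __init__(self, left, right, target):
            self.inputs = {left: None, right: None}   # stored operand values
            left.outputs.append(self)
            right.outputs.append(self)
            self.target = target                      # inferred fact node
        def feed_from(self, source, value):
            self.inputs[source] = value
            combined = and3(*self.inputs.values())
            if combined is not None:   # an operator emitting no new value
                self.target.feed(combined)   # ends the walk along this path

    # Wiring r6 (f7 = f1 AND f2) and replaying FIGS 2 and 3:
    f1, f2, f7 = FactNode("f1"), FactNode("f2"), FactNode("f7")
    AndNode(f1, f2, f7)
    f1.feed(True)    # stalls at the AND node (f2 still unknown)
    f2.feed(True)    # propagates through: f7 is now True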
FIG 2 illustrates the relevant part of the example network after setting fl to true and performing the subsequent inference. FIG 3 illustrates the relevant part of the example network after setting f2 to true and performing the subsequent inference, and FIG 4 illustrates the relevant part of the example network after setting f6 to false and performing the subsequent inference.
The Rete algorithm thus always traverses the relevant part of a rulebase whenever a fact value is changed. By way of contrast, the method and system of the present invention serialises the inferencing process. The present system and method of forward-chaining inferencing has been termed linear inferencing and that expression will be used throughout the specification to refer to the system and method of the present invention.
The first step in preparing a rulebase for Linear inferencing is to build a fact dependency tree showing which facts are used to produce other facts. The fact dependency tree for the exemplified rulebase is shown in FIG 5. The next step is to lay out the facts serially while ensuring that all the arcs point to the right. This is always possible for a rulebase, providing the rulebase contains no cyclic dependencies. (The approach to be adopted when there are cyclic dependencies is described subsequently). A serialised dependency tree for the exemplified rulebase is shown in FIG 6.
The final step is to build a data structure containing the rules laid out serially in a contiguous block of memory. The rules are ordered by the facts they produce, in accordance with the serialised fact dependency tree.
Using the exemplified rulebase above, the process starts with all the rules that produce f1, then the rules that produce f2, then the rules that produce f7, then the rules that produce f4, etc. The serialised rulebase for the above example is illustrated in FIG 7.
This serialising of the rulebase by the ordering of the rules allows inferencing to occur with a single left-to-right scan of the rulebase and guarantees that inferred facts needed to evaluate a specific rule are always produced before that rule.
A working memory is utilised consisting of an array of fact values, initialised with any known values for the input facts. For the example above, initial working memory is:

f1  f2  f3  f4  f5  f6  f7  f8  f9  f10
t   t   ?   ?   ?   f   ?   ?   ?   ?

(t = true, f = false, ? = unknown)

Performing an inference begins with the first rule in the serialised rulebase. In our example this is r6, which produces a value for f7:

f1  f2  f3  f4  f5  f6  f7  f8  f9  f10
t   t   ?   ?   ?   f   t   ?   ?   ?

The inference then continues to the next rule. In our example this is r1, which fails to produce a value for f10 because f5 is unknown, so working memory remains unchanged.
The inference continues in this fashion until all the rules have been evaluated.
In our example, working memory will finally be:

f1  f2  f3  f4  f5  f6  f7  f8  f9  f10
t   t   ?   ?   ?   f   t   f   f   ?

When one or more input facts subsequently change value, another inference is required to update working memory.
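A minimal Python sketch of this single-scan inference follows, reusing the and3 helper from the earlier sketch. The rule ordering shown is one serialisation consistent with the description of FIG 7 (the exact order, and the reconstruction of r3, are assumptions), and it reproduces the final working memory above.

    from functools import reduce

    # One serialisation consistent with FIG 7 (an assumption): rules ordered
    # by the facts they produce, producers always to the left of consumers.
    serialised_rules = [
        ([1, 2], 7),        # r6: f1 AND f2 -> f7
        ([4], 5),           # r3: f4 -> f5
        ([1, 2, 5], 10),    # r1: f1 AND f2 AND f5 -> f10
        ([6, 1], 8),        # r4: f6 AND f1 -> f8
        ([7, 10], 9),       # r2: f7 AND f10 -> f9
        ([3, 8], 9),        # r5: f3 AND f8 -> f9
    ]

    def infer(memory):
        # a single left-to-right scan; every fact a rule needs has already
        # been produced (or left unknown) by the time the rule is reached
        for lhs, rhs in serialised_rules:
            value = reduce(and3, (memory[f] for f in lhs))
            if value is not None:
                memory[rhs] = value
        return memory

    memory = {f: None for f in range(1, 11)}
    memory.update({1: True, 2: True, 6: False})
    infer(memory)    # f7=True, f8=False, f9=False, f10 stays None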
Reference is now made to FIG 8 which is a stylised block diagram illustrating the main aspects of linear inferencing, ie building a fact dependency tree, serialising the fact dependency tree and serialising the rulebase.
To build the fact dependency tree the method begins with a graph containing each of the facts in the rulebase but without any connecting arcs. For each rule in the rulebase, a directed arc is then inserted from each fact appearing on the left-hand side of the rule to the fact appearing on the right-hand side. The facts in the fact dependency tree are then sequentially ordered so that for each fact, all arcs will come from facts positioned earlier in the sequence. The fact dependency tree has now been serialised. Finally, the rules in the rulebase are ordered by the facts they produce into the same order as the facts appear in the serialised fact dependency tree. This serialises the rulebase.
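The preparation steps just described can be sketched as follows (an illustrative assumption; Python's graphlib.TopologicalSorter stands in for whatever ordering procedure an implementation actually uses, and the rules table follows the reconstruction above).

    from graphlib import TopologicalSorter   # Python 3.9+

    # The example rulebase as (left-hand facts, produced fact) pairs.
    rules = {"r1": ([1, 2, 5], 10), "r2": ([7, 10], 9), "r3": ([4], 5),
             "r4": ([6, 1], 8), "r5": ([3, 8], 9), "r6": ([1, 2], 7)}

    # Fact dependency graph: an arc from every LHS fact to the produced fact.
    deps = {f: set() for f in range(1, 11)}
    for lhs, rhs in rules.values():
        deps[rhs].update(lhs)

    # Serialise: all of a fact's producers come earlier in the sequence.
    # (graphlib raises CycleError on cyclic dependencies; see the snarl
    # treatment described subsequently.)
    fact_order = list(TopologicalSorter(deps).static_order())
    position = {f: i for i, f in enumerate(fact_order)}

    # Serialised rulebase: rules sorted by the fact they produce.
    serialised = sorted(rules.items(), key=lambda kv: position[kv[1][1]])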
A flow chart diagram illustrating the linear inferencing process is seen in FIG 9.
A number of enhancements to the basic Linear inferencing approach will now be described.
Incremental Inferencing

The Linear inferencing algorithm can be easily extended to provide a mechanism for implementing incremental inferencing. The basic approach is to evaluate only those rules that are relevant, when inferencing, by tracking fact dependencies as follows:

1. Maintain a flag for each rule in the rulebase that specifies whether the rule is relevant or irrelevant. Initially, all rules are marked irrelevant.
2. Maintain a lookup table that records, for each fact in the rulebase, which rules are reliant on that fact for evaluation, ie. which rules have that fact appearing on the left-hand side. For our standard example, the lookup table for incremental inferencing is as follows:

f1: r1, r4, r6
f2: r1, r6
f3: r5
f4: r3
f5: r1
f6: r4
f7: r2
f8: r5
f9: (none)
f10: r2

3. Whenever a fact value is set (between or during inferences), the lookup table is used to mark each dependent rule as relevant. For our standard example, the initial state of the relevancy flags immediately after working memory has been initialised, is as follows:

r1  r2  r3  r4  r5  r6
r   i   i   r   i   r

(i = irrelevant, r = relevant)

4. Inferencing proceeds as described earlier except that any irrelevant rules are skipped over. When a fact is set during an inference, any dependent rules of that fact are also marked as relevant. It is noted that these newly dependent rules will always appear to the right of the current rule in the serialised rulebase, maintaining the linearity of the inferencing process.
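A sketch of this incremental extension, building on the rules, serialised, reduce and and3 names of the earlier sketches, might look as follows; the flag-clearing step is one plausible bookkeeping choice, not something the specification prescribes.

    # Fact -> dependent rules (rules with that fact on their left-hand side).
    dependents = {}
    for name, (lhs, _) in rules.items():
        for f in lhs:
            dependents.setdefault(f, []).append(name)

    relevant = {name: False for name in rules}   # initially all irrelevant

    def set_fact(memory, fact, value):
        memory[fact] = value
        for r in dependents.get(fact, []):
            relevant[r] = True           # mark dependent rules relevant

    def infer_incremental(memory):
        for name, (lhs, rhs) in serialised:
            if not relevant[name]:
                continue                 # skip irrelevant rules
            relevant[name] = False       # one plausible bookkeeping choice
            value = reduce(and3, (memory[f] for f in lhs))
            if value is not None and memory[rhs] != value:
                # newly marked rules always lie to the right of this one,
                # preserving the single left-to-right scan
                set_fact(memory, rhs, value)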
Cyclic Dependencies

Cyclic dependencies generate loops in a rulebase because of rules such as:

1. If the person is male then they are not female
2. If the person is female then they are not male

Such rules, although prima facie superfluous, are often required in an expert system.
Extending the above exemplified rule format slightly, such rules can be represented as follows:

r7: f10 → f11 = false
r8: f11 → f10 = false,

where f10 represents "the person is male" and f11 represents "the person is female".
Such cyclic dependencies thwart the establishment of a perfectly serialised fact dependency tree and hence, of a perfectly serialised rulebase. This is because, in terms of the above example, facts cannot be ordered so that all the arcs point to the right, as seen in FIG 10. These cyclic dependency loops can be dealt with as follows:

1. For each loop in the fact dependency tree, identify the smallest set of facts that contains such a loop. These sets of facts are herein termed snarls.
2. Treat each snarl as an indivisible node, when serialising the fact dependency tree, ignoring the individual facts and any dependencies within the snarl itself. The internal order of facts within snarls also no longer matters. An example of this is seen in FIG 11 which illustrates a serialised fact dependency tree with a snarl. For the tree in FIG 11, the {f4, f5, f10} snarl is dependent on f1 and f2, with only f9 dependent on the snarl itself.
3. The serialised rulebase is then created from the serialised fact dependency tree as normal. The start and end of the group of rules that represent each snarl are also recorded for future reference during inferencing as seen in FIG 12 which illustrates a serialised rulebase with a snarl.
4. When inferencing, the normal process is followed until a snarl is encountered. At this point, what may be termed a "brute-force" approach to inferencing is used, wherein the rules in a snarl are repeatedly evaluated in cycles, until a steady state in working memory is reached (or until some other terminating mechanism kicks in). In practice, the number of rules in each snarl is very small, making the brute force approach ideal.
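A sketch of this brute-force treatment, in the same style as the earlier sketches (the cycle cap is an assumption standing in for the "other terminating mechanism" mentioned above):

    def evaluate_snarl(memory, snarl_rules, max_cycles=100):
        # cycle over the snarl's rules until working memory stops changing
        for _ in range(max_cycles):
            changed = False
            for lhs, rhs in snarl_rules:
                value = reduce(and3, (memory[f] for f in lhs))
                if value is not None and memory[rhs] != value:
                    memory[rhs] = value
                    changed = True
            if not changed:
                break        # steady state reached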
Multiple Object Instances

Modern expert systems need to be able to reason about real-world objects having rich interrelationships, such as illustrated in FIG 13 which illustrates related objects and their attributes.
In an extension of the present invention, object attributes such as "is nice" are regarded as actual facts (f5 in the example of FIG 13). This means that facts can take on multiple values, one for each associated object instance (so f5 is true for Fred but false for Geri in the example of FIG 13).
The system provides for the development of rules that infer facts across object instances simultaneously. For example, rules such as:

r1: all a child's friends are nice → the child is happy
r2: any of a child's friends are not nice → the child is not happy,

would produce the value of true for Julie's f3 and false for Barney's f3.
Linear inferencing deals with these multiple object instances by extending the way in which working memory is structured. The original flat table of values is only used for storing facts that are not related to any objects. Facts representing object attributes are stored in special object instance tables such as illustrated in FIG 14.
The Linear inferencing process itself is largely unaffected by these structural changes to working memory. The existence of multiple values for facts does not change the order in which rules need to be inferenced so the process remains linear, as before.
However, the inferencing process is modified when a rule is encountered that proves a fact that appears in multiple object instances, in which case that rule is evaluated once for each instance. In other words, the rule evaluation order is preserved but some rules are evaluated more than once.
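The following sketch illustrates per-instance evaluation using hypothetical instance tables loosely modelled on the description of FIG 13; the names Julie, Barney, Fred and Geri come from the examples above, while the table layout and helper names are assumptions.

    # Hypothetical instance tables: "is nice" is a per-friend fact,
    # "happy" a per-child fact (None = unknown).
    friends_of = {"Julie": {"Fred": True},
                  "Barney": {"Fred": True, "Geri": False}}
    children = {name: {"happy": None} for name in friends_of}

    def prove_happy(child):
        nice = friends_of[child].values()
        if all(v is True for v in nice):
            return True      # r1: all friends nice -> happy
        if any(v is False for v in nice):
            return False     # r2: any friend not nice -> not happy
        return None          # unknown otherwise

    # The rule's position in the serialised rulebase is unchanged; it is
    # simply evaluated once per object instance when its turn comes.
    for child in children:
        children[child]["happy"] = prove_happy(child)
    # Julie -> True, Barney -> False, matching the example above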
Batch Processing

It is normal for expert systems to support some form of batch processing, ie. the unattended recalculation or reconsideration of a large number of saved cases due to a change in rules.
Batch processing basically involves the following steps:

1. Load the next case into working memory
2. Reinference using the new rules
3. Save the case data out of working memory
4. Repeat until there are no more cases to process

As discussed earlier, the Linear inferencing algorithm is well suited to providing high performance in this type of processing scenario because it is very good at dealing with multiple changes to working memory.
However, a simple extension can provide even better levels of performance by taking advantage of the fact that all inferences involve a single left-to-right sweep of the rulebase. The basic idea is to exploit the highly linear nature of the inferencing process by processing multiple working memories simultaneously for each sweep of the rulebase.
Rulebases can often be very large in size (megabytes) and the process of sweeping the memory occupied by a rulebase is relatively expensive.
Spreading the cost of that operation over multiple sessions provides a significant performance boost, especially for large rulebases.
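A sketch of this extension, reusing the serialised rulebase and helpers of the earlier sketches: the outer loop walks the rulebase once, and the inner loop applies each rule to every loaded working memory.

    def infer_batch(memories):
        # a single left-to-right sweep of the (possibly very large) rulebase
        # serves every loaded case, amortising the cost of touching the
        # rulebase's memory across all sessions
        for _name, (lhs, rhs) in serialised:
            for memory in memories:
                value = reduce(and3, (memory[f] for f in lhs))
                if value is not None:
                    memory[rhs] = value
        return memories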
Minimising Conditional Branches

Modern processors use onboard caches to achieve high levels of performance, which the Linear inferencing algorithm uses to good effect to maximise performance.
Another important strategy employed by modern processors to boost performance is deep instruction pipelining, which involves overlapping the execution of instructions which effectively keeps every part of a processor as busy as possible.
One of the key hazards to instruction pipelining is conditional branching which can cause the pipeline to stall when the processor fails to predict the next instruction to execute. To facilitate maximum processor performance, the frequency of unpredictable conditional branches is minimised.
The main area where the avoidance of conditional branching can pay large performance dividends is rule evaluation. To this end, implementation of Linear inferencing has largely reduced the process of evaluating rules to a sequence of logical operations and table lookups. An example of this preferred approach applied to the specific problem of performing a logical AND on a set of Boolean fact values is as follows:

1. Represent each fact value as a bit mask:

Value    Binary mask (decimal equivalent)
true     100 (4)
false    010 (2)
unknown  001 (1)

2. Combine the fact values using a bitwise OR operation and use the result to look up the answer:

Binary index (decimal equivalent)  Result
000 (0)  true
001 (1)  unknown
010 (2)  false
011 (3)  false
100 (4)  true
101 (5)  unknown
110 (6)  false
111 (7)  false

It should be noted that this type of approach can also be used with the other logical operators to help remove conditional branching from rule evaluation.
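In Python, such a branch-free evaluation might be sketched as follows (the value masks and table entries follow the tables above; the names are illustrative assumptions):

    TRUE, FALSE, UNKNOWN = 0b100, 0b010, 0b001   # one-hot value masks

    # Result table indexed by the bitwise OR of two masks; entries follow
    # the table above (index 0 is unreachable in practice).
    AND_TABLE = [TRUE, UNKNOWN, FALSE, FALSE, TRUE, UNKNOWN, FALSE, FALSE]

    def and_mask(a, b):
        return AND_TABLE[a | b]   # no conditional branch in the combine

    assert and_mask(TRUE, TRUE) == TRUE          # 100|100 -> 4 -> true
    assert and_mask(TRUE, UNKNOWN) == UNKNOWN    # 100|001 -> 5 -> unknown
    assert and_mask(FALSE, UNKNOWN) == FALSE     # 010|001 -> 3 -> false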
It will be appreciated that the forward-chaining inferencing system and method of the present invention, which utilises linear inferencing, has a number of advantages over known forward-chaining inferencing methods. The Rete algorithm has a number of shortcomings, including that it only deals with small changes, that it carries a high memory overhead and that it lacks support for modern processor architectures.
With regard to the first of these shortcomings, the Rete algorithm was designed to perform the absolute minimum amount of work for each discrete change in input fact value. This is a valid approach, given its key assumption that working memory changes slowly, but this assumption is out of date: inferencing in modern expert systems usually occurs after many changes in input fact values:

- Stateless, interactive systems (for supporting high levels of scalability) rely on the efficient reconstruction of an inferred state from a large set of input facts (routinely 100s per transaction)
- Even for interactive systems that do not implement statelessness, the transactional nature of modern systems, combined with the capacity and desire to collect multiple items of data from a user at a time, means that inferencing rarely occurs after a single fact changes value
- Batch processing or engine-based systems are presented with a lump of input facts (routinely 100s per transaction) from which to infer decisions
- Data sourced from corporate databases is presented to an expert system as a lump of input facts (routinely 100s), when initialising a session

The Rete algorithm is not suited to efficiently coping with the routine change of 100s of input facts and yet this is what is required by modern systems.
As to high memory overhead, the Rete algorithm builds complex data structures which mirror the complexity of the rulebase. These data structures can, therefore, get quite large for big and complex rulebases. Essentially the algorithm sacrifices memory efficiency to ensure that the minimum number of operations is conducted during an inference. This is a major disadvantage for high-performance, high-load enterprise applications where memory is at a premium because each active session requires its own Rete network. Finally, the Rete algorithm does not best exploit the large onboard caches of modern processor architectures which provide the potential for massive performance breakthroughs.
The Linear inferencing approach of the present invention improves upon the above shortcomings and deals with multiple, simultaneous updates to input fact values. This means that it can deal with the various processing scenarios listed above, which are standard features of modern enterprise-level expert systems today. Furthermore, because the working memory largely consists of simple tables of values, the working memory required for the present invention has been fully minimised. Finally, modern processors achieve high levels of performance by employing large onboard caches with high-speed memory. The success of these caches relies on memory access locality, ie. the fact that successive memory accesses are located close together. The Linear inferencing algorithm allows the efficient organisation of data structures to achieve very high levels of memory access locality, thus maximising the performance of these caches.
It will of course be realised that whilst the above has been given by way of an illustrative example of this invention, all such and other modifications and variations hereto, as would be apparent to persons skilled in the art, are deemed to fall within the broad scope and ambit of this invention as is herein set forth.
SOFTLAW CORPORATION LIMITED by PIZZEYS PATENT AND TRADE MARK ATTORNEYS

Claims (13)

1. A method of forward-chaining inferencing in a rulebased system having a rulebase and a set of input facts, wherein new facts are inferred in accordance with variations to the rules or the input facts, the method including:- developing a computerised database containing a fact dependency tree for indicating which facts are used to produce other facts in accordance with respective rules in the rulebase; sequentially ordering the facts in the fact dependency tree to produce a serialised fact dependency tree wherein for any given fact in the sequence, all facts which are used to produce that fact are facts which are earlier in the sequence than is the given fact, and ordering the rules in the rulebase in accordance with the facts produced thereby to produce a serialised rulebase wherein the rules are in the same sequential order as the facts in the serialised fact dependency tree.
2. A method of forward-chaining inferencing as claimed in claim 1, and including:- setting in working memory all known input facts; sequentially evaluating each of the ordered rules in the rulebase, and updating the working memory in accordance with any changes to the facts in accordance with the evaluating of a rule.
3. A method of forward-chaining inferencing as claimed in claim 1, wherein development of the computerised database containing a fact dependency tree includes:- generating a graph in which each of the facts relevant to the set of rules in the rulebase is identified without any indication of the sequential relationship of the facts, and for each rule in the rulebase, providing an arc between the facts associated with that rule, the linkage being directed from the fact(s) which produce other fact(s) toward the other fact(s).
4. A method of forward-chaining inferencing as claimed in claim 2, wherein only those rules which are relevant in a given situation are evaluated whereby the new facts are inferred incrementally.
5. A method of forward-chaining inferencing as claimed in claim 4, the method including:- maintaining a lookup table for recording for each fact in the rulebase which rules are reliant thereon for evaluation, and maintaining a flag for each rule in the rulebase, the flag indicating for any given fact or setting of a fact value between or during inferences, whether the rule is relevant or irrelevant.
6. A method of forward-chaining inferencing as claimed in claim 2, and including:- identifying loops in the fact dependency tree, the loops being generated by cyclically dependant rules; for each said loop, identifying a snarl containing the loop; for each said snarl, ignoring the facts in the snarl and any fact dependencies within the snarl and treating the snarl as an indivisible node, when sequentially ordering the facts in the fact dependency tree, and marking the start and end of each snarl in the serialised rulebase.
7. A method of forward-chaining inferencing as claimed in claim 6, and including:- repeatedly evaluating the rules in each snarl in cycles, and stopping evaluating the rules in a snarl when a steady state is reached.
8. A method of forward-chaining inferencing as claimed in claim 2:- wherein facts representing attributes of object instances are stored in working memory object instance tables for storing multiple sets of facts, and wherein the rules proving these facts are evaluated once for each object instance, the sequential evaluation order of the rules being preserved.
9. A method of forward-chaining inferencing as claimed in claim 2, wherein the steps defined in claim 2 are conducted simultaneously across multiple working memories to facilitate batch processing for enhancing the average level of system performance.
10. A system for forward-chaining inferencing in a rulebased system having a rulebase and a set of input facts, wherein new facts are inferred in accordance with variations to the rules or the input facts, the system including:- a computerised database containing a fact dependency tree for indicating which facts are used to produce other facts in accordance with respective rules in the rulebase, and program means for sequentially ordering the facts in the fact dependency tree to produce a serialised fact dependency tree wherein for any given fact in the sequence, all facts which are used to produce that fact are facts which are earlier in the sequence than is the given fact, and for ordering the rules in the rulebase in accordance with the facts produced thereby to produce a serialised rulebase wherein the rules are in the same sequential order as the facts they produce in the serialised fact dependency tree.
11. A system for forward-chaining inferencing as claimed in claim 10, the system further including program means for:- setting in working memory all known input facts; sequentially evaluating each of the ordered rules in the rulebase, and updating the working memory in accordance with any changes to the facts in accordance with the evaluating of a rule.
12. A system for forward-chaining inferencing in a rulebased system substantially as described with reference to the embodiments illustrated in the drawings.
13. A method of forward-chaining inferencing in a rulebased system substantially as described with reference to the embodiments illustrated in the drawings.
AU2003200039A 2002-11-14 2003-01-07 Forward-chaining inferencing Expired AU2003200039B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2003200039A AU2003200039B1 (en) 2002-11-14 2003-01-07 Forward-chaining inferencing

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
AU2002952648 2002-11-14
AU2002952648A AU2002952648A0 (en) 2002-11-14 2002-11-14 Forward-chaining inferencing
AU2003200039A AU2003200039B1 (en) 2002-11-14 2003-01-07 Forward-chaining inferencing

Publications (1)

Publication Number Publication Date
AU2003200039B1 true AU2003200039B1 (en) 2003-05-01

Family

ID=39099695

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2003200039A Expired AU2003200039B1 (en) 2002-11-14 2003-01-07 Forward-chaining inferencing

Country Status (1)

Country Link
AU (1) AU2003200039B1 (en)


Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)
TC Change of applicant's name (sec. 104)

Owner name: RULEBURST LIMITED

Free format text: FORMER NAME: SOFTLAW CORPORATION LIMITED

PC Assignment registered

Owner name: ORACLE INTERNATIONAL CORPORATION

Free format text: FORMER OWNER WAS: RULEBURST LIMITED

MK14 Patent ceased section 143(a) (annual fees not paid) or expired