EP0298078A4 - Development tool for expert knowledge system - Google Patents

Development tool for expert knowledge system.

Info

Publication number
EP0298078A4
EP0298078A4 (application EP87901229A / EP19870901229)
Authority
EP
European Patent Office
Prior art keywords
variable
value
rule
rules
variables
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP19870901229
Other languages
English (en)
French (fr)
Other versions
EP0298078A1 (de)
Inventor
Daniel Wolf
Current Assignee
U.S. ADVANCED TECHNOLGIES NV
Original Assignee
ULTIMATE MEDIA ENTERPRISES Inc
ULTIMATE MEDIA ENTPR Inc
Priority date
Filing date
Publication date
Application filed by ULTIMATE MEDIA ENTERPRISES Inc
Publication of EP0298078A1
Publication of EP0298078A4
Legal status: Withdrawn

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00: Computing arrangements using knowledge-based models
    • G06N5/04: Inference or reasoning models

Definitions

  • the present invention relates to an expert knowledge system and in particular to a development tool for creating a rule-based expert knowledge system which makes inferences using the rules.
  • the first and probably most famous is known as MYCIN, which was developed at Stanford University in the 1970s.
  • This expert system is used as an aid to physicians in diagnosis of blood infections.
  • This system was built with the use of a knowledge base and an inference engine.
  • the knowledge base is a memory structure or store which consists of rules, facts, and heuristics about the expert subject matter.
  • the inference engine establishes the procedures by which inferencing will occur based on the information contained in the knowledge base.
  • the inference engine in MYCIN uses backward chaining through the rules starting with the conclusions or goals (disease diagnoses) of the rules and working backwards through the antecedents or "if" clauses of the rules.
  • Other techniques, such as forward chaining and modus ponens, are possible.
  • the MYCIN system and other existing rule-based expert systems tend to be large and require large well-supported computer facilities in order to handle the memory and processing requirements of the inefficient languages which are used.
  • the structure for representing the knowledge base is generally quite large and complicated to program and reprogram in order to extend the capabilities of the expert system.
  • a problem which faces every expert system is that of efficiently handling the great volume of information which constitutes the knowledge of an expert which is embodied in the knowledge base and the difficulty in assigning values to the rules in the knowledge base.
  • Rule-based expert systems in general can value a variable, as for example a rule conclusion, as being true or false. Some are able to assign a certainty factor to whether a variable is true or false. However, as can be understood, oftentimes it cannot be determined whether a variable is true or false.
  • the inference engine must be able to efficiently analyze the information so that, with the information given in the knowledge base, conclusions can be reached which may not have been initially anticipated.
  • Prior expert systems have not always been able to analyze the knowledge base in order to optimally use the information contained therein.
  • because the knowledge base can be quite extensive, even in a small system, there is a need to prune the knowledge base in an efficient manner so that only the appropriate portions of the knowledge base are used to address the required analysis.
  • the present invention is directed to overcoming the disadvantages of prior systems and providing a compact, efficient expert knowledge system suitable for real-time use in, for example, a production environment.
  • the present invention includes an apparatus and method for providing an expert knowledge system development tool comprising a computer with a structure for storing a knowledge base and an inference engine structure for inferring from the knowledge base.
  • the knowledge base structure can store a plurality of positive and negative rules where each rule includes at least one antecedent variable and one conclusion variable.
  • the knowledge base structure can establish and store a hypothesis variable list comprised of some of the conclusion variables.
  • This tool includes a structure for selecting rules that have conclusions that match each hypothesis and the negative of each hypothesis.
  • This tool further includes a structure for analyzing a bundle of selected rules as a whole in order to determine a variable value from common conclusions for each hypothesis variable or sub-hypothesis variable and also analyzing the antecedents of each rule in order to determine a variable value for each rule.
  • the present rule-based expert knowledge system development tool provides a compact knowledge base structure and ease of analysis of the knowledge base in order to efficiently, and on a real-time basis, address the questions or hypotheses posed by a real-world environment such as required for the safe operation of, for example, a manufacturing line, or the operation of a piece of equipment. Accordingly, the present invention provides a compact knowledge base structure which can allow for the analysis of hundreds or thousands of rules in a second.
  • the present tool allows for a rule or variable to have the third and fourth values or states of Unknown and Untested. In comparison to the classical two-valued systems using only true and false, these four states allow for great flexibility in the analysis accomplished.
  • the unknown value is a value which designates that judgment or evaluation is to be deferred until later in the inferencing process. At a later time, other results of the inferencing process or information from the outside environment can be used to assign another value in place of the unknown value.
  • the knowledge base of the present system allows for negative rules which can lead to conclusions which are the opposite of the positive rules.
  • the use of negative rules allows the development tool to have a second very powerful logical pathway for evaluating a given situation. Often this second logical pathway in addition to the first logical pathway comprised of positive rules is able to inference to a conclusion that would not be possible with the first logical pathway.
  • the tool provides for sorting the rules into a bundle of positive rules and a bundle of negative rules in order to efficiently inference first on the bundle of positive rules and then on the bundle of negative rules. With the use of negative rules inferencing without sorting would require that both negative and positive rules be inferenced at the same time, an inefficient task at best.
  • the present invention allows for an efficient use of both of the backward chaining and forward chaining inferencing techniques specially designed for the four valued logic structures of the invention in order to quickly determine what is true, false, unknown, and untested, and to quickly prune down the size of the rule base.
  • the present invention through a forward chaining technique, is able to efficiently prune a logic tree described by a bundle of rules such that the irrelevant sections of the tree will not be addressed.
  • the backward chaining inferencing technique also uses a pruning technique, though only on a selected bundle of rules, not on the entire knowledge base as is accomplished with the forward chaining inferencing technique.
  • the forward chaining inferencing technique assigns values to rules based on the antecedent variables of the rules, irrespective of the relatedness or sequence of the rules.
  • the results of the forward chaining inferencing technique, which assigns values to the rules can be used by the backward chaining inferencing technique that assigns values to the hypotheses.
  • the present invention provides for alternatingly forward chaining through the antecedents of each rule in order to determine the value of the conclusion, and for backward chaining among rule conclusions or goals which match each required hypothesis on the hypothesis list in order to greatly enhance the inferencing process.
  • the present invention additionally provides for an automatic hypothesis list generation structure for locating all logical variables (1) which are only rule conclusions and do not appear additionally as antecedents for other rules, (2) which are both conclusions and antecedents, and (3) which are only antecedents and do not appear as conclusions of other rules.
  • the present invention provides for a plurality of knowledge base structures and a structure for sequencing automatically between the knowledge base structures.
  • This sequencing structure provides that the determined hypothesis variable values for one structure are used as the reset or input values in the next structure. Accordingly, a problem can be broken down into separate, compact and efficient knowledge base structures or modules which are convenient and easy to program and modify, and with each structure building on the results obtained from the previous structure.
  • Figure 1 is a block diagram of an expert knowledge system development tool in accordance with the invention.
  • Figures 2A, 2B, 2C and 2D represent a schematical block diagram and flow chart depicting the inferencing methodology and structure of the present invention.
  • Figure 3 represents a block diagram and schematic flow chart depicting the methodology and structure of an automatic hypothesis list generation aspect of the present invention.
  • Figure 4 depicts a block diagram and schematic flow chart depicting the methodology and structure of the automatic sequencing aspect of the invention.
  • Figure 5 depicts a block diagram and schematic flow chart of the methodology and structure for single rule evaluation from eight antecedent variables.
  • Figure 6 represents a block diagram and schematic flow chart depicting the methodology and structure for evaluation between eight rules, four of which represent positive rules and four of which represent negative rules, in accordance with an embodiment of the present invention.
  • the expert knowledge system development tool 50 of the invention for use in developing an expert knowledge system, includes a host computer 52, a knowledge base store 54 for the knowledge base structure, and an inference engine 56.
  • the knowledge base store 54 and the inference engine 56 can be presented in a stand-alone unit 58 which includes a central processing unit (CPU) 60 and/or can be incorporated directly into the host computer 52.
  • Communicating with both the host computer 52 and the unit 58 is a controller 62 which communicates both with the unit 58 containing the knowledge base store 54 and the inference engine 56, and the host computer 52.
  • the controller 62 communicates with transducers 64 to sample an environment in order to gather values for the knowledge base store 54. Additionally, the controller 62, which has its own on-board CPU, can initiate action based on the expert inferencing which has occurred. It is to be understood that values for the various rules can be sampled either through the transducer 64 or can be input directly by a user through the host computer 52. As will be explained in greater detail, the transducer 64 can be, for example, used to monitor the various states and values which are important for the proper operation of, for example, a turbine used to generate power, equipment used in an assembly line, the entire assembly line itself, or the proper operation of a dryer to insure that a fire does not occur.
  • the knowledge base 54 contains all the information, rules, etc. that the expert knowledge system (as for example, a system for the above turbine) is based upon.
  • the inference engine 56 logically interprets the information contained in the knowledge base 54.
  • the knowledge base 54 can include a plurality of knowledge base modules collectively referred to by the number 55 and individually referred to by the numbers 66, 68, 70, 72, etc.
  • the knowledge base module 66 in a preferred embodiment contains a dictionary of 127 logical variable names, and a rule list of 225 "if ... then" rules.
  • a rule is comprised of a list structure which in a preferred embodiment can have up to seven antecedents, or "if" variables, or slots, and one conclusion, or "then" variable. If any antecedent slot is empty, it is assigned a variable 0, a blank. In the following example of a rule, found in Chart 1, only six antecedents were used, so antecedent #6, which is permanently empty, contains variable 0.
PROBE ACCELERATION ANTECEDENT #1 (Variable 12)
  • Chart 1 contains a rule which can be used to determine by inferencing to the conclusion (Variable 1) whether or not a turbine should be shut down. This determination is made by evaluating the values of the antecedents which may or may not already have preassigned values.
  • the descriptions of the conclusion and the antecedents are shown in Column 2. These descriptions are known as the logical variable names. Column 4 would contain the value for each variable if that value has been preassigned. These values are inserted as the inferencing process assigns them.
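The seven-slot rule list structure described above can be sketched in Python. The blank variable 0 and the one-conclusion layout follow the text; other than antecedent #1 (Variable 12) and the conclusion (Variable 1), the specific variable numbers below are hypothetical stand-ins for Chart 1's entries.

```python
from dataclasses import dataclass
from typing import List

BLANK = 0  # variable 0 marks an empty antecedent slot

@dataclass
class Rule:
    antecedents: List[int]  # up to seven "if" variable numbers
    conclusion: int         # the single "then" variable number

    def __post_init__(self):
        # pad unused antecedent slots with the blank variable 0, as in Chart 1
        self.antecedents = (self.antecedents + [BLANK] * 7)[:7]

# a Chart 1-style rule: six antecedents used, the seventh slot left blank;
# only Variable 12 (antecedent #1) and conclusion Variable 1 come from the
# text -- the other antecedent numbers are hypothetical
shutdown_rule = Rule(antecedents=[12, 13, 14, 15, 16, 17], conclusion=1)
```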
  • the inference engine 56 operates the knowledge base 54 (whether fully or partially determined with values).
  • Each of the variables (which can be either conclusions and/or antecedents) in the dictionary of 127 variables has a negative image or variable, a logical negative.
  • the negative always has the number of the positive, plus 128.
  • the negative of variable 5 is variable 133.
  • variable 128 (the negative of the blank variable 0) is also empty or blank.
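The negative-image numbering can be captured in a one-line mapping. The helper name `negative` is hypothetical, but the plus-128 arithmetic is exactly as described above.

```python
DICTIONARY_SIZE = 128  # variables 0..127, with variable 0 the blank

def negative(var: int) -> int:
    """Return the logical negative image of a dictionary variable.

    The negative always carries the positive's number plus 128, so
    applying the mapping twice (mod 256) returns the original variable;
    variable 128, the negative of the blank variable 0, is also blank.
    """
    return (var + 128) % 256
```

For example, the negative of variable 5 is variable 133, and the negative of variable 133 is variable 5 again.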
  • Each rule is analyzed or evaluated by evaluating the antecedents in numerically decreasing order, i.e., antecedent 6, 5 ... 0.
  • the present invention provides for four-valued logical evaluation of rule conclusions and antecedents.
  • the variable values of both rule conclusions and antecedents can be assigned logic values of True, False, or Unknown (or UNK, Don't Know, DK). Further, the value of Untested (UNT) is assigned in the absence of any of the above three assigned values. Values are assigned through the inferencing mechanism or through use of information obtained from the outside environment or user. This outside information may be obtained from human or transducer (e.g., electromechanical) input. Whenever a value is assigned to a variable, the opposite value is assigned to the negative of that variable.
  • Rule conclusion values are assigned by inferencing engine 56 according to the values of antecedents, according to the following sequence of logic determination as shown in Chart 2.
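Chart 2 itself is not reproduced in this text, so the following sketch is an assumption about its sequence of logic determination, kept consistent with the forward-chaining description later in the document: one False antecedent falsifies the conjunction, all-True proves it, an Untested antecedent defers the test, and any remaining case is Unknown.

```python
T, F, UNK, UNT = "True", "False", "Unknown", "Untested"

def rule_value(antecedent_values):
    """Assign a value to a rule from its antecedent values (cf. Chart 2).

    The ordering is an assumption: any False antecedent falsifies the
    rule, all-True proves it, an Untested antecedent defers the test,
    and any remaining case yields Unknown.
    """
    if any(v == F for v in antecedent_values):
        return F
    if all(v == T for v in antecedent_values):
        return T
    if any(v == UNT for v in antecedent_values):
        return UNT
    return UNK
```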
  • the knowledge base stored in store 54 can be depicted (Chart 4) to show input/output relation- ships between the variables which are antecedents and/or conclusions.
  • Chart 4 shows four rule numbers at the top, and seven dictionary variable numbers arranged vertically at the far left. For each variable number there is one horizontal line extending to the right completely across the chart. A total of seven horizontal lines appear in Chart 4. Some of these lines represent inputs, some represent outputs and others represent both.
  • INPUTS ARE Those variables which occur only as antecedents in rules.
  • BOTH ARE Variables which occur as both antecedents and conclusions.
  • Each of the four rule numbers shown at the top of Chart 4 is also represented by seven vertical lines (slashes) with a circle to the right. This rule symbol represents the seven rule entry slots.
  • each rule entry slot is a vertical line (comprised of slashes) extending part or all the way down Chart 4. If the slot is filled by a dictionary variable, the vertical line will extend down to intersect with the horizontal line corresponding to that variable.
  • a vertical line extending beyond the bottom of the chart means that it connects to some other dictionary variable that is visible on a chart which includes all 127 dictionary variables.
  • Chart 4 is just a window to the full knowledge base diagram. In a full diagram, scrolling left or right presents additional rules and connections. Scrolling up and down brings into view more dictionary variable names and their connections to the rules.
  • the dictionary variables and rules are placed in the most logical position for each knowledge base.
  • the dictionary of variable names is broken up into three parts: input, output, and both. Within each of these groups, the variables are put into numerical order. All of the outputs appear at the top of the diagram, both inputs and outputs are in the middle and input variables are at the bottom of the diagram.
  • rule three has variable 30 as its conclusion.
  • Rules 76, 75, 22 come next because these rules all have variable 29 as common conclusion.
  • the inferencing process carried on by the inference engine 56 can be represented as seen in Chart 5, which is a four-window display.
  • the windows each present a portion of information about what is occurring in the inference engine 56.
(sample display text: SPECTRUM 3 X SPEED ? PROBLEM AT SURGE)
  • APPLYING RULE Shows a rule which has a conclusion matching the active goal hypothesis.
  • QUESTION Shows an untested variable from the antecedents of the current rule.
  • Window 1 (upper left) displays whatever question (variable) is currently being asked by the inference engine 56. This question appears as an untested value and is a variable from the active rule being displayed (Window 3). A value of True, False, Unknown, or Untested is supplied by the environment or the user.
  • Window 2 (upper right) displays a goal path.
  • This is a list of variables that represents a chain of conclusions leading to one of the questions (hypotheses). This chain typically has two to five levels, but can have as many as 128 levels.
  • the inference engine 56 begins evaluating a hypothesis, it searches for rules that might lead to a matching conclusion. Either conclusions or antecedents of these rules become sub-hypotheses and are grouped together in a stack (i.e., the goal path). The sub-hypothesis being evaluated is treated like a listed hypothesis, with the inference engine 56 searching for rules that have conclusions matching the sub-hypothesis.
  • Window 3 (lower right) displays a list of conclusions. These are the variables that have been evaluated and assigned logical values according to the rules and to answers given to questions. When the inferencing process begins, this window is blank.
  • Window 4 depicts the rule currently being used to help evaluate one of the sub-hypotheses of the goal path.
  • the conclusion in this case "ADVISE FOUNDATION RESONANCE"
  • the goal path items are evaluated beginning at the bottom.
  • some of the lower items depicted in goal path window 2 will show logical value symbols.
  • the active item in the goal path window, the one which matches the conclusions, is always the lowest one without a value.
In Figures 2A, 2B, 2C and 2D, the structure of a preferred embodiment of the inference engine 56 is depicted. While it will be shown that the inferencing process can start in one of several locations, for simplicity the inference process is started in Figure 2A with block 100.
  • block 100 the next hypothesis from the hypothesis list is obtained for evaluation.
  • blocks 102, 104 and 106 are used to determine if the inferencing engine has exceeded the limit of the number of levels in the tree (maximum recursion depth) which can never exceed the maximum number of rules which in this embodiment is 256 rules. If this is the case, there is an exit to block 108 and an indication that there is a circular reasoning error in the inferencing caused by some error introduced into the knowledge base.
  • the inference engine 56 obtains all the rules which have conclusions matching the current hypothesis, or the negative of the current hypothesis, that is currently being determined. It is to be understood that the current hypothesis may be a sub-hypothesis located down in the tree. If this number is 0 (block 111), then the system exits to block 112, which requires that either the user or the environment input a value for the current hypothesis or variable as none is stored in the knowledge base 54.
  • rules are ordered into two bundles, a first bundle which includes all the positive rules and a second bundle which includes all the negative rules (block 114).
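Block 114's sorting step can be sketched as follows, reusing the plus-128 numbering of negative variables described earlier; the dictionary-based rule representation is a hypothetical simplification.

```python
def sort_bundles(matching_rules, hypothesis):
    """Sketch of block 114: split the rules whose conclusions match the
    current hypothesis or its negative into a positive bundle and a
    negative bundle.  A negative conclusion carries the hypothesis
    number plus 128 (mod 256), per the dictionary numbering.
    """
    positive = [r for r in matching_rules if r["conclusion"] == hypothesis]
    negative = [r for r in matching_rules
                if r["conclusion"] == (hypothesis + 128) % 256]
    return positive, negative
```

Inferencing can then run first on the positive bundle and then on the negative bundle, rather than interleaving the two.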
  • the backward chaining or analysis between the values of the conclusions for the select rules is then accomplished.
  • inferencing is initiated by the conclusion or goal of the rule.
  • the inference engine attempts to determine if the hypothesis is correct by comparing the hypothesis with values from similar conclusions of the selected rules. If this is not possible, the inferencing backs up to the antecedent clauses of each rule and tries to determine if these are correct. This in turn leads the inference engine to other rules which have conclusions which match the antecedents of the first rule to be analyzed. In this way, the backward chaining process proceeds down an ever broadening inference tree from one level to the next with the antecedents of the level above being the goals of the level below.
  • the backward chaining inferencing is applied to the bundle of rules in the order and sequence provided below.
  • a determination is made as to whether there is one rule of the positive rules which has a True value. If this is the case, as the hypothesis is similar to the conclusion of the bundle of rules, the hypothesis variable is True and is so designated at block 118 and the inferencing exits to Block 152.
  • a determination is made as to whether any one of the negative rules is True (block 120). If this is the case, the negative of the hypothesis, being equivalent to the conclusion of the negative rule, is True, and the hypothesis variable is accordingly designated to be False at block 122 and the inferencing exits to Block 152.
  • the hypothesis variable is given a value of Unknown at block 134. It is noted that a rule which has been tested may have one of three values of True, False or Unknown. If a rule has not been tested, it is valued as Untested.
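The bundle analysis of blocks 116 through 134 can be summarized in one function. The Untested branch is an assumption, drawn by analogy with gate 426 of Figure 6, since the text here only names True, False, and Unknown outcomes.

```python
T, F, UNK, UNT = "True", "False", "Unknown", "Untested"

def hypothesis_value(positive_rule_values, negative_rule_values):
    """Sketch of blocks 116-134: any True positive rule proves the
    hypothesis True; any True negative rule proves the negative, making
    the hypothesis False; a bundle in which every matching rule is
    still Untested leaves the hypothesis Untested (an assumption, by
    analogy with Figure 6); otherwise the hypothesis is Unknown.
    """
    values = positive_rule_values + negative_rule_values
    if any(v == T for v in positive_rule_values):
        return T
    if any(v == T for v in negative_rule_values):
        return F
    if values and all(v == UNT for v in values):
        return UNT
    return UNK
```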
DK = Unknown (Don't Know)
  • Blocks 138-150 accomplish pruning for the selected rules for the bundle under consideration, while Blocks 162-182 as discussed hereinbelow accomplish pruning for all the rules in the knowledge base.
  • the inference engine proceeds to the structure of block 152.
  • the hypothesis just tested is marked True, False, or Unknown and the negative conclusion of the hypothesis is marked with the opposite value.
  • any value for the current hypothesis which has been obtained from an external source is introduced from block 112. Blocks 154, 158 and 160 account for any remaining upward inferencing steps (returning up the tree) remaining in the evaluation of a listed hypothesis.
  • Block 156 determines if there is an internal error which would occur if the tool tried to inference past the top of the tree.
  • it is to be understood that although the forward chaining process described immediately hereinbelow is provided in a sequence following the backward chaining analysis, the forward chaining as demonstrated in blocks 162 through 182 may occur prior to the initial backward chaining process.
  • the advantage in accomplishing the forward chaining first is that it is possible to "prune" the inference tree from the bottom up by adding values to conclusions and antecedents based on the knowledge in the knowledge base. It can be appreciated that this ordering can save processing time during backward chaining.
  • forward chaining is accomplished before any hypothesis is selected for analysis using the backward chaining technique above.
  • Blocks 162 through 182 value rules one at a time with no recursion.
  • the inference engine 56 begins the forward chaining inferencing by setting a flag to 0 at block 162.
  • the antecedents of each rule are then analyzed, in the following order and sequence, with blocks 164 and 182 indexing from rule to rule until all the rules have been analyzed.
  • Block 166 if all antecedents of the current rule being analyzed are True, the rule is True and a value of True is assigned to the conclusion of the rule by block 168 and the inferencing process exits to Block 178.
  • block 178 determines that the conclusion value has already been determined to be True and bypasses the flag-setting block 180 stopping the forward chaining process.
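A minimal sketch of the forward-chaining pass of blocks 162 through 182, assuming the flag of block 162 drives repeated scans until no new conclusion value is assigned; only the all-antecedents-True branch of block 166 is modeled here.

```python
def forward_chain(rules, values):
    """Forward-chaining sketch of blocks 162-182: value rules one at a
    time, with no recursion, re-scanning while the block-180 flag shows
    that a new conclusion value was assigned (the re-scan loop is an
    assumption about how the flag is used).

    `rules` is a list of (antecedent_vars, conclusion_var) pairs and
    `values` maps variable numbers to one of the four logic values.
    """
    T, UNT = "True", "Untested"
    changed = True
    while changed:
        changed = False                       # block 162: flag := 0
        for antecedents, conclusion in rules:
            if values.get(conclusion, UNT) != UNT:
                continue                      # block 178: already valued
            if all(values.get(a, UNT) == T for a in antecedents):
                values[conclusion] = T        # block 168: rule is True
                changed = True                # block 180: set the flag
    return values
```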
  • block 184 determines when any hypotheses of the hypothesis list are still untested. If that is the case, the analysis returns to block 100 to begin on the next hypothesis. If no hypotheses remain untested, inferencing is over.
  • the generator determines all variable names which are only listed as rule conclusions, which variable names and conclusions would be at the top of the tree or the goals, and additionally all variable names which are listed only as antecedents, which antecedents would be at the bottom of the tree.
  • the tree is comprised of a collection of rules, with the rules at the top having conclusion variables that are not also antecedent variables and with the rules at the bottom having antecedent variables which are not conclusion variables. Accordingly, there are fewer rules at the top of the tree and more rules at the bottom of the tree.
  • Block 214 compiles a hypothesis variable list including all variable names which are only conclusions. The hypothesis count is then set by block 216 and the hypothesis list generation is complete at block 218.
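The list compiled by block 214 follows directly from the definition above: hypotheses are the variables that appear as conclusions but never as antecedents. A minimal sketch, with rules given as (antecedent_vars, conclusion_var) pairs:

```python
def generate_hypothesis_list(rules):
    """Sketch of block 214: collect every variable that appears only as
    a rule conclusion and never as an antecedent of any rule -- the
    goals at the top of the inference tree.
    """
    conclusions = {conclusion for _, conclusion in rules}
    antecedents = {a for ants, _ in rules for a in ants}
    return sorted(conclusions - antecedents)
```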
  • the inference engine 56 further includes a knowledge based automatic sequencing structure which sequences between knowledge bases as between knowledge base 66 and knowledge base 68, and between knowledge base 68 and knowledge base 70, and so on.
This sequencing structure is depicted in Figure 4 and identified by the numeral 250. This sequencing can occur after a first inference has been completed on a first knowledge base such as knowledge base 0, KB0, identified by number 66, and is determined by block 252. This being the case, the values of the listed hypotheses (as distinguished from the sub-hypotheses) determined for knowledge base 0, KB0, are transferred to the second knowledge base, knowledge base 1, KB1, in block 68 as values for the correspondingly numbered variables in KB1.
In knowledge base 1, KB1, all the variable values are set to Untested by block 256.
  • the variable values from knowledge base 0 (whether hypothesis variables, conclusion variables or antecedent variables), KB0 and block 66, are then used as the current variable values in knowledge base 1, KB1 and block 68, for the correspondingly numbered variables.
  • Blocks 260 and 262 then start the inferencing process again using the values of knowledge base 0, KB0, and block 66.
  • Forward chaining (Blocks 162-182) is then first accomplished on KB1 in order to assign values to as many rules as possible, based on the values from KB0, greatly increasing the efficiency of inferencing of KB1.
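The sequencing of Figure 4 can be sketched as below. The module layout and the `infer` callable are hypothetical stand-ins for one complete inference run on a knowledge base module.

```python
UNT = "Untested"

def sequence_knowledge_bases(modules, initial_values):
    """Sketch of the Figure 4 sequencing structure: after inferencing
    on one knowledge base module completes, the determined values seed
    the correspondingly numbered variables of the next module, whose
    remaining variables start as Untested.  Each module is a dict with
    a `variables` set and an `infer` callable standing in for one full
    inference run (these names are assumptions, not the patent's).
    """
    values = dict(initial_values)
    for module in modules:
        # reset every variable of the next module to Untested (block 256)
        seeded = {v: UNT for v in module["variables"]}
        # carry determined values forward to correspondingly numbered variables
        seeded.update({v: x for v, x in values.items()
                       if v in seeded and x != UNT})
        values = module["infer"](seeded)  # blocks 260-262: inference anew
    return values
```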
  • Figure 5 depicts alternate structure for evaluating a single rule, and assigning a value thereto, with eight antecedents using a forward chaining technique.
  • the structure of Figure 5 includes eight decoders, 302 through 316, which represent the antecedents of the rule. Decoders 302 to 316 are 2-to-4 decoders with an output line for the Unknown value not shown. True values of the decoders are communicated to And gate 318, False values are communicated to Or gate 320, and Untested values are communicated to Or gate 322. The inverses of the outputs from gates 318, 320 and 322 are communicated to And gate 324. The inverses of the outputs of gates 318, 320 and 324 are communicated to And gate 326.
  • the output from gate 324 is communicated to the multiplexer 328 on line 0 and represents the Untested value.
  • the output from gate 320 is communicated to the multiplexer 328 on line 1 and represents the False value of the conclusion of the rule.
  • the output of gate 318 is communicated to the multiplexer 328 on line 2 and represents the True conclusion of the rule.
  • Finally the output of gate 326 is communicated to the multiplexer 328 and represents the Unknown value for the conclusion of the rule (line 3).
  • Figure 6 represents an alternate embodiment of the structure of the invention for evaluating variables (hypotheses) based on rules and assigning values to said variables.
  • This structure is identified by number 400.
  • Structure 400 of Figure 6 represents the evaluation of one variable or hypothesis using 4 matching positive rules and 4 matching negative rules.
  • a "star" in Figure 6 represents the negative rule bundle pathway.
  • decoders 402 to 408 represent the conclusion values of 4 matching negative rules
  • decoders 410 through 416 represent the conclusion values of 4 matching positive rules.
  • Each decoder is a 2-to-4 decoder with the output lines representing True, False and Untested values, the Unknown value not being shown.
  • the True outputs from the 4 positive rule decoders 410 through 416 are provided to Or gate 418.
  • the False outputs from the 4 positive rule decoders 410 to 416 are provided to And gate 420.
  • the 4 False outputs from the negative rule decoders 402 to 408 are provided to And gate 422 with the 4 True outputs of negative rule decoders 402 to 408 provided to Or gate 424.
  • Finally Or gate 426 obtains the 8 Untested outputs from the 8 decoders 402 through 416.
  • And gate 430 receives the inverted output of gate 420 and the output of gate 422 and provides an output to gate 428.
  • Or gate 428 receives the outputs from And gate 430 and Or gate 418 and provides an output on line 2 to the multiplexer 440 in the True position.
  • And gate 434 receives the output from gate 424 and the inverted output from gate 418. Gate 434 provides an output to gate 432 along with the output from gate 420. The output of gate 432 is provided to line 1 which is provided to multiplexer 440 in the False position.
  • And gate 436 receives inverted outputs from gates 426, 432, and 428, and provides an output along line 0 to multiplexer 440 in the Untested position.
  • And gate 438 receives the inverted values from gates 436, 432, and 428 to provide an output along line 3 to the Unknown position of multiplexer 440.
  • the multiplexer 440 in turn determines the hypothesis conclusion according to the backward chaining analysis provided by structure 400.
  • variable 3 is the conclusion of this example rule, whose "IF" side contains variables 1 and 2.
  • STEP 1 The inference engine finds there is one hypothesis, and that it does not already have a value (values are True, False, Unknown and Untested) for the hypothesis variable (Variable 3). (If Variable 3 is found to have a value already, the following steps are unnecessary, and the inference engine goes directly to Step 10).
  • STEP 2 The inference engine determines there is one rule available which can be used to make a conclusion about the hypothesis, namely the example rule above. Furthermore, the inference engine finds that the rule does not have a value yet.
  • STEP 3 The inference engine tries to give the rule a value, and examines all the variables (1 and 2) on the "IF" side of the rule. The inference engine looks at all of the values and determines if there is enough information to conclude whether or not this rule can be used to make some conclusion about the hypothesis. Because the inference process has just begun, both "IF" variables have no value.
  • STEP 4 The inference engine uses the first of the "IF" variables (antecedents) it finds having no value (always bottom up), in this case variable 1, and makes that variable into a new hypothesis, or sub-hypothesis.
  • In the process of setting up a different variable as a new hypothesis, the inference engine must remember to come back to its previous hypothesis.
  • The inference engine stacks up the current hypothesis for later reference, and takes on whichever variable needs to be evaluated as the current hypothesis. The result of this stacking process is visible in the goal path window of Chart 5. At this point, the inference engine can be said to be one level deep in the inferencing process.
  • STEP 5 The inference engine is in a situation which is similar to Step 1. There is a hypothesis which does not have any value as yet. The inference engine searches for other rules that can be used on the new hypothesis. However, there are no rules which have variable 1 as a conclusion.
  • STEP 7 This testing process brings a reply from the user (or external device being queried by the system) of yes, no, or unknown.
  • the answer supplied to the inference engine about variable 1 will partly determine what happens next, because after each external world query, the inference engine returns to STEP 3 and tries to use the rule if possible (if enough information is now available about the "IF" variables).
  • STEP 8 Assuming the answer is "no" in Step 7, the inference engine assigns a value of False to variable 1, and finds there is enough information about the "IF" side of the rule to make a decision about using it. The rule is False: if one "IF" variable is False, the rule cannot evaluate to anything other than False. Thus the inference engine will not look at any further variables within that rule.
  • STEP 9 The inference engine now takes back its old hypothesis and starts over evaluating variable 3.
  • the rule now has a value to apply to the conclusion variable (the hypothesis). In this case the rule value is False.
  • The hypothesis, variable 3, is assigned the value of False according to the value of the only rule available.
  • STEP 10 The inference engine looks to see if there are any other hypotheses lined up for evaluation which do not have values yet. In this example there are none, so the inference process is complete.
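Steps 1 through 10 above can be condensed into a small backward chainer. This is an illustrative Python sketch, not the patent's implementation: a rule is assumed to be a pair of ("IF" variable list, conclusion variable), `ask` stands in for the user or external-device query, and recursion plays the role of the patent's hypothesis stack.

```python
def infer(hypothesis, rules, ask, values=None):
    """Evaluate `hypothesis` against `rules`, querying `ask` for any
    "IF" variable that no rule concludes. Values are the strings
    "True", "False", "Unknown", and "Untested"."""
    if values is None:
        values = {}
    if hypothesis in values:                        # Step 1: value already known
        return values[hypothesis]
    for antecedents, conclusion in rules:           # Step 2: gather relevant rules
        if conclusion != hypothesis:
            continue
        rule_value = "True"
        for var in antecedents:                     # Step 3: examine "IF" variables, bottom up
            if var not in values:                   # Step 4: make it the new sub-hypothesis
                if any(c == var for _, c in rules):
                    infer(var, rules, ask, values)  # Step 5: other rules conclude it
                else:
                    values[var] = ask(var)          # Steps 6-7: query the outside world
            if values[var] == "False":              # Step 8: one False "IF" decides the rule
                rule_value = "False"
                break
            if values[var] != "True":
                rule_value = "Unknown"
        values[hypothesis] = rule_value             # Step 9: rule value -> hypothesis
        return rule_value
    values[hypothesis] = "Untested"                 # no rule concludes the hypothesis
    return "Untested"
```

With the single example rule (IF variable 1 AND variable 2 THEN variable 3), a "no" answer for variable 1 immediately makes the rule, and therefore variable 3, False without variable 2 ever being asked.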
  • STEP 7 Set variable 1 to a "yes" value.
  • variable 2 gets a True value
  • the rule gets a True value
  • the original hypothesis gets a True value (there is one rule, and it does apply since both "IF"'s are True).
  • variable 2 gets a False value
  • the rule gets a False value
  • the original hypothesis variable 3 also gets a False value.
  • variable 2 gets an Unknown value.
  • One "IF” is True and one "IF” is Unknown, the result of which is Unknown for the rule and the hypothesis.
  • Rule values are the same as conclusion values in the one-rule case. There is only one condition for a rule value of True: all "IF" variables must be True. A single False variable always makes the rule False, even if other variables have not yet been evaluated.
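That one-rule evaluation policy can be stated compactly. The following is a hypothetical Python helper (the name is illustrative, not from the patent):

```python
def rule_value(if_values):
    """Value of a single rule from the values of its "IF" variables."""
    if any(v == "False" for v in if_values):  # one False decides the rule,
        return "False"                        # even if others are Untested
    if all(v == "True" for v in if_values):   # the only way to a True rule
        return "True"
    return "Unknown"                          # some "IF" is still unresolved
```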
  • The first thing taken into account is the group of positive rules (those whose conclusions exactly match the current hypothesis). If no value can be assigned to the variable based on those rules, then the second group comes into play. These are rules whose conclusions are logically negative to the current hypothesis. The inference engine uses these rules to assign a value to the negative of the current hypothesis.
  • variable 3 is the current hypothesis
  • the rule gathering process will find both of these rules relevant.
  • the first rule can directly impact on the hypothesis and the second rule can impact on the opposite of the hypothesis.
  • Variables 3 and 131 are opposites. If variable 3 is True, then 131 is False. If one of them is Unknown, then so is the other.
  • STEP 1 The inference engine needs to evaluate variable 3, the hypothesis. During the search for relevant rules, both of the above rules are found and the inference engine attempts an evaluation of Rule 1.
  • STEP 2 The inference engine subjects Rule 1 to evaluation (see Example 1) until some value (T, F, or D) is assigned to Rule 1.
  • STEP 5 The inference engine now has a value of D for Rule 1, and a T for Rule 2.
  • the inference engine considers the T value of Rule 2 of greater importance (it will always conclude with a T or F when possible; D is equivalent to deferred judgement).
  • STEP 6 The inference engine notes no other rules need evaluation.
  • the T value of Rule 2 is used to assign a value of T to variable 131.
  • negative rules may be used and may, by themselves, provide the information required to make a conclusion about the positive variable.
  • Rule 2 NEG: X X T F D. Like the situation in Example 1, the inference engine may not need values at all for Rule 2 if the value of Rule 1 is enough information. It is only necessary to evaluate Rule 2 when the value of Rule 1 is D; Rule 2 then dominates the outcome.
  • X means that the value is unimportant to the outcome.
  • The negative rule need not be used in the evaluation process unless the positive bundle (Rules 1 and 2) leaves the hypothesis in an Unknown state. Then the negative bundle (Rule 3) is used and dominates the outcome for the two variables in question. The negative-logic route is used only when necessary to try to resolve an Unknown into a solid True or False value.
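The positive-bundle-first policy can be sketched as follows, assuming the rules in each bundle have already been evaluated to T, F, or D (deferred/Unknown). The function and parameter names are illustrative, not from the patent; the all-negatives-False case follows the gate 422/430 path described earlier.

```python
def conclude(positive_bundle, negative_bundle):
    """Assign a value to the positive hypothesis variable from
    already-evaluated rule values ("T", "F", "D")."""
    # Positive rules first: any True rule concludes True;
    # an all-False bundle concludes False.
    if "T" in positive_bundle:
        return "T"
    if positive_bundle and all(v == "F" for v in positive_bundle):
        return "F"
    # The positives leave a D: the negative bundle dominates. A True
    # negative rule makes the opposite variable True, hence this one False;
    # an all-False negative bundle makes this one True.
    if "T" in negative_bundle:
        return "F"
    if negative_bundle and all(v == "F" for v in negative_bundle):
        return "T"
    return "D"
```

In the terms of Example 2, a D for Rule 1 and a T for the negative Rule 2 assign T to variable 131 and therefore F to variable 3.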

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Devices For Executing Special Programs (AREA)
EP19870901229 1987-01-20 1987-01-20 Expert knowledge system development tool. Withdrawn EP0298078A4 (de)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US1987/000165 WO1988005574A1 (en) 1987-01-20 1987-01-20 Expert knowledge system development tool

Publications (2)

Publication Number Publication Date
EP0298078A1 EP0298078A1 (de) 1989-01-11
EP0298078A4 true EP0298078A4 (de) 1989-12-12

Family

ID=22202259

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19870901229 Withdrawn EP0298078A4 (de) 1987-01-20 1987-01-20 Entwicklungswerkzeug für experterkenntnissystem.

Country Status (3)

Country Link
EP (1) EP0298078A4 (de)
JP (1) JPH01501901A (de)
WO (1) WO1988005574A1 (de)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2384580A (en) 2001-12-14 2003-07-30 Empiricom Technologies Ltd Knowledge acquisition in expert systems
WO2009081306A1 (en) * 2007-12-21 2009-07-02 Koninklijke Philips Electronics, N.V. Detection of errors in the inference engine of a clinical decision support system
US9262719B2 (en) 2011-03-22 2016-02-16 Patrick Soon-Shiong Reasoning engines
JP7253760B2 (ja) * 2017-05-04 2023-04-07 ナレルシステム株式会社 Method, computer program, and apparatus for realizing negation in logic programming
JP6965621B2 (ja) * 2017-08-02 2021-11-10 富士通株式会社 Detection program, detection method, and detection apparatus

Citations (1)

Publication number Priority date Publication date Assignee Title
EP0205873A2 (de) * 1985-06-26 1986-12-30 International Business Machines Corporation Method for processing an expert system rule set that is divided into contextual units

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
JPH0650442B2 (ja) * 1983-03-09 1994-06-29 株式会社日立製作所 Equipment group control method and system
JPH0736123B2 (ja) * 1983-05-09 1995-04-19 株式会社日立製作所 Equipment group control method
US4649515A (en) * 1984-04-30 1987-03-10 Westinghouse Electric Corp. Methods and apparatus for system fault diagnosis and control
US4648044A (en) * 1984-06-06 1987-03-03 Teknowledge, Inc. Basic expert system tool
US4591983A (en) * 1984-07-09 1986-05-27 Teknowledge, Inc. Hierarchical knowledge system
US4642782A (en) * 1984-07-31 1987-02-10 Westinghouse Electric Corp. Rule based diagnostic system with dynamic alteration capability
JPS63631A (ja) * 1986-06-20 1988-01-05 Hitachi Ltd Rule processing method
JPS6312023A (ja) * 1986-07-02 1988-01-19 Matsushita Electric Ind Co Ltd Knowledge acquisition method

Non-Patent Citations (8)

Title
1986 PROCEEDINGS FALL JOINT COMPUTER CONFERENCE, Dallas, Texas, 2nd-6th November 1986, pages 185-189, IEEE, New York, US; R.J.K. JACOB et al.: "Software engineering for rule-based systems" *
1986 PROCEEDINGS FALL JOINT COMPUTER CONFERENCE, Dallas, Texas, 2nd-6th November 1986, pages 353-362, IEEE, New York, US; J. HERATH et al.: "DCBL: Dataflow computing base language with n-value logic" *
ARTIFICIAL INTELLIGENCE, vol. 30, no. 3, December 1986, pages 273-287, Elsevier Science Publishers B.V., North-Holland, Amsterdam, NL; M. GELFOND et al.: "Negation as failure: careful closure procedure" *
IBM SYSTEMS JOURNAL, vol. 25, no. 2, 1986, Armonk, New York, US; N. BURNS et al.: "The portable inference engine: fitting significant expertise into small systems" *
PATENT ABSTRACTS OF JAPAN, vol. 12, no. 194 (P-713), 7th June 1988; & JP-A-63 000 631 (HITACHI LTD) 05-01-1988 *
PATENT ABSTRACTS OF JAPAN, vol. 12, no. 214 (P-718), 18th June 1988; & JP-A-63 012 023 (MATSUSHITA ELECTRIC CO LTD) 19-01-1988 *
PROCEEDINGS OF THE 1986 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS, Atlanta, Georgia, 14th-17th October 1986, pages 1496-1501, IEEE, New York, US; B.K. MOORE: "Reasoning with comparative uncertainty" *
See also references of WO8805574A1 *

Also Published As

Publication number Publication date
EP0298078A1 (de) 1989-01-11
WO1988005574A1 (en) 1988-07-28
JPH01501901A (ja) 1989-06-29

Similar Documents

Publication Publication Date Title
US4970657A (en) Expert knowledge system development tool
Staszewski Skilled memory and expert mental calculation
Turley et al. Competencies of exceptional and nonexceptional software engineers
Goldberg et al. Rapid, accurate optimization of difficult problems using messy genetic algorithms
US6278987B1 (en) Data processing method for a semiotic decision making system used for responding to natural language queries and other purposes
US5005143A (en) Interactive statistical system and method for predicting expert decisions
US6389406B1 (en) Semiotic decision making system for responding to natural language queries and components thereof
Rist Knowledge creation and retrieval in program design: A comparison of novice and intermediate student programmers
Saitta et al. Learning in the “real world”
WO1988005574A1 (en) Expert knowledge system development tool
WO1996005555A1 (fr) Dispositif d'inference de causes
Addis Towards an ‘expert’diagnostic system
Beale et al. Hunter-gatherer: Three search techniques integrated for natural language semantics
Punch III et al. Peirce: A tool for experimenting with abduction
Hájek et al. Artificial intelligence and data analysis
US11127487B2 (en) Systems and methods for cyber-enabled structure elucidation
RU29597U1 (ru) Связная экспертная система
CA2449470A1 (en) Case-based reasoning system and method having fault isolation manual trigger cases
Bullinaria IAI: Expert systems
Nwaigwe et al. The Simple Location Heuristic is Better at Predicting Students' Changes in Error Rate Over Time Compared to the Simple Temporal Heuristic.
Berry APL and the search for truth: A set of functions to play New Eleusis
EP0694836B1 (de) System und Methode für fall-basiertes Schlie en
KR910009099B1 (ko) Method for improving the speed of computer-based oriental medicine diagnosis processing
Newell et al. Boulder, Colorado, May 16, 1958.
Morrison The effect of cognitive style and training on fault diagnosis performance

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH DE FR GB IT LI LU NL SE

17P Request for examination filed

Effective date: 19890105

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: U.S. ADVANCED TECHNOLGIES NV

A4 Supplementary search report drawn up and despatched

Effective date: 19891212

17Q First examination report despatched

Effective date: 19910604

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 19911015