US20090055344A1 - System and method for arbitrating outputs from a plurality of threat analysis systems - Google Patents
- Publication number
- US20090055344A1 (application Ser. No. 12/129,393)
- Authority
- US
- United States
- Prior art keywords
- threat
- value
- uncertainty
- hypothesis
- output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N23/00—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
- G01N23/02—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material
- G01N23/06—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and measuring the absorption
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
- G06F18/2178—Validation; Performance evaluation; Active pattern learning techniques based on feedback of a supervisor
Definitions
- The present invention relates generally to data fusion techniques and, more particularly, to arbitration of the output of independent algorithms for threat detection.
- The detection of special nuclear materials can be accomplished with a combination of passive spectroscopic systems and advanced radiography systems. Together, these two technologies provide a capability to detect unshielded, lightly shielded, and heavily shielded nuclear materials, components, and weapons that may be illicitly transported in trucks, cargo containers, air cargo containers, or other conveyances. With regard to nuclear material smuggled in cargo containers, it is more likely that the material will be shielded to the extent that it may be difficult to detect with passive spectroscopic systems. Thus, advanced radiography systems can be used to help detect shielded materials.
- Radiography systems may be primarily designed to provide the capability to detect traditional contraband, such as drugs, currency, guns, or explosives. These types of contraband typically have a low atomic number (Z).
- Next generation radiography systems that use automated analysis of detector signals can detect shielded materials with a high atomic number, such as lead, uranium, or plutonium, by employing multi-energy radiographic images. Threat assessments for a region of interest can be made by algorithms or systems that employ the resultant radiographic images.
- One embodiment provides a method of arbitrating outputs from a plurality of threat analysis algorithms so as to generate a final decision from a plurality of hypotheses.
- The method includes receiving at least one threat output for a region of interest (ROI) from each of the plurality of threat analysis algorithms.
- Each threat output may be assigned to a class membership.
- Expert rules can be selected based at least in part on the at least one threat output and the respective class membership.
- The expert rules can provide an amount of support mass for one of the plurality of hypotheses and an amount of uncertainty mass.
- Each expert rule can have an associated priority value that can be used to weight the support mass and uncertainty mass.
- A combined belief value can be determined for each hypothesis from the provided support mass.
- A total uncertainty value can be determined from the uncertainty masses for the plurality of hypotheses.
- The method can also include generating a decision matrix output including the plurality of hypotheses with respective combined belief values.
- A hypothesis with the highest combined belief value may be selected from the decision matrix output.
- The combined belief value for this selected hypothesis can be compared with a first threshold value.
- The total uncertainty value can be compared with a second threshold value.
- The selected hypothesis or an uncertainty hypothesis may be output as the final decision depending on the results of the comparisons.
- Another embodiment may include a system for arbitrating outputs from a plurality of threat analysis systems.
- The system can include means for assigning each threat output to a class membership.
- The system may further include means for selecting one or more rules.
- The selected rules can provide an amount of support mass for one of a plurality of hypotheses.
- The selected rules can also provide an amount of uncertainty mass.
- The system may further include means for determining a combined belief value for each of the plurality of hypotheses and a total uncertainty value from the provided support masses and uncertainty masses, respectively, and means for generating an output of a selected hypothesis based on the combined belief values and the total uncertainty value.
- Another embodiment includes a computer program product for arbitrating outputs from a plurality of threat analysis algorithms.
- The computer program product includes a computer readable medium encoded with software instructions that, when executed by a computer, cause the computer to perform the step of receiving at least one threat output from each of a plurality of threat analysis algorithms.
- The steps may also include assigning each threat output to a class membership.
- The steps may also include determining a combined belief value for each of a plurality of hypotheses and a total uncertainty value, and generating a decision matrix output including the plurality of hypotheses and associated combined belief values.
- FIG. 1 shows a block diagram of a system view of an exemplary embodiment.
- FIG. 2 shows a flowchart of an exemplary method for arbitrating the outputs of a plurality of algorithms.
- FIG. 3 shows a flowchart of exemplary aspects of an arbitration method.
- An exemplary embodiment may use an arbitration technology to integrate results from many independent algorithms or systems, thereby minimizing false alarm rates and improving assessment capability beyond the capability of any single algorithm.
- Results from several systems may be individually analyzed and their outcomes may be mapped to an occurrence probability or fuzzy membership.
- A series of hypotheses may be generated for each potential threat region.
- Expert rules may be used to determine a combined belief for each hypothesis for use in selecting a hypothesis as the final decision of the arbitrator.
- FIG. 1 shows a block diagram overview 100 of an exemplary embodiment of an advanced cognitive arbitrator (ACA) system.
- The ACA 102 has three aspects: a class assignor module 108, a rule selector module 112, and a data fusion module 116.
- the three components of the ACA 102 may be configured together as a single module, separate individual modules, or combined with other modules performing separate functions.
- Independent threat analysis algorithms 104a-104c may evaluate an object, such as by analysis of radiographic images of the object, so as to generate threat outputs 106a-106c to the ACA 102. Each threat output can have an associated confidence value that may be output to the ACA 102 as well.
- Algorithms 104a-104c may be separate systems employing different detection methodologies.
- The threat output may be the determination of the effective atomic number of a material of an object based on analysis of a radiographic image.
- The threat output may be a detection of high-Z material, special nuclear material, or a general threat condition in a given region of interest (ROI).
- Each algorithm may also generate more than one threat output. For example, a set of threat outputs may be generated that directly correspond to a predetermined set of hypotheses. Each threat output may then have an associated confidence value indicating the likelihood that the output is correct. Alternatively, only a single threat output for each particular algorithm may be communicated to the ACA 102 .
- The threat outputs may be color-coded.
- The threats may be converted to a red-green-blue (RGB) color scale that serves as a visual cue to an operator or user.
- The various pixels in the image may be color-coded to correspond to the estimated Zeff of the material in the image.
- High-Z materials may be colored red.
- Organic materials may be colored orange.
- Metals may be colored blue.
- The color coding may be based on confidence values associated with the threat outputs or threat algorithms.
- Class assignor module 108 can receive the threat outputs 106a-106c.
- The class assignor 108 may then map the threat outputs to a class membership.
- The class assignor 108 may assign each threat output to a particular class membership.
- The class membership may be based on associated confidence values for each threat output or on an expected confidence for a given ROI.
- The class membership may be a fuzzy membership set, a probabilistic map, a probability distribution, and/or the like.
- The class membership for each threat output may be determined based on the particular algorithm and the given ROI. For example, three algorithms may be provided, as shown below in Table 1. Each threat output from the respective algorithms may be assigned to a class membership depending on the application of the algorithm to a particular location in the container. For example, for an effective atomic number greater than a threshold N, any threat output from algorithm 1 may be assigned a high confidence value (or to a "high" class membership) if the algorithm is applied to an ROI at the bottom of a particular container, as shown in Table 1. Alternatively, any threat output from algorithm 1 may be assigned a medium confidence value (or to a "medium" class membership) if the algorithm is applied to an ROI in either the middle or top of the particular container.
- Specific confidence values may be associated with each class membership and accordingly assigned to each threat measurement output, as appropriate.
- The different class memberships and their relation to the threat algorithms may be determined through human expert input regarding performance of the threat algorithm in a given environment and system condition. Accordingly, applicable class memberships may be determined for each threat algorithm for a variety of containers and/or scenarios.
- The class assignor module 108 may convert the confidence value associated with each threat output into probability distribution functions represented by fuzzy class membership designations.
- The various output scores from a given recognizer can be placed into one of a plurality of fuzzy memberships such as "very high," "appropriate," "close," and "not appropriate."
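The class-assignment mapping described above can be sketched as a simple lookup table. The algorithm names, ROI locations, and confidence numbers below are illustrative assumptions following the Table 1 scheme (algorithm 1 is "high" confidence at the bottom of the container and "medium" in the middle or top), not values from the patent:

```python
# Hypothetical class assignor: maps (algorithm, ROI location) to a class
# membership and an assumed numeric confidence value.

# (algorithm, ROI location in container) -> class membership
MEMBERSHIP_TABLE = {
    ("algorithm_1", "bottom"): "high",
    ("algorithm_1", "middle"): "medium",
    ("algorithm_1", "top"): "medium",
    ("algorithm_2", "bottom"): "medium",
    ("algorithm_2", "middle"): "high",
    ("algorithm_2", "top"): "low",
}

# Assumed confidence value associated with each class membership.
CLASS_CONFIDENCE = {"high": 0.9, "medium": 0.6, "low": 0.3}

def assign_class(algorithm, roi_location):
    """Map a threat output to a (class membership, confidence) pair;
    unknown combinations default to the "low" class."""
    membership = MEMBERSHIP_TABLE.get((algorithm, roi_location), "low")
    return membership, CLASS_CONFIDENCE[membership]
```

In a fuller implementation the table would be populated from the human expert input mentioned above, with one entry set per container type or scenario.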
- Other fuzzy membership classes can be utilized, based on the application.
- Confidence values may be used for the ROI to account for variations in performance of the algorithms across different regions of the object. For example, when the object is a cargo container, different confidence values may be used for the regions of interest in the bottom, middle, and top of the cargo container according to how well the different algorithms perform in each area.
- The structure of the class assignor module 108 could be a portion of a tangible medium having fixed thereon software instructions for causing a computer to assign a class membership to each threat output from the plurality of threat analysis algorithms.
- The structure could be a general purpose computer programmed to assign a class membership to each threat output from the plurality of threat analysis algorithms.
- The structure could be an electromagnetic signal, a ROM, a RAM, or other memory storing software instructions for causing a computer to assign a class membership to each threat output from the plurality of threat analysis algorithms.
- The structure could be a special purpose microchip, PLA, or the like.
- Class assignor module 108 may output the class memberships and threat outputs (110a-110c) to the rule selector module 112.
- The rule selector module 112 may utilize class membership, at least in part, to assign support mass to each of the plurality of hypotheses according to a plurality of selected expert rules. For example, the rule selector module 112 can assign varying amounts of support mass to a certain hypothesis based on whether a threat output is associated with the hypothesis, the confidence value of the associated threat output, and/or the class membership for the associated threat output.
- The selected expert rules may also assign an amount of uncertainty mass.
- The expert rules may also take into account threat outputs from multiple algorithms. For example, support and/or uncertainty masses associated with a selected expert rule may be applied to a certain hypothesis only when the threat outputs from multiple algorithms meet the conditions of the selected rule.
- The rules may be determined by information from the problem space. For example, the rules may use information regarding the success rate of a particular algorithm in a real world scenario. Expert rules may also be developed using supervised training under lab conditions.
- The rules may also include an associated priority value to serve as a weight for the mass values.
- The priority values may be customized by the user to adjust the relative importance of the rules so as to account for different circumstances or scenarios.
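An expert rule as described above, with its priority value acting as a weight on both masses, can be sketched as follows. The field names, the condition, and all numeric values are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical expert rule: a condition on the threat outputs, a support
# mass for one hypothesis, an uncertainty mass, and a user-configurable
# priority that weights both masses.

@dataclass
class ExpertRule:
    condition: Callable[[dict], bool]  # tests characteristics of the threat outputs
    support_mass: float                # mass contributed to the hypothesis
    uncertainty_mass: float            # mass contributed to the total uncertainty
    priority: float                    # relative importance of this rule

    def apply(self, outputs):
        """Return priority-weighted (support, uncertainty) masses,
        or (0, 0) when the rule's condition is not met."""
        if not self.condition(outputs):
            return 0.0, 0.0
        return (self.support_mass * self.priority,
                self.uncertainty_mass * self.priority)

# Example: a rule that fires when a threat output carries a "high"
# class membership (an assumed condition).
high_z_rule = ExpertRule(
    condition=lambda outputs: outputs.get("membership") == "high",
    support_mass=0.8,
    uncertainty_mass=0.1,
    priority=0.9,
)
```

Adjusting `priority` rescales the rule's contribution without editing its masses, which is one way the user customization mentioned above could be realized.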
- The expert rules may be stored internally with the rule selector module 112 or provided as an input 120 to the rule selector 112 from a separate module 118, such as a user interface, separate computer, database, or the like.
- The structure of the rule selector module 112 could be a portion of a tangible medium having fixed thereon software instructions for causing a computer to select at least one rule.
- The structure could be a general purpose computer programmed to select at least one rule.
- The structure could be an electromagnetic signal, a ROM, a RAM, or other memory storing software instructions for causing a computer to select at least one rule.
- The structure could be a special purpose microchip, PLA, or the like.
- The determined support mass and uncertainty mass for each hypothesis (114) from the rule selector module 112 may be provided to the data fusion module 116.
- The data fusion module 116 may use the support mass for each hypothesis as a belief in a Dempster-Shafer analysis.
- The support mass for each of the selected rules that apply to a particular one of the hypotheses may be combined. For example, the combination may occur by taking the product of the support masses for the rules that apply so as to generate a combined belief value for each of the hypotheses.
- The uncertainty masses may be combined together into a single total uncertainty value, or plausibility value, for the plurality of hypotheses.
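The combination described above can be sketched as follows, assuming the priority weighting has already been applied to each rule's masses: support masses of the rules that fired for one hypothesis combine by product, and uncertainty masses pool into a single total. The input values are illustrative:

```python
# Minimal sketch of the data fusion combination step: product combination
# of support masses per hypothesis, accumulation of uncertainty masses.

def combine_masses(fired_rule_masses):
    """fired_rule_masses: (support_mass, uncertainty_mass) pairs from the
    rules that apply to a single hypothesis."""
    if not fired_rule_masses:
        return 0.0, 0.0  # no applicable rule: no support, no added uncertainty
    support = 1.0
    uncertainty = 0.0
    for s, u in fired_rule_masses:
        support *= s       # product combination of support masses
        uncertainty += u   # uncertainty masses accumulate into a total
    return support, uncertainty

support, uncertainty = combine_masses([(0.9, 0.05), (0.8, 0.10)])
```

The per-hypothesis uncertainties returned here would then be summed across hypotheses to form the single total uncertainty value mentioned above.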
- The data fusion module 116 may generate a decision matrix of each hypothesis and the corresponding combined belief value. This decision matrix may additionally include the rules applied to generate the support mass for each hypothesis. For example, Table 2 below shows a decision matrix for three different hypotheses. The decision matrix may become the output 126 from the ACA 102 for evaluation by a user or for use by another system.
- The hypothesis having the largest combined belief value may be selected from the decision matrix.
- The selected hypothesis may be output as a "final decision" from the data fusion module 116 along output 126, which may then be used by subsequent systems or conveyed to a user.
- The final decision may relate to the likelihood that a material in the object is a high-Z material, a special nuclear material, or a general threat.
- Configurable thresholds from a module 122 may be input to the data fusion module 116 along input 124 for use in determining if the selected hypothesis has a sufficient combined belief value or if the uncertainty level is unacceptable.
- The structure of the data fusion module 116 could be a portion of a tangible medium having fixed thereon software instructions for causing a computer to determine a combined belief value for each of the plurality of hypotheses and a total uncertainty value and to generate an output based on the combined belief values and the total uncertainty value.
- The structure could be a general purpose computer programmed to determine a combined belief value for each of the plurality of hypotheses and a total uncertainty value and to generate an output based on the combined belief values and the total uncertainty value.
- The structure could be an electromagnetic signal, a ROM, a RAM, or other memory storing software instructions for causing a computer to determine a combined belief value for each of the plurality of hypotheses and a total uncertainty value and to generate an output based on the combined belief values and the total uncertainty value.
- The structure could be a special purpose microchip, PLA, or the like.
- While, for purposes of simplicity of explanation, the exemplary methodologies of FIGS. 2-3 are shown and described as executing serially, it is to be understood and appreciated that the present invention is not limited by the illustrated order, as some aspects could occur in different orders and/or concurrently with other aspects from that shown and described herein. Moreover, not all illustrated features may be required to implement a methodology in accordance with aspects of the present invention.
- FIG. 2 represents a process flow diagram 200 for an exemplary embodiment of a method for arbitrating outputs from a plurality of threat analysis algorithms.
- The method begins at step 201 and may continue to step 202 with the acquisition of one or more radiographic images of an object of interest. These images may be used by the independent threat analysis algorithms in the threat assessment. Control may continue to step 204.
- The threat analysis algorithms may determine a threat output based on information from the radiographic images. Other algorithms may be in place for analyzing the radiographic images and providing data to the threat analysis algorithms for identification of a threat condition.
- The threat output may be a determination of the effective atomic number (Zeff) of a material.
- The threat output may be a detection of high-Z material, special nuclear material, or a general threat condition in a given ROI.
- The threat output may be a plurality of outputs corresponding to a plurality of different hypotheses, each output having a respective confidence value. Control may then continue to step 206.
- Each threat output may be assigned to a class membership.
- The class membership may be based on an associated confidence value for each threat output or on an expected confidence for a given ROI.
- The membership may be a fuzzy membership set, a probabilistic map, or a probability distribution.
- Each threat output may be assigned to a class membership based on the respective threat analysis algorithm and the ROI. For example, the confidence value associated with each threat output generated by its associated algorithm can be mapped to probability distribution functions represented by fuzzy class membership designations.
- The various output scores from a given recognizer may be placed into one of a plurality of fuzzy membership grades such as "very high," "appropriate," "close," and "not appropriate." Other fuzzy membership classes may be utilized, based on the application. Control may continue to step 208.
- Applicable rules for each hypothesis may be selected based on a threat output associated with the hypothesis, a confidence value associated with the algorithm, and the class membership for the threat output.
- Each of the rules can have one or more conditions that may be satisfied by characteristics of the threat outputs.
- The threat outputs can be evaluated to select any applicable rules for each hypothesis.
- When a rule is selected for a given hypothesis, it may contribute a certain amount of support mass to that hypothesis and a certain amount of uncertainty mass to a total uncertainty value.
- The masses may be weighted by a user-configurable priority value. Control may continue to step 210.
- The associated support mass and uncertainty mass may be assigned to each hypothesis according to the rules selected.
- A support value for each hypothesis and a total uncertainty value can then be determined at step 212.
- The support values may be provided or derived from system input conditions.
- The determined support values may be combined for each hypothesis using, for example, a Dempster-Shafer approach.
- The support values for each rule applying to a given hypothesis may be combined by taking the product of the support values to generate a combined belief value.
- The uncertainty generated for a given hypothesis may be combined by the same methodology to generate a total uncertainty value.
- Other approaches are also contemplated. In practice, Bayesian, Dempster-Shafer, fuzzy, or a combination of approaches may be employed. Control may continue to step 214.
- The support value for each hypothesis and the total uncertainty value may be respectively normalized to a desired scale to generate a combined belief value for each hypothesis and a normalized total uncertainty value, or plausibility value.
- A total support may be calculated as the sum of the respective support masses for the different hypotheses and the total uncertainty. Normalization may then be achieved by taking the ratio of each hypothesis's support mass to the total support so as to arrive at a combined belief for each hypothesis.
- The total uncertainty value may be normalized in a similar manner by taking the ratio of the total uncertainty value to the total support value, so as to arrive at a plausibility value.
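The normalization just described can be sketched directly: total support is the sum of every hypothesis's support plus the total uncertainty, each combined belief is a hypothesis's support divided by that total, and the plausibility value is the uncertainty divided by the same total, so beliefs and plausibility sum to one. The hypothesis names and numbers below are illustrative assumptions:

```python
# Sketch of the normalization step (step 214): scale hypothesis supports
# and the total uncertainty by the total support.

def normalize(supports, total_uncertainty):
    """supports: mapping of hypothesis name -> support mass."""
    total_support = sum(supports.values()) + total_uncertainty
    beliefs = {h: s / total_support for h, s in supports.items()}
    plausibility = total_uncertainty / total_support
    return beliefs, plausibility

beliefs, plausibility = normalize(
    {"high_Z": 0.6, "special_nuclear": 0.3, "no_threat": 0.1},
    total_uncertainty=0.2,
)
```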
- Step 214 may also include generating a decision matrix of the combined belief value for each hypothesis.
- This decision matrix may include the rules applied to generate the support mass for each hypothesis. From the decision matrix, the hypothesis having the largest combined belief value can be selected in step 216 . Control may continue to step 218 .
- The selected hypothesis may be evaluated to determine if the combined belief value and uncertainty are sufficient.
- The selected hypothesis can be rejected if the combined belief value is below a first threshold value or the total uncertainty value is equal to or above a second threshold value.
- These threshold values may be configurable so as to enable a user to set the sensitivity of the system. If the combined belief for the selected hypothesis and the total uncertainty value are determined to be sufficient based on the comparison to the first and second thresholds, respectively, the selected hypothesis may be output from the system as the final decision.
- The selected hypothesis may relate to the likelihood that a material in an object is a high-Z material, a special nuclear material, or a general threat.
- Otherwise, an uncertainty hypothesis may be output from the system as the final decision. Control continues to step 219, where the method ends. It should be appreciated that the above steps may be repeated in whole or in part in order to complete or continue an arbitration task.
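The two-threshold decision step above can be sketched as follows: the hypothesis with the largest combined belief is accepted only when its belief meets the first threshold and the total uncertainty stays below the second; otherwise an uncertainty hypothesis is returned. The threshold defaults and hypothesis names are illustrative assumptions:

```python
# Sketch of the final-decision step: accept the top hypothesis only if
# belief is high enough and uncertainty is low enough.

UNCERTAIN = "uncertainty_hypothesis"

def final_decision(beliefs, total_uncertainty,
                   belief_threshold=0.5, uncertainty_threshold=0.3):
    """beliefs: mapping of hypothesis name -> combined belief value."""
    selected = max(beliefs, key=beliefs.get)  # hypothesis with largest belief
    if (beliefs[selected] >= belief_threshold
            and total_uncertainty < uncertainty_threshold):
        return selected
    # The threat condition of the ROI cannot be reliably determined.
    return UNCERTAIN

decision = final_decision({"high_Z": 0.55, "no_threat": 0.25},
                          total_uncertainty=0.20)
```

Raising `belief_threshold` or lowering `uncertainty_threshold` makes the arbitrator more conservative, which mirrors the user-configurable sensitivity described above.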
- FIG. 3 illustrates a flow diagram 300 showing an operation of rule selection and data fusion aspects of an arbitration system. Control may begin at step 301 and may continue to step 302 . At step 302 , an ROI of an object is selected for investigation. Alternatively, the ROI selection may be performed prior to arbitration or no ROI selection may be performed at all. Control may continue to step 304 .
- The nth hypothesis may be selected from a set of hypotheses.
- The jth expert rule from a set of expert rules may be selected.
- The expert rules may be as shown in Table 3, for example.
- The jth expert rule may be evaluated, based at least in part on the associated class membership of a threat output as applied to the ROI, to determine if it applies to the nth hypothesis.
- A selected rule may have one or more requirements to be met by one or more threat outputs associated with the selected hypothesis.
- The selected rule can provide a certain amount of support mass to the hypothesis as well as a certain amount of uncertainty mass to a total uncertainty when the requirements of the selected rule are met.
- The masses provided by each selected rule can be weighted by corresponding priority values, which can be adjusted by a user to change the relative importance of each selected rule in the arbitration process.
- Table 3 shows exemplary expert rules and their priorities:
- Rule 1 (priority 0.99): If all of the first ranked choices match and confidence is at least "appropriate," then the match is a candidate.
- Rule 2 (priority 0.99): If the first ranked choice of the higher ranked algorithm matches any of the other first ranked choices of the secondary algorithms, and confidences are "appropriate," then select that match.
- Rule 3 (priority 0.90): If the first choice of the highest ranked algorithm does not match any of the other first ranked choices of the secondary algorithms, but the confidence value of the highest ranked algorithm is "very high" and at least one of the secondary algorithms is "close" in confidence value, then select that match.
- At step 308, the hypothesis support, K, may be multiplied by the rule mass and the rule priority for the jth rule so as to combine the mass with any existing support mass for the nth hypothesis.
- The hypothesis uncertainty value, L, may be multiplied by the uncertainty mass and the priority value from the jth rule. Control can then advance to step 310. If it is determined that the jth rule does not apply to the nth hypothesis, the method may advance directly to step 310.
- The support mass, K, may be saved as the support mass Sn for the nth hypothesis.
- The uncertainty value, L, may be added to the total system uncertainty, U. Control may then proceed to step 314.
- The support value, Sn, for each hypothesis may be normalized to determine the combined belief value, Bn, for each hypothesis.
- Normalization may include calculating a total support as the sum of all of the hypotheses' support masses and the system uncertainty. The ratio of each hypothesis's support to the total support can represent the combined belief for each hypothesis. Similarly, the uncertainty value may be normalized as the ratio of the total uncertainty to the total support. Control may continue to step 318.
- A decision matrix may be generated.
- The decision matrix may include an array of the evaluated hypotheses and the corresponding combined belief values for each hypothesis.
- The decision matrix may also include the rules applied to generate the support mass for each hypothesis. From the decision matrix, a hypothesis having the greatest combined belief value can be selected at step 320.
- The hypothesis with the greatest combined belief value may relate to the likelihood that the material is a high-Z material, a special nuclear material, or a general threat.
- The combined belief value, Bz, for the selected hypothesis may be compared with a first threshold value.
- The uncertainty, U, may be compared with a second threshold value.
- These threshold values may be user configurable. If the combined belief value is greater than or equal to the first threshold and the uncertainty is below a second threshold, the selected hypothesis may then be determined to be plausible. Control may then proceed to step 326 where the selected hypothesis may be output as the “final decision”. However, if the combined belief value is less than the first threshold or the uncertainty is equal to or greater than the second threshold, it may be determined that the selected hypothesis is not plausible. Therefore, the method may proceed to step 324 so as to output an uncertainty hypothesis as the final decision indicating that the threat output of the ROI cannot be reliably determined.
- Steps of the present invention may be repeated in whole or in part in order to perform the contemplated threat arbitration. Further, it should be appreciated that the steps mentioned above may be performed on a single or distributed processor. Also, the processes, modules, and units described in the various figures of the embodiments above may be distributed across multiple computers or systems or may be co-located in a single processor or system.
- Embodiments of the method, system, and computer program product for threat arbitration may be implemented on a general-purpose computer, a special-purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element, an ASIC or other integrated circuit, a digital signal processor, a hardwired electronic or logic circuit such as a discrete element circuit, a programmed logic circuit such as a PLD, PLA, FPGA, PAL, or the like.
- Any process capable of implementing the functions or steps described herein can be used to implement embodiments of the method, system, or computer program product for threat arbitration.
- Embodiments of the disclosed method, system, and computer program product for threat arbitration may be readily implemented, fully or partially, in software using, for example, object or object-oriented software development environments that provide portable source code that can be used on a variety of computer platforms.
- Embodiments of the disclosed method, system, and computer program product for threat arbitration can be implemented partially or fully in hardware using, for example, standard logic circuits or a VLSI design.
- Other hardware or software can be used to implement embodiments depending on the speed and/or efficiency requirements of the systems, the particular function, and/or particular software or hardware system, microprocessor, or microcomputer being utilized.
- Embodiments of the method, system, and computer program product for threat arbitration can be implemented in hardware and/or software using any known or later developed systems or structures, devices and/or software by those of ordinary skill in the applicable art from the function description provided herein and with a general basic knowledge of the computer, radiographic, and image processing arts.
- Embodiments of the disclosed method, system, and computer program product for threat arbitration can be implemented in software executed on a programmed general purpose computer, a special purpose computer, a microprocessor, or the like.
- Threat arbitration can be implemented as a program embedded on a personal computer such as a JAVA® or CGI script, as a resource residing on a server or image processing workstation, as a routine embedded in a dedicated processing system, or the like.
- The method and system can also be implemented by physically incorporating the method for threat arbitration into a software and/or hardware system, such as the hardware and software systems of multi-energy radiographic systems.
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Analytical Chemistry (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Chemical & Material Sciences (AREA)
- Evolutionary Biology (AREA)
- Biochemistry (AREA)
- General Health & Medical Sciences (AREA)
- Immunology (AREA)
- Pathology (AREA)
- Analysing Materials By The Use Of Radiation (AREA)
- Image Processing (AREA)
- Measurement Of Radiation (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
A method of arbitrating outputs from a set of threat analysis algorithms or systems. The method can include receiving threat outputs from different threat analysis algorithms. Each threat output can be assigned to a class membership. Rules can be applied based on the threat outputs and the respective class membership. Each rule can provide an amount of support mass to a hypothesis and an amount of uncertainty mass. The rules can have an associated priority value for weighting the masses. A combined belief value for each hypothesis and a total uncertainty value can be determined based on the provided masses. The method can further include generating a decision matrix of the hypotheses and combined belief values. A hypothesis can be selected from the decision matrix based on the combined belief value.
Description
- The present application claims the benefit of provisional U.S. Patent Application No. 60/940,632, entitled “Threat Detection System”, filed May 29, 2007, which is hereby incorporated by reference in its entirety.
- The present invention relates generally to data fusion techniques, and, more particularly, to arbitration of the output of independent algorithms for threat detection.
- The detection of special nuclear materials can be accomplished with a combination of passive spectroscopic systems and advanced radiography systems. Together, these two technologies provide a capability to detect unshielded, lightly shielded and heavily shielded nuclear materials, components, and weapons that may be illicitly transported in trucks, cargo containers, air cargo containers, or other conveyances. With regard to nuclear material smuggled in cargo containers, it is more likely that the material will be shielded to the extent that it may be difficult to detect with passive spectroscopic systems. Thus, advanced radiography systems can be used to help detect shielded materials.
- Currently deployed radiography systems may be primarily designed to provide the capability to detect traditional contraband, such as drugs, currency, guns, or explosives. These types of contraband typically have a low atomic number (Z). Next generation radiography systems that use automated analysis of detector signals can detect shielded materials with a high atomic number, such as lead, uranium, or plutonium, by employing multi-energy radiographic images. Threat assessments for a region of interest can be made by algorithms or systems that employ the resultant radiographic images.
- These algorithms or systems may trade off false alarm rates against detection accuracy. Certain algorithms or systems may have superior detection accuracy in one scenario, while contributing to increased false alarms in another. An increased rate of false alarms is undesirable because it directs resources away from actual threats. As such, algorithms or systems are designed to balance acceptable detection accuracy with an acceptable false alarm rate. Embodiments of the present invention may address the above-mentioned problems and limitations, among other things.
- One embodiment provides a method of arbitrating outputs from a plurality of threat analysis algorithms so as to generate a final decision from a plurality of hypotheses. The method includes receiving at least one threat output for a region of interest (ROI) from each of the plurality of threat analysis algorithms. Each threat output may be assigned to a class membership. Expert rules can be selected based at least in part on the at least one threat output and the respective class membership. The expert rules can provide an amount of support mass for one of the plurality of hypotheses and an amount of uncertainty mass. Each expert rule can have an associated priority value that can be used to weight the support mass and uncertainty mass. A combined belief value can be determined for each hypothesis from the provided support mass. A total uncertainty value can be determined from the uncertainty masses for the plurality of hypotheses. The method can also include generating a decision matrix output including the plurality of hypotheses with respective combined belief values. A hypothesis with the highest combined belief value may be selected from the decision matrix output. The combined belief value for this selected hypothesis can be compared with a first threshold value. The total uncertainty value can be compared with a second threshold value. The selected hypothesis or an uncertainty hypothesis may be output as the final decision depending on the results of the comparisons.
- Another embodiment may include a system for arbitrating outputs from a plurality of threat analysis systems. The system can include means for assigning each threat output to a class membership. The system may further include means for selecting one or more rules. The selected rules can provide an amount of support mass for one of a plurality of hypotheses. The selected rules can also provide an amount of uncertainty mass. The system may further include means for determining a combined belief value for each of the plurality of hypotheses and a total uncertainty value from the provided support mass and uncertainty masses, respectively, and means for generating an output of a selected hypothesis based on the combined belief values and the total uncertainty value.
- Another embodiment includes a computer program product for arbitrating outputs from a plurality of threat analysis algorithms. The computer program product includes a computer readable medium encoded with software instructions that, when executed by a computer cause the computer to perform the step of receiving at least one threat output from each of a plurality of threat analysis algorithms. The steps may also include assigning each threat output to a class membership. The steps may also include determining a combined belief value for each of a plurality of hypotheses and a total uncertainty value and generating a decision matrix output including the plurality of hypotheses and associated combined belief values.
-
FIG. 1 shows a block diagram of a system view of an exemplary embodiment; -
FIG. 2 shows a flowchart of an exemplary method for arbitrating the outputs of a plurality of algorithms; and -
FIG. 3 shows a flowchart of exemplary aspects of an arbitration method. - In general, an exemplary embodiment may use an arbitration technology to integrate results from many independent algorithms or systems, thereby minimizing false alarm rates and improving assessment capability beyond the capability of any single algorithm. Results from several systems may be individually analyzed and their outcomes may be mapped to an occurrence probability or fuzzy membership. A series of hypotheses may be generated for each potential threat region. Expert rules may be used to determine a combined belief for each hypothesis for use in selecting a hypothesis as a final decision of the arbitrator.
-
FIG. 1 shows a block diagram overview 100 of an exemplary embodiment of an advanced cognitive arbitrator (ACA) system. The ACA 102 has three aspects: a class assignor module 108, a rule selector module 112, and a data fusion module 116. The three components of the ACA 102 may be configured together as a single module, separate individual modules, or combined with other modules performing separate functions. Independent threat analysis algorithms 104 a-104 c may evaluate an object, such as by analysis of radiographic images of the object, so as to generate threat outputs 106 a-106 c to the ACA 102. Each threat output can have an associated confidence value that may be output to the ACA 102 as well. Although only three algorithms 104 a-104 c have been shown, fewer or additional algorithms can be used. Further, algorithms 104 a-104 c may be separate systems employing different detection methodologies. - The threat output may be the determination of the effective atomic number of a material of an object based on analysis of a radiographic image. Alternatively, the threat output may be a detection of high-Z material, special nuclear material, or a general threat condition in a given region of interest (ROI). Each algorithm may also generate more than one threat output. For example, a set of threat outputs may be generated that directly correspond to a predetermined set of hypotheses. Each threat output may then have an associated confidence value indicating the likelihood that the output is correct. Alternatively, only a single threat output for each particular algorithm may be communicated to the ACA 102.
- In addition, the threat outputs may be color coded. For example, the threats may be converted to a red-green-blue (RGB) color scale that serves as a visual cue to an operator or user. In the example where the threat output is the effective atomic number of materials in an image, the various pixels in the image may be color-coded to correspond to the estimated Zeff of the material in the image. For example, high Z materials may be colored red, organic materials may be colored orange, and metals may be colored blue. Alternately, the color coding may be based on confidence values associated with the threat outputs or threat algorithms.
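As one illustration of the RGB color cue described above, a simple mapping from estimated effective atomic number to display color might look as follows. The Zeff thresholds, category boundaries, and color choices here are hypothetical placeholders, not values specified by this disclosure:

```python
def zeff_to_rgb(zeff):
    """Map an estimated effective atomic number (Zeff) to an RGB display cue.

    The thresholds below are illustrative assumptions only; a deployed
    system would calibrate them to its own detector and material classes.
    """
    if zeff >= 72:            # assumed high-Z cutoff (e.g., lead, uranium)
        return (255, 0, 0)    # red: possible shielding or nuclear material
    if zeff >= 13:            # assumed metal range (e.g., aluminum, iron)
        return (0, 0, 255)    # blue: common metals
    return (255, 165, 0)      # orange: organics and other low-Z cargo
```

A display layer could apply such a function per pixel to produce the color-coded image described above.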
- Within the
ACA 102, class assignor module 108 can receive the threat outputs 106 a-106 c. The class assignor 108 may then map the threat outputs to a class membership. In other words, the class assignor 108 may assign each threat output to a particular class membership. The class membership may be based on associated confidence values for each threat output or on an expected confidence for a given ROI. The class membership may be a fuzzy membership set, a probabilistic map, a probability distribution, and/or the like. - The class membership for each threat output may be determined based on the particular algorithm and the given ROI. For example, three algorithms may be provided, as shown below in Table 1. Each threat output from the respective algorithms may be assigned to a class membership depending on the application of the algorithm to a particular location in the container. For example, for an effective atomic number greater than a threshold N, any threat output from
algorithm 1 may be assigned a high confidence value (or to a "high" class membership) if the algorithm is applied to a ROI at the bottom of a particular container, as shown in Table 1. Alternately, any threat output from algorithm 1 may be assigned a medium confidence value (or to a "medium" class membership) if the algorithm is applied to a ROI in either the middle or top of the particular container. Specific confidence values may be associated with each class membership and accordingly assigned to each threat measurement output, as appropriate. The different class memberships and their relation to the threat algorithms may be determined through human expert input regarding performance of the threat algorithm in a given environment and system condition. Accordingly, applicable class memberships may be determined for each threat algorithm for a variety of containers and/or scenarios. -
TABLE 1
Sample Class Membership Using Three Algorithms on Cargo Container

                          Modeled Confidence
Algorithm   Condition   Cargo Bottom   Cargo Middle   Cargo Top
1           Z > N       High           Medium         Medium
2                       Medium         High           Medium
3                       High           High           Low

- For example, the
class assignor module 108 may convert the confidence value associated with each threat output into probability distribution functions represented by fuzzy class membership designations. The various output scores from a given recognizer can be placed into one of a plurality of fuzzy memberships such as “very high,” “appropriate,” “close,” and “not appropriate.” Other fuzzy membership classes can be utilized, based on the application. Confidence values may be used for the ROI to account for variations in performance of the algorithms across different regions of the object. For example, when the object is a cargo container, different confidence values may be used for the regions of interest in the bottom, middle, and top of the cargo container according to how well the different algorithms perform in each area. - The structure of the
class assignor module 108 could be a portion of a tangible medium having fixed thereon software instructions for causing a computer to assign a class membership to each threat output from the plurality of threat analysis algorithms. Alternatively, the structure could be a general purpose computer programmed to assign a class membership to each threat output from the plurality of threat analysis algorithms. In another example, the structure could be an electromagnetic signal, a ROM, a RAM, or other memory storing software instructions for causing a computer to assign a class membership to each threat output from the plurality of threat analysis algorithms. Alternatively, the structure could be a special purpose microchip, PLA, or the like. -
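The class assignment just described can be sketched as a lookup keyed by algorithm and ROI location, in the spirit of Table 1. The table entries and the numeric confidence attached to each membership grade are illustrative assumptions, not values from this disclosure:

```python
# Hypothetical Table 1-style lookup: modeled confidence class for each
# algorithm in each region of a cargo container. Values are illustrative.
CLASS_TABLE = {
    1: {"bottom": "high", "middle": "medium", "top": "medium"},
    2: {"bottom": "medium", "middle": "high", "top": "medium"},
    3: {"bottom": "high", "middle": "high", "top": "low"},
}

# Assumed numeric confidence associated with each class membership grade.
CLASS_CONFIDENCE = {"high": 0.9, "medium": 0.6, "low": 0.3}

def assign_class(algorithm_id, roi_location):
    """Assign a threat output to a class membership and a confidence value."""
    membership = CLASS_TABLE[algorithm_id][roi_location]
    return membership, CLASS_CONFIDENCE[membership]
```

For example, assign_class(1, "bottom") would yield the "high" membership together with its assumed confidence value.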
Class assignor module 108 may output the class memberships and threat outputs (110 a-110 c) to the rule selector module 112. The rule selector module 112 may utilize class membership, at least in part, to assign support mass to each of the plurality of hypotheses according to a plurality of selected expert rules. For example, the rule selector module 112 can assign varying amounts of support mass to a certain hypothesis based on whether a threat output is associated with the hypothesis, the confidence value of the associated threat output, and/or the class membership for the associated threat output. The selected expert rules may also assign an amount of uncertainty mass. - The expert rules may also take into account threat outputs from multiple algorithms. For example, support and/or uncertainty masses associated with a selected expert rule may be applied to a certain hypothesis only when the threat outputs from multiple algorithms meet the conditions of the selected rule.
- The rules may be determined by information from the problem space. For example, the rules may use information regarding the success rate of a particular algorithm in a real-world scenario. Expert rules may also be developed using supervised training under lab conditions.
- The rules may also include an associated priority value to serve as weights for the mass values. The priority values may be customized by the user to adjust the relative importance of the rules so as to account for different circumstances or scenarios. The expert rules may be stored internally with the
rule selector module 112 or provided as an input 120 to the rule selector 112 from a separate module 118, such as a user interface, separate computer, database, or the like. - The structure of the
rule selector module 112 could be a portion of a tangible medium having fixed thereon software instructions for causing a computer to select at least one rule. Alternatively, the structure could be a general purpose computer programmed to select at least one rule. In another example, the structure could be an electromagnetic signal, a ROM, a RAM, or other memory storing software instructions for causing a computer to select at least one rule. Alternatively, the structure could be a special purpose microchip, PLA, or the like. - The determined support mass and uncertainty mass for each hypothesis (114) from the
rule selector module 112 may be provided to the data fusion module 116. In an exemplary implementation, the data fusion module 116 may use the support mass for each hypothesis as a belief in a Dempster-Shafer analysis. The support mass for each of the selected rules that apply to a particular one of the hypotheses may be combined. For example, the combination may occur by taking the product of the support masses for the rules that apply so as to generate a combined belief value for each of the hypotheses. Similarly, the uncertainty masses may be combined together into a single total uncertainty value, or plausibility value, for the plurality of hypotheses. - The
data fusion module 116 may generate a decision matrix of each hypothesis and the corresponding combined belief value. This decision matrix may additionally include the rules applied to generate the support mass for each hypothesis. For example, Table 2 below shows a decision matrix for three different hypotheses. The decision matrix may become the output 126 from the ACA 102 for evaluation by a user or for use by another system. - Alternatively, the hypothesis having the largest combined belief value may be selected from the decision matrix. The selected hypothesis may be output as a "final decision" from the
data fusion module 116 along output 126, which may then be used by subsequent systems or conveyed to a user. The final decision may relate to the likelihood that a material in the object is a high Z material, a special nuclear material, or a general threat. -
TABLE 2
Exemplary Decision Matrix for a Set of Hypotheses

Hypothesis             Description                         Combined Belief   Rules Used
Hypothesis-1           CORRECT, Threat Found Matches       99.0              Arbitrated Threat FOUND due to input and rule combinations: 2, 8
Hypothesis-2           FALSE NEGATIVE, Threat Not Found    0.83              Arbitrated Threat NOT FOUND due to input and rule combinations: 11
Hypothesis-Uncertain   Uncertain Hypothesis                0.17              Arbitrated Threat NOT FOUND due to input and rule conflicts: 10, 11, 12

- When no combined belief value is sufficiently large and/or the total uncertainty value is unacceptably high, all hypotheses may be rejected or an uncertainty hypothesis may be selected as the final decision in lieu of any of the hypotheses. Configurable thresholds from a
module 122, such as a user interface or database, may be input to the data fusion module 116 along input 124 for use in determining if the selected hypothesis has a sufficient combined belief value or if the uncertainty level is unacceptable. - The structure of the
data fusion module 116 could be a portion of a tangible medium having fixed thereon software instructions for causing a computer to determine a combined belief value for each of the plurality of hypotheses and a total uncertainty value and to generate an output based on the combined belief values and the total uncertainty value. Alternatively, the structure could be a general purpose computer programmed to determine a combined belief value for each of the plurality of hypotheses and a total uncertainty value and to generate an output based on the combined belief values and the total uncertainty value. In another example, the structure could be an electromagnetic signal, a ROM, a RAM, or other memory storing software instructions for causing a computer to determine a combined belief value for each of the plurality of hypotheses and a total uncertainty value and to generate an output based on the combined belief values and the total uncertainty value. Alternatively, the structure could be a special purpose microchip, PLA, or the like. - In view of the foregoing structural and functional features described above, methodologies in accordance with various aspects of the present invention will be better appreciated with reference to
FIGS. 2-3. While for purposes of simplicity of explanation, the exemplary methodologies of FIGS. 2-3 are shown and described as executing serially, it is to be understood and appreciated that the present invention is not limited by the illustrated order, as some aspects could occur in different orders and/or concurrently with other aspects from that shown and described herein. Moreover, not all illustrated features may be required to implement a methodology in accordance with aspects of the present invention. -
FIG. 2 represents a process flow diagram 200 for an exemplary embodiment of a method for arbitrating outputs from a plurality of threat analysis algorithms. The method begins at step 201 and may continue to step 202 with the acquisition of one or more radiographic images of an object of interest. These images may be used by the independent threat analysis algorithms in the threat assessment. Control may continue to step 204. - At step 204, the threat analysis algorithms may determine a threat output based on information from the radiographic images. Other algorithms may be in place for analyzing the radiographic images and providing data to the threat analysis algorithms for identification of a threat condition. The threat output may be a determination of the effective atomic number (Zeff) of a material. In other embodiments, the threat output may be a detection of high-Z material, special nuclear material, or a general threat condition in a given ROI. In addition, the threat output may be a plurality of outputs corresponding to a plurality of different hypotheses, each output having a respective confidence value. Control may then continue to step 206. - At
step 206, each threat output may be assigned to a class membership. The class membership may be based on associated confidence values for each threat output or on an expected confidence for a given ROI. The membership may be a fuzzy membership set, a probabilistic map, or a probability distribution. Alternately, each threat output may be assigned to a class membership based on the respective threat analysis algorithm and the ROI. For example, the confidence value associated with each threat output generated by its associated algorithm can be mapped to probability distribution functions represented by fuzzy class membership designations. The various output scores from a given recognizer may be placed into one of a plurality of fuzzy membership grades such as "very high," "appropriate," "close," and "not appropriate." Other fuzzy membership classes may be utilized, based on the application. Control may continue to step 208. - At
step 208, applicable rules for each hypothesis may be selected based on a threat output associated with the hypothesis, a confidence value associated with the algorithm, and the class membership for the threat output. Each of the rules can have one or more conditions that may be satisfied by characteristics of the threat outputs. The threat outputs can be evaluated to select any applicable rules for each hypothesis. When a rule is selected for a given hypothesis, it may contribute a certain amount of support mass to that hypothesis and a certain amount of uncertainty mass to a total uncertainty value. The masses may be weighted by a user-configurable priority value. Control may continue to step 210. - At
step 210, the associated support mass and uncertainty mass may be assigned to each hypothesis according to the rules selected. A support value for each hypothesis and a total uncertainty value can then be determined at step 212. The support values may be provided or derived from system input conditions. The determined support values may be combined for each hypothesis using, for example, a Dempster-Shafer approach. For example, the support values for each rule applying to a given hypothesis may be combined by taking the product of the support values to generate a combined belief value. Similarly, the uncertainty generated for a given hypothesis may be combined by the same methodology to generate a total uncertainty value. However, other approaches are also contemplated. In practice, Bayesian, Dempster-Shafer, fuzzy, or a combination of approaches may be employed. Control may continue to step 214. - At
step 214, the support value for each hypothesis and total uncertainty value may be respectively normalized to a desired scale to generate a combined belief value for each hypothesis and a normalized total uncertainty value, or plausibility value. For example, a total support may be calculated as the sum of the respective support masses for the different hypotheses and the total uncertainty. Normalization may then be achieved by taking the ratio of each hypothesis support mass to the total support so as to arrive at a combined belief for each hypothesis. The total uncertainty value may be normalized in a similar manner by taking the ratio of the total uncertainty value to the total support value, so as to arrive at a plausibility value. -
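The normalization just described can be sketched as follows; the function and variable names are illustrative, but the arithmetic follows the ratios described for step 214:

```python
def normalize(support, total_uncertainty):
    """Normalize per-hypothesis support masses and the total uncertainty.

    Total support is the sum of all hypothesis support masses plus the
    total uncertainty; each combined belief and the plausibility value
    are ratios against that total, so all results sum to 1.
    """
    total = sum(support.values()) + total_uncertainty
    beliefs = {h: s / total for h, s in support.items()}
    plausibility = total_uncertainty / total
    return beliefs, plausibility
```

Because every ratio shares the same denominator, the combined beliefs and the plausibility value together always sum to one, which gives the decision matrix a consistent scale.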
step 216. Control may continue to step 218. - At
step 218, the selected hypothesis may be evaluated to determine if the combined belief value and uncertainty are sufficient. The selected hypothesis can be rejected if the combined belief value is below a first threshold value or the total uncertainty value is equal to or above a second threshold value. These threshold values may be configurable so as to enable a user to set the sensitivity of the system. If the combined belief for the selected hypothesis and the total uncertainty value are determined to be sufficient based on the comparison to the first and second thresholds, respectively, the selected hypothesis may be output from the system as the final decision. The selected hypothesis may relate to the likelihood that a material in an object is a high Z material, a special nuclear material, or a general threat. - If either the combined belief or total uncertainty values are insufficient, an uncertainty hypothesis may be output from the system as the final decision. Control continues to step 219 where the method ends. It should be appreciated that the above steps may be repeated in whole or in part in order to complete or continue an arbitration task.
-
FIG. 3 illustrates a flow diagram 300 showing an operation of rule selection and data fusion aspects of an arbitration system. Control may begin at step 301 and may continue to step 302. At step 302, an ROI of an object is selected for investigation. Alternatively, the ROI selection may be performed prior to arbitration or no ROI selection may be performed at all. Control may continue to step 304. - At
step 304, the nth hypothesis may be selected from a set of hypotheses. At step 306, the jth expert rule from a set of expert rules may be selected. The expert rules may be as shown in Table 3, for example. The jth expert rule may be evaluated, based at least in part on the associated class membership of a threat output as applied to the ROI, to determine if it applies to the nth hypothesis. A selected rule may have one or more requirements to be met by one or more threat outputs associated with the selected hypothesis. The selected rule can provide a certain amount of support mass to the hypothesis as well as a certain amount of uncertainty mass to a total uncertainty when the requirements of the selected rule are met. The masses provided by each selected rule can be weighted by corresponding priority values, which can be adjusted by a user to change the relative importance of each selected rule in the arbitration process. -
TABLE 3
Table of Sample Rules with Descriptions and Priority Values

Rule Number   Rule Priority   Description of Rule
Rule 1        0.99            If all of the first ranked choices match and confidence is at least "appropriate", then the match is a candidate.
Rule 2        0.99            If the first ranked choice of the higher ranked algorithm matches any of the other first ranked choices of the secondary algorithms, and confidences are "appropriate", then select that match.
Rule 3        0.90            If the first choice of the highest ranked algorithm does not match any of the other first ranked choices of the secondary algorithms, but the confidence value of the highest ranked algorithm is "very high" and at least one of the secondary algorithms is "close" in confidence value, then select that match.
Rule 4        0.85            If the first choice of the highest ranked algorithm does not match any of the other first ranked choices of the secondary algorithms, but the confidence value of the highest ranked algorithm is "very high" and at least one of the top five choices in the secondary algorithms is "acceptable" in confidence value, then select that match.
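Under stated assumptions, the per-hypothesis rule loop of FIG. 3 (steps 304 through 314) can be sketched as below. The rule representation and the initial values of K and L are assumptions; the disclosure specifies only that each applicable rule's masses, weighted by its priority, are multiplied into the running values:

```python
def evaluate_hypotheses(hypotheses, rules, threat_outputs):
    """Accumulate support (K -> Sn) per hypothesis and total uncertainty (U).

    Each rule is assumed to be a dict with an `applies` predicate plus
    `support_mass`, `uncertainty_mass`, and `priority` values; K and L
    are assumed to start at 1.0 so multiplication can accumulate.
    """
    support = {}              # Sn for each hypothesis
    total_uncertainty = 0.0   # U
    for hyp in hypotheses:                                       # step 304
        k, l = 1.0, 1.0                                          # assumed initial masses
        for rule in rules:                                       # step 306
            if rule["applies"](hyp, threat_outputs):
                k *= rule["support_mass"] * rule["priority"]     # step 308
                l *= rule["uncertainty_mass"] * rule["priority"]
        support[hyp] = k                                         # step 312
        total_uncertainty += l
    return support, total_uncertainty
```

The returned support masses and total uncertainty would then feed the normalization of step 316.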
- At
step 310, the method may check to see if all the rules have been evaluated for the nth hypothesis. If all of the rules have not been evaluated, the method may increment to the next rule in the set of rules (i.e., j=j+1) and may return to step 306. If all of the rules have been evaluated, the method may proceed to step 312. At step 312, the support mass, K, may be saved as the support mass Sn for the nth hypothesis. The uncertainty value, L, may be added to total system uncertainty, U. Control may then proceed to step 314. - At
step 314, the method may check if all the hypotheses have been evaluated. If all of the hypotheses have not been evaluated, the control may continue to the next hypothesis in the set of hypotheses (i.e., n=n+1) and may proceed back to step 304. If all of the hypotheses have been evaluated, control may advance to step 316. - At
step 316, the support value, Sn, for each hypothesis may be normalized to determine the combined belief value, Bn, for each hypothesis. For example, normalization may include calculating a total support as the sum of all of the hypotheses' support masses and the system uncertainty. The ratio of each hypothesis support to the total support can represent the combined belief for each hypothesis. Similarly, the uncertainty value may be normalized as the ratio of the total uncertainty to the total support. Control may continue to step 318. - At
step 318, a decision matrix may be generated. The decision matrix may include an array of the evaluated hypotheses and the corresponding combined belief values for each hypothesis. The decision matrix may also include the rules applied to generate the support mass for each hypothesis. From the decision matrix, a hypothesis having the greatest combined belief value can be selected at step 320. The hypothesis with the greatest combined belief value may relate to the likelihood that said material is a high Z material, a special nuclear material, or a general threat. - At
step 322, the combined belief value, Bz, for the selected hypothesis may be compared with a first threshold value. In addition, the uncertainty, U, may be compared with a second threshold value. These threshold values may be user configurable. If the combined belief value is greater than or equal to the first threshold and the uncertainty is below a second threshold, the selected hypothesis may then be determined to be plausible. Control may then proceed to step 326 where the selected hypothesis may be output as the “final decision”. However, if the combined belief value is less than the first threshold or the uncertainty is equal to or greater than the second threshold, it may be determined that the selected hypothesis is not plausible. Therefore, the method may proceed to step 324 so as to output an uncertainty hypothesis as the final decision indicating that the threat output of the ROI cannot be reliably determined. - It should be appreciated that the steps of the present invention may be repeated in whole or in part in order to perform the contemplated threat arbitration. Further, it should be appreciated that the steps mentioned above may be performed on a single or distributed processor. Also, the processes, modules, and units described in the various figures of the embodiments above may be distributed across multiple computers or systems or may be co-located in a single processor or system.
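The hypothesis selection and threshold test of steps 320 through 326 can be sketched as follows. The threshold defaults are illustrative assumptions, since the disclosure leaves both values user-configurable:

```python
def final_decision(beliefs, uncertainty,
                   belief_threshold=0.8, uncertainty_threshold=0.2):
    """Select the hypothesis with the greatest combined belief (step 320),
    then accept it only if its belief meets the first threshold and the
    uncertainty stays below the second (steps 322-326). Threshold values
    here are assumed defaults, not values from the disclosure.
    """
    selected = max(beliefs, key=beliefs.get)
    if beliefs[selected] >= belief_threshold and uncertainty < uncertainty_threshold:
        return selected
    # Otherwise the selected hypothesis is not plausible; output the
    # uncertainty hypothesis as the final decision (step 324).
    return "Hypothesis-Uncertain"
```

Raising the belief threshold or lowering the uncertainty threshold would make the arbitrator more conservative, rejecting more borderline hypotheses in favor of the uncertainty hypothesis.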
- Embodiments of the method, system, and computer program product for threat arbitration may be implemented on a general-purpose computer, a special-purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element, an ASIC or other integrated circuit, a digital signal processor, a hardwired electronic or logic circuit such as a discrete element circuit, a programmed logic circuit such as a PLD, PLA, FPGA, PAL, or the like. In general, any process capable of implementing the functions or steps described herein can be used to implement embodiments of the method, system, or computer program product for threat arbitration.
- Furthermore, embodiments of the disclosed method, system, and computer program product for threat arbitration may be readily implemented, fully or partially, in software using, for example, object or object-oriented software development environments that provide portable source code that can be used on a variety of computer platforms. Alternatively, embodiments of the disclosed method, system, and computer program product for threat arbitration can be implemented partially or fully in hardware using, for example, standard logic circuits or a VLSI design. Other hardware or software can be used to implement embodiments depending on the speed and/or efficiency requirements of the systems, the particular function, and/or particular software or hardware system, microprocessor, or microcomputer being utilized. Embodiments of the method, system, and computer program product for threat arbitration can be implemented in hardware and/or software using any known or later developed systems or structures, devices and/or software by those of ordinary skill in the applicable art from the function description provided herein and with a general basic knowledge of the computer, radiographic, and image processing arts.
- Moreover, embodiments of the disclosed method, system, and computer program product for threat arbitration can be implemented in software executed on a programmed general purpose computer, a special purpose computer, a microprocessor, or the like. Also, threat arbitration can be implemented as a program embedded on a personal computer such as a JAVA® or CGI script, as a resource residing on a server or image processing workstation, as a routine embedded in a dedicated processing system, or the like. The method and system can also be implemented by physically incorporating the method for threat arbitration into a software and/or hardware system, such as the hardware and software systems of multi-energy radiographic systems.
- It is, therefore, apparent that there is provided, in accordance with the present invention, a method, system, and computer program product for threat arbitration. While this invention has been described in conjunction with a number of embodiments, it is evident that many alternatives, modifications and variations would be or are apparent to those of ordinary skill in the applicable arts. Accordingly, Applicants intend to embrace all such alternatives, modifications, equivalents and variations that are within the spirit and scope of this invention.
Claims (20)
1. A method of arbitrating threat outputs from a plurality of threat analysis algorithms so as to generate a final decision from a plurality of hypotheses, the method comprising:
receiving at least one threat output for a region of interest (ROI) from each of the plurality of threat analysis algorithms, wherein each threat output is based on analysis of a radiographic image;
assigning each threat output to a class membership based on the respective threat analysis algorithm and the ROI;
selecting at least one expert rule based on the at least one threat output and the respective class membership, each selected expert rule having an associated priority value and providing an amount of support mass for at least one of the plurality of hypotheses and an amount of uncertainty mass, wherein the support mass and uncertainty mass are weighted by the associated priority value;
determining a combined belief value for each hypothesis from the provided support mass;
determining a total uncertainty value from the provided uncertainty masses;
generating a decision matrix including the plurality of hypotheses and respective combined belief values;
selecting a hypothesis from the decision matrix with the highest combined belief value;
comparing the combined belief value for the selected hypothesis to a first threshold value;
comparing the total uncertainty value to a second threshold value;
outputting the selected hypothesis as the final decision if the combined belief value is greater than or equal to the first threshold and the total uncertainty is less than the second threshold; and
outputting an uncertainty hypothesis as the final decision if the combined belief value is less than the first threshold or the total uncertainty is greater than or equal to the second threshold.
2. The method of claim 1 , wherein the priority value for each expert rule is configurable by a user so as to adjust a relative importance of that expert rule.
3. The method of claim 1 , wherein the expert rules are determined using information about each threat analysis algorithm, said information including a success rate of that threat analysis algorithm with respect to the ROI in real world scenarios.
4. The method of claim 1 , wherein assigning each threat output to a class membership includes assigning each threat output to one of a plurality of fuzzy identifiers.
5. The method of claim 1 , wherein assigning each threat output to a class membership includes mapping each threat output to a probability distribution.
6. The method of claim 1 , further comprising the step of normalizing the support mass and the uncertainty mass amounts.
7. The method of claim 1 , wherein each threat output relates to the detection of an effective atomic number of a material in the object and the final decision relates to the likelihood that said material is a high Z material, a special nuclear material, or a general threat.
8. A system for arbitrating threat outputs from a plurality of threat analysis algorithms, the system comprising:
means for assigning each threat output to a class membership, each threat output generated by a corresponding threat analysis algorithm;
means for selecting at least one rule, each selected rule providing an amount of support mass for one of a plurality of hypotheses and an amount of uncertainty mass; and,
means for determining a combined belief value for each of the plurality of hypotheses and a total uncertainty value from the provided support masses and uncertainty masses, respectively, and
means for generating an output based on the combined belief values and the total uncertainty value.
9. The system of claim 8 , wherein said means for generating an output further comprises:
means for selecting a hypothesis with the highest combined belief value;
means for comparing the combined belief value for the selected hypothesis to a first threshold value;
means for comparing the total uncertainty value to a second threshold value; and
means for outputting,
wherein said means for outputting outputs the selected hypothesis if the combined belief value for the selected hypothesis is greater than or equal to the first threshold and the total uncertainty is less than the second threshold or outputs an uncertainty hypothesis if the combined belief value for the selected hypothesis is less than the first threshold or the total uncertainty is greater than or equal to the second threshold.
10. The system of claim 8 , wherein each rule has an associated priority value, the support mass and uncertainty mass for the rule being weighted by the associated priority value, wherein the associated priority value is configurable by a user.
11. The system of claim 8 , wherein said means for selecting at least one rule selects the at least one rule based at least in part on the threat outputs and the respective class membership.
12. The system of claim 8 , wherein the rules are determined using information about each threat analysis algorithm, said information including a success rate of each threat analysis algorithm.
13. The system of claim 8 , wherein each threat output is assigned a class membership by mapping the threat outputs to fuzzy identifiers or a probability distribution.
14. The system of claim 8 , wherein each threat output relates to the detection of an effective atomic number of a material based on analysis of a radiographic image and the selected hypothesis relates to the likelihood that said material is a high Z material, a special nuclear material, or a general threat.
15. A computer program product for arbitrating threat outputs from a plurality of threat analysis systems comprising:
a computer readable medium encoded with software instructions that, when executed by a computer, cause the computer to perform the steps of:
receiving at least one threat output from each of the plurality of threat analysis systems;
assigning each threat output to a class membership;
determining a combined belief value for each of a plurality of hypotheses and a total uncertainty value, based at least in part on the at least one threat output and the respective class membership; and
generating a decision matrix including the plurality of hypotheses and associated combined belief values.
16. The computer program product of claim 15 , wherein the steps further comprise selecting a hypothesis from the decision matrix with the highest combined belief value.
17. The computer program product of claim 16 , wherein the steps further comprise:
comparing the combined belief value for the selected hypothesis to a first threshold value;
comparing the total uncertainty value to a second threshold value; outputting the selected hypothesis if the combined belief value is greater than or equal to the first threshold and the total uncertainty is less than the second threshold; and,
outputting an uncertainty hypothesis if the combined belief value is less than the first threshold or the total uncertainty is greater than or equal to the second threshold.
18. The computer program product of claim 15 , wherein the steps further comprise:
selecting at least one expert rule based on the at least one threat output and the respective class membership, a selected expert rule providing an amount of support mass for at least one of the plurality of hypotheses and an amount of uncertainty mass,
wherein each expert rule has an associated user-configurable priority value, the support mass for the expert rule being weighted by the user-configurable priority value, and
wherein the combined belief value for each hypothesis and the total uncertainty value are determined from the provided support masses and uncertainty masses, respectively.
19. The computer program product of claim 15 , wherein each threat output is assigned to a class membership by correlating the threat outputs to fuzzy identifiers or a probabilistic map.
20. The computer program product of claim 15 , wherein each threat output relates to the detection of an effective atomic number of a material based on analysis of a radiographic image and at least one of the plurality of hypotheses relates to the likelihood that said material is a high Z material, a special nuclear material, or a general threat.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/129,393 US20090055344A1 (en) | 2007-05-29 | 2008-05-29 | System and method for arbitrating outputs from a plurality of threat analysis systems |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US94063207P | 2007-05-29 | 2007-05-29 | |
US12/129,393 US20090055344A1 (en) | 2007-05-29 | 2008-05-29 | System and method for arbitrating outputs from a plurality of threat analysis systems |
Publications (1)
Publication Number | Publication Date |
---|---|
US20090055344A1 true US20090055344A1 (en) | 2009-02-26 |
Family
ID=40088192
Family Applications (7)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/129,383 Expired - Fee Related US8094874B2 (en) | 2007-05-29 | 2008-05-29 | Material context analysis |
US12/129,036 Abandoned US20090003651A1 (en) | 2007-05-29 | 2008-05-29 | Object segmentation recognition |
US12/129,439 Abandoned US20080298544A1 (en) | 2007-05-29 | 2008-05-29 | Genetic tuning of coefficients in a threat detection system |
US12/129,371 Abandoned US20090052762A1 (en) | 2007-05-29 | 2008-05-29 | Multi-energy radiographic system for estimating effective atomic number using multiple ratios |
US12/129,393 Abandoned US20090055344A1 (en) | 2007-05-29 | 2008-05-29 | System and method for arbitrating outputs from a plurality of threat analysis systems |
US12/129,410 Abandoned US20090003699A1 (en) | 2007-05-29 | 2008-05-29 | User guided object segmentation recognition |
US12/129,055 Abandoned US20090052622A1 (en) | 2007-05-29 | 2008-05-29 | Nuclear material detection system |
Family Applications Before (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/129,383 Expired - Fee Related US8094874B2 (en) | 2007-05-29 | 2008-05-29 | Material context analysis |
US12/129,036 Abandoned US20090003651A1 (en) | 2007-05-29 | 2008-05-29 | Object segmentation recognition |
US12/129,439 Abandoned US20080298544A1 (en) | 2007-05-29 | 2008-05-29 | Genetic tuning of coefficients in a threat detection system |
US12/129,371 Abandoned US20090052762A1 (en) | 2007-05-29 | 2008-05-29 | Multi-energy radiographic system for estimating effective atomic number using multiple ratios |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/129,410 Abandoned US20090003699A1 (en) | 2007-05-29 | 2008-05-29 | User guided object segmentation recognition |
US12/129,055 Abandoned US20090052622A1 (en) | 2007-05-29 | 2008-05-29 | Nuclear material detection system |
Country Status (1)
Country | Link |
---|---|
US (7) | US8094874B2 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090003699A1 (en) * | 2007-05-29 | 2009-01-01 | Peter Dugan | User guided object segmentation recognition |
US20140164311A1 (en) * | 2012-12-12 | 2014-06-12 | International Business Machines Corporation | Computing prioritzed general arbitration rules for conflicting rules |
US8924325B1 (en) * | 2011-02-08 | 2014-12-30 | Lockheed Martin Corporation | Computerized target hostility determination and countermeasure |
US9476923B2 (en) | 2011-06-30 | 2016-10-25 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | Method and device for identifying a material by the spectral analysis of electromagnetic radiation passing through said material |
US9697467B2 (en) | 2014-05-21 | 2017-07-04 | International Business Machines Corporation | Goal-driven composition with preferences method and system |
US9785755B2 (en) | 2014-05-21 | 2017-10-10 | International Business Machines Corporation | Predictive hypothesis exploration using planning |
Families Citing this family (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8200015B2 (en) * | 2007-06-22 | 2012-06-12 | Siemens Aktiengesellschaft | Method for interactively segmenting structures in image data records and image processing unit for carrying out the method |
DE102007028895B4 (en) * | 2007-06-22 | 2010-07-15 | Siemens Ag | Method for segmenting structures in 3D image data sets |
KR20100038046A (en) * | 2008-10-02 | 2010-04-12 | 가부시키가이샤 한도오따이 에네루기 켄큐쇼 | Touch panel and method for driving the same |
KR20110032047A (en) * | 2009-09-22 | 2011-03-30 | 삼성전자주식회사 | Multi-energy x-ray system, multi-energy x-ray material discriminated image processing unit, and method for processing material discriminated images of the multi-energy x-ray system |
JP5740132B2 (en) | 2009-10-26 | 2015-06-24 | 株式会社半導体エネルギー研究所 | Display device and semiconductor device |
US9036782B2 (en) * | 2010-08-06 | 2015-05-19 | Telesecurity Sciences, Inc. | Dual energy backscatter X-ray shoe scanning device |
US20120113146A1 (en) * | 2010-11-10 | 2012-05-10 | Patrick Michael Virtue | Methods, apparatus and articles of manufacture to combine segmentations of medical diagnostic images |
PL2677936T3 (en) * | 2011-02-25 | 2022-02-07 | Smiths Detection Germany Gmbh | Image reconstruction based on parametric models |
JP2014518133A (en) * | 2011-06-30 | 2014-07-28 | Analogic Corporation | Iterative image reconstruction method and system |
US9705607B2 (en) * | 2011-10-03 | 2017-07-11 | Cornell University | System and methods of acoustic monitoring |
JP5895624B2 (en) * | 2012-03-14 | 2016-03-30 | オムロン株式会社 | Image processing apparatus, image processing method, control program, and recording medium |
US9589188B2 (en) * | 2012-11-14 | 2017-03-07 | Varian Medical Systems, Inc. | Method and apparatus pertaining to identifying objects of interest in a high-energy image |
US9118714B1 (en) * | 2014-07-23 | 2015-08-25 | Lookingglass Cyber Solutions, Inc. | Apparatuses, methods and systems for a cyber threat visualization and editing user interface |
GB2530252B (en) * | 2014-09-10 | 2020-04-01 | Smiths Heimann Sas | Determination of a degree of homogeneity in images |
CN104482996B (en) * | 2014-12-24 | 2019-03-15 | 胡桂标 | The material kind of passive nuclear level sensing device corrects measuring system |
CN104778444B (en) * | 2015-03-10 | 2018-01-16 | 公安部交通管理科学研究所 | The appearance features analysis method of vehicle image under road scene |
US9687207B2 (en) * | 2015-04-01 | 2017-06-27 | Toshiba Medical Systems Corporation | Pre-reconstruction calibration, data correction, and material decomposition method and apparatus for photon-counting spectrally-resolving X-ray detectors and X-ray imaging |
US10078150B2 (en) | 2015-04-14 | 2018-09-18 | Board Of Regents, The University Of Texas System | Detecting and quantifying materials in containers utilizing an inverse algorithm with adaptive regularization |
US9760801B2 (en) | 2015-05-12 | 2017-09-12 | Lawrence Livermore National Security, Llc | Identification of uncommon objects in containers |
IL239191A0 (en) * | 2015-06-03 | 2015-11-30 | Amir B Geva | Image classification system |
CN106353828B (en) * | 2015-07-22 | 2018-09-21 | 清华大学 | The method and apparatus that checked property body weight is estimated in safe examination system |
US11836650B2 (en) | 2016-01-27 | 2023-12-05 | Microsoft Technology Licensing, Llc | Artificial intelligence engine for mixing and enhancing features from one or more trained pre-existing machine-learning models |
US20180357543A1 (en) * | 2016-01-27 | 2018-12-13 | Bonsai AI, Inc. | Artificial intelligence system configured to measure performance of artificial intelligence over time |
US10664766B2 (en) | 2016-01-27 | 2020-05-26 | Bonsai AI, Inc. | Graphical user interface to an artificial intelligence engine utilized to generate one or more trained artificial intelligence models |
US11775850B2 (en) | 2016-01-27 | 2023-10-03 | Microsoft Technology Licensing, Llc | Artificial intelligence engine having various algorithms to build different concepts contained within a same AI model |
US11868896B2 (en) | 2016-01-27 | 2024-01-09 | Microsoft Technology Licensing, Llc | Interface for working with simulations on premises |
US11841789B2 (en) | 2016-01-27 | 2023-12-12 | Microsoft Technology Licensing, Llc | Visual aids for debugging |
US10204226B2 (en) | 2016-12-07 | 2019-02-12 | General Electric Company | Feature and boundary tuning for threat detection in industrial asset control system |
US11120297B2 (en) * | 2018-11-30 | 2021-09-14 | International Business Machines Corporation | Segmentation of target areas in images |
US10939044B1 (en) * | 2019-08-27 | 2021-03-02 | Adobe Inc. | Automatically setting zoom level for image capture |
Family Cites Families (60)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US73518A (en) * | 1868-01-21 | Luke fitzpatrick and jacob schinneller | ||
US538758A (en) * | 1895-05-07 | Richard watkins | ||
EP0161324B1 (en) * | 1984-05-14 | 1987-11-25 | Matsushita Electric Industrial Co., Ltd. | Quantum-counting radiography method and apparatus |
US5132998A (en) * | 1989-03-03 | 1992-07-21 | Matsushita Electric Industrial Co., Ltd. | Radiographic image processing method and photographic imaging apparatus therefor |
US5319547A (en) * | 1990-08-10 | 1994-06-07 | Vivid Technologies, Inc. | Device and method for inspection of baggage and other objects |
US5692029A (en) * | 1993-01-15 | 1997-11-25 | Technology International Incorporated | Detection of concealed explosives and contraband |
US5600700A (en) * | 1995-09-25 | 1997-02-04 | Vivid Technologies, Inc. | Detecting explosives or other contraband by employing transmitted and scattered X-rays |
US5642393A (en) * | 1995-09-26 | 1997-06-24 | Vivid Technologies, Inc. | Detecting contraband by employing interactive multiprobe tomography |
US6018562A (en) * | 1995-11-13 | 2000-01-25 | The United States Of America As Represented By The Secretary Of The Army | Apparatus and method for automatic recognition of concealed objects using multiple energy computed tomography |
US6026171A (en) * | 1998-02-11 | 2000-02-15 | Analogic Corporation | Apparatus and method for detection of liquids in computed tomography data |
US6236709B1 (en) * | 1998-05-04 | 2001-05-22 | Ensco, Inc. | Continuous high speed tomographic imaging system and method |
US7394363B1 (en) * | 1998-05-12 | 2008-07-01 | Bahador Ghahramani | Intelligent multi purpose early warning system for shipping containers, components therefor and methods of making the same |
US6282305B1 (en) * | 1998-06-05 | 2001-08-28 | Arch Development Corporation | Method and system for the computerized assessment of breast cancer risk |
US6567496B1 (en) * | 1999-10-14 | 2003-05-20 | Sychev Boris S | Cargo inspection apparatus and process |
DE19954663B4 (en) * | 1999-11-13 | 2006-06-08 | Smiths Heimann Gmbh | Method and device for determining a material of a detected object |
CA2348150C (en) * | 2000-05-25 | 2007-03-13 | Esam M.A. Hussein | Non-rotating x-ray system for three-dimensional, three-parameter imaging |
US20020186875A1 (en) * | 2001-04-09 | 2002-12-12 | Burmer Glenna C. | Computer methods for image pattern recognition in organic material |
US6969861B2 (en) * | 2001-10-02 | 2005-11-29 | Konica Corporation | Cassette for radiographic imaging, radiographic image reading apparatus and radiographic image reading method |
US7444309B2 (en) * | 2001-10-31 | 2008-10-28 | Icosystem Corporation | Method and system for implementing evolutionary algorithms |
US6816571B2 (en) * | 2002-02-06 | 2004-11-09 | L-3 Communications Security And Detection Systems Corporation Delaware | Method and apparatus for transmitting information about a target object between a prescanner and a CT scanner |
AU2003219689A1 (en) * | 2002-02-08 | 2003-09-02 | Maryellen L. Giger | Method and system for risk-modulated diagnosis of disease |
US7162005B2 (en) * | 2002-07-19 | 2007-01-09 | Varian Medical Systems Technologies, Inc. | Radiation sources and compact radiation scanning systems |
US7103137B2 (en) * | 2002-07-24 | 2006-09-05 | Varian Medical Systems Technology, Inc. | Radiation scanning of objects for contraband |
US7356115B2 (en) * | 2002-12-04 | 2008-04-08 | Varian Medical Systems Technology, Inc. | Radiation scanning units including a movable platform |
AU2003270910A1 (en) * | 2002-09-27 | 2004-04-19 | Scantech Holdings, Llc | System for alternately pulsing energy of accelerated electrons bombarding a conversion target |
AU2003294600A1 (en) * | 2002-12-10 | 2004-06-30 | Digitome Corporation | Volumetric 3d x-ray imaging system for baggage inspection including the detection of explosives |
US7277521B2 (en) * | 2003-04-08 | 2007-10-02 | The Regents Of The University Of California | Detecting special nuclear materials in containers using high-energy gamma rays emitted by fission products |
US20050058242A1 (en) * | 2003-09-15 | 2005-03-17 | Peschmann Kristian R. | Methods and systems for the rapid detection of concealed objects |
US7092485B2 (en) * | 2003-05-27 | 2006-08-15 | Control Screening, Llc | X-ray inspection system for detecting explosives and other contraband |
US6937692B2 (en) * | 2003-06-06 | 2005-08-30 | Varian Medical Systems Technologies, Inc. | Vehicle mounted inspection systems and methods |
US7433507B2 (en) * | 2003-07-03 | 2008-10-07 | Ge Medical Systems Global Technology Co. | Imaging chain for digital tomosynthesis on a flat panel detector |
US7697743B2 (en) * | 2003-07-03 | 2010-04-13 | General Electric Company | Methods and systems for prescribing parameters for tomosynthesis |
US7492855B2 (en) * | 2003-08-07 | 2009-02-17 | General Electric Company | System and method for detecting an object |
US7366282B2 (en) * | 2003-09-15 | 2008-04-29 | Rapiscan Security Products, Inc. | Methods and systems for rapid detection of concealed objects using fluorescence |
US7856081B2 (en) * | 2003-09-15 | 2010-12-21 | Rapiscan Systems, Inc. | Methods and systems for rapid detection of concealed objects using fluorescence |
US7491958B2 (en) * | 2003-08-27 | 2009-02-17 | Scantech Holdings, Llc | Radiographic inspection system for inspecting the contents of a container having dual injector and dual accelerating section |
US7162007B2 (en) * | 2004-02-06 | 2007-01-09 | Elyan Vladimir V | Non-intrusive inspection systems for large container screening and inspection |
US7609807B2 (en) * | 2004-02-17 | 2009-10-27 | General Electric Company | CT-Guided system and method for analyzing regions of interest for contraband detection |
US8263938B2 (en) * | 2004-03-01 | 2012-09-11 | Varian Medical Systems, Inc. | Dual energy radiation scanning of objects |
US7340443B2 (en) * | 2004-05-14 | 2008-03-04 | Lockheed Martin Corporation | Cognitive arbitration system |
US7190757B2 (en) * | 2004-05-21 | 2007-03-13 | Analogic Corporation | Method of and system for computing effective atomic number images in multi-energy computed tomography |
US20060269140A1 (en) * | 2005-03-15 | 2006-11-30 | Ramsay Thomas E | System and method for identifying feature of interest in hyperspectral data |
US7356118B2 (en) * | 2004-10-22 | 2008-04-08 | Scantech Holdings, Llc | Angled-beam detection system for container inspection |
WO2007011403A2 (en) * | 2004-10-22 | 2007-01-25 | Scantech Holdings, Llc | Cryptographic container security system |
US20060256914A1 (en) * | 2004-11-12 | 2006-11-16 | Might Matthew B | Non-intrusive container inspection system using forward-scattered radiation |
US7847260B2 (en) * | 2005-02-04 | 2010-12-07 | Dan Inbar | Nuclear threat detection |
US20060204107A1 (en) * | 2005-03-04 | 2006-09-14 | Lockheed Martin Corporation | Object recognition system using dynamic length genetic training |
US7336767B1 (en) * | 2005-03-08 | 2008-02-26 | Khai Minh Le | Back-scattered X-ray radiation attenuation method and apparatus |
US20090174554A1 (en) * | 2005-05-11 | 2009-07-09 | Eric Bergeron | Method and system for screening luggage items, cargo containers or persons |
US7261466B2 (en) * | 2005-06-01 | 2007-08-28 | Endicott Interconnect Technologies, Inc. | Imaging inspection apparatus with directional cooling |
CN100582758C (en) * | 2005-11-03 | 2010-01-20 | 清华大学 | Method and apparatus for recognizing materials by using fast neutrons and continuous energy spectrum X rays |
EP1951119A2 (en) * | 2005-11-09 | 2008-08-06 | Dexela Limited | Methods and apparatus for obtaining low-dose imaging |
US7536365B2 (en) * | 2005-12-08 | 2009-05-19 | Northrop Grumman Corporation | Hybrid architecture for acquisition, recognition, and fusion |
US20070211248A1 (en) * | 2006-01-17 | 2007-09-13 | Innovative American Technology, Inc. | Advanced pattern recognition systems for spectral analysis |
US7483511B2 (en) * | 2006-06-06 | 2009-01-27 | Ge Homeland Protection, Inc. | Inspection system and method |
US8015127B2 (en) * | 2006-09-12 | 2011-09-06 | New York University | System, method, and computer-accessible medium for providing a multi-objective evolutionary optimization of agent-based models |
US8110812B2 (en) * | 2006-10-25 | 2012-02-07 | Soreq Nuclear Research Center | Method and system for detecting nitrogenous materials via gamma-resonance absorption (GRA) |
US7492862B2 (en) * | 2007-01-17 | 2009-02-17 | Ge Homeland Protection, Inc. | Computed tomography cargo inspection system and method |
US8094874B2 (en) * | 2007-05-29 | 2012-01-10 | Lockheed Martin Corporation | Material context analysis |
US7706502B2 (en) * | 2007-05-31 | 2010-04-27 | Morpho Detection, Inc. | Cargo container inspection system and apparatus |
2008
- 2008-05-29 US US12/129,383 patent/US8094874B2/en not_active Expired - Fee Related
- 2008-05-29 US US12/129,036 patent/US20090003651A1/en not_active Abandoned
- 2008-05-29 US US12/129,439 patent/US20080298544A1/en not_active Abandoned
- 2008-05-29 US US12/129,371 patent/US20090052762A1/en not_active Abandoned
- 2008-05-29 US US12/129,393 patent/US20090055344A1/en not_active Abandoned
- 2008-05-29 US US12/129,410 patent/US20090003699A1/en not_active Abandoned
- 2008-05-29 US US12/129,055 patent/US20090052622A1/en not_active Abandoned
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090003699A1 (en) * | 2007-05-29 | 2009-01-01 | Peter Dugan | User guided object segmentation recognition |
US20090052762A1 (en) * | 2007-05-29 | 2009-02-26 | Peter Dugan | Multi-energy radiographic system for estimating effective atomic number using multiple ratios |
US20090052732A1 (en) * | 2007-05-29 | 2009-02-26 | Peter Dugan | Material context analysis |
US8094874B2 (en) * | 2007-05-29 | 2012-01-10 | Lockheed Martin Corporation | Material context analysis |
US8924325B1 (en) * | 2011-02-08 | 2014-12-30 | Lockheed Martin Corporation | Computerized target hostility determination and countermeasure |
US9476923B2 (en) | 2011-06-30 | 2016-10-25 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | Method and device for identifying a material by the spectral analysis of electromagnetic radiation passing through said material |
US20140164311A1 (en) * | 2012-12-12 | 2014-06-12 | International Business Machines Corporation | Computing prioritzed general arbitration rules for conflicting rules |
US9697467B2 (en) | 2014-05-21 | 2017-07-04 | International Business Machines Corporation | Goal-driven composition with preferences method and system |
US9785755B2 (en) | 2014-05-21 | 2017-10-10 | International Business Machines Corporation | Predictive hypothesis exploration using planning |
US10783441B2 (en) | 2014-05-21 | 2020-09-22 | International Business Machines Corporation | Goal-driven composition with preferences method and system |
Also Published As
Publication number | Publication date |
---|---|
US8094874B2 (en) | 2012-01-10 |
US20090052762A1 (en) | 2009-02-26 |
US20090052622A1 (en) | 2009-02-26 |
US20090003651A1 (en) | 2009-01-01 |
US20080298544A1 (en) | 2008-12-04 |
US20090003699A1 (en) | 2009-01-01 |
US20090052732A1 (en) | 2009-02-26 |
Similar Documents
Publication | Title |
---|---|
US20090055344A1 (en) | System and method for arbitrating outputs from a plurality of threat analysis systems |
CN112990432B (en) | Target recognition model training method and device and electronic equipment |
US9489562B2 (en) | Image processing method and apparatus |
US11308714B1 (en) | Artificial intelligence system for identifying and assessing attributes of a property shown in aerial imagery |
US20050058242A1 (en) | Methods and systems for the rapid detection of concealed objects |
Salmon et al. | Proper comparison among methods using a confusion matrix |
Kuznetsova et al. | A new determination of the high-redshift Type Ia supernova rates with the Hubble Space Telescope Advanced Camera for Surveys |
EP1685429A2 (en) | A system and method for detecting contraband |
CN114821282B (en) | Image detection device and method based on a domain-adversarial neural network |
Castro-Rodriguez et al. | Intracluster light in the Virgo cluster: large-scale distribution |
US20140241618A1 (en) | Combining Region Based Image Classifiers |
Hosenie et al. | MeerCRAB: MeerLICHT classification of real and bogus transients using deep learning |
US20090226032A1 (en) | Systems and methods for reducing false alarms in detection systems |
US20220323030A1 (en) | Probabilistic image analysis |
Davis et al. | The HETDEX Survey emission-line exploration and source classification |
US8972307B1 (en) | Method and apparatus for machine learning |
Gong et al. | KDCTime: Knowledge distillation with calibration on InceptionTime for time-series classification |
CN112651397B (en) | Inspection sheet classification method, apparatus, computer device, and storage medium |
US10248697B2 (en) | Method and system for facilitating interactive review of data |
CN113298807A (en) | Computed tomography image processing method and device |
CN117934869B (en) | Target detection method, system, computing device and medium |
CN110335670A (en) | Image processing method and device for the classification of epiphysis grade |
Cannaday et al. | Improved search and detection of surface-to-air missile sites using spatial fusion of component object detections from deep neural networks |
EP4187438A1 (en) | Object sample selection for training of neural networks |
Dhar et al. | An Improved Classification of Chest X-ray Images Using Adaptive Activation Function |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: LOCKHEED MARTIN CORPORATION, MARYLAND Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DUGAN, PETER;PARADIS, ROSEMARY D.;REEL/FRAME:021481/0887;SIGNING DATES FROM 20080627 TO 20080812 |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE |