U.S. patent application number 12/129,393 was filed with the patent office on May 29, 2008, and published on February 26, 2009, for a system and method for arbitrating outputs from a plurality of threat analysis systems. Invention is credited to Peter Dugan and Rosemary D. Paradis.

Application Number: 12/129393
Publication Number: 20090055344
Family ID: 40088192
Publication Date: 2009-02-26
United States Patent Application 20090055344
Kind Code: A1
Dugan; Peter; et al.
February 26, 2009

SYSTEM AND METHOD FOR ARBITRATING OUTPUTS FROM A PLURALITY OF THREAT ANALYSIS SYSTEMS
Abstract
A method of arbitrating outputs from a set of threat analysis
algorithms or systems. The method can include receiving threat
outputs from different threat analysis algorithms. Each threat
output can be assigned to a class membership. Rules can be applied
based on the threat outputs and the respective class membership.
Each rule can provide an amount of support mass to a hypothesis and
an amount of uncertainty mass. The rules can have an associated
priority value for weighting the masses. A combined belief value
for each hypothesis and a total uncertainty value can be determined
based on the provided masses. The method can further include
generating a decision matrix of the hypotheses and combined belief
values. A hypothesis can be selected from the decision matrix based
on the combined belief value.
Inventors: Dugan; Peter (Ithaca, NY); Paradis; Rosemary D. (Vestal, NY)
Correspondence Address: MILES & STOCKBRIDGE PC, 1751 PINNACLE DRIVE, SUITE 500, MCLEAN, VA 22102-3833, US
Family ID: 40088192
Appl. No.: 12/129393
Filed: May 29, 2008
Related U.S. Patent Documents

Application Number: 60/940,632
Filing Date: May 29, 2007
Patent Number: --
Current U.S. Class: 706/52
Current CPC Class: G06K 9/6263 20130101; G01N 23/06 20130101
Class at Publication: 706/52
International Class: G06N 7/02 20060101 G06N007/02
Claims
1. A method of arbitrating threat outputs from a plurality of
threat analysis algorithms so as to generate a final decision from
a plurality of hypotheses, the method comprising: receiving at
least one threat output for a region of interest (ROI) from each of
the plurality of threat analysis algorithms, wherein each threat
output is based on analysis of a radiographic image; assigning each
threat output to a class membership based on the respective threat
analysis algorithm and the ROI; selecting at least one expert rule
based on the at least one threat output and the respective class
membership, each selected expert rule having an associated priority
value and providing an amount of support mass for at least one of
the plurality of hypotheses and an amount of uncertainty mass,
wherein the support mass and uncertainty mass are weighted by the
associated priority value; determining a combined belief value for
each hypothesis from the provided support mass; determining a total
uncertainty value from the provided uncertainty masses; generating
a decision matrix including the plurality of hypotheses and
respective combined belief values; selecting a hypothesis from the
decision matrix with the highest combined belief value; comparing
the combined belief value for the selected hypothesis to a first
threshold value; comparing the total uncertainty value to a second
threshold value; outputting the selected hypothesis as the final
decision if the combined belief value is greater than or equal to
the first threshold and the total uncertainty is less than the
second threshold; and outputting an uncertainty hypothesis as the
final decision if the combined belief value is less than the first
threshold or the total uncertainty is greater than or equal to the
second threshold.
2. The method of claim 1, wherein the priority value for each
expert rule is configurable by a user so as to adjust a relative
importance of that expert rule.
3. The method of claim 1, wherein the expert rules are determined
using information about each threat analysis algorithm, said
information including a success rate of that threat analysis
algorithm with respect to the ROI in real world scenarios.
4. The method of claim 1, wherein assigning each threat output to a
class membership includes assigning each threat output to one of a
plurality of fuzzy identifiers.
5. The method of claim 1, wherein assigning each threat output to a
class membership includes mapping each threat output to a
probability distribution.
6. The method of claim 1, further comprising the step of
normalizing the support mass and the uncertainty mass amounts.
7. The method of claim 1, wherein each threat output relates to the
detection of an effective atomic number of a material in the object
and the final decision relates to the likelihood that said material
is a high Z material, a special nuclear material, or a general
threat.
8. A system for arbitrating threat outputs from a plurality of
threat analysis algorithms, the system comprising: means for
assigning each threat output to a class membership, each threat
output generated by a corresponding threat analysis algorithm;
means for selecting at least one rule, each selected rule providing
an amount of support mass for one of a plurality of hypotheses and
an amount of uncertainty mass; and, means for determining a
combined belief value for each of the plurality of hypotheses and a
total uncertainty value from the provided support masses and
uncertainty masses, respectively, and means for generating an
output based on the combined belief values and the total
uncertainty value.
9. The system of claim 8, wherein said means for generating an
output further comprises: means for selecting a hypothesis with the
highest combined belief value; means for comparing the combined
belief value for the selected hypothesis to a first threshold
value; means for comparing the total uncertainty value to a second
threshold value; and means for outputting, wherein said means for
outputting outputs the selected hypothesis if the combined belief
value for the selected hypothesis is greater than or equal to the
first threshold and the total uncertainty is less than the second
threshold or outputs an uncertainty hypothesis if the combined
belief value for the selected hypothesis is less than the first
threshold or the total uncertainty is greater than or equal to the
second threshold.
10. The system of claim 8, wherein each rule has an associated
priority value, the support mass and uncertainty mass for the rule
being weighted by the associated priority value, wherein the
associated priority value is configurable by a user.
11. The system of claim 8, wherein said means for selecting at
least one rule selects the at least one rule based at least in part
on the threat outputs and the respective class membership.
12. The system of claim 8, wherein the rules are determined using
information about each threat analysis algorithm, said information
including a success rate of each threat analysis algorithm.
13. The system of claim 8, wherein each threat output is assigned a
class membership by mapping the threat outputs to fuzzy identifiers
or a probability distribution.
14. The system of claim 8, wherein each threat output relates to
the detection of an effective atomic number of a material based on
analysis of a radiographic image and the selected hypothesis
relates to the likelihood that said material is a high Z material,
a special nuclear material, or a general threat.
15. A computer program product for arbitrating threat outputs from
a plurality of threat analysis systems comprising: a computer
readable medium encoded with software instructions that, when
executed by a computer, cause the computer to perform the steps of:
receiving at least one threat output from each of the plurality of
threat analysis systems; assigning each threat output to a class
membership; determining a combined belief value for each of a
plurality of hypotheses and a total uncertainty value, based at
least in part on the at least one threat output and the respective
class membership; and generating a decision matrix including the
plurality of hypotheses and associated combined belief values.
16. The computer program product of claim 15, wherein the steps
further comprise selecting a hypothesis from the decision matrix
with the highest combined belief value.
17. The computer program product of claim 16, wherein the steps
further comprise: comparing the combined belief value for the
selected hypothesis to a first threshold value; comparing the total
uncertainty value to a second threshold value; outputting the
selected hypothesis if the combined belief value is greater than or
equal to the first threshold and the total uncertainty is less than
the second threshold; and, outputting an uncertainty hypothesis if
the combined belief value is less than the first threshold or the
total uncertainty is greater than or equal to the second
threshold.
18. The computer program product of claim 15, wherein the steps
further comprise: selecting at least one expert rule based on the
at least one threat output and the respective class membership, a
selected expert rule providing an amount of support mass for at
least one of the plurality of hypotheses and an amount of
uncertainty mass, wherein each expert rule has an associated
user-configurable priority value, the support mass for the expert
rule being weighted by the user-configurable priority value, and
wherein the combined belief value for each hypothesis and the total
uncertainty value are determined from the provided support masses
and uncertainty masses, respectively.
19. The computer program product of claim 15, wherein each threat
output is assigned to a class membership by correlating the threat
outputs to fuzzy identifiers or a probabilistic map.
20. The computer program product of claim 15, wherein each threat
output relates to the detection of an effective atomic number of a
material based on analysis of a radiographic image and at least one
of the plurality of hypotheses relates to the likelihood that said
material is a high Z material, a special nuclear material, or a
general threat.
Description
[0001] The present application claims the benefit of provisional
U.S. Patent Application No. 60/940,632, entitled "Threat Detection
System", filed May 29, 2007, which is hereby incorporated by
reference in its entirety.
[0002] The present invention relates generally to data fusion
techniques, and, more particularly, to arbitration of the output of
independent algorithms for threat detection.
[0003] The detection of special nuclear materials can be
accomplished with a combination of passive spectroscopic systems
and advanced radiography systems. Together, these two technologies
provide a capability to detect unshielded, lightly shielded and
heavily shielded nuclear materials, components, and weapons that
may be illicitly transported in trucks, cargo containers, air cargo
containers, or other conveyances. With regard to nuclear material
smuggled in cargo containers, it is more likely that the material
will be shielded to the extent that it may be difficult to detect
with passive spectroscopic systems. Thus, advanced radiography
systems can be used to help detect shielded materials.
[0004] Currently deployed radiography systems may be primarily
designed to provide the capability to detect traditional
contraband, such as drugs, currency, guns, or explosives. These
types of contraband typically have a low atomic number (Z). Next
generation radiography systems that use automated analysis of
detector signals can detect shielded materials with a high atomic
number, such as lead, uranium, or plutonium, by employing
multi-energy radiographic images. Threat assessments for a region
of interest can be made by algorithms or systems that employ the
resultant radiographic images.
[0005] These algorithms or systems may strike a compromise between false alarm rates and detection accuracy. Certain algorithms or systems
may have superior detection accuracy in one scenario, while
contributing to increased false alarms in another scenario. An
increased rate of false alarms is undesirable as it directs
resources away from actual threats. As such, algorithms or systems
are designed to balance acceptable detection accuracy with an
acceptable false alarm rate. Embodiments of the present invention
may address the above-mentioned problems and limitations, among
other things.
[0006] One embodiment provides a method of arbitrating outputs from
a plurality of threat analysis algorithms so as to generate a final
decision from a plurality of hypotheses. The method includes
receiving at least one threat output for a region of interest (ROI)
from each of the plurality of threat analysis algorithms. Each
threat output may be assigned to a class membership. Expert rules
can be selected based at least in part on the at least one threat
output and the respective class membership. The expert rules can
provide an amount of support mass for one of the plurality of
hypotheses and an amount of uncertainty mass. Each expert rule can
have an associated priority value that can be used to weight the
support mass and uncertainty mass. A combined belief value can be
determined for each hypothesis from the provided support mass. A
total uncertainty value can be determined from the uncertainty
masses for the plurality of hypotheses. The method can also include
generating a decision matrix output including the plurality of
hypotheses with respective combined belief values. A hypothesis
with the highest combined belief value may be selected from the
decision matrix output. The combined belief value for this selected
hypothesis can be compared with a first threshold value. The total
uncertainty value can be compared with a second threshold value.
The selected hypothesis or an uncertainty hypothesis may be output
as the final decision depending on the results of the
comparisons.
[0007] Another embodiment may include a system for arbitrating
outputs from a plurality of threat analysis systems. The system can
include means for assigning each threat output to a class
membership. The system may further include means for selecting one
or more rules. The selected rules can provide an amount of support
mass for one of a plurality of hypotheses. The selected rules can
also provide an amount of uncertainty mass. The system may further
include means for determining a combined belief value for each of
the plurality of hypotheses and a total uncertainty value from the
provided support mass and uncertainty masses, respectively, and
means for generating an output of a selected hypothesis based on
the combined belief values and the total uncertainty value.
[0008] Another embodiment includes a computer program product for
arbitrating outputs from a plurality of threat analysis algorithms.
The computer program product includes a computer readable medium
encoded with software instructions that, when executed by a
computer cause the computer to perform the step of receiving at
least one threat output from each of a plurality of threat analysis
algorithms. The steps may also include assigning each threat output
to a class membership. The steps may also include determining a
combined belief value for each of a plurality of hypotheses and a
total uncertainty value and generating a decision matrix output
including the plurality of hypotheses and associated combined
belief values.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] FIG. 1 shows a block diagram of a system view of an
exemplary embodiment;
[0010] FIG. 2 shows a flowchart of an exemplary method for
arbitrating the outputs of a plurality of algorithms; and
[0011] FIG. 3 shows a flowchart of exemplary aspects of an
arbitration method.
DETAILED DESCRIPTION
[0012] In general, an exemplary embodiment may use an arbitration
technology to integrate results from many independent algorithms or
systems, thereby minimizing false alarm rates and improving
assessment capability beyond the capability of any single
algorithm. Results from several systems may be individually
analyzed and their outcomes may be mapped to an occurrence
probability or fuzzy membership. A series of hypotheses may be
generated for each potential threat region. Expert rules may be
used to determine a combined belief for each hypothesis for use in
selecting a hypothesis as a final decision of the arbitrator.
[0013] FIG. 1 shows a block diagram overview 100 of an exemplary
embodiment of an advanced cognitive arbitrator (ACA) system. The
ACA 102 has three aspects: a class assignor module 108, a rule
selector module 112, and a data fusion module 116. The three
components of the ACA 102 may be configured together as a single
module, separate individual modules, or combined with other modules
performing separate functions. Independent threat analysis
algorithms 104a-104c may evaluate an object, such as by analysis of
radiographic images of the object, so as to generate threat outputs
106a-106c to the ACA 102. Each threat output can have an associated
confidence value that may be output to the ACA 102 as well.
Although only three algorithms 104a-104c have been shown, fewer or
additional algorithms can be used. Further, algorithms 104a-104c
may be separate systems employing different detection
methodologies.
[0014] The threat output may be the determination of the effective
atomic number of a material of an object based on analysis of a
radiographic image. Alternatively, the threat output may be a
detection of high-Z material, special nuclear material, or a
general threat condition in a given region of interest (ROI). Each
algorithm may also generate more than one threat output. For
example, a set of threat outputs may be generated that directly
correspond to a predetermined set of hypotheses. Each threat output
may then have an associated confidence value indicating the
likelihood that the output is correct. Alternatively, only a single
threat output for each particular algorithm may be communicated to
the ACA 102.
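The paragraph above leaves the exact shape of a threat output open. One minimal way to model it, as a sketch only (the field names are illustrative assumptions, not terms from the patent), is a small record type:

```python
from dataclasses import dataclass


@dataclass
class ThreatOutput:
    """Illustrative record for one threat output passed to the ACA.

    Field names are assumptions for this sketch, not from the patent.
    """
    algorithm_id: int    # which threat analysis algorithm produced it
    roi: str             # region of interest, e.g. "cargo bottom"
    z_effective: float   # estimated effective atomic number of the material
    confidence: float    # likelihood that this output is correct
```

A single algorithm could then emit one such record per hypothesis, or only one record overall, matching the two alternatives described above.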
[0015] In addition, the threat outputs may be color coded. For
example, the threats may be converted to a red-green-blue (RGB)
color scale that serves as a visual cue to an operator or user. In
the example where the threat output is the effective atomic number
of materials in an image, the various pixels in the image may be
color-coded to correspond to the estimated Z.sub.eff of the
material in the image. For example, high Z materials may be colored
red, organic materials may be colored orange, and metals may be
colored blue. Alternately, the color coding may be based on
confidence values associated with the threat outputs or threat
algorithms.
[0016] Within the ACA 102, class assignor module 108 can receive
the threat outputs 106a-106c. The class assignor 108 may then map
the threat outputs to a class membership. In other words, the class
assignor 108 may assign each threat output to a particular class
membership. The class membership may be based on associated
confidence values for each threat output or on an expected
confidence for a given ROI. The class membership may be a fuzzy
membership set, a probabilistic map, a probability distribution,
and/or the like.
[0017] The class membership for each threat output may be
determined based on the particular algorithm and the given ROI. For
example, three algorithms may be provided, as shown below in Table
1. Each threat output from the respective algorithms may be
assigned to a class membership depending on the application of the
algorithm to a particular location in the container. For example,
for an effective atomic number greater than a threshold N, any
threat output from algorithm 1 may be assigned a high confidence
value (or to a "high" class membership) if the algorithm is applied
to a ROI at the bottom of a particular container, as shown in Table
1. Alternately, any threat output from algorithm 1 may be assigned
a medium confidence value (or to a "medium" class membership) if
the algorithm is applied to a ROI in either the middle or top of
the particular container. Specific confidence values may be
associated with each class membership and accordingly assigned to
each threat measurement output, as appropriate. The different class
memberships and their relation to the threat algorithms may be
determined through human expert input regarding performance of the
threat algorithm in a given environment and system condition.
Accordingly, applicable class memberships may be determined for
each threat algorithm for a variety of containers and/or
scenarios.
TABLE 1. Sample Class Membership Using Three Algorithms on a Cargo Container

                       Modeled Confidence
Algorithm  Condition   Cargo Bottom  Cargo Middle  Cargo Top
1          Z > N       High          Medium        Medium
2          --          Medium        High          Medium
3          --          High          High          Low
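Table 1 amounts to a lookup from (algorithm, ROI location) to a modeled confidence class. A minimal sketch of that lookup, mirroring the table's entries (the dictionary structure and location keys are illustrative assumptions):

```python
# Lookup mirroring Table 1: modeled confidence class for each algorithm,
# keyed by where in the cargo container the ROI falls. The structure is
# illustrative; the patent does not prescribe it.
CLASS_MEMBERSHIP = {
    1: {"bottom": "high",   "middle": "medium", "top": "medium"},
    2: {"bottom": "medium", "middle": "high",   "top": "medium"},
    3: {"bottom": "high",   "middle": "high",   "top": "low"},
}


def assign_class(algorithm_id: int, roi_location: str) -> str:
    """Assign a class membership to a threat output based on which
    algorithm produced it and where its ROI lies in the container."""
    return CLASS_MEMBERSHIP[algorithm_id][roi_location]
```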
[0018] For example, the class assignor module 108 may convert the
confidence value associated with each threat output into
probability distribution functions represented by fuzzy class
membership designations. The various output scores from a given
recognizer can be placed into one of a plurality of fuzzy
memberships such as "very high," "appropriate," "close," and "not
appropriate." Other fuzzy membership classes can be utilized, based
on the application. Confidence values may be used for the ROI to
account for variations in performance of the algorithms across
different regions of the object. For example, when the object is a
cargo container, different confidence values may be used for the
regions of interest in the bottom, middle, and top of the cargo
container according to how well the different algorithms perform in
each area.
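The conversion from a raw confidence value to the fuzzy membership grades named above could be sketched with simple cutoffs. The numeric boundaries here are assumptions; the patent gives none:

```python
def fuzzy_grade(confidence: float) -> str:
    """Map a confidence value in [0, 1] to one of the fuzzy membership
    grades named in the text. Cutoffs are hypothetical."""
    if confidence >= 0.9:
        return "very high"
    if confidence >= 0.7:
        return "appropriate"
    if confidence >= 0.5:
        return "close"
    return "not appropriate"
```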
[0019] The structure of the class assignor module 108 could be a
portion of a tangible medium having fixed thereon software
instructions for causing a computer to assign a class membership to
each threat output from the plurality of threat analysis
algorithms. Alternatively, the structure could be a general purpose
computer programmed to assign a class membership to each threat
output from the plurality of threat analysis algorithms. In another
example, the structure could be an electromagnetic signal, a ROM, a
RAM, or other memory storing software instructions for causing a
computer to assign a class membership to each threat output from
the plurality of threat analysis algorithms. Alternatively, the
structure could be a special purpose microchip, PLA, or the
like.
[0020] Class assignor module 108 may output the class memberships
and threat outputs (110a-110c) to the rule selector module 112. The
rule selector module 112 may utilize class membership, at least in
part, to assign support mass to each of the plurality of hypotheses
according to a plurality of selected expert rules. For example, the
rule selector module 112 can assign varying amounts of support mass
to a certain hypothesis based on whether a threat output is associated
with the hypothesis, the confidence value of the associated threat
output, and/or the class membership for the associated threat
output. The selected expert rules may also assign an amount of
uncertainty mass.
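One way to sketch the rule selector's behavior (the rule structure, predicate form, and field names are all assumptions for illustration, not the patent's implementation):

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class ExpertRule:
    """One expert rule: if its condition holds over the threat outputs,
    it contributes weighted support mass to a hypothesis and weighted
    uncertainty mass to the total."""
    condition: Callable[[dict], bool]  # predicate over the threat outputs
    hypothesis: str
    support_mass: float
    uncertainty_mass: float
    priority: float = 1.0  # user-configurable weight


def apply_rules(rules, outputs):
    """Evaluate each rule against the threat outputs; collect the
    priority-weighted support masses per hypothesis and the weighted
    uncertainty masses of every rule that fired."""
    support, uncertainty = {}, []
    for rule in rules:
        if rule.condition(outputs):
            support.setdefault(rule.hypothesis, []).append(
                rule.support_mass * rule.priority)
            uncertainty.append(rule.uncertainty_mass * rule.priority)
    return support, uncertainty
```

A condition can read several algorithms' outputs at once, matching the multi-algorithm rules described in the next paragraph.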
[0021] The expert rules may also take into account threat outputs
from multiple algorithms. For example, support and/or uncertainty
masses associated with a selected expert rule may be applied to a
certain hypothesis only when the threat outputs from multiple
algorithms meet the conditions of the selected rule.
[0022] The rules may be determined by information from problem
space. For example, the rules may use information regarding the
success rate of a particular algorithm in a real world scenario.
Expert rules may also be developed using supervised training under
lab conditions.
[0023] The rules may also include an associated priority value to
serve as weights for the mass values. The priority values may be
customized by the user to adjust the relative importance of the
rules so as to account for different circumstances or scenarios.
The expert rules may be stored internally with the rule selector
module 112 or provided as an input 120 to the rule selector 112
from a separate module 118, such as a user interface, separate
computer, database, or the like.
[0024] The structure of the rule selector module 112 could be a
portion of a tangible medium having fixed thereon software
instructions for causing a computer to select at least one rule.
Alternatively, the structure could be a general purpose computer
programmed to select at least one rule. In another example, the
structure could be an electromagnetic signal, a ROM, a RAM, or
other memory storing software instructions for causing a computer
to select at least one rule. Alternatively, the structure could be
a special purpose microchip, PLA, or the like.
[0025] The determined support mass and uncertainty mass for each
hypothesis (114) from the rule selector module 112 may be provided
to the data fusion module 116. In an exemplary implementation, the
data fusion module 116 may use the support mass for each hypothesis
as a belief in a Dempster-Shafer analysis. The support mass for
each of the selected rules that apply to a particular one of the
hypotheses may be combined. For example, the combination may occur
by taking the product of the support masses for the rules that
apply so as to generate a combined belief value for each of the
hypotheses. Similarly, the uncertainty masses may be combined
together into a single total uncertainty value, or plausibility
value, for the plurality of hypotheses.
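Under the product-combination reading described in this paragraph, the fusion step might look like the following sketch (a hypothetical implementation, not the patent's):

```python
from math import prod


def combine_masses(support_by_hypothesis, uncertainty_masses):
    """Combine per-rule masses as the text describes: the product of the
    support masses applying to a hypothesis gives its combined belief,
    and the uncertainty masses fold into one total uncertainty value."""
    combined = {h: prod(masses)
                for h, masses in support_by_hypothesis.items()}
    total_uncertainty = prod(uncertainty_masses) if uncertainty_masses else 0.0
    return combined, total_uncertainty
```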
[0026] The data fusion module 116 may generate a decision matrix of
each hypothesis and the corresponding combined belief value. This
decision matrix may additionally include the rules applied to
generate the support mass for each hypothesis. For example, Table 2
below shows a decision matrix for three different hypotheses. The
decision matrix may become the output 126 from the ACA 102 for
evaluation by a user or for use by another system.
[0027] Alternatively, the hypothesis having the largest combined
belief value may be selected from the decision matrix. The selected
hypothesis may be output as a "final decision" from the data fusion
module 116 along output 126, which may then be used by subsequent
systems or conveyed to a user. The final decision may relate to the
likelihood that a material in the object is a high Z material, a
special nuclear material, or a general threat.
TABLE 2. Exemplary Decision Matrix for a Set of Hypotheses

Hypothesis            Description                       Combined Belief  Rules Used
Hypothesis-1          CORRECT, Threat Found             99.0             Arbitrated Threat FOUND due to input and rule combinations: 2, 8
Hypothesis-2          FALSE NEGATIVE, Threat Not Found  0.83             Arbitrated Threat NOT FOUND due to input and rule combinations: 11
Hypothesis-Uncertain  Uncertain Hypothesis              0.17             Arbitrated Threat NOT FOUND due to input and rule conflicts: 10, 11, 12
[0028] When no combined belief value is sufficiently large and/or
the total uncertainty value is unacceptably high, all hypotheses
may be rejected or an uncertainty hypothesis may be selected as the
final decision in lieu of any of the hypotheses. Configurable
thresholds from a module 122, such as a user interface or database,
may be input to the data fusion module 116 along input 124 for use
in determining if the selected hypothesis has a sufficient combined
belief value or if the uncertainty level is unacceptable.
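The threshold logic of this paragraph (and of claim 1) can be sketched as follows; the threshold names and the "uncertain" label are illustrative:

```python
def final_decision(decision_matrix, total_uncertainty,
                   belief_threshold, uncertainty_threshold):
    """Select the hypothesis with the highest combined belief; fall back
    to an uncertainty hypothesis when that belief is below the first
    threshold or the total uncertainty reaches the second threshold."""
    best, belief = max(decision_matrix.items(), key=lambda kv: kv[1])
    if belief >= belief_threshold and total_uncertainty < uncertainty_threshold:
        return best
    return "uncertain"
```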
[0029] The structure of the data fusion module 116 could be a
portion of a tangible medium having fixed thereon software
instructions for causing a computer to determine a combined belief
value for each of the plurality of hypotheses and a total
uncertainty value and to generate an output based on the combined
belief values and the total uncertainty value. Alternatively, the
structure could be a general purpose computer to determine a
combined belief value for each of the plurality of hypotheses and a
total uncertainty value and to generate an output based on the
combined belief values and the total uncertainty value. In another
example, the structure could be an electromagnetic signal, a ROM, a
RAM, or other memory storing software instructions for causing a
computer to determine a combined belief value for each of the
plurality of hypotheses and a total uncertainty value and to
generate an output based on the combined belief values and the
total uncertainty value. Alternatively, the structure could be a
special purpose microchip, PLA, or the like.
[0030] In view of the foregoing structural and functional features
described above, methodologies in accordance with various aspects
of the present invention will be better appreciated with reference
to FIGS. 2-3. While for purposes of simplicity of explanation, the
exemplary methodologies of FIGS. 2-3 are shown and described as
executing serially, it is to be understood and appreciated that the
present invention is not limited by the illustrated order, as some
aspects could occur in different orders and/or concurrently with
other aspects from that shown and described herein. Moreover, not
all illustrated features may be required to implement a methodology
in accordance with aspects of the present invention.
[0031] FIG. 2 represents a process flow diagram 200 for an
exemplary embodiment of a method for arbitrating outputs from a
plurality of threat analysis algorithms. The method begins at step
201 and may continue to step 202 with the acquisition of one or
more radiographic images of an object of interest. These images may
be used by the independent threat analysis algorithms in the threat
assessment. Control may continue to step 204.
[0032] At step 204, the threat analysis algorithms may determine a
threat output based on information from the radiographic images.
Other algorithms may be in place for analyzing the radiographic
images and providing data to the threat analysis algorithms for
identification of a threat condition. The threat output may be a
determination of the effective atomic number (Z.sub.eff) of a
material. In other embodiments, the threat output may be a
detection of high-Z material, special nuclear material, or a
general threat condition in a given ROI. In addition, the threat
output may be a plurality of outputs corresponding to a plurality
of different hypotheses, each output having a respective confidence
value. Control may then continue to step 206.
[0033] At step 206, each threat output may be assigned to a class
membership. The class membership may be based on an associated confidence value for each threat output or on an expected
confidence for a given ROI. The membership may be a fuzzy
membership set, a probabilistic map, or a probability distribution.
Alternately, each threat output may be assigned to a class
membership based on the respective threat analysis algorithm and
the ROI. For example, the confidence value associated with each
threat output generated by its associated algorithm can be mapped
to probability distribution functions represented by fuzzy class
membership designations. The various output scores from a given
recognizer may be placed into one of a plurality of fuzzy
membership grades such as "very high," "appropriate," "close," and
"not appropriate." Other fuzzy membership classes may be utilized,
based on the application. Control may continue to step 208.
[0034] At step 208, applicable rules for each hypothesis may be
selected based on a threat output associated with the hypothesis, a
confidence value associated with the algorithm, and the class
membership for the threat output. Each of the rules can have one or
more conditions that may be satisfied by characteristics of the
threat outputs. The threat outputs can be evaluated to select any
applicable rules for each hypothesis. When a rule is selected for a
given hypothesis, it may contribute a certain amount of support
mass to a given hypothesis and a certain amount of uncertainty mass
to a total uncertainty value. The masses may be weighted by a
user-configurable priority value. Control may continue to step
210.
[0035] At step 210, the associated support mass and uncertainty
mass may be assigned to each hypothesis according to the rules
selected. A support value for each hypothesis and a total
uncertainty value can then be determined at step 212. The support
values may be provided or derived from system input conditions. The
determined support values may be combined for each hypothesis
using, for example, a Dempster-Shafer approach. For instance, the
support values for each rule applying to a given hypothesis may be
combined by taking the product of the support values to generate a
combined belief value. Similarly, the uncertainty generated for a
given hypothesis may be combined by the same methodology to
generate a total uncertainty value. However, other approaches are
also contemplated. In practice, Bayesian, Dempster-Shafer, fuzzy, or a
combination of approaches may be employed. Control may continue to
step 214.
[0036] At step 214, the support value for each hypothesis and total
uncertainty value may be respectively normalized to a desired scale
to generate a combined belief value for each hypothesis and a
normalized total uncertainty value, or plausibility value. For
example, a total support may be calculated as the sum of the
respective support masses for the different hypotheses and the
total uncertainty. Normalization may then be achieved by taking the
ratio of each hypothesis support mass to the total support so as to
arrive at a combined belief for each hypothesis. The total
uncertainty value may be normalized in a similar manner by taking
the ratio of the total uncertainty value to the total support
value, so as to arrive at a plausibility value.
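The normalization of step 214 can be sketched as follows; the function name `normalize` and the hypothesis names and masses are illustrative assumptions:

```python
def normalize(support: dict, uncertainty: float):
    """Normalize raw support masses into combined belief values and a
    plausibility value, as described for step 214: total support is the
    sum of the hypothesis support masses plus the total uncertainty."""
    total = sum(support.values()) + uncertainty
    beliefs = {h: s / total for h, s in support.items()}
    plausibility = uncertainty / total
    return beliefs, plausibility

# Illustrative masses: total support = 0.6 + 0.2 + 0.2 = 1.0
beliefs, plausibility = normalize(
    {"high-Z": 0.6, "general threat": 0.2}, uncertainty=0.2)
```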
[0037] Step 214 may also include generating a decision matrix of
the combined belief value for each hypothesis. This decision matrix
may include the rules applied to generate the support mass for each
hypothesis. From the decision matrix, the hypothesis having the
largest combined belief value can be selected in step 216. Control
may continue to step 218.
[0038] At step 218, the selected hypothesis may be evaluated to
determine if the combined belief value and uncertainty are
sufficient. The selected hypothesis can be rejected if the combined
belief value is below a first threshold value or the total
uncertainty value is equal to or above a second threshold value.
These threshold values may be configurable so as to enable a user
to set the sensitivity of the system. If the combined belief for
the selected hypothesis and the total uncertainty value are
determined to be sufficient based on the comparison to the first
and second thresholds, respectively, the selected hypothesis may be
output from the system as the final decision. The selected
hypothesis may relate to the likelihood that a material in an
object is a high-Z material, a special nuclear material, or a
general threat.
[0039] If either the combined belief or total uncertainty values
are insufficient, an uncertainty hypothesis may be output from the
system as the final decision. Control continues to step 219 where
the method ends. It should be appreciated that the above steps may
be repeated in whole or in part in order to complete or continue an
arbitration task.
[0040] FIG. 3 illustrates a flow diagram 300 showing an operation
of rule selection and data fusion aspects of an arbitration system.
Control may begin at step 301 and may continue to step 302. At step
302, an ROI of an object is selected for investigation.
Alternatively, the ROI selection may be performed prior to
arbitration or no ROI selection may be performed at all. Control
may continue to step 304.
[0041] At step 304, the n-th hypothesis may be selected from a
set of hypotheses. At step 306, the j-th expert rule from a set
of expert rules may be selected. The expert rules may be as shown
in Table 3, for example. The j-th expert rule may be evaluated
based at least in part on the associated class membership of a
threat output as applied to the ROI to determine if it applies to
the n-th hypothesis. A selected rule may have one or more
requirements to be met by one or more threat outputs associated
with the selected hypothesis. The selected rule can provide a
certain amount of support mass to the hypothesis as well as a
certain amount of uncertainty mass to a total uncertainty when the
requirements of the selected rule are met. The masses provided by
each selected rule can be weighted by corresponding priority
values, which can be adjusted by a user to change the relative
importance of each selected rule in the arbitration process.
TABLE 3. Sample Rules with Descriptions and Priority Values

Rule 1 (priority 0.99): If all of the first ranked choices match
and each confidence is at least "appropriate," then that match is
a candidate.

Rule 2 (priority 0.99): If the first ranked choice of the higher
ranked algorithm matches any of the other first ranked choices of
the secondary algorithms, and the confidences are "appropriate,"
then select that match.

Rule 3 (priority 0.90): If the first choice of the highest ranked
algorithm does not match any of the other first ranked choices of
the secondary algorithms, but the confidence value of the highest
ranked algorithm is "very high" and at least one of the secondary
algorithms is "close" in confidence value, then select that match.

Rule 4 (priority 0.85): If the first choice of the highest ranked
algorithm does not match any of the other first ranked choices of
the secondary algorithms, but the confidence value of the highest
ranked algorithm is "very high" and at least one of the top five
choices in the secondary algorithms is "acceptable" in confidence
value, then select that match.
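For illustration, a rule such as Rule 1 might be encoded as a predicate plus a priority. The dictionary field names and the support and uncertainty masses below are assumptions, not from the text; only the priority value 0.99 and the grade names come from Table 3:

```python
# Hypothetical encoding of an expert rule like Rule 1 of Table 3.
def rule1_applies(threat_outputs) -> bool:
    """Rule 1: all first ranked choices match and every confidence
    grade is at least 'appropriate'."""
    choices = {o["first_choice"] for o in threat_outputs}
    grades_ok = all(o["grade"] in ("appropriate", "very high")
                    for o in threat_outputs)
    return len(choices) == 1 and grades_ok

# Priority from Table 3; support/uncertainty masses are assumptions.
rule1 = {"name": "Rule 1", "priority": 0.99, "support_mass": 0.9,
         "uncertainty_mass": 0.05, "applies": rule1_applies}

matching = [
    {"first_choice": "high-Z", "grade": "very high"},
    {"first_choice": "high-Z", "grade": "appropriate"},
]
```

When the rule applies, its masses would be weighted by the priority before contributing to the hypothesis, per the text above.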
[0042] If the j-th rule is determined to apply to the n-th
hypothesis, control may continue to step 308, where the hypothesis
support, K, may be multiplied by the rule mass and the rule
priority for the j-th rule so as to combine the mass with any
existing support mass for the n-th hypothesis. Similarly, the
hypothesis uncertainty value, L, may be multiplied by the
uncertainty mass and the priority value from the j-th rule.
Control can then advance to step 310. If it is determined that the
j-th rule does not apply to the n-th hypothesis, the method
may advance directly to step 310.
[0043] At step 310, the method may check to see if all the rules
have been evaluated for the n-th hypothesis. If all of the
rules have not been evaluated, the method may increment to the next
rule in the set of rules (i.e., j = j + 1) and may return to step 306.
If all of the rules have been evaluated, the method may proceed to
step 312. At step 312, the support mass, K, may be saved as the
support mass S_n for the n-th hypothesis. The uncertainty
value, L, may be added to the total system uncertainty, U. Control
may then proceed to step 314.
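The multiplicative combination of steps 306 through 312 can be sketched as below; the initial values of K and L and the example rule masses are assumptions, since the text does not state them:

```python
def fuse_hypothesis(applying_rules, k0=1.0, l0=1.0):
    """Combine the masses of all rules that apply to one hypothesis,
    per steps 306-312: the running support K and uncertainty L are
    multiplied by each applying rule's mass and priority. The starting
    values k0 and l0 are assumptions (the patent does not state them)."""
    K, L = k0, l0
    for rule in applying_rules:
        K *= rule["support_mass"] * rule["priority"]
        L *= rule["uncertainty_mass"] * rule["priority"]
    # K becomes S_n for this hypothesis; L is added to the total
    # system uncertainty U.
    return K, L

# Two illustrative rules applying to one hypothesis.
K, L = fuse_hypothesis([
    {"support_mass": 0.9, "uncertainty_mass": 0.1, "priority": 0.99},
    {"support_mass": 0.8, "uncertainty_mass": 0.2, "priority": 0.90},
])
```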
[0044] At step 314, the method may check if all the hypotheses have
been evaluated. If all of the hypotheses have not been evaluated,
the control may continue to the next hypothesis in the set of
hypotheses (i.e., n=n+1) and may proceed back to step 304. If all
of the hypotheses have been evaluated, control may advance to step
316.
[0045] At step 316, the support value, S_n, for each hypothesis
may be normalized to determine the combined belief value, B_n,
for each hypothesis. For example, normalization may include
calculating a total support as the sum of all of the hypotheses'
support masses and the system uncertainty. The ratio of each
hypothesis support to the total support can represent the combined
belief for each hypothesis. Similarly, the uncertainty value may be
normalized as the ratio of the total uncertainty to the total
support. Control may continue to step 318.
[0046] At step 318, a decision matrix may be generated. The
decision matrix may include an array of the evaluated hypotheses
and the corresponding combined belief values for each hypothesis.
The decision matrix may also include the rules applied to generate
the support mass for each hypothesis. From the decision matrix, a
hypothesis having the greatest combined belief value can be
selected at step 320. The hypothesis with the greatest combined
belief value may relate to the likelihood that said material is a
high-Z material, a special nuclear material, or a general
threat.
[0047] At step 322, the combined belief value, B_z, for the
selected hypothesis may be compared with a first threshold value.
In addition, the uncertainty, U, may be compared with a second
threshold value. These threshold values may be user-configurable.
If the combined belief value is greater than or equal to the first
threshold and the uncertainty is below a second threshold, the
selected hypothesis may then be determined to be plausible. Control
may then proceed to step 326 where the selected hypothesis may be
output as the "final decision". However, if the combined belief
value is less than the first threshold or the uncertainty is equal
to or greater than the second threshold, it may be determined that
the selected hypothesis is not plausible. Therefore, the method may
proceed to step 324 so as to output an uncertainty hypothesis as
the final decision indicating that the threat output of the ROI
cannot be reliably determined.
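The threshold test of steps 320 through 324 can be sketched as follows; the default threshold values are illustrative assumptions, since the text leaves them user-configurable:

```python
def final_decision(beliefs, uncertainty,
                   belief_threshold=0.5, uncertainty_threshold=0.3):
    """Select the hypothesis with the greatest combined belief
    (step 320) and apply the plausibility test of step 322. The
    threshold defaults here are assumptions; the patent describes
    them as user-configurable."""
    best = max(beliefs, key=beliefs.get)
    if beliefs[best] >= belief_threshold and uncertainty < uncertainty_threshold:
        return best            # plausible: output as the final decision
    return "uncertain"         # uncertainty hypothesis, per step 324
```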
[0048] It should be appreciated that the steps of the present
invention may be repeated in whole or in part in order to perform
the contemplated threat arbitration. Further, it should be
appreciated that the steps mentioned above may be performed on a
single or distributed processor. Also, the processes, modules, and
units described in the various figures of the embodiments above may
be distributed across multiple computers or systems or may be
co-located in a single processor or system.
[0049] Embodiments of the method, system, and computer program
product for threat arbitration may be implemented on a
general-purpose computer, a special-purpose computer, a programmed
microprocessor or microcontroller and peripheral integrated circuit
element, an ASIC or other integrated circuit, a digital signal
processor, a hardwired electronic or logic circuit such as a
discrete element circuit, a programmed logic circuit such as a PLD,
PLA, FPGA, PAL, or the like. In general, any process capable of
implementing the functions or steps described herein can be used to
implement embodiments of the method, system, or computer program
product for threat arbitration.
[0050] Furthermore, embodiments of the disclosed method, system,
and computer program product for threat arbitration may be readily
implemented, fully or partially, in software using, for example,
object or object-oriented software development environments that
provide portable source code that can be used on a variety of
computer platforms. Alternatively, embodiments of the disclosed
method, system, and computer program product for threat arbitration
can be implemented partially or fully in hardware using, for
example, standard logic circuits or a VLSI design. Other hardware
or software can be used to implement embodiments depending on the
speed and/or efficiency requirements of the systems, the particular
function, and/or particular software or hardware system,
microprocessor, or microcomputer being utilized. Embodiments of the
method, system, and computer program product for threat arbitration
can be implemented in hardware and/or software using any known or
later developed systems or structures, devices and/or software by
those of ordinary skill in the applicable art from the function
description provided herein and with a general basic knowledge of
the computer, radiographic, and image processing arts.
[0051] Moreover, embodiments of the disclosed method, system, and
computer program product for threat arbitration can be implemented
in software executed on a programmed general purpose computer, a
special purpose computer, a microprocessor, or the like. Also,
threat arbitration can be implemented as a program embedded on a
personal computer such as a JAVA® or CGI script, as a resource
residing on a server or image processing workstation, as a routine
embedded in a dedicated processing system, or the like. The method
and system can also be implemented by physically incorporating the
method for threat arbitration into a software and/or hardware
system, such as the hardware and software systems of multi-energy
radiographic systems.
[0052] It is, therefore, apparent that there is provided, in
accordance with the present invention, a method, system, and
computer program product for threat arbitration. While this
invention has been described in conjunction with a number of
embodiments, it is evident that many alternatives, modifications
and variations would be or are apparent to those of ordinary skill
in the applicable arts. Accordingly, Applicants intend to embrace
all such alternatives, modifications, equivalents and variations
that are within the spirit and scope of this invention.
* * * * *