U.S. patent application number 12/103,522 was filed with the patent office on 2008-04-15 and published on 2008-12-11 for prevention of cheating in on-line interaction.
This patent application is currently assigned to Cognisafe Ltd. Invention is credited to Shmuel Konforty and Yitzhak Shimon.
Application Number | 12/103,522
Publication Number | 20080305869
Family ID | 40096380
Publication Date | 2008-12-11

United States Patent Application 20080305869
Kind Code: A1
Konforty; Shmuel; et al.
December 11, 2008
PREVENTION OF CHEATING IN ON-LINE INTERACTION
Abstract
A method for preventing cheating by users of client computers
running a network game program includes installing a monitoring
program, independent of the network game program, on a group of the
client computers so as to detect, using the monitoring program, an
anomalous use of an asset of at least one of the client computers
that is indicative of an attempt to cheat in the game program. A
message is conveyed over a network to a server from each of at
least some of the client computers in the group. The message from
each such client computer indicates that the monitoring program has
been actuated on the client computer. Responsively to the message, a
communication is received from the server at the client computer
indicating which ones of the client computers have actuated the
monitoring program.
Inventors: Konforty; Shmuel (Or-Yehuda, IL); Shimon; Yitzhak (Tel Aviv-Yafo, IL)
Correspondence Address: DARBY & DARBY P.C., P.O. BOX 770, Church Street Station, New York, NY 10008-0770, US
Assignee: Cognisafe Ltd. (Tel Aviv-Yafo, IL)
Family ID: 40096380
Appl. No.: 12/103,522
Filed: April 15, 2008
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
11/850,223 | Sep 5, 2007 |
12/103,522 | |
60/842,653 | Sep 5, 2006 |
Current U.S. Class: 463/29
Current CPC Class: A63F 2300/6027 (20130101); A63F 2300/5586 (20130101); G07F 17/3276 (20130101); A63F 13/77 (20140902); A63F 13/12 (20130101); A63F 13/75 (20140902); A63F 2300/552 (20130101); G07F 17/3241 (20130101)
Class at Publication: 463/29
International Class: A63F 9/24 (20060101) A63F 009/24
Claims
1. A method for preventing cheating by users of client computers
running a network game program, the method comprising: installing a
monitoring program, independent of the network game program, on a
group of the client computers so as to detect, using the monitoring
program, an anomalous use of an asset of at least one of the client
computers that is indicative of an attempt to cheat in the game
program; conveying over a network to a server a message from each
of at least some of the client computers in the group, the message
from each such client computer indicating that the monitoring
program has been actuated on the client computer; and responsively
to the message, receiving from the server at the client computer a
communication indicating which ones of the client computers have
actuated the monitoring program.
2. The method according to claim 1, and comprising displaying on
the client computer a list of the client computers that have actuated
the monitoring program, and receiving from a user of the client
computer a selection, based on the list, of participants with whom
to join in playing the game program.
3. The method according to claim 1, wherein the monitoring program
is configured so as to permit a user of the client computer to
deactuate the monitoring program with respect to the game program,
and wherein conveying the message comprises informing the server
when the monitoring program is deactuated.
4. The method according to claim 1, and comprising running the
monitoring program while playing the game program on the client
computer so as to detect an anomalous pattern of utilization of
assets on the client computer, which is indicative of a threat of
cheating in the network game program, and notifying a user of the
client computer of the threat.
5. The method according to claim 4, and comprising sending a
notification of the threat over the network to at least one of the
server and others of the client computers.
6. The method according to claim 4, wherein running the monitoring
program comprises running the network game program on the client
computer while detecting use of assets using the monitoring program
so as to learn a pattern of normal utilization of the assets, and
then detecting the anomalous pattern as a deviation from the normal
utilization.
7. A method for preventing cheating by users of computers running a
network game program, the method comprising: installing a
monitoring program, independent of the network game program, on the
computer; running the network game program on the computer while
detecting use of assets using the monitoring program so as to learn
a pattern of normal utilization of the assets; during a session of
the network game program, detecting an anomalous utilization
pattern of the assets, which is indicative of a threat of cheating
in the network game program; and outputting a notification of the
threat to a user of the computer.
8. The method according to claim 7, wherein detecting the use of
the assets comprises learning the pattern during at least one of
installation of the game program and playing of the game program by
the user.
9. The method according to claim 7, wherein detecting the use of
the assets comprises applying a threat map based on the use of the
assets, and wherein detecting the anomalous utilization pattern
comprises receiving an event associated with one of the assets, and
associating the event with the threat map with a likelihood that is
greater than a predetermined threshold.
10. The method according to claim 9, wherein the threat map relates
to a first event, and wherein associating the event with the threat
map comprises receiving a second event that is not in the first
threat map, and associating the second event with the threat map by
a process of semantic inquiry.
11. The method according to claim 10, and comprising updating the
threat map responsively to the semantic inquiry.
12. The method according to claim 11, wherein updating the threat
map comprises identifying a plurality of candidate threat maps,
computing a respective hypothetical likelihood that the second
event is associated with each of the candidate threat maps, and
selecting one of the candidate threat maps for update based on the
hypothetical likelihood.
13. The method according to claim 7, wherein running the network
game program comprises learning the pattern of the normal
utilization using the monitoring program autonomously,
independently of any identification of the assets by the user.
14. The method according to claim 7, wherein detecting the
anomalous utilization pattern comprises receiving an event
indicative of a deviation from the pattern of normal utilization in
the use of at least one asset selected from a group of the assets
consisting of CPU utilization, network utilization, files and
directories.
15. The method according to claim 7, wherein running the network
game program comprises calculating a normal centralism of an
executable file during the normal utilization of the assets, and
wherein detecting the anomalous utilization pattern comprises
detecting a deviation from the normal centralism.
16. A computer software product for preventing cheating by users of
client computers running a network game program, the product
comprising a computer-readable medium in which program instructions
are stored, the instructions comprising a monitoring program for
installation on a group of the client computers independently of
the network game program, wherein the instructions cause the client
computers to detect, using the monitoring program, an anomalous use
of an asset of at least one of the client computers that is
indicative of an attempt to cheat in the game program, and wherein
the instructions cause the client computers to convey over a
network to a server a message from each of at least some of the
client computers in the group, the message from each such client
computer indicating that the monitoring program has been actuated
on the client computer, and responsively to the message, to receive
from the server at the client computers a communication indicating
which ones of the client computers have actuated the monitoring
program.
17. A computer software product for preventing cheating by users of
computers running a network game program, the product comprising a
computer-readable medium in which program instructions are stored,
the instructions comprising a monitoring program for installation
on a computer independently of the network game program, wherein
the instructions cause the computer, while running the network game
program, to detect use of assets using the monitoring program so as
to learn a pattern of normal utilization of the assets, and to
detect, during a session of the network game program, an anomalous
utilization pattern of the assets, which is indicative of a threat
of cheating in the network game program, and to output a
notification of the threat to a user of the computer.
18. Computing apparatus, comprising: an output device; and a
processor, which is configured to run a network game program, and
to receive installation of a monitoring program independently of
the network game program, wherein the monitoring program causes the
processor to detect an anomalous use of an asset of the computing
apparatus that is indicative of an attempt to cheat in the game
program, and further causes the processor to convey over a network
to a server a message indicating that the monitoring program has
been actuated on the computing apparatus, and responsively to the
message, to receive from the server a communication identifying
other computers that have actuated the monitoring program, and to
provide to a user of the computing apparatus, via the output
device, a list of users of the other computers identified by the
communication.
19. Computing apparatus, comprising: an output device; and a
processor, which is configured to run a network game program, and
to receive installation of a monitoring program independently of
the network game program, wherein the monitoring program causes the
processor, while running the network game program, to detect use of
assets using the monitoring program so as to learn a pattern of
normal utilization of the assets, and to detect, during a session
of the network game program, an anomalous utilization pattern of
the assets, which is indicative of a threat of cheating in the
network game program, and to output a notification of the threat
via the output device to a user of the computing apparatus.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation-in-part of U.S. patent
application Ser. No. 11/850,223, filed Sep. 5, 2007, which claims
the benefit of U.S. Provisional Patent Application 60/842,653,
filed Sep. 5, 2006. Both of these related applications are
incorporated herein by reference.
FIELD OF THE INVENTION
[0002] The present invention relates generally to computer systems
and software, and specifically to detection of cheating in on-line
interactions, such as games.
BACKGROUND OF THE INVENTION
[0003] Cheating is defined as an act of lying, deception, fraud,
trickery, imposture, or imposition. Cheating is typically employed
to create an unfair advantage, often at the expense of others.
Fraud is a particular type of cheating, in which a victim is
illegally deceived for the personal gain of the perpetrator.
[0004] Cheating is rampant in on-line games, due to the relatively
poor security of most game programs and the permissive atmosphere
created by the mutual anonymity of participants in Internet-based
games. A wide variety of forms of cheating has developed, as
surveyed, for example, by Yan and Randell in "A Systematic
Classification of Cheating in Online Games," Proceedings of the
Fourth ACM SIGCOMM workshop on Network and System Support for Games
(NetGames '05, Hawthorne, N.Y., 2005), which is incorporated herein
by reference. Even when there is no financial stake in the game, a
cheater can detract from the experience of other participants and,
in some cases, may pose a threat to the secure operation of their
computers.
[0005] Various techniques are known in the art for detection of
cheating and assisting participants in distinguishing between
cheaters and trustworthy players. For example, U.S. Patent
Application Publication 2007/0149279, whose disclosure is
incorporated herein by reference, describes an architecture for
mitigating and detecting cheating in peer-to-peer (P2P) gaming,
using a combination of per-packet access authentication,
moving-coordinator, and cheat detection mechanisms.
[0006] As another example, U.S. Patent Application Publication
2007/0276521, whose disclosure is incorporated herein by reference,
describes a method for maintenance of "community integrity" in a
gaming network, in which devices interacting with a particular game
are monitored. Indicia of the violation of certain rules that
define fair game play may be identified, and a user and/or device
engaged in illicit game play activity may be identified as a
result. Other users in the gaming network may be informed of the
particular user's previous illicit game activity.
[0007] European Patent Application EP 1669115 A1, whose disclosure
is incorporated herein by reference, describes a system for
conducting a game of chance using a communication network. In this
system, the players must have credentials with which to identify
themselves remotely. If the players do not have these credentials,
they must be issued by a certification authority and certification
agent. To request credentials, the player downloads a player agent,
which communicates with the certification agent using a secure
communication protocol and digital certificate.
[0008] U.S. Pat. No. 7,169,050, whose disclosure is incorporated
herein by reference, describes a system and method for prevention
of cheating during online gaming in which a first computer system
receives information regarding cheaters from a second computer
system. Cheaters identified in this manner are prevented from
online gaming on the first computer system. A master database of
cheaters resides on one or more master servers, which assemble a
master list of cheaters aggregated from individual game servers. In
this way, once a cheater is banned on one game server, information
identifying the cheater is transmitted to the master databases of
the master servers for distribution to the other game servers.
[0009] A number of anti-cheating software packages are currently
available for various on-line games. Examples include
PunkBuster™, produced by Even Balance Inc. (Spring, Tex.), and
GameGuard, produced by INCA Internet Co. (Seoul, Korea).
SUMMARY OF THE INVENTION
[0010] The embodiments of the present invention that are described
hereinbelow provide novel methods for detection and prevention of
cheating in computer-based applications. In these embodiments, a
program installed on a computer learns normal patterns of use of
the assets of the computer and, based on the learned patterns,
monitors the computer to detect events that may be indicative of
cheating. Such cheating may include both deviant behavior by the
user of the computer itself and attempts to compromise the computer
carried out by users of other computers. The program implements
generic methods of learning and analysis, which are not limited to
a specific game or other application.
[0011] In some embodiments, the program running on the computer
communicates with a server, which monitors the activities of a
community of participants. When a member of the community wishes to
participate in an on-line game, the server verifies that the
computer is being monitored by the program and provides an
indication to the other members of the community that the user can
be trusted not to cheat. The user may similarly receive an
indication whether each of the participants in a game is or is not
running the monitoring program, and may thus choose to play only
with trusted participants.
[0012] Although the embodiments described hereinbelow relate
specifically to cheating in on-line games, the principles of the
present invention may similarly be applied in prevention of other
types of cheating, such as click fraud.
[0013] There is therefore provided, in accordance with an
embodiment of the present invention, a method for preventing
cheating by users of client computers running a network game
program. The method includes installing a monitoring program,
independent of the network game program, on a group of the client
computers so as to detect, using the monitoring program, an
anomalous use of an asset of at least one of the client computers
that is indicative of an attempt to cheat in the game program. A
message is conveyed over a network to a server from each of at
least some of the client computers in the group, the message from
each such client computer indicating that the monitoring program
has been actuated on the client computer. Responsively to the
message, a communication is received from the server at the client
computer indicating which ones of the client computers have
actuated the monitoring program.
[0014] In one embodiment, the method includes displaying on the
client computer a list of the client computers that have actuated the
monitoring program, and receiving from a user of the client
computer a selection, based on the list, of participants with whom
to join in playing the game program. The monitoring program may be
configured so as to permit a user of the client computer to
deactuate the monitoring program with respect to the game program,
and conveying the message may include informing the server when the
monitoring program is deactuated.
[0015] In some embodiments, the method includes running the
monitoring program while playing the game program on the client
computer so as to detect an anomalous pattern of utilization of
assets on the client computer, which is indicative of a threat of
cheating in the network game program, and notifying a user of the
client computer of the threat. In one embodiment, the method
includes sending a notification of the threat over the network to
at least one of the server and others of the client computers.
Additionally or alternatively, running the monitoring program
includes running the network game program on the client computer
while detecting use of assets using the monitoring program so as to
learn a pattern of normal utilization of the assets, and then
detecting the anomalous pattern as a deviation from the normal
utilization.
[0016] There is also provided, in accordance with an embodiment of
the present invention, a method for preventing cheating by users of
computers running a network game program. The method includes
installing a monitoring program, independent of the network game
program, on the computer. The network game program is run on the
computer while detecting use of assets using the monitoring program
so as to learn a pattern of normal utilization of the assets.
During a session of the network game program, an anomalous
utilization pattern of the assets is detected, which is indicative
of a threat of cheating in the network game program, and a
notification of the threat is output to a user of the computer.
[0017] In a disclosed embodiment, detecting the use of the assets
includes learning the pattern during at least one of installation
of the game program and playing of the game program by the
user.
[0018] In some embodiments, detecting the use of the assets
includes applying a threat map based on the use of the assets, and
detecting the anomalous utilization pattern includes receiving an
event associated with one of the assets, and associating the event
with the threat map with a likelihood that is greater than a
predetermined threshold. Typically, the threat map relates to a
first event, and associating the event with the threat map may
include receiving a second event that is not in the first threat
map, and associating the second event with the threat map by a
process of semantic inquiry. The method may include updating the
threat map responsively to the semantic inquiry by identifying a
plurality of candidate threat maps, computing a respective
hypothetical likelihood that the second event is associated with
each of the candidate threat maps, and selecting one of the
candidate threat maps for update based on the hypothetical
likelihood.
[0019] Typically, running the network game program includes
learning the pattern of the normal utilization using the monitoring
program autonomously, independently of any identification of the
assets by the user.
[0020] In a disclosed embodiment, detecting the anomalous
utilization pattern includes receiving an event indicative of a
deviation from the pattern of normal utilization in the use of at
least one asset selected from a group of the assets consisting of
CPU utilization, network utilization, files and directories.
[0021] Additionally or alternatively, running the network game
program includes calculating a normal centralism of an executable
file during the normal utilization of the assets, and wherein
detecting the anomalous utilization pattern includes detecting a
deviation from the normal centralism.
[0022] There is additionally provided, in accordance with an
embodiment of the present invention, a computer software product
for preventing cheating by users of client computers running a
network game program, the product including a computer-readable
medium in which program instructions are stored, the instructions
including a monitoring program for installation on a group of the
client computers independently of the network game program, wherein
the instructions cause the client computers to detect, using the
monitoring program, an anomalous use of an asset of at least one of
the client computers that is indicative of an attempt to cheat in
the game program, and
[0023] wherein the instructions cause the client computers to
convey over a network to a server a message from each of at least
some of the client computers in the group, the message from each
such client computer indicating that the monitoring program has
been actuated on the client computer, and responsively to the
message, to receive from the server at the client computers a
communication indicating which ones of the client computers have
actuated the monitoring program.
[0024] There is further provided, in accordance with an embodiment
of the present invention, a computer software product for
preventing cheating by users of computers running a network game
program, the product including a computer-readable medium in which
program instructions are stored, the instructions including a
monitoring program for installation on a computer independently of
the network game program, wherein the instructions cause the
computer, while running the network game program, to detect use of
assets using the monitoring program so as to learn a pattern of
normal utilization of the assets, and to detect, during a session
of the network game program, an anomalous utilization pattern of
the assets, which is indicative of a threat of cheating in the
network game program, and to output a notification of the threat to
a user of the computer.
[0025] There is moreover provided, in accordance with an embodiment
of the present invention, computing apparatus, including an output
device and a processor, which is configured to run a network game
program, and to receive installation of a monitoring program
independently of the network game program, wherein the monitoring
program causes the processor to detect an anomalous use of an asset
of the computing apparatus that is indicative of an attempt to
cheat in the game program, and further causes the processor to
convey over a network to a server a message indicating that the
monitoring program has been actuated on the computing apparatus,
and responsively to the message, to receive from the server a
communication identifying other computers that have actuated the
monitoring program, and to provide to a user of the computing
apparatus, via the output device, list of users of the other
computers identified by the communication.
[0026] There is furthermore provided, in accordance with an
embodiment of the present invention, computing apparatus, including
an output device and a processor, which is configured to run a
network game program, and to receive installation of a monitoring
program independently of the network game program, wherein the
monitoring program causes the processor, while running the network
game program, to detect use of assets using the monitoring program
so as to learn a pattern of normal utilization of the assets, and
to detect, during a session of the network game program, an
anomalous utilization pattern of the assets, which is indicative of
a threat of cheating in the network game program, and to output a
notification of the threat via the output device to a user of the
computing apparatus.
[0027] The present invention will be more fully understood from the
following detailed description of the embodiments thereof, taken
together with the drawings in which:
BRIEF DESCRIPTION OF THE DRAWINGS
[0028] FIG. 1 is a schematic, pictorial illustration of a system
for on-line gaming, in accordance with an embodiment of the present
invention;
[0029] FIG. 2 is a block diagram that schematically illustrates
elements of computer software for detection of cheating, in
accordance with an embodiment of the present invention;
[0030] FIG. 3 is a flow chart that schematically illustrates a
method for learning patterns of asset use by a computer game, in
accordance with an embodiment of the present invention;
[0031] FIG. 4 is a flow chart that schematically illustrates a
method for assessing threat potentials, in accordance with an
embodiment of the present invention;
[0032] FIG. 5 is a flow chart that schematically illustrates a
method for ranking special assets, in accordance with an embodiment
of the present invention;
[0033] FIG. 6 is a flow chart that schematically illustrates a
method for game user learning, in accordance with an embodiment of
the present invention;
[0034] FIG. 7 is a flow chart that schematically illustrates a
method for adjusting asset threat potentials, in accordance with an
embodiment of the present invention;
[0035] FIG. 8 is a flow chart that schematically illustrates a
method for updating statistical results in game user learning, in
accordance with an embodiment of the present invention;
[0036] FIG. 9 is a flow chart that schematically illustrates a
method for computation of centralism of files, in accordance with
an embodiment of the present invention;
[0037] FIG. 10 is a flow chart that schematically illustrates a
method for inquiry management, in accordance with an embodiment of
the present invention;
[0038] FIG. 11 is a flow chart that schematically illustrates a
method for threat identification, in accordance with an embodiment
of the present invention;
[0039] FIG. 12 is a flow chart that schematically illustrates a
method for evaluating threat lines, in accordance with an
embodiment of the present invention; and
[0040] FIG. 13 is a flow chart that schematically illustrates a
method for pseudo-semantic inquiry, in accordance with an
embodiment of the present invention.
DETAILED DESCRIPTION OF EMBODIMENTS
System Overview
[0041] FIG. 1 is a schematic, pictorial illustration of a system 20
for on-line gaming, in accordance with an embodiment of the present
invention. Multiple participants 24 play a game together using
respective client computers 22, which are connected to communicate
during the game via a network 26, such as the Internet. Each
computer 22 comprises a processor 28 with suitable input and output
devices, such as a video monitor 30 and a joystick 32, as well as
an interface to network 26. The game in question may be
server-based or peer-to-peer: The principles of the present
invention, as presented in detail hereinbelow, are not tied to a
specific game or architecture. In fact, the anti-cheating program
that is described hereinbelow is capable of learning and monitoring
multiple games, of various different types, that may be played
using a given computer. Although computers 22 are illustrated in
FIG. 1 as personal desktop computers, the architecture and methods
described hereinbelow are equally applicable to computing devices
of other types, such as servers, as well as dedicated game consoles
and mobile computing and communication devices.
[0042] At least some of client computers 22 are linked to a "trust
net," which is coordinated by a server 34. A client program running
on each of these computers, as described hereinbelow, communicates
with the server before and during the game. The client program
informs the server of the identity of the participant who is using
the computer by means of a unique identifier (such as a digital
signature), and also informs the server of the game that the
participant wishes to play.
[0043] Prior to the game, the client program learns how the game in
question uses the assets of the client computer, such as files,
computational power, and communication resources. During the game,
the client program monitors the use of these assets. Upon detecting
an anomalous event, which may be indicative of an attempt to cheat
during the game, the client program typically informs both
participant 24 and server 34. Such anomalous events may be
indicative of either an attempt by another player to cheat against
the participant or an attempt to cheat by the participant himself.
The server may keep records of anomalous events and the
participants who were involved in them in order to assemble a list
of known or suspected cheaters.
[0044] Typically, the client program on computers 22 is itself
secured against tampering. For example, the program may be
digitally signed, and server 34 may check the digital signature as
part of the authentication process before the game. Participant 24
may choose to inactivate the client program at certain times, but
in such cases, server 34 will be informed that the client computer
in question is not being monitored and is therefore susceptible to
cheating.
[0045] Server 34 may give participants 24 information regarding
which other players are currently members of the trust net, i.e.,
which players have the client program installed and active on their
own computers. For example, as shown in FIG. 1, the server may
generate a window 38 on a display 36 listing players who are
participating in or wish to participate in the game in question. A
secure indicator 40, controlled by the server, marks the names of
players who are part of the trust net. If a given player has not
installed the client program or has turned it off, the secure
indicator will not appear next to his or her name. (Players with a
history of cheating may also be marked by the server.) Based on the
information in window 38, participant 24 may choose to play only
with trust net members.
[0046] Alternatively, participants have the option of playing with
players who are not approved by server 34. In this case, the client
program will still monitor the client computer and will alert the
participant to anomalous events, which will protect the participant
against some types of cheating, but without the more comprehensive
protection afforded by the trust net.
Software Architecture
[0047] FIG. 2 is a block diagram that schematically illustrates
elements of a program 50 for detection of cheating, in accordance
with an embodiment of the present invention. Program 50 includes
software modules and data structures that are used in learning and
monitoring computer 22. The components of program 50 may be
downloaded to computer 22 in electronic form, over a network, for
example. Alternatively or additionally, these program components
may be furnished and/or stored on tangible computer-readable media,
such as optical, magnetic, or electronic storage media.
[0048] Program 50 implements a cognitive engineering architecture,
based on the following principles, inter alia: [0049] Autonomous
solution--The program generally operates without the need for
intervention by operators or system engineers in ongoing operation.
A sieve module 100, as described further hereinbelow, is capable of
dynamically changing the data collection profile and adaptively
building the set of assets to be protected. [0050]
Self-learning--The program learns both new threats and normal
behavior of new games. A rule base module 52 manages self-learning
that is carried out by a software game learning (SWL) module 76 and
by a game user learning (GUL) module 78, which learns normal user
behavior. [0051] Self-expansion--A threat map semantic inquiry
(TMSI) module 74 recognizes variations on known threats and
activates a threat map update (TMU) module 80, which builds a new
threat pattern. The program also supports distribution of known
threats among the members in the trust net via server 34 (FIG. 1),
using a trust net module 54 and a threats warden module 56.
[0052] Rule base module 52 activates backward and forward chain
reasoning algorithms to populate and enrich a full knowledge base
66, including preliminary and conclusive information. The rule base
module may continually analyze the knowledge base in order to
generate one of the following generic decisions with respect to
each detected event: [0053] Ignore because there is no threat
indication; [0054] Request additional information (including input
from the user) or data processing (backward chain reasoning), in
order to reach a conclusion concerning the significance of the
event; or [0055] Identify the type of threat(s) and react
accordingly.
[0056] Knowledge base 66 is typically divided between private and
public knowledge information. The distinction between those two
categories of information reflects the access and retrieval
permissions for each category: The private part of the knowledge
base contains information that was gathered from a specific
computational node, while the public part of the knowledge base
contains common information provided and maintained centrally, by
server 34, for example. Both private and public knowledge bases can
share the same concept domain. Consolidation of the information
from both the private and public knowledge bases generates the full
concept domain.
[0057] Program 50 supports the following main session types: [0058]
Game installation session--Provides the program with knowledge
about the set of assets of computer 22 that are to be protected and
also about the main executable file of the game. The name of this
executable file is needed for recognition of the game in future
protected game sessions. [0059] Game/user learning session--This
chain of sessions provides the program with knowledge about the
game and user normal profiles. Program 50 typically monitors
several sessions of this type in order to be capable of
differentiating between normal and abnormal activity and overall
state inside the computer. [0060] Protection session--Regular game
session, processed under observation and protection by program
50.
[0061] Top-level modules of program 50 include rule base module 52,
sieve module 100, a reasoning module 60 and a learning module 62,
which interact with knowledge base 66 and a number of subsidiary
modules. The components and functions of the program modules are
described below:
[0062] Rule base module 52 manages the overall program state and
the other modules. Functions of module 52 include: [0063]
Communicating with the user via a user interface (UI) module 58.
[0064] Starting and initial handshake with sieve module 100. [0065]
Initiating sessions of different types. [0066] Communicating with
sieve module 100 in order to change its profile according to the
current session type. [0067] Communicating with sieve module 100 in
order to get updates about the current session state. [0068]
Declaring detection of anomalies in user activity. [0069] Declaring
detection of anomalies in overall machine state. [0070] Declaring
recognition of threats on the basis of previously known activity,
corruption of protected game assets, or hampering of normal user
activity within a game session. [0071] Activation of checking of
current activity for semantic proximity to existing threat
patterns. [0072] Activation of adaptively building the normal
activity profile for each user and each game. [0073] Activation of
adaptively building the mechanisms for recognition of specific
(known) threats. [0074] Auditing knowledge base 66.
[0075] Rule base module 52 manages the following main processes:
sieve module 100; reasoning module 60 (including an inquiry manager
70, a threat map-based identification (TMBI) module 72, and TMSI
module 74); and learning module 62 (including SWL module 76, GUL
module 78 and TMU module 80). Module 52 uses
information that was gathered during the activity sessions, which
is stored in a metadata table 88.
[0076] Sieve module 100 manages data collection processes using
sensor modules 104, 106, 108, 110. Functions of the sieve module
include: [0077] Monitoring sessions concerned with protected games.
[0078] Collecting data and activating procedures for data storage
in knowledge base 66. [0079] Dynamically changing data collection
profiles according to process information and sets of assets to be
protected. [0080] Transferring event data to a Threat Potential
Table (TPT) 90 for rough filtering in order to recognize suspicious
events. [0081] Communicating with rule base 52 in order to
synchronize data collection and routing of process information.
[0082] The sieve module serves as database feeder, configuration
manager and session manager. As database feeder, the sieve module
converts information from a string representation that is obtained
from sensors 104, 106, 108, 110 to a database representation. The
database feeder may use a flexible algorithm, based on external
scripts, to enable the knowledge base architecture to be updated.
The database feeder typically receives input in the form of
strings, containing name-value pairs separated by commas. The
scripts translate input fields or expressions based on input fields
into database rows. The configuration manager drops irrelevant
sensor input. It may also use a flexible algorithm, based on
external predicate scripts, which may be specified in an XML file.
The session manager separates sessions and may divide sessions into
clusters. The session manager encapsulates session-related
information and provides this information to other modules. The
database feeder uses this session information in order to fill in
corresponding fields in log data records.
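For illustration only, the database-feeder and configuration-manager paths described above might be sketched as follows (in Python rather than the VBA of the appendices; the function names and the sample event string are assumptions for illustration, not part of the disclosure):

```python
def parse_sensor_string(raw: str) -> dict:
    """Database feeder: convert a 'name=value,name=value,...' sensor
    string into a row suitable for the knowledge base."""
    row = {}
    for pair in raw.split(","):
        name, _, value = pair.partition("=")
        row[name.strip()] = value.strip()
    return row

def is_relevant(row: dict, excluded_commands: set) -> bool:
    """Configuration manager: drop irrelevant sensor input."""
    return row.get("command") not in excluded_commands

raw_event = "time_stamp=2008-04-15T12:00:00,command=delete directory,invoking_process=game.exe"
row = parse_sensor_string(raw_event)
if is_relevant(row, excluded_commands={"read file"}):
    print(row)  # would be stored with session fields filled in
```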
[0083] Trust net module 54 supports communication with server 34,
as noted above. This module performs the following functions:
[0084] Reporting to the server on the status of monitoring
activities on the computer. [0085] Receiving information regarding
the other players in the network, particularly those who have
activated the anti-cheat program on their computers, as shown above
in FIG. 1. [0086] Sending information collected by threats warden
56 to inform server 34 of threat activities. [0087] Receiving
on-line updates of program 50.
[0088] Threats warden module 56, as noted above, collects
information on computer 22 regarding local activities in order to
inform server 34 of possible cheating.
[0089] User interface module 58 permits interaction between
participant 24 and program 50. The main functions of this module
include: [0090] Handling user inputs. [0091] Presenting the
activity status of the anti-cheat system, including window 38 and
notification of potential threats. (In some cases, the participant
may be asked to classify a new situation, hitherto unknown to
program 50, as normal or abnormal.) [0092] Communicating with rule
base module 52 in order to support user requests.
[0093] Learning module 62 contains modules 76-80, as mentioned
above, which implement the main learning functionalities of program
50: [0094] Software learning (SWL) module 76 collects information
about game assets and processes to be monitored and protected
within subsequent protected-mode session. It builds lists of game
assets and processes these lists, including rough filtering, for
further threat recognition. (The functions of module 76 are
described further hereinbelow with reference to FIGS. 3-5.) [0095]
Game-user learning (GUL) module 78 collects information about
normal user activity during protected sessions and fills in
statistical data tables that are used for recognition of abnormal
machine states. (The functions of module 78 are described further
hereinbelow with reference to FIGS. 6-8.) [0096] Threat map update
(TMU) module 80 updates parameters of existing threat patterns,
based on recent user activity, and outputs the updated threat
patterns to knowledge base 66.
[0097] A service algorithms module 64 performs major mathematical
computations used by program 50.
[0098] Reasoning module 60 divides input data by type and activates
modules 70-74 in order to apply the appropriate processing: [0099]
Inquiry manager 70 coordinates the activity of modules 72 and 74,
as well as initiating activity of TMU module 80. (The functions of
module 70 are described further hereinbelow with reference to FIG.
10.) [0100] Threat map-based identification (TMBI) module 72 checks
current input data against known threat patterns. Module 72 uses a
self-learning algorithm in order to recognize events and situations
that are unknown but suspicious. In certain cases it calls module
74. (The functions of module 72 are described further hereinbelow
with reference to FIG. 12.) [0101] Threat map semantic inquiry
(TMSI) module 74 recognizes variations on known threats. Module 74
uses pseudo-semantic analysis in order to detect semantic proximity
of the current situation to known threat patterns. (The functions
of module 74 are described further hereinbelow with reference to
FIG. 13.)
[0102] Knowledge base 66 serves as the repository of the relevant
data enriched by semantic-type meta-information
(data-objects-concepts) collected by the modules of program 50,
including relations between the objects and concepts. The knowledge
base serves the program modules and enables the program to
continually learn the features of operation of the protected game
software. The adaptive learning properties of program 50 enable the
same backbone software to be used to protect both games for which
partial prior knowledge exists and games for which no prior
knowledge exists at all.
[0103] The knowledge base contains the following groups of classes:
[0104] Logs [0105] Threat knowledge [0106] Environment (reference)
knowledge. The knowledge base is built on a reference knowledge
group, which contains basic knowledge that is available a priori,
learned at the vendor labs, and learned on-site. It relates to
protected software assets knowledge classes, which describe all
types of assets (components) of the protected system. These assets
may include files, directories, devices, registry entries and
registry keys, inter alia. The asset classes also describe
groupings of these assets, such as file types, file extensions,
etc.
[0107] The logs of the knowledge base contain all incoming
information, including information generated both by the computer
itself and by components of program 50. Information generated by
the computer may include, for example, operating system events "as
is." Logs generated by program 50 may include, for example, program
parameters or a log of events specific to a particular protected
game. The logs typically include user-level and system-level event
logs regarding protected software, as well as overall system
information. The logs typically use the following knowledge
classes: [0108] A game log (GL) 82, which contains game-specific
event logs. This log is applicable only when the user has
configured program 50 to protect against cheating in a specific
game. [0109] An event table 84, which contains the overall activity
log of the computer system. [0110] A task manager information
(TaskMan Info) table 86, which contains a log of the machine
state.
[0111] GL 82 may include a protected software user-level events
log, which contains information on the events that are specific and
unique for the software that is being protected. If the protected
software is a multi-user online game, for example, then the events
can be of the type: "The user N0001 has entered player group
G0001," or "The user N0002 has left the chat room," or "My current
shots-per-second rate is 26.7." The ontology frame of this class
includes: [0112] time_stamp (datetime format) [0113]
event_sequential_number (long integer format) [0114] cluster_number
(integer) [0115] event_code (integer) [0116] event_parameters
(multiple, indexed)
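As a sketch only, the ontology frame listed above maps naturally onto a record structure; the Python rendering below follows the patent's field names, while the class name and sample values are illustrative:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class UserLevelGameEvent:          # illustrative name for a GL 82 record
    time_stamp: datetime           # datetime format
    event_sequential_number: int   # long integer format
    cluster_number: int
    event_code: int
    event_parameters: List[str] = field(default_factory=list)  # multiple, indexed

evt = UserLevelGameEvent(datetime(2008, 4, 15, 12, 0), 1, 0, 17,
                         ["The user N0001 has entered player group G0001"])
```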
[0117] GL 82 may also include a protected software system-level
events log class, which contains a detailed journal of system
events based on API commands. Examples of such events may include
"change process priority," "delete directory," "edit file
permissions," and "start process." The ontology frame of this class
includes: [0118] time_stamp (date/time format)--the time point at
which the event occurred. [0119] sequential_number_global (long
integer format)--a sequential index throughout a game session.
[0120] cluster_num (integer)--an index that classifies the event
according to stages in the game session. [0121] session_num
(integer)--a counter of the number of game sessions. [0122]
seq_num_within_cluster (integer)--a sequential index throughout a
played session that is reset at every cluster start. [0123] command
(reference to the class Commands)--a link to the referenced
command. [0124] command_parameter1 (reference to the class
OpSysInfo)--may refer to a predefined class, such as Files,
Processes, Directories, RegistryKeys, etc., or another subject of a
command event. [0125] command_parameter2 (reference to the class
OpSysInfo)--similar to command_parameter1, but defined only for
special operating system operations that require two parameters as
subjects. For instance, the operation "Rename" requires two command
parameters: one holding the old subject name and the second for
holding the new subject name. [0126] invoking_process (reference to
the class Processes)--the process that executes the operation.
[0127] origin_by_protected_soft (Boolean)--an indication of whether
or not the invoking process originated in the game software. [0128]
OS_operation_name (reference to the class
OS_Operations)--enumerated value indicating the nature of the
operation.
[0129] Event table 84 may include an overall system information log
knowledge class, which contains a detailed journal of system events
based on API commands, similar to those in the GL table. The
ontology frame of this class includes: [0130] time stamp [0131]
networking data--continuous data related to the network stream.
[0132] performance data--continuous data related to the resources
of the device (such as CPU utilization, memory cache use, etc.)
[0133] process description--the above data related to each and
every process that is running.
[0134] The threat knowledge group of classes in the knowledge base
typically includes the following classes: [0135] Threat potential
table (TPT) 90 contains the threat potential of specific system
asset uses or situations. It provides a rough filter of suspicious
activity. [0136] The system normal state map (SM) 92 serves as an
input table for a rough identification of anomalies. [0137] Threat
maps (TM) 94 contain all the patterns of threat events, including
threat lines and threat elements, which are components of the
threat patterns. The threat knowledge is used together with a stati
normali class, which contains knowledge learned on-site of the
behavior of the user and software that is characteristic of clean
(threat-free) situations. The combination of these knowledge
classes also makes it possible for learning module 62 to
automatically learn new threat patterns, acquire new knowledge and
enrich the threat knowledge dynamically.
[0138] TPT 90 contains knowledge about the measure of threat
potential of specific elements (structures) of objects or groups of
objects or specific situations or ranges of situations. For
example, it may contain the threat potential value of an image
(executable file) of a process or of a group of APIs, or the threat
potential value of a situation in which a specific API is applied
to any file in a specific directory. Each instance of this class is
a set (collection, un-indexed sequence) of any number of instances
of the threat lines class. Since the threat lines that build up the
TPT class also build up the threat map (TM) class, the TPT class is
a subspace of the TM domain. The TPT class provides a rough
representation of the TM class in order to reduce computational
cost.
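The rough-filter role of the TPT can be illustrated by the following sketch; the table contents, threshold value, and function name are assumptions, while the default potential of 1 for unknown assets follows the initialization described below with reference to FIG. 4:

```python
# Rough filter: look up the threat potential (a probability) of an
# asset/operation pair in the TPT and pass only sufficiently suspicious
# events on to the costlier threat-map (TM) matching.
THREAT_POTENTIAL = {            # hypothetical TPT entries
    ("delete directory", "C:/game/saves"): 0.9,
    ("read file", "C:/game/audio"): 0.05,
}
ROUGH_THRESHOLD = 0.5           # assumed value

def needs_full_tm_check(command: str, asset: str) -> bool:
    """Unknown assets default to a potential of 1, as in TPT 90."""
    return THREAT_POTENTIAL.get((command, asset), 1.0) >= ROUGH_THRESHOLD
```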
[0139] The threat lines class defines elemental test conditions. It
includes: [0140] threat_element (reference to the threat elements
class, as explained below). [0141] test_value (reference to any
relevant value against which the threat element is tested). [0142]
test_weight (a numerical measure of the significance of a given map
line as compared to the rest of the lines of the same map). [0143]
higher_threat_line (a reference to another line of the same map
that is precedent to the current threat line in logical hierarchy).
The threat lines class is the central tool for defining threats.
The data structure of the threat lines class can be used in
assembling logical predicates (statements or conditions) in a
generic manner, wherein the predicates may refer to any variable in
the knowledge base. For example, one threat line could state that
the condition x^2 > y indicates a partial fulfillment of a
certain threat, or alternatively, it might indicate the opposite,
i.e. that the satisfaction of the condition refutes another
predefined threat.
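A minimal sketch of how such generic predicates might be evaluated is shown below; encoding the predicates as Python callables and scoring a map as a weighted fraction are illustrative choices, not the patent's encoding:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ThreatLine:
    predicate: Callable[[Dict], bool]  # threat_element tested against test_value
    test_weight: float                 # significance relative to other map lines
    refutes: bool = False              # True if satisfaction refutes the threat

def map_score(lines: List[ThreatLine], state: Dict) -> float:
    """Weighted fraction of lines supporting one threat map."""
    total = sum(l.test_weight for l in lines)
    hit = sum(l.test_weight for l in lines if l.predicate(state) != l.refutes)
    return hit / total if total else 0.0

# The x^2 > y condition from the text, as a single threat line:
line = ThreatLine(lambda s: s["x"] ** 2 > s["y"], test_weight=2.0)
print(map_score([line], {"x": 3, "y": 4}))  # 1.0 -> line fulfilled
```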
[0144] The threat elements knowledge class contains the main part
of each threat line. A number of different threat lines may contain
the same threat element. The frame of a threat element includes:
[0145] observed_parameter (the parameter tested to define the
threat). [0146] test_model (describes the test).
[0147] Threat maps 94 use the threat maps and distances knowledge
class, which contains: [0148] each map (pattern) of a threat
divided into elemental threat lines; [0149] the semantic distance
(measure of similarity) between any pair of such maps.
[0150] Sensor modules 104, 106, 108, 110 gather information
regarding the current activity and overall machine state of
computer 22. In general the sensor modules are small program
modules, which perform the following sorts of functions: [0151]
System-wide operation sensors collect information about all
operating system operations, such as file opening, writing to file,
process starting and terminating, registry updating, and so on.
[0152] Machine state sensors collect information about utilization
of machine resources, such as CPU, paging file, and so on. [0153]
Networking sensors collect information about activity on network 26
and network resource utilization. The sensors receive as input
internal information from the computer operating system and output
data in string representation to the database feeder function of
sieve module 100. String representation may also contain metadata,
as additional input for the database feeder. A TPT feeder 102 in
FIG. 2 represents the operation of TPT 90 in loading information
from the sensors into knowledge base 66.
[0154] The specific sensor modules shown in FIG. 2 include the
following: [0155] Plug sensor modules 104 gather information about
specific activity relevant to a specific game. [0156] A commands
sensor 106 collects information about all operating system
operations. [0157] A dashboard sensor module 108 collects
information about machine resource utilization. [0158] A network
sensor module 110 collects information about activity on the
network and network resources utilization.
Detailed Operation of Program Modules
[0159] FIG. 3 is a flow chart that schematically illustrates a
method for learning patterns of asset use by a computer game, in
accordance with an embodiment of the present invention. The user
submits a request, via UI module 58, for program 50 to learn a new
game, at a new game selection step 120. In response to this
request, the UI module opens a dialog window asking the user to
specify the installation file of the game in question, at a file
request step 122. The user provides (or browses for) the full path
of the game program, at a file provision step 124. The UI module
now retrieves the installation file, at a file retrieval step 126,
and transfers control to rule base module 52.
[0160] The rule base module sets up the required configuration and
then invokes sieve module 100, at a configuration step 128. The
configuration data indicate the operations, processes and
parameters to be used by the sieve module in including or excluding
data provided by sensor modules 104-110 during installation of the
game. For instance, upon installation, there may be "uninteresting"
types of assets, which are unlikely to be used in a cheating scheme
(such as video and audio files). Events involving these assets can
be sieved before storage.
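A hedged sketch of such sieving follows; the extension list is assumed for illustration:

```python
import os

# Asset types unlikely to figure in a cheating scheme (assumed list).
UNINTERESTING_EXTENSIONS = {".avi", ".mpg", ".wav", ".mp3", ".ogg"}

def keep_installation_event(path: str) -> bool:
    """Sieve predicate: drop events on media files before storage."""
    return os.path.splitext(path)[1].lower() not in UNINTERESTING_EXTENSIONS

print(keep_installation_event("C:/game/intro.avi"))  # False -> sieved out
print(keep_installation_event("C:/game/game.exe"))   # True  -> logged
```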
[0161] Sieve module 100 then logs the data transmitted by the
sensors during installation of the game, at a logging step 130. The
logged data are typically stored in a temporary memory. Upon
completion of the installation, the sieve module returns to the
rule base module with either a success or a failure indication.
When the logging was successful, the rule base module invokes SWL
module 76 to process the logged data, at a SWL invocation step 132.
Based on this processing, the SWL module adds new instances of game
assets to TPT 90, at a table addition step 134.
[0162] FIG. 4 is a flow chart that schematically shows details of a
method used in assessing threat potentials at step 134, in
accordance with an embodiment of the present invention. SWL module
76 loops over various types of assets that have been predefined
within the threat model, at an asset type review step 140. For each
type of assets, the SWL module ranks each asset found in the log
that was generated at step 130. A subroutine implementing an
algorithm that may be used at step 140 (written in Visual Basic for
Applications (VBA)) is listed below in Appendix A. The ranking
function at step 140 is typically determined by a single variable.
For example, if C_A represents a measure of the number of event
occurrences in which a given asset was involved, the ranking
function at step 140 is simply given by the value of C_A in
descending order: Rank(C_A) = C_A.
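The appendix the patent refers to is written in VBA and is not reproduced here; the following Python fragment is an illustrative restatement of the same counting rank:

```python
from collections import Counter

def rank_assets(logged_assets: list) -> list:
    """Rank(C_A) = C_A: order assets by their event-occurrence count."""
    return Counter(logged_assets).most_common()  # descending by count

print(rank_assets(["a.dll", "b.cfg", "a.dll"]))  # [('a.dll', 2), ('b.cfg', 1)]
```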
[0163] For special asset types, the SWL module performs an
additional ranking process, at a special ranking step 142. (Special
assets are those asset types that require special treatment in the
ranking process. Examples of special assets include registry keys
and folders, having corresponding asset type registry values and
files, respectively, which are taken into account in the ranking
process.) Details of this step are presented below in FIG. 5. For
these special asset types, the ranking function may also be
determined by a single variable, but not simply by the number of
event occurrences in which a given special asset was involved.
Typically, each type of special assets has a corresponding type of
assets. The ranking of the special assets is determined in part by
the corresponding game assets that they hold.
[0164] One type of special asset is a file directory. Assume, for
example, that a given directory, C:/a/b/d, is at the bottom of a
directory tree. In this case, its importance may be determined
simply by the number (say X) of the assets that it holds, such as
directories and files. For the sake of illustration, assume that
the parent directory C:/a/b holds the same number of files as
C:/a/b/d (apart from the files that are held in C:/a/b/d), and that
C:/a/b does not have any other descendants besides C:/a/b/d.
Therefore, C:/a/b holds a total of 2X files. In such a case, the
SWL module will rank C:/a/b/d and C:/a/b as having the same
importance, because each one of them "is responsible for" holding
the same amount of files (X).
[0165] As an alternative example, assume now that C:/a/b has
another subdirectory besides C:/a/b/d, i.e., C:/a/b/d has a sibling
C:/a/b/e, which holds 5X files. In this case, the SWL module will
assign C:/a/b/e a measure of importance that is five times higher
than that of C:/a/b/d. In other words, if the importance of
C:/a/b/d is Y, then the importance of C:/a/b/e is 5Y. The
cumulative number of files held in C:/a/b is now 7X (X+X+5X), but
its importance should still be lower than that of C:/a/b/e.
Although the cumulative number of files in a directory (including
all files in subdirectories) will always be greater than or equal to
the number of files in any of its subdirectories, the ranking of the
directory takes into account the subdirectory with the maximal
number of files over all subdirectories. As a result, the ranking
of C:/a/b is 2Y (based on the difference 7X-5X). In other words,
because another sibling directory has been added alongside
C:/a/b/d, the ranking of the parent directory C:/a/b is raised in
comparison to C:/a/b/d. Appendix B hereinbelow presents a
subroutine, written in C#, that implements a ranking algorithm that
may be used at step 142.
[0166] SWL module 76 computes the threat potential of each asset
(including special assets) at a threat potential computation step
144. Various formulas may be used to determine the threat potential
as a function of rank, as long as the formula returns a valid
value, i.e., a probability. Initially, in the absence of prior
knowledge about the assets (apart from their existence and possibly
their distribution within directories), the SWL module may set the
threat potential for each asset to 1 (one), but these threat
potentials may subsequently be reduced by GUL module 78 (as
described below with reference to FIG. 7). The SWL module then
writes the instances of the assets (including special assets) and
their respective threat potentials to TPT 90.
[0167] FIG. 5 is a flow chart that schematically shows details of
the method for ranking special assets carried out at step 142, in
accordance with an embodiment of the present invention. SWL module
76 loops over all of the special asset types, at a type review step
150. For each type of special assets, the SWL module counts the
total number of the corresponding assets, at an asset counting step
152. In relation to directories, for example, as described above,
the SWL module counts the total number of files in each directory,
down to the bottom of the directory tree. Based on the counts made
at step 152, the SWL module then ranks each special asset found in
the log, at a ranking step 154. The ranking formula used at this
step for a given special asset d is:
$$\mathrm{Rank}(d) = \alpha \times \bigl(d.\mathrm{count} - \mathrm{MAX}(\mathrm{SUB}(d).\mathrm{count})\bigr)$$

wherein $\alpha$ is a fixed coefficient, $\mathrm{MAX}(\cdot)$ is a
function that returns the maximum out of a set of numbers, and
$\mathrm{SUB}(\cdot)$ returns all the descendants at the next
generation of a given special asset.
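The following C# sketch applies this ranking formula to the
directory example of paragraphs [0164]-[0165] above, taking X = 10
files; the data structures and the value of $\alpha$ are
illustrative assumptions, not the implementation:

using System;
using System.Collections.Generic;
using System.Linq;

static class DirectoryRanking
{
    // Cumulative file counts per directory (including all subdirectories),
    // as counted at step 152; values follow the example in the text, X = 10.
    static readonly Dictionary<string, int> Count = new Dictionary<string, int>
    {
        ["C:/a/b"]   = 70,  // 7X: own X, plus X in d, plus 5X in e
        ["C:/a/b/d"] = 10,  // X
        ["C:/a/b/e"] = 50,  // 5X
    };

    static readonly Dictionary<string, string[]> Children = new Dictionary<string, string[]>
    {
        ["C:/a/b"]   = new[] { "C:/a/b/d", "C:/a/b/e" },
        ["C:/a/b/d"] = new string[0],
        ["C:/a/b/e"] = new string[0],
    };

    // Rank(d) = alpha * (d.count - MAX(SUB(d).count)); alpha is a fixed coefficient.
    static double Rank(string dir, double alpha = 1.0)
    {
        int maxChild = Children[dir].Length == 0
            ? 0
            : Children[dir].Max(c => Count[c]);
        return alpha * (Count[dir] - maxChild);
    }

    static void Main()
    {
        foreach (var dir in Count.Keys)
            Console.WriteLine(dir + "\trank=" + Rank(dir));
        // C:/a/b -> 20 (2Y), C:/a/b/d -> 10 (Y), C:/a/b/e -> 50 (5Y)
    }
}

With these counts, the computed ranks reproduce the Y, 5Y and 2Y
relationship described above.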
[0168] FIG. 6 is a flow chart that schematically illustrates a
method for game user learning, in accordance with an embodiment of
the present invention. A user of computer 22 uses UI module 58 to
request that program 50 learn a game, at a learning selection step
160. The user specifies that the learning is to take place while
the game is being played, without protection against cheating. The
UI module presents a dialog window offering the existing game-user
profiles for selection by the user, at a profile presentation step
162. The user selects the desired profile from the list, at a
profile selection step 164.
[0169] Once the user has selected the profile for the desired game,
the UI module retrieves the profile and transfers control to rule
base module 52, at a profile retrieval step 166. The rule base
module sets up the required configuration and then invokes sieve
module 100, at a sieve invocation step 168. The configuration
indicates what events the sieve should monitor (as transmitted by
sensor modules 104-110) and the processes and parameters the
transmitted data should include or exclude. The sieve module
transfers the data from the sensor modules to knowledge base 66
until the game ends, or until the user quits the learning process,
at a data transfer step 170.
[0170] When step 170 is completed, the sieve module returns control
to rule base module 52, which then invokes GUL module 78, at a GUL
invocation step 172. The GUL module adds new asset instances and modifies
existing instances with respect to the game in question in
knowledge base 66. Details of step 172 are shown below in FIG. 7.
As part of this step, the GUL module measures metric distances
between each pair of assets within each type.
[0171] FIG. 7 is a flow chart that schematically illustrates a
method for adjusting asset threat potentials, carried out by GUL
module 78 at step 172, in accordance with an embodiment of the
present invention. Based on the data transferred at step 170, the
GUL module adds new instances of threatened assets to TPT 90
and/or modifies existing instances, at a table modification step
180. The algorithm used at step 180 is similar to that presented in
FIG. 4, except that the counter $C_A$ is now
configuration-dependent.
[0172] For example, if the configuration set by rule base 52
instructs sieve module 100 to transmit command events invoked by
any process, then $C_A$ becomes two-fold, wherein $C_A^{with}$ and
$C_A^{without}$ respectively represent the number of event
occurrences in which a given asset was involved with the game being
played and without it. (In cases in which the configuration
instructs the sieve module to transmit only command events invoked
by the game process itself, $C_A^{without}$ will accumulate a null
value.) The rank is then given by:

$$\mathrm{Rank}(C_A) = C_A^{with} + C_A^{without} - C_A^{with} \times C_A^{without}$$

When $C_A^{without} = 0$, this expression simply gives the rank
as $C_A^{with}$.
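A direct transcription of this rank into C# might look as follows.
This is a sketch only; the variable names are mine, and it assumes
(as the bounded form of the formula suggests) that the two counters
have been normalized to measures between 0 and 1:

using System;

static class ConfigDependentRank
{
    // Rank(C_A) = C_A^with + C_A^without - C_A^with * C_A^without.
    // When the sieve transmits only events invoked by the game process,
    // countWithout stays 0 and the rank reduces to countWith.
    static double Rank(double countWith, double countWithout)
    {
        return countWith + countWithout - countWith * countWithout;
    }

    static void Main()
    {
        Console.WriteLine(Rank(0.8, 0.0));  // 0.8 -> rank equals C_A^with
        Console.WriteLine(Rank(0.8, 0.5));  // 0.9
    }
}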
[0173] GUL module 78 creates and modifies statistical results in
accordance with statistical requests defined in the knowledge base,
at a statistics calculation step 182. Details of this step are
shown below in FIG. 8. As part of this step, the GUL module may
adjust the threat potentials of the assets from their initial value
of 1 to a new value according to the frequency of use of the assets
and the stage (cluster) in which each asset is used.
[0174] The GUL module calculates centralism for each process image
(executable) file, at a centralism computation step 184. Details of
this step are shown below in FIG. 9. The centralism is determined
for each image and each user and provides information on how
central the game is to the user and the computer while it is being
played. In other words, for each known image file in the system,
the centralism indicates how often the file operates while the game
is running and what proportion of processing time its processes
consume relative to the game process overall. Centralism may be defined
separately for the launch phase of the game ("launch centralism"),
as opposed to the centralism throughout the game.
[0175] GUL module 78 returns to rule base 52 all the similar pairs
of assets, along with the distances between the assets in the pair,
at an asset pairing step 186. The distance is given by the
formula:
$$\mathrm{Dist} = \mathrm{Mean}\left(\frac{\vec{\omega}}{\Omega}, \vec{D}\right)$$

wherein

$$\mathrm{Mean}(\vec{k}, \vec{S}) = \sum_i k_i \times S_i,$$

and wherein each element $\omega_i \in \vec{\omega}$ is a predefined
weight (scalar) and $\Omega \equiv \sum_i \omega_i$. Each element
$D_i$ of $\vec{D}$ is given by:

$$D_i = \frac{f_i^2(X_1) - f_i^2(X_2)}{\mathrm{Max}\bigl(f_i(X_1), f_i(X_2)\bigr)}$$

wherein $f_i(X_j)$ is a numerical value assigned to a given asset.
For instance, $f_i(X_j)$ could be the size of a given file $X_j$,
the priority of a given process $X_j$, or any other predefined
arithmetic manipulation of numeric attributes associated with the
asset, either stored in the knowledge base or calculated from
stored values. If the metric distance between
two given assets is lower than a predefined threshold, then the
assets are considered to be similar for the purposes of analyzing
events and assessing threats.
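As a sketch of this metric, the following C# fragment computes the
distance between two assets that are each described by a vector of
numeric attributes; the attribute choices and weights below are
hypothetical:

using System;
using System.Linq;

static class AssetDistance
{
    // D_i = (f_i(X1)^2 - f_i(X2)^2) / Max(f_i(X1), f_i(X2)), and
    // Dist = sum_i (w_i / Omega) * D_i, with Omega = sum_i w_i.
    static double Dist(double[] weights, double[] f1, double[] f2)
    {
        double omega = weights.Sum();
        double dist = 0.0;
        for (int i = 0; i < weights.Length; i++)
        {
            double d = (f1[i] * f1[i] - f2[i] * f2[i]) /
                       Math.Max(f1[i], f2[i]);
            dist += (weights[i] / omega) * d;
        }
        return dist;
    }

    static void Main()
    {
        // Two file assets compared on (size in KB, path depth) -- illustrative only.
        double[] w  = { 0.7, 0.3 };
        double[] x1 = { 120.0, 4.0 };
        double[] x2 = { 118.0, 4.0 };
        // Compare the result against a predefined threshold to decide similarity.
        Console.WriteLine(Dist(w, x1, x2));
    }
}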
[0176] FIG. 8 is a flow chart that schematically shows details of
the method used for updating statistical results at step 182 in
game user learning, in accordance with an embodiment of the present
invention. Statistical requests refer to variables with stochastic
behavior (for example, CPU utilization, network utilization,
in-game variables, etc.). Statistical requests may also apply to
some variables that are not stochastic in nature, such as the order
of events, which allows the program to learn patterns in the game software.
GUL module 78 processes the statistical requests, as noted above,
and returns statistical results to knowledge base 66.
[0177] To generate the statistical results, GUL module 78 computes
and updates the average value of the relevant variable, as well as
the corresponding standard deviation and a histogram of the
variable. The GUL module updates the average, at an average
computation step 190, using the formula:
$$y_n = y_{n-1} + \frac{x_n - y_{n-1}}{n}$$

wherein $y_0 \equiv 0$, $y_{n-1}$ is the previous sample average,
$x_n$ is the new sample, and $n$ is the number of samples. The GUL
module computes the standard deviation, at a deviation computation
step 192, using the formula:

$$\sigma_n = \sqrt{\frac{(n-1)\,\sigma_{n-1}^2 + (x_n - y_n)(x_n - y_{n-1})}{n}}$$

wherein $\sigma_0 \equiv 0$.
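These two recurrences amount to a standard online (single-pass)
update, as in the following C# sketch; the class and member names
are mine, not taken from the implementation:

using System;

sealed class RunningStats
{
    // Online update of the sample average and standard deviation,
    // following the recurrences above (y_0 = 0, sigma_0 = 0).
    int n;
    double y;       // current average y_n
    double sigma2;  // current variance sigma_n^2

    public void Add(double x)
    {
        n++;
        double yPrev = y;
        y = yPrev + (x - yPrev) / n;
        sigma2 = ((n - 1) * sigma2 + (x - y) * (x - yPrev)) / n;
    }

    public double Average => y;
    public double StdDev => Math.Sqrt(sigma2);
}

static class RunningStatsDemo
{
    static void Main()
    {
        var s = new RunningStats();
        foreach (var cpu in new[] { 12.0, 15.0, 11.0, 40.0 })  // e.g. CPU utilization samples
            s.Add(cpu);
        Console.WriteLine($"avg={s.Average:F2} sd={s.StdDev:F2}");
    }
}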
[0178] GUL module 78 computes the histogram of the variable in
question, at a histogram computation step 194. The histogram is
defined as having a fixed number of bins, but the GUL module may add
new extrema (i.e., a new minimum or a new maximum), which will
result in changes to the ranges of the bins and thus in
recalculation of the bin values.
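A minimal C# sketch of such a histogram follows; keeping all samples
in memory for re-binning is a simplifying assumption made here, not
a statement about the actual implementation:

using System;
using System.Collections.Generic;

sealed class RebinningHistogram
{
    // Histogram with a fixed number of bins; when a sample falls outside
    // the current [min, max] range, the bin edges change and all stored
    // samples are re-binned.
    readonly int bins;
    readonly List<double> samples = new List<double>();
    public int[] Counts { get; private set; }
    double min = double.PositiveInfinity, max = double.NegativeInfinity;

    public RebinningHistogram(int bins) { this.bins = bins; Counts = new int[bins]; }

    public void Add(double x)
    {
        samples.Add(x);
        if (x < min || x > max)              // new extremum: ranges change,
        {                                    // so all bin values are recalculated
            min = Math.Min(min, x);
            max = Math.Max(max, x);
            Counts = new int[bins];
            foreach (var s in samples) Counts[BinOf(s)]++;
        }
        else
        {
            Counts[BinOf(x)]++;
        }
    }

    int BinOf(double x)
    {
        if (max == min) return 0;
        int b = (int)((x - min) / (max - min) * bins);
        return Math.Min(b, bins - 1);        // clamp x == max into the last bin
    }
}

static class HistogramDemo
{
    static void Main()
    {
        var h = new RebinningHistogram(4);
        foreach (var v in new[] { 1.0, 2.0, 3.0, 10.0 })  // 10.0 triggers re-binning
            h.Add(v);
        Console.WriteLine(string.Join(",", h.Counts));
    }
}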
[0179] FIG. 9 is a flow chart that schematically shows details of
the method used in computation of file centralism at step 184, in
accordance with an embodiment of the present invention. GUL module
78 determines the "launch centralism" for each executable file, at
a launch centralism computation step 200. The launch centralism
depends on the number of processes that are running at the start of
a new cluster in the course of running the game program. The GUL
module determines the "throughout centralism" of the file, at a
throughout centralism computation step 202. This type of centralism
is based on the processing time of the executable file in question
in comparison with the overall game processing time.
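The two centralism measures can be pictured as simple ratios, as in
the following C# fragment. The exact normalization used by GUL
module 78 is not specified in the text, so both formulas here are
illustrative assumptions:

using System;

static class Centralism
{
    // "Launch centralism": one plausible normalization of the number of
    // processes attributable to the image at the start of a new cluster.
    static double LaunchCentralism(int processesAtClusterStart, int totalProcesses)
        => (double)processesAtClusterStart / totalProcesses;

    // "Throughout centralism": processing time of the image file relative
    // to the overall game processing time.
    static double ThroughoutCentralism(double imageCpuSeconds, double gameCpuSeconds)
        => imageCpuSeconds / gameCpuSeconds;

    static void Main()
    {
        Console.WriteLine(LaunchCentralism(2, 40));          // 0.05
        Console.WriteLine(ThroughoutCentralism(12.5, 600));  // ~0.02
    }
}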
[0180] The centralism characteristics of the executable files that
are learned by the GUL module are subsequently used in detecting
exceptions to the user's habits. Anomalous deviations from normal
centralism at both the launch and game processing phases have been
found to be a good indicator that cheating may be going on.
[0181] FIG. 10 is a flow chart that schematically illustrates the
operation of inquiry manager 70, in accordance with an embodiment
of the present invention. As noted earlier, the inquiry manager
manages the process of testing an event against threat maps. Upon
receiving an event, the inquiry manager actuates TMBI module 72,
which loops over all relevant threat maps i and computes the
likelihood that the event is relevant to each of the maps, at a
likelihood computation step 210. Based on the individual
likelihoods, the TMBI module finds the overall likelihood that the
current event is a threat, at an overall assessment step 212. These
steps may be expressed in pseudocode form as follows:
Start routine
/* step 210 */
For (i = 1, ..., Nmaps)
    Likelihood[i] = TMBI(CurrentEvent, ThreatMap[i],
                         Normometer(TaskmanagerInfo));
Endfor i;
/* step 212 */
LikelihoodOverall = MAX(Likelihood[i], i = 1, ..., Nmaps);
In the above code, "Normometer" is a predefined function that
computes the average distance to the norm of each of the numeric
variables in the knowledge base for which a statistical measurement
has been obtained.
[0182] The inquiry manager compares the overall likelihood to
predetermined threat and safety thresholds, at a threat
classification step 214. If the overall likelihood is above the
threat threshold ("red"), the inquiry manager returns a threat
identification to the rule base module, at a reporting step 218. By
the same token, if the overall likelihood is below the safety
threshold ("green"), meaning that none of the threat maps has
anything in common with the current event, the inquiry manager
marks the event as "clean" and returns the control to the rule base
module at step 218.
[0183] On the other hand, if TMBI module 72 finds at least one
threat map at step 210 that is close enough to the current event to
raise suspicion, but not close enough to assign the event to a
threat map, the inquiry manager calls TMSI module 74, at a
pseudo-semantic inquiry step 216. The TMSI module performs a
semantic analysis of the event in order to decide whether it
actually is a threat. Details of this step are shown below in FIG.
13. After the TMSI module finishes this analysis, the inquiry
manager returns control (along with the TMSI output) to the rule
base module at step 218.
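The three-way decision of steps 214-216 can be summarized in a short
C# sketch; the threshold values used below are illustrative:

using System;

enum ThreatStatus { Red, Green, Gray }

static class InquiryClassification
{
    // Step 214: compare the overall likelihood with the threat ("red") and
    // safety ("green") thresholds; anything in between is a "gray" case
    // handed to the TMSI module for pseudo-semantic inquiry (step 216).
    static ThreatStatus Classify(double likelihood,
                                 double thresholdRed, double thresholdGreen)
    {
        if (likelihood > thresholdRed) return ThreatStatus.Red;
        if (likelihood < thresholdGreen) return ThreatStatus.Green;
        return ThreatStatus.Gray;
    }

    static void Main()
    {
        Console.WriteLine(Classify(0.92, 0.8, 0.2));  // Red   -> report threat
        Console.WriteLine(Classify(0.05, 0.8, 0.2));  // Green -> event is clean
        Console.WriteLine(Classify(0.45, 0.8, 0.2));  // Gray  -> invoke TMSI
    }
}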
[0184] FIG. 11 is a flow chart that schematically illustrates a
method for threat identification carried out by TMBI module 72 at
step 210, in accordance with an embodiment of the present
invention. For each threat map, the TMBI module loops over all the
threat lines in which a higher threat line is null, at a threat map
looping step 220. In other words, the TMBI module loops over all of
the top threat lines in the threat lines hierarchy. For each such
threat line in turn, the TMBI module calls the "test row" function,
which returns a test result for the current threat line, as well as
the test results of all descendant threat lines of that threat
line. For this purpose, the test row function is invoked top-down
recursively. Details of this step are shown below in FIG. 12.
[0185] Thus, at the conclusion of step 220, the TMBI module has the
test results of all the threat lines in the current threat map.
Based on these test results, the TMBI module calculates the
likelihood that the present event constitutes a threat in a given
threat map, at a likelihood computation step 222, using the
formula:
$$\mathrm{Likelihood} = \frac{\sum_{j=1}^{N} w_j\, x_j}{\sum_{j=1}^{N} w_j}$$

[0186] Here $w_j$ is a predefined weight coefficient for threat
line j, and $x_j$ is the test result for this threat line. Each
weight coefficient is determined according to the significance of
the threat line it represents. Since not all conditions, upon their
fulfillment, contribute equally to the likelihood of the existence
of a threat, the weighting makes it possible to set up a "balanced"
threat map, rather than just a binary network of predicates. The
TMBI module also outputs a list of "lacks" for the tested threat
map, at a lack listing step 224. This list contains the threat
lines having negative test results for the current event. A
pseudocode implementation of steps 220-224 is listed in Appendix C.
[0187] FIG. 12 is a flow chart that schematically shows details of
a method for evaluating threat lines that is carried out by TMBI
module 72 at step 220, in accordance with an embodiment of the
present invention. Any given threat line only partially describes a
realization of an event. Therefore, the TMBI module tests each given
threat line against the current input event, at an event testing
step 230. If there is no relation between the event and the threat
line, then the test result of that threat line is set to nil
(zero), at a zero setting step 232. Pseudocode implementing steps
230 and 232 is listed in Appendix D.
[0188] If the current input event is related to the threat line,
then the TMBI module checks the test model of the threat line in
question, at a model testing step 234. Not only does the test model
define how to test the event against the given threat line, but it
also defines the meaning. If, for example, the test model is "NE",
it means that the test result is positive (TRUE) if the two
compared arguments are not equal. It also means that if the result
is FALSE, i.e., the two compared arguments are equal, then this
outcome refutes the entire branch of the threat map. In other
words, during a threat inquiry, the realization of a refuting
threat line dismisses its entire sub-tree of threat lines. Thus,
step 234 determines whether the test model can refute the
relationship between the input event and the threat line. If the
relation cannot be refuted, then the TMBI module assigns a test
result to this threat line that is equal to the test weight of the
threat line, at a weight assigning step 236.
[0189] Otherwise (i.e., if the tested threat line has been
realized, meaning that the threat line has been observed on the
basis of an input event in the course of attempting to refute the
relation), the TMBI module assigns a nil value to the test results
of all threat lines in the tree below the tested threat line, at a
tree setting step 238. Note that the hierarchy of the threat lines
is designed for the purpose of handling refutation of threat lines:
As long as a given threat line is not refuted, the test result of
that threat line will not have the effect of dismissing its
sub-tree. Dismissal occurs only when the threat line is refuted.
Pseudocode implementing this step is listed below in Appendix
E.
[0190] On the other hand, after determining at step 234 that the
test model cannot refute the relation between the current input
event and the threat line under test, and assigning the test weight
to the test result at step 236, the TMBI module repeats the test
and refutation routine described above over all child threat lines,
at a child looping step 240. This step provides test results for
all threat lines in the tree, for use in the likelihood calculation
at step 222.
[0191] FIG. 13 is a flow chart that schematically shows details of
a method for pseudo-semantic inquiry carried out by TMSI module 74
at step 216, in accordance with an embodiment of the present
invention. A pseudocode implementation of this method is presented
below in Appendix F. Initially, the TMSI module searches for maps
that are semantically similar to the event under investigation, at
a map finding step 250. Specifically, the TMSI module collects
"rogue maps," i.e., all the maps for which the computed likelihood
of the current event exceeds the above-mentioned safety threshold
(referred to as Threshold_GREEN). In general, the map with the
highest likelihood is considered to be the best candidate to serve
as the basis for building a new map.
[0192] The TMSI module next computes a hypothetical likelihood for
each of the rogue maps using initial parameters, at a likelihood
computation step 252. The hypothetical likelihood is a measure of
semantic similarity between an event and a threat map. (A formula
for the computation of the semantic similarity is given in Appendix
F.) The TMSI module then chooses the rogue maps whose hypothetical
likelihoods exceed the threat threshold (Threshold_RED) as
candidate maps, at a candidate selection step 254. According to
this criterion, the event that is the subject of the semantic
inquiry is classified as a threat on these candidate maps.
[0193] The TMSI module tests the number of candidate maps that were
found, at a candidate checking step 256. If no such maps were
found, the TMSI module updates the parameters of the hypothetical
likelihood formula, at a parameter update step 258. The TMSI module
then returns to step 252 in order to re-compute the hypothetical
likelihoods, as long as such a parameter update is still possible.
Computation step 252 is carried out using the formula:
$$L_{hyp}(i) = L(i) + \frac{\sum_{j \neq i} L(j)\; d_{i,j}^{\,d_{i,j}-1}}{\sum_{j} L(j)}\,\bigl(1 - L(i)\bigr)$$

Here $L(i)$ is the calculated likelihood of threat map i, and
$d_{i,j}$ is the distance between threat map i and threat map j.
The distance between two maps lies between zero and one, wherein
zero is the closest (semantically similar) and one is the most
distant. If no further parameter update is possible, it means that
no threat has been detected, either directly or semantically. The
TMSI module then returns control to the inquiry manager (IM), at a
termination step 262.
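The following C# sketch computes this hypothetical likelihood for
one rogue map. Because the formula above and the listing in
Appendix F render the distance kernel slightly differently, the
kernel is passed in as a parameter; all numeric values are
illustrative:

using System;

static class HypotheticalLikelihood
{
    // Step 252 (sketch): boost the likelihood of map i toward lambda by an
    // amount weighted by how semantically close the other rogue maps are.
    static double Compute(int i, double[] L, double[,] dist,
                          double lambda, Func<double, double> kernel)
    {
        double num = 0.0, den = 0.0;
        for (int j = 0; j < L.Length; j++)
        {
            if (j == i) continue;
            num += L[j] * kernel(dist[i, j]);
            den += L[j];
        }
        return L[i] + (num / den) * (lambda - L[i]);
    }

    static void Main()
    {
        double[] L = { 0.55, 0.40, 0.30 };      // rogue-map likelihoods
        double[,] d = { { 0.0, 0.1, 0.9 },
                        { 0.1, 0.0, 0.8 },
                        { 0.9, 0.8, 0.0 } };    // 0 = closest, 1 = most distant
        // A decreasing kernel, so that closer maps contribute more; the sign
        // of the exponent in Appendix F is uncertain in the listing as filed.
        Console.WriteLine(Compute(0, L, d, 1.0, x => Math.Exp(-x * x)));
    }
}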
[0194] Alternatively, if one or more candidate maps are found for
the current event at step 256, the TMSI module chooses the best
candidate map among them, at a map selection step 260. As noted
earlier, the best candidate map is the one that has the largest
value of hypothetical likelihood for the current event. The TMSI
module then returns control to the IM at step 262.
OPERATIONAL EXAMPLE
[0195] The following scenarios provide an example of the operation
of program 50 in on-line game protection. These scenarios deal
with a common type of cheating, which is classified as "Cheating by
Exploiting Lack of Secrecy" in the above-mentioned article by Yan
and Randell. This method of cheating involves exchange of packets
between peers, wherein a participant cheats by inserting, deleting
or modifying game events, commands or files that are transmitted
over the network. The example is described with reference to an
on-line game known as "GunZ--The Duel" (MAIET Entertainment,
Korea), but the characteristics of this example are equally
applicable to many other games.
[0196] As explained above, the main session types of program 50
include the game installation session, the game/user learning
session, and the protection session for the on-line game. These
sessions follow three sub-scenarios of protection: Scenario
A--complete previous knowledge exists; Scenario B--partial previous
knowledge exists; Scenario C--no previous knowledge exists.
[0197] The learning process includes the game installation and
game/user learning sessions. A game installation session may take
place during an installation or an update of the game, using a
mechanism (such as a daemon) that identifies the installation or
update, or by program activation, such as by the installation
software itself. The learning session is managed by rule base
module 52. SWL module 76 learns the installation-derived system
assets and records them, together with their threat potentials, in
TPT 90, as described above.
[0198] After installation, game/user learning sessions start with
the activation of the online game. These sessions are managed by
rule base 52 and carried out by GUL module 78. Program 50 loads and
starts close monitoring of the commands performed by the game and
by other processes that are performed on the assets learned during
the installation and recorded in TPT 90. It is possible to filter
out in advance certain types of assets that have a low likelihood
of being used for cheating (such as video files). Data collection
by sensors 104-110, which is activated by rule base module 52 and
managed by sieve module 100, includes the following (a sketch of
the implied event record appears after the list):

[0199] a. The image name (i.e., the executable file that performs a
given process).

[0200] b. The action name (such as deletion, renaming, loading,
process creation, attribute changing, etc.).

[0201] c. The parameter (file, directory, registry key, process,
etc.), for instance: "C:\Program Files\Gunz\v74\X1.dat."

[0202] d. The command sequence number, i.e., the order of the
command in relation to other commands in the context of the game.
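For concreteness, the collected items a-d can be pictured as a
record of the following shape; the C# field names are illustrative,
not taken from the actual implementation:

// A sketch of the event record implied by items a-d above.
sealed class SensorEvent
{
    public string ImageName { get; set; }     // a. executable performing the process
    public string ActionName { get; set; }    // b. e.g. "delete", "rename", "load"
    public string Parameter { get; set; }     // c. file / directory / registry key / process
    public long SequenceNumber { get; set; }  // d. order relative to other commands
}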
[0203] GUL module 78 performs a statistical analysis on the data
collected over the game sessions and stores the results in status
map (SM) 92. The statistical data collection and analysis are
performed on particular environmental variables during the run of
the game, such as network utilization, memory performance, and CPU
utilization. The GUL module also learns the user's behavior during
the run time of the game, including centralism of the game programs
(as defined above) with respect to other programs that are normally
operated in parallel with the game. The statistical analysis may
use methodologies such as histograms, averages and deviations, as
explained above.
[0204] The following are examples of the types of data collected
for a given variable (such as variable X1):

[0205] Access events belonging to clusters of events.

[0206] Average of X1 access events per cluster.

[0207] Sequence number of commands containing X1 access events.

[0208] Concurrently running programs.

In addition, the GUL module calculates and updates the
aforementioned environmental variables in relation to "normal"
environmental variables.
[0209] The GUL module divides the learned data into clusters and
deals with each cluster separately. The clustering may relate, for
example, to stages during the game, such as "Startup," "Shutdown,"
"Session Load," and "Session Unload." The number of clusters is
thus defined for each game. For instance, a game may have only one
startup cluster and one shutdown cluster, but any number of other
clusters in between.
[0210] As part of the learning process carried out by the GUL
module, new values of variables will be added and existing ones
will be updated based on the behavior of the game program. Updates
may be applied to various parts of knowledge base 66, such as
version update in GL 82, metadata 88, SM 92, TPT 90 and TM 94.
[0211] The first protection scenario (Scenario A, as mentioned
above) deals with a situation in which program 50 has complete
previous knowledge concerning the assets of the on-line game and
their utilization. In this scenario, in other words, SWL module 76
and GUL module 78 have completed creation of the appropriate
metadata and have populated SM, TPT and TM in knowledge base 66.
The TPT and TM include variables, such as variable X1 (as in the
above example), that are regarded as assets that should be
protected against foreign access during the game.
[0212] During the game, sieve module 100 transfers events from
sensors 104-110. We assume, for example, that one of the events is
an access to variable X1, which is used in the startup of the game
by a WIN32 process (and is thus listed in the "startup cluster").
Such an event triggers an investigation by inquiry manager 70,
which then invokes a test by TMBI module 72. At least one of the
threat maps in TM 94 defines a threat comprising initiation of the
WIN32 process by a process that is foreign to the game. The TMBI
module computes a reasoning score, indicating the likelihood that
the threat is real. If at least one likelihood in all the tested
threat maps is greater than the danger threshold, rule base 52
determines that a threat has occurred and takes appropriate action,
such as notifying the user of computer 22, and possibly also server
34 and other game participants.
[0213] Another protection scenario (Scenario B) deals with a
situation in which program 50 has only partial previous knowledge
concerning the assets of the on-line game and their utilization.
This sort of scenario may occur when the performance of the SWL
and/or GUL module has not been completed or when there is a lack of
appropriate metadata or entries in the SM or TPT or a sufficiently
reliable TM for the game. In this case, let us assume, for example,
that the missing information is the replacement of variable X1
(such as data file name X1) with variable X2 (also a data file,
such as "X2.dat"), wherein X2 is used for startup of the game but
is not included in the original TM or TPT.
[0214] Again, during the game, sieve module 100 transfers events
from the sensors. The events include, in this case, an access to
variable X2 by the game program during startup. This occurrence may
be repeated over a number of sessions. In one of the sessions,
another access of X2 was also identified, several minutes into the
session, but in this case the accessing process was a WIN32 process
foreign to the game.
[0215] From session to session, GUL module 78 records the access to
variable X2 and starts creating a norm for X2 as part of the
learning process. At a certain point the GUL module determines that
the metric distance between X1 and X2 (as measured by the
differences between their locations, names, attributes, process
hierarchy, etc.) is small enough to "adopt" X2 as a legitimate
asset of the game. If both X1 and X2 are metrically close to each
other, and if X1 is defined as belonging to Cluster A (the cluster
of the game startup process), then the GUL module will attribute X2
to Cluster A (with a certain probability). TMU module 80 will also
expand all the relevant threat maps of X1 to include X2 as
well.
[0216] Prior to the above learning process, access to X2 would not
have been classified as a threat. Subsequently, however, access to
X2 will trigger an investigation by inquiry manager 70. The
recorded accesses to variable X2, in place of variable X1, belong
to the startup of the game. If X2 is now accessed by a foreign
WIN32 process while the game is in progress, after X2 has been
classified as an asset of the game, the inquiry manager will begin
an investigation. If X2 replaced X1 identically, then the situation is
the same as in Scenario A. On the other hand, if some of the
attributes of X2 differ from those of X1, but the related threat
map is still partly fulfilled, then TMSI module 74 will complete
the threat map for X2 by a pseudo-semantic inquiry.
[0217] In this inquiry, it is first assumed that there is more than
one threat map in TM 94 or alternatively, that the map describing
the threat to X1 has several variants, such as additional files in
the same "hit zone" or additional actions beyond the one defining
the threat. For instance, the threat map may define a threat as
deleting the file, changing its name, its contents or its security
definitions. Also, additional maps in TM 94 may indicate that a
change in the registry or file security attributes would constitute
a threat from the same threat space. When there is a partial match
of an event to the map describing the attack, but at the same time
certain (generally lesser) matches to other maps in the same threat
space, TMSI module 74 suggests a hypothetical addition to each map,
based on the density of the threat maps around it.
[0218] If no threat has been identified after invoking TMBI module
72, TMSI module 74 goes through map by map, adding a hypothetical
value to each based on the spread of the neighboring maps, as given
by the distances between threat maps. (Each threat map in knowledge
base 66 has a well-defined distance from all other threat maps,
determined by a known quantification formula.) The denser the
neighboring map spread, the higher will be the likelihood
associated with the hypothetical addition to the threat maps.
[0219] TMBI module 72 may declare a threat when the likelihood
value of at least one threat map has crossed a certain threshold.
Alternatively, program 50 may be configured so that a threat will
be declared only upon satisfaction of a more complex condition,
which takes into account the entirety of the new array of
hypothetical likelihoods. This sort of condition can be defined
heuristically. For example, assume an event has "passed" the filter
of TPT 90 and that there are several maps {M01, . . . , Mn} in
knowledge base 66. For each threat map, TMBI module 72 calculates
the likelihood that a given event constitutes a threat. Even if
none of the individual threat map likelihoods has passed the
applicable threshold, it may be that some of the likelihoods have
crossed the safety threshold, meaning that the possibility of a
threat due to the event in question cannot be entirely ruled
out.
[0220] For example, two or more threat maps that resemble one
another may belong to the same threat space. The pseudo-semantic
distance between these maps (which does not necessarily adhere to
the definition of Cartesian distance) may be small enough that
they, together with other, similar maps, can be considered as
belonging to the same threat space. In such
"gray" situations, inquiry manager module 70 may activate TMSI
module 74 to allow for threat mutations and generation of adaptive
solutions to developing threats.
[0221] The third protection scenario (Scenario C) deals with a
situation in which program 50 has no previous knowledge whatsoever
concerning the assets of the online game and their utilization. In
this case, it is assumed that there has been no valid run of SWL
module 76 or GUL module 78, and there are no usable entries in
metadata 88, SM 92, TPT 90 or TM 94. Again, GUL module 78 records
actions by the game program and the use of assets during the game.
After several sessions, the GUL module creates normal statistical
data for each cluster, including parameters populating the modules
of knowledge base 66.
[0222] At a certain point, program 50 may identify anomalies of
types that have been predefined within the existing threat space.
Examples of such anomalies could include three consecutive
deviations from average CPU utilization, each for 30 seconds or
more, or network utilization at its maximum value for 90 seconds
straight, possibly with 30 seconds of deviant CPU utilization
occurring within the 90 seconds. TMBI module 72 then draws upon all
the maps that belong to this threat space to perform an
investigation of the events that took place around the times of the
anomalies. The reasoning and learning functions are performed as in
Scenario B. Since the decision thresholds are sensitive to the
overall distance to the norm of the variables stored in SM 92, the
same investigation by inquiry manager 70 may give different
decisions under different environmental conditions for the same
event.
[0223] Although the embodiments described hereinabove relate
specifically to cheating in on-line games, the principles of the
present invention may similarly be applied in prevention of other
types of cheating. For example, the techniques described above may
be used, mutatis mutandis, in detection of click fraud, in which a
person, automated script, or computer program imitates a legitimate
user of a Web browser by clicking on a link on a Web page for the
purpose of generating a charge per click, without having actual
interest in the target of the link. For this purpose, a computer
learns normal and abnormal patterns of clicks and generates an
alert upon detecting a large volume of abnormal behavior.
[0224] It will be appreciated that the embodiments described above
are cited by way of example, and that the present invention is not
limited to what has been particularly shown and described
hereinabove. Rather, the scope of the present invention includes
both combinations and subcombinations of the various features
described hereinabove, as well as variations and modifications
thereof which would occur to persons skilled in the art upon
reading the foregoing description and which are not disclosed in
the prior art.
APPENDIX A
C# Subroutine
[0225] The following code implements ranking of assets within a
type, at step 140 (FIG. 4):
foreach (DataRow row in _DBSet.Tables[TableName].Rows)
{
    if (!(row["name"] is DBNull))
    {
        if (row["CountWithout"] is DBNull)
            row["CountWithout"] = 0;
        row["Fraction"] = (double)(int)row["CountWithout"] /
            Math.Pow((double)(int)row["CountWith"], power);
        temp = Math.Exp(FractionEffect * (double)(float)row["Fraction"]);
        row["Rank"] = Math.Log((double)(int)row["CountWith"]) * temp;
    }
}
// Sort the assets by descending rank and write them out.
DataView View = _DBSet.Tables[TableName].DefaultView;
View.Sort = "Rank desc";
foreach (DataRowView tempRowView in View)
{
    for (j = 0; j < _DBSet.Tables[TableName].Columns.Count; j++)
    {
        if (!(tempRowView.Row[j] is DBNull))
            Wr.Write("\t " + tempRowView.Row[j].ToString());
        else
            Wr.Write("\t ");
    }
    Wr.WriteLine();
}
APPENDIX B
C# Subroutine
[0226] The following code implements ranking of assets of a special
type, at step 142 (FIG. 4). Each special asset type corresponds to
a type of assets that was handled at step 140.
TABLE-US-00003 if (TableName == "FileNames") { string Path1; string
FolderName; string RootDirectory; m = 0; foreach (DataRowView
tempRowView in View) { //Wr.WriteLine( ); if
(!(tempRowView.Row["name"] is DBNull) &&
((string)tempRowView.Row["name"] != "")) { Path1 =
(string)tempRowView.Row["name"]; FolderName =
Path.GetDirectoryName(Path1); RootDirectory =
Path.GetPathRoot(Path1); j = 0; while ((FolderName != null)
&& (FolderName != RootDirectory)) { for (l = 0; l < m;
l++) { for (k = 0; k < 10; k++) { if (UniqueFolder[l, k] ==
FolderName) { Occ_Count[l, k]++; goto next_folder; } } }
UniqueFolder[m, j] = FolderName; Occ_Count[m, j] = 1; next_folder:
Path1 = FolderName; FolderName = Path.GetDirectoryName(Path1); j++;
} } m++; } m = 0; while (UniqueFolder[m, 9] != null) {
Wr.WriteLine( ); tabs = 0; for (j = 9; j >= 0; --j) if
(UniqueFolder[m, j] != null) Wr.Write(UniqueFolder[m, j] + "\t");
else tabs++; for (j = 1; j <= tabs; j++) Wr.Write("\t");
Wr.Write ("\t"); for (j = 9; j >= 0; --j) if (Occ_Count [m, j]
> 0) Wr.Write(Occ_Count[m, j] + "\t"); m++; } Wr.WriteLine( ); }
m = 0; foreach (DataRowView tempRowView in View) { for (j = 0; j
< 10; j++) { if (j == 0) Dir_Rank[m, j] = (1.0F -
(float)tempRowView["Fraction"]) *
(float)(int)tempRowView["CountWith"] * (float)(int)(Occ_Count[m,
j]); else Dir_Rank[m, j] = (1.0F - (float)tempRowView["Fraction"])
* (float)(int)tempRowView["CountWith"] * (float)(int)(Occ_Count[m,
j] - Occ_Count[m, j - 1]); if (Dir_Rank[m, j] > 0.0F) {
Wr.Write(UniqueFolder[m, j] + "\t" + Dir_Rank[m, j]); Wr.WriteLine(
); } } m++; }
APPENDIX C
Pseudo-Code for TMBI
[0227] The following is a pseudocode listing that implements the
method shown in steps 220-224 (FIG. 11):
/********* Start TMBI **********/
For (x = each ThreatLine in a given ThreatMap) {
    if (higher_threat_element equals Null) {
        test_result = TestRow(x, true);
    } /* endif */
} /* endfor */

SumOfResults = 0.00
SumOfWeights = 0.00
Likelihood = 0.00
For (x = each ThreatLine) {
    if (x.test_result is Null) {
        report error;
        exit( );
    }
    else if (x.test_model != "NOT_EQUAL") {
        SumOfWeights = SumOfWeights + x.test_weight;
    }
    if (x.test_result == 0.0) {
        append x to LoL;  /* LoL is an array of objects to hold the List of Lacks */
    }
    else if (x.test_result > 0.0) {
        if (x.test_model == "NOT_EQUAL") {
            mark "x is refuting"
        }
        else {
            SumOfResults = SumOfResults + x.test_result;
        }
    }
} /* endfor x */

if (SumOfWeights > 0.0) {
    Likelihood = SumOfResults / SumOfWeights;
}
else {
    Write "Internal Error: Sum of Weights equals zero";
    Exit("Internal Error");
}
/********** Finish TMBI ***********/
APPENDIX D
Pseudocode for Threat Line Evaluation
[0228] The following is a pseudocode implementation of the process
depicted in steps 230-232:
For (each element of the Input table) {  /* start for-loop 1 */
    if (Comparison1(input_observed_param, observed_param) &&
        Comparison2(input_value, test_value)) {
        test_result = test_weight;
        if (test_result == 0) {
            test_result = -2.0
        }
        break;
    }
}  /* end for-loop 1 */
APPENDIX E
Pseudocode for Zeroing Test Results
[0229] The following pseudocode implements the process depicted in
step 238:
For (each threat_line in current_row.lower_threat_elements) {  /* start for-loop 1 */
    if (threat_line.test_result > 0 &&
        threat_line.test_model == "NOT_EQUAL") {
        CollapseTree(next_threat_line);
    }
    else {
        threat_line.test_result = TestRow(threat_line, TRUE);
    }
}  /* end for-loop 1 */
APPENDIX F
Pseudo-Semantic Inquiry
[0230] The following pseudocode implements the method shown in FIG.
13:
/* STEP 250 */
lambda = [initial parameter depending on the project]
Iter = 0
Do While (Iter < MaximalNumberOfIterations) {
    For i = 1, ..., number-of-maps {            /* start loop 1 */
        If likelihood[i] > threshold_GREEN {
            /* STEP 252 */
            likelihyp[i] = likelihood[i]
                + ( SUM over k != i of likelihood[k] * exp(dist^2(i, k)) )
                / ( SUM over k != i of likelihood[k] )
                * ( lambda - likelihood[i] )
        }
        /* STEP 254 */
        If likelihyp[i] >= threshold_RED {
            /* STEP 256 */
            Append {i, likelihyp[i]} to {cand_list}
        }
    }                                           /* end loop 1 */
    /* STEP 258 */
    If length(cand_list) == 0 {
        lambda = [iteration formula depending on the project]
        Iter++
    }
    Else {
        /* STEP 260 */
        i_max = -1
        MaximalLikelihood = 0
        For i = 1, ..., length(cand_list) {     /* start loop 2 */
            If (cand_list.likelihyp[i] > MaximalLikelihood) {
                i_max = i
                MaximalLikelihood = cand_list.likelihyp[i]
            }
        }                                       /* end loop 2 */
        Store {i_max, MaximalLikelihood} to pass to RuleBase
            for building a new ThreatMap.
        break Do-loop
    }
}  /* end do-loop */
/* STEP 262 */
* * * * *