U.S. patent application number 10/998654, for a method for executing trusted-path commands, was filed on 2004-11-30 and published by the patent office on 2005-05-05.
The invention is credited to Casey, Thomas Andrew Jr., Gasser, Morrie, Hall, Judith Shelhorse, Kahn, Clifford Earl, Kendall, Leslie Richard, Lipner, Steven B., Mason, Andrew Halstead, Sawyer, Paul Douglas, and Zurko, Mary Ellen.
United States Patent Application 20050097354
Kind Code: A1
Inventors: Zurko, Mary Ellen; et al.
Publication Date: May 5, 2005
Application Number: 10/998654
Family ID: 23904915
Method for executing trusted-path commands
Abstract
A method for executing trusted commands, in which a trusted
command is first received from a user at a user terminal and parsed
by untrusted code; then passed to a trusted computing base for
execution. The trusted computing base displays some indication of
what is to be done back to the user for confirmation. Confirmation
of the commands prevents unauthorized modification of the commands
and increases system confidence. A randomly (or pseudo-randomly)
generated process identifier is employed to verify the existence of
a trusted path.
Inventors: Zurko, Mary Ellen (Groton, MA); Casey, Thomas Andrew Jr. (Westwood, MA); Gasser, Morrie (Saugus, MA); Hall, Judith Shelhorse (Sudbury, MA); Kahn, Clifford Earl (Framingham, MA); Mason, Andrew Halstead (Hollis, NH); Sawyer, Paul Douglas (Billerica, MA); Kendall, Leslie Richard (Glouchester, MA); Lipner, Steven B. (Wellesley, MA)
Correspondence Address:
IP Administration
HEWLETT-PACKARD COMPANY
Legal Department, M/S 35
P.O. Box 272400
Fort Collins, CO 80527-2400, US
Family ID: 23904915
Appl. No.: 10/998654
Filed: November 30, 2004
Related U.S. Patent Documents

Application Number | Filing Date  | Patent Number
10998654           | Nov 30, 2004 |
09514978           | Feb 29, 2000 | 6871283
07479666           | Feb 13, 1990 | 6507909
Current U.S. Class: 726/27
Current CPC Class: G06F 2211/009 20130101; G06F 21/53 20130101
Class at Publication: 713/200
International Class: H04L 009/00
Claims
1-20. (canceled)
21. A computing environment configured to process a trusted
command, comprising: an untrusted environment to parse a trusted
command in order to establish a parsed trusted command, said
trusted command only being executable in a trusted environment; and
a trusted environment to receive the parsed trusted command from
the untrusted environment and to communicate a representation of
the parsed trusted command.
22. The computing environment of claim 21, wherein the trusted
environment executes the trusted command if the trusted environment
detects confirmation of the trusted command.
23. The computing environment of claim 21, wherein the
representation of the trusted command is communicated through a
trusted path.
24. The computing environment of claim 23, wherein the trusted path
is between a user and the trusted environment.
25. The computing environment of claim 21, further comprising: a
user interface to communicate with the untrusted environment and
the trusted environment.
26. A method of processing a trusted command, said trusted command
only being executable in a trusted mode, comprising the steps of:
parsing a trusted command in an untrusted mode of a system; then
establishing a trusted mode of the system; and then communicating a
representation of the trusted command in the trusted mode.
27. The method of claim 26, further comprising the step of:
executing the trusted command in the trusted mode if confirmation
of the trusted command is detected.
28. The method of claim 26, wherein the communicating step
comprises the step of: displaying a representation of the trusted
command.
29. A method of processing a trusted command, comprising the steps
of: interpreting a trusted command in an untrusted mode;
communicating a representation of the trusted command in a trusted
mode; verifying the trusted command in the trusted mode after the
communicating step; and executing the trusted command in the
trusted mode after the verifying step.
30. The method of claim 29, wherein the verifying step comprises
the step of: requesting confirmation of the trusted command in the
trusted mode.
31. The method of claim 29, further comprising the step of: using
the trusted command in the untrusted mode.
32. The method of claim 29, further comprising the step of:
transitioning from the untrusted mode to the trusted mode.
33. The method of claim 29, further comprising the step of:
transitioning from the trusted mode to the untrusted mode.
34. The method of claim 33, further comprising the step of: issuing
a message to a user to indicate a transition to the untrusted mode
before the transitioning step.
35. The method of claim 29, further comprising the step of:
detecting if a command is a trusted command in an untrusted mode.
Description
1. BACKGROUND OF THE INVENTION
[0001] 1.1 Introduction
[0002] This invention relates to a method for improving security in
a computer system. More specifically, this invention concerns a
method for implementing trusted commands through both trusted and
untrusted code.
[0003] 1.2 Background
[0004] The proliferation of computers has resulted in the need for
reliable computer security systems. The need to protect certain
information is great in areas dealing with national security, for
example. Security measures are required in other fields, including
areas which utilize financial, medical and personal information.
Many computerized information systems therefore include internal
security measures which provide the users with different levels of
access to the computerized information, and which attempt to
provide at least a degree of protection of the information from
undesirable access.
[0005] 1.3 Security Criteria: Reference Validation Systems
[0006] In response to the need for secure computer systems, the
Department of Defense has issued a publication titled Department of
Defense Trusted Computer System Evaluation Criteria (reference No.
DOD 5200.28-STD). This publication is sometimes referred to as the
"Orange Book," and is available from the Department of Defense. The
Orange Book describes system security objectives and evaluation
criteria for secure computer systems.
[0007] A "secure" computer system generally includes some type of
"reference validation" system. These reference validation systems
(known as reference monitors) are responsible for enforcing the
security rules (security policy) for a given computer system.
[0008] Reference monitors mediate all access to "objects" by
"subjects". Objects are passive entities that contain or receive
information. Access to an object implies access to the information
that it contains. Subjects are active entities, generally in the
form of a person, process or device that cause information to flow
among objects or change the system state. A subject may ordinarily
reference its own, subject-internal information without the
involvement of the reference monitor. Reference monitors
controlling such access to objects by subjects are known in the
art, and may utilize a security kernel approach.
[0009] Proper implementation of a reference monitor calls for
adherence to three principles:
[0010] (1) completeness, in that all access by subjects to objects
or other subjects must involve the monitor;
[0011] (2) isolation, in that the monitor must be protected from
tampering; and
[0012] (3) verifiability, in that correspondence must be shown
between the security policy and the actual implementation of the
monitor.
[0013] As discussed, every reference to information or change of
authorization should go through the reference monitor. Thus, all
commands issued by a user or other subject are monitored by the
reference monitor. This approach is particularly useful in
multiuser computer environments.
[0014] The totality of the protection mechanisms within a computer
system--including hardware, software, and firmware, the combination
of which is responsible for enforcing a security policy--is
commonly known as a "trusted computing base" (TCB). If the trusted
software is designed to be as simple as possible, for the sake of
verifying the reference monitor, then the trusted software is known
as a "security kernel".
[0015] Generally, TCBs attempt to meet the control objectives set
out in the Orange Book. Compliance with these objectives inspires
user confidence, and increases the overall desirability of a
computer system. These objectives deal with:
[0016] (1) security policy;
[0017] (2) accountability; and
[0018] (3) assurance.
[0019] The security policy objective entails enforcement by the TCB
of the desired security rules. These security rules are designed to
limit the access to and dissemination of information in a precisely
defined manner.
[0020] Security policies may include provisions for the enforcement
of both mandatory and discretionary access control rules. Mandatory
access control rules control access based directly on comparisons
of the user's security level and the sensitivity level of the
information being sought. Discretionary access rules control and
limit access to identified individuals who have been determined to
have a need-to-know.
[0021] These access control rules call for associating with each
user identification code a statement indicating the user's access
rights. This statement often includes information representing the
user's security level (for mandatory control purposes), and
membership in groups (for discretionary control purposes).
[0022] The accountability objective calls for providing each user
with an individual user identification code (often called a "user
name") and for the TCB to be able to recognize the code and ensure
that it is being used by its proper user. This may be done by
checking the user's password. This process of ensuring the user's identity is
known as "authentication".
[0023] In addition, the accountability requirement calls for the
existence of auditing capabilities. Such capabilities allow for the
auditing of actions which can cause access to, generation of, or
release of classified or sensitive information.
[0024] 1.4 Assurance Objectives and "Trusted" Systems
[0025] The assurance objective is especially important in the
present context. That objective is concerned with taking steps to
ensure that the security policy is correctly implemented and that
the TCB accurately mediates and enforces the intent of that policy.
Steps may be taken to ensure that each portion of the TCB is
assured. To accomplish this objective, two types of assurance are
needed.
[0026] The first type of assurance is life-cycle assurance. This
type of assurance refers to steps taken to ensure that the computer
system is designed, developed, and maintained using formalized and
rigorous control standards.
[0027] The second type of assurance is operational assurance.
Operational assurance deals with the features and system
architecture used to ensure that the security policy is
uncircumventably enforced during system operation. All of the
software (sometimes referred to informally in the art as "code") in
the TCB is generally analyzed to determine the assurance level of
the system.
[0028] As the amount of code in the TCB increases, it becomes more
difficult to ensure that the TCB accurately enforces the security
policy. Because of this, it is desirable to minimize the amount of
trusted code, and thus the complexity of the TCB.
[0029] A TCB is usually operated with a substantial amount of
software, such as text editors and other applications, operating
within the security policy of the TCB. Generally, this untrusted
software asks the TCB for access to objects when the user or the
untrusted software requires them. Thus, the majority of a user's
requests to the TCB, and the majority of the information that a
user obtains from the TCB, are handled through the agency of
untrusted software.
[0030] This untrusted software, however, is by nature in danger of
compromise and vulnerable to bugs. For some types of requests and
displays, malicious or malfunctioning untrusted software could
compromise the enforcement of the security policy. Generally, TCBs
cannot distinguish between requests faithfully made by the
untrusted software on command from a user and either requests made
by the untrusted software at its own initiative or requests that
misrepresent the user's actual command. For example, if commands
issued by an authorized user to change certain users' security
levels were made through the agency of untrusted software, it would
be possible for malicious or malfunctioning untrusted software to
inappropriately raise the security level of a user. Such
inappropriate raising of a security level could result in the
disclosure of sensitive information.
[0031] Furthermore, TCBs generally cannot ensure that displays made
by untrusted software are faithful. This poses problems in that if
displays of audit records were made through the use of untrusted
software it would be possible for malicious untrusted software to
misrepresent these audit records to hide suspicious activities.
[0032] To overcome these problems, prior-art systems have developed
a concept known as a "trusted path." A trusted path is a mechanism
by which the user can communicate directly with the TCB. A trusted
path may only be activated by the user or the TCB and cannot be
imitated by untrusted code. Such a trusted path is a positive
TCB-to-user connection that bypasses all untrusted software. The
Orange Book requires a trusted path in all systems rated B2 or
above.
[0033] A "trusted command" (also known as a trusted-path command)
is a command which requires a trusted path between the user and the
TCB for execution. The specific security policy of a computer
system will determine which commands are defined as trusted
commands. For example, commands relating to changes of the security
levels of users would be trusted commands.
[0034] 1.5 Parsers as Increasers of System Complexity
[0035] Because of perceived performance problems, prior-art
computer systems often included code to implement certain functions
in the TCB. One such function comprises a portion of code known as
the "parser."
[0036] The parser performs the basic interpretation of user
commands by translating a human-readable representation of a user
command (e.g., a command typed by a user at a keyboard) into a
machine-readable representation known as the binary representation,
or the parsed command. A user command may consist of a group of
words, typed by the user, specifying some action to be taken. The
user command may also specify one or more variations or options of
that action.
[0037] For the convenience of the user, most parsers allow these
option words to be typed in any order without changing the meaning
of the user command. Also, most parsers allow the user to type the
words in either full or abbreviated form. In some instances,
several possible abbreviations exist. Further, in most parsers, the
user may vary the spacing between the words without changing the
meaning of the command. Some parsers also check user commands for
proper syntax, and report errors to the user prior to execution of
the command. Other parsers prompt the user for missing information,
avoiding the need to completely retype an incomplete command.
[0038] Thus, several different inputted human-readable
representations may be associated with each unique parsed command.
It is the job of the parser to translate accurately these inputted
representations of user commands (which may vary with each user)
into specific parsed command representations. Due to the complex
nature of parsing, large amounts of computer code are generally
associated with this activity.
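The flexibility just described can be made concrete with a purely illustrative sketch, not part of the claimed subject matter; the verbs, option words, and function names below are hypothetical, since the application does not specify an implementation. The point is that option words in any order, unambiguous abbreviations, and variable spacing all reduce to one canonical parsed representation:

```python
# Hypothetical sketch of a permissive command parser: option words may
# appear in any order, abbreviated, and with arbitrary spacing, yet
# always map to a single canonical parsed representation.
VERBS = {"delete", "display"}
OPTIONS = {"verbose", "recursive", "force"}

def expand(word, vocabulary):
    """Expand an abbreviation to its full form if it is unambiguous."""
    matches = [v for v in vocabulary if v.startswith(word.lower())]
    if len(matches) != 1:
        raise ValueError(f"ambiguous or unknown word: {word!r}")
    return matches[0]

def parse(command_line):
    words = command_line.split()      # collapses variable spacing
    if not words:
        raise ValueError("empty command")
    verb = expand(words[0], VERBS)
    # frozenset: the option words are order-independent.
    options = frozenset(expand(w, OPTIONS) for w in words[1:])
    return (verb, options)            # the canonical parsed representation

# Differently typed commands yield the same parsed representation:
assert parse("DELETE  force verbose") == parse("del verb force")
```

Even this toy version shows why parsing accumulates code: abbreviation handling, error reporting, and syntax checking all live on the untrusted side.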
[0039] Prior-art trusted software systems have included the parser
for trusted commands, and thus the associated code, within the TCB.
In these systems every trusted command was parsed and executed
exclusively by trusted code. The inclusion of the parsing code in
the TCB was regarded as necessary for proper system operation.
[0040] As discussed above, however, the ease and degree of assuring
a system is roughly inversely proportional to the complexity of the
system's TCB. Thus, the inclusion of the parser in the TCB results
in more complex assurance analysis and a corresponding decrease in
system confidence. Prior art devices (since they needed to be small
and simple to support assurance) have thereby inherently
constrained the user-friendliness of computer systems.
2. SUMMARY OF THE INVENTION
[0041] The present invention provides a number of advantages over
the prior art. By reducing the complexity of the TCB, the amount of
trusted code that must be verified or assured is reduced. In
addition, the general usability of the system is increased, because
the improved method of processing trusted commands allows for the
implementation of added computing functions at little or no
additional cost in TCB assurance.
[0042] The present invention includes an improved method for the
execution of trusted commands. In this improved method, the
required steps for processing a trusted command are split between
trusted and untrusted code. In this manner the amount of trusted
code in the TCB may be significantly reduced.
[0043] As discussed above, the parsing of a command can be
comparatively difficult, and often requires large amounts of
complex code. However, it is relatively simple to translate the
parsed representation of a command into a standard human-readable
representation for display to the user. This is in part because a
single, standard human-readable display representation may be
associated with each parsed representation. The present invention
takes advantage of this fact by parsing the trusted commands in
untrusted code prior to execution in the TCB.
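The asymmetry relied on here can be sketched as follows (an illustrative fragment with hypothetical names, not the application's actual code): because each parsed representation has exactly one standard display form, the trusted-side translation is a simple lookup, in contrast to the many-to-one parsing it reverses.

```python
# Hypothetical sketch: translating a parsed command back into its single,
# standard human-readable form is a table lookup plus formatting.
DISPLAY_VERBS = {"set_seclevel": "SET SECURITY LEVEL"}

def unparse(parsed):
    verb, args = parsed
    # One canonical display string per parsed form; no variants to handle.
    return DISPLAY_VERBS[verb] + "".join(f" {a}" for a in args)

assert unparse(("set_seclevel", ["SMITH", "SECRET"])) == \
    "SET SECURITY LEVEL SMITH SECRET"
```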
[0044] In general, the present invention includes a computer system
which includes a TCB operating as a reference monitor. Included in
the TCB is the security-relevant code necessary for the actual
execution of a trusted command.
[0045] Associated with the TCB are one or more untrusted subjects
(e.g. processes). Each such subject has particular access rights,
which may include a security level. These untrusted subjects
communicate only with the involvement of the TCB. The TCB allows
communication only as permitted by the system's security policy.
Each untrusted subject is substantially independent.
[0046] In each of these untrusted subjects, untrusted software is
generally run. This untrusted software may include an untrusted
operating system (UOS), and generally includes applications such as
text editors. In the present invention, the untrusted code includes
the non-security relevant functions necessary for the execution of
a trusted command.
[0047] Attached to the computer system are several user terminals.
These user terminals allow the user to exchange information with
the computer system. All terminal activity is seen first by the
TCB. A secure-attention key is dedicated for use on each terminal
attached to the computer system. The TCB is designed to recognize
and trap this key signal. Activation of this key by the user
establishes a trusted path between the user terminal and the TCB.
Alternate methods for establishing a trusted path may be
utilized.
[0048] The TCB is generally provided with a limited number of
"direct" user commands (direct commands). Because of this, the
majority of the user's activities will take place in the untrusted
operating system.
[0049] While operating in the UOS, the user may attempt to execute
a trusted command. When one of these commands is issued by the
user, the untrusted code in the UOS checks the syntax of the
command and may prompt for missing parameters; then the untrusted
parser converts the command into a parsed representation that will
be referred to here for convenience as the "binary representation."
In some instances, command procedures and application-issued
trusted commands will also be parsed and translated into a binary
representation.
[0050] This binary representation is then passed from the UOS to
the TCB. The TCB initially associates the binary representation
with a physical terminal, and copies the binary representation into
a memory buffer assigned to that terminal.
[0051] Once this has been completed, the untrusted code in the UOS
may prompt the user to activate the secure-attention key. If this
key is activated, a trusted path will be established between the
user terminal and the TCB. Alternate methods may be employed to
initiate this trusted path. A randomly generated process ID, which
may be generated at login, may be utilized to prevent untrusted
code from simulating a trusted path.
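One way the randomly generated process ID could work is sketched below (an illustrative fragment; the function names are hypothetical and the application does not prescribe this mechanism): the TCB mints an unpredictable identifier at login, untrusted code never learns it, and so untrusted code cannot present the correct value when simulating a trusted path.

```python
import secrets

# Hypothetical sketch: an unguessable per-login identifier, known only to
# the TCB and the user, distinguishes a genuine trusted path from one
# simulated by untrusted code.
def login():
    """TCB side: mint an unpredictable per-session identifier."""
    return secrets.token_hex(8)

def is_genuine_trusted_path(claimed_id, recorded_id):
    """TCB side: constant-time comparison of the presented identifier."""
    return secrets.compare_digest(claimed_id, recorded_id)

session_id = login()
assert is_genuine_trusted_path(session_id, session_id)
```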
[0052] Subsequent activity by the TCB depends on the type of
command that is represented by the binary representation. If the
command is one that will not modify any information, i.e., a
request to view information, the TCB verifies that the user has the
necessary access rights, and attempts to execute the binary
representation. A human-readable standard representation of the
binary command received by the TCB is displayed along with the
information.
[0053] In most instances, if the command requested by the user
requests modification of information, the TCB displays for
confirmation a standard human-readable form of the command, along
with an indication of what the result of command execution will be.
The user is then requested to confirm (by typing a response via the
trusted path) that the displayed command was received correctly and
should be executed.
[0054] This confirmation allows the user to ensure that untrusted
code has not performed an unauthorized modification of the user's
original command. Also, confirmation allows for the detection by
the user of any errors that may have occurred during the parsing
process. In situations where the binary representation represents a
command procedure or an application-issued trusted command, the
confirmation ensures that the command proposed by the untrusted
software is acceptable to the user.
[0055] If the command is confirmed by the user, the TCB checks to
ensure that the user has the necessary access rights to execute the
command and, if so, attempts to execute the command.
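The confirm-then-execute flow for a modifying command could be sketched as follows (illustrative only; the parameter names and the "YES" reply convention are hypothetical stand-ins for whatever trusted-path dialogue an implementation chooses):

```python
# Hypothetical sketch of trusted-code handling of a modifying command:
# display the standard form and its effect, require confirmation over the
# trusted path, check access rights, and only then execute.
def process_modifying_command(binary_cmd, display, read_reply,
                              has_access, execute):
    display(f"About to execute: {binary_cmd['text']}")   # standard form
    display(f"Effect: {binary_cmd['effect']}")
    if read_reply().strip().upper() != "YES":            # user confirmation
        return "not confirmed"
    if not has_access(binary_cmd):                       # access-rights check
        return "access denied"
    return execute(binary_cmd)

shown = []
result = process_modifying_command(
    {"text": "SET SECURITY LEVEL SMITH SECRET",
     "effect": "user SMITH gains level SECRET"},
    display=shown.append,
    read_reply=lambda: "yes",
    has_access=lambda c: True,
    execute=lambda c: "executed")
assert result == "executed" and len(shown) == 2
```

Note that the confirmation gate runs before the access check, so an unconfirmed command never reaches execution regardless of the user's rights.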
[0056] A distinct advantage of the present invention arises from
the fact that the execution of a trusted command is split between
trusted code and untrusted code. The actual parsing of the command
occurs in untrusted code. The trusted code included in the TCB
checks the parsed command for correctness and displays it to the
user in human readable form. This checking and display of a parsed
command requires substantially less code than is required to parse
a command.
[0057] Thus, since the command parser is included in the untrusted
code, the amount of trusted code in the TCB is minimized. This
lowers the difficulty of trusted system assurance, and increases
the amount of confidence that may be placed in a system.
[0058] It is a further advantage of this invention that the user
interface is separate from the underlying functions performed by
the TCB. This allows for implementation of alternate UOS-to-user
interfaces without major modification of the TCB. Further, the
amount of time and effort to develop such a command interface is
reduced. A still further advantage is that more usability features
may be provided without substantially increasing the complexity of
the TCB. Examples of such usability features are command-line
editing and recall.
3. BRIEF DESCRIPTION OF THE DRAWINGS
[0059] FIG. 1 illustrates the various levels within a computer
system;
[0060] FIG. 2 illustrates various layers within the trusted
computing base;
[0061] FIG. 3 illustrates a trusted path;
[0062] FIGS. 4A and 4B are flow diagrams representing a method for
processing trusted commands. Differences in box representation in
these drawings are explained below.
[0063] FIG. 5 illustrates one method of implementing the
invention;
[0064] FIG. 6 is a flow diagram representing a method for verifying
a trusted path;
[0065] FIG. 7 is a state transition diagram of a further embodiment
utilizing various command states.
4. DESCRIPTION OF SPECIFIC EMBODIMENTS
[0066] A specific embodiment of the present invention is described
with respect to a computer which supports various user processes
and may operate in a multiuser environment. As an illustration, an
implementation in connection with a VAX model computer, available
from Digital Equipment Corporation (DEC), is briefly described. It
should be understood that the present invention may be readily
applied to various types of computer systems including PC networks,
stand-alone multiuser computer systems, or single user computer
systems, to name a few examples.
[0067] 4.1 Overview of Software Architecture
[0068] FIG. 1 illustrates a preferred computing environment which
is generally designated by the reference numeral 2. A trusted
computing base (TCB) 10 resides at the center of the computer
system. The TCB 10 acts as a reference monitor in that access to
information is mediated by the TCB 10.
[0069] In a general embodiment of the invention, the TCB 10 may be
a secure kernel residing under the general operating system of a
computer system. In one embodiment, the TCB 10 comprises trusted
code residing underneath a VMS or Ultrix system.
[0070] The TCB 10 enforces a security policy which identifies
permissible modes of access between users or computer processes
(subjects) and files, memory segments or other objects. Access to
stored information is mediated by the TCB 10, so that the TCB 10
may determine whether a particular user or untrusted subject can
access particular information. In order for the TCB 10 to fulfill
its protective function, it must take into account the three
requirements discussed above: (1) completeness, (2) isolation, and
(3) verifiability.
[0071] A more complete understanding of the TCB 10 may be obtained
by reference to FIG. 2. As is illustrated, the TCB 10 may comprise
several distinct layers. Each layer of the TCB 10 comprises trusted
code. This trusted status is indicated by the security boundary 11.
In one embodiment, layers of particular interest are the layers to
be referred to as: secure server (SSVR) 12, Command Conveyor (CC)
14, and VTerm (VT) 16.
[0072] The TCB 10 includes the trusted code necessary to ensure
compliance with the Orange Book requirements. In alternate
embodiments, the trusted code may ensure compliance with different
systems of security requirements. The TCB 10 also includes the code
which actually executes the requested trusted command, as discussed
below.
[0073] Each untrusted subject is generally supported by an
untrusted process. This process may spend time in one of two
"modes." The first mode, in which the process usually runs, is
known as a "user mode." This mode is a restricted hardware state
such as the VAX user mode that is known in the art. This process
may also spend time executing trusted code in an all-powerful
hardware state. This state is represented by the VAX "kernel mode",
which is known in the art. This state may be triggered by a request
by the untrusted subject of the security kernel. Such a request may
be made in a conventional manner and is commonly called a "kernel
call." This mode may also be triggered by asynchronous events (e.g.
the completion of an Input/Output request previously made by the
untrusted subject).
[0074] In one embodiment, the TCB 10 also has one process
associated with each terminal that communicates with the user over
the trusted path. This process is known in the art as an "execution
thread." In this embodiment, each execution thread shares its
address space with other TCB execution threads. An advantage of
such sharing is economy of machine resources. Full-blown processes,
each with its own address space, would require far more machine
resources than if such address space sharing is employed.
[0075] It is the CC 14 which provides the interprocess
communication for passing parsed commands between the processes
that support the untrusted subjects, and the processes within the
TCB 10 that support the trusted path.
[0076] The untrusted code may submit to the TCB 10 the parsed
representation of the trusted command by making a kernel call. The
kernel call causes the process supporting the untrusted subject to
switch to the all-powerful hardware state and to begin to execute
trusted code designed to handle kernel calls. This trusted code
recognizes the nature of the kernel call and calls the CC 14. The
CC 14 then obtains and copies the binary representation of the
trusted command into protected memory and puts it in a global data
structure that supports the user process associated with the
terminal in question.
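The hand-off performed by the CC 14 can be pictured with a minimal sketch (hypothetical names; the actual kernel-call interface is not specified in this application): the binary representation is copied out of untrusted memory into a TCB-owned structure keyed by the originating terminal.

```python
# Hypothetical sketch of the Command Conveyor hand-off: a kernel call from
# untrusted code deposits the parsed (binary) command into a TCB-owned
# buffer keyed by the originating terminal, so that later trusted-path
# processing reads only the protected copy.
protected_buffers = {}   # terminal id -> copied binary representation

def kernel_call_submit(terminal_id, binary_representation):
    # Copy, never alias: the TCB must not keep references into memory
    # that untrusted code can still modify.
    protected_buffers[terminal_id] = bytes(binary_representation)

kernel_call_submit("TTA3", b"\x01set_seclevel\x00SMITH")
assert protected_buffers["TTA3"] == b"\x01set_seclevel\x00SMITH"
```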
[0077] The VT 16 layer includes the trusted code which monitors all
terminal activity and checks for activation of the secure-attention
key. In one particular embodiment, the VT 16 layer also includes
code which tracks which physical terminal is associated with a
particular user or untrusted subject.
[0078] The specific manner of implementation of the above described
activities will be apparent to one skilled in the art having the
benefit of this disclosure.
[0079] Referring to FIGS. 1 and 3, "around" the TCB 10 is a general
untrusted operating system (UOS) 20. The UOS 20 generally comprises
untrusted code and may include a traditional operating system such
as Ultrix or VMS. Alternate constructions are envisioned where the
UOS 20 includes any untrusted code. It will be appreciated that the
system may include untrusted code that is separate from TCB 10 and
UOS 20.
[0080] Most user activity will occur in the UOS 20. In one
embodiment, a specific UOS 20 may run in each untrusted
subject.
[0081] 4.2 User Terminals and the Secure-Attention Key
[0082] Associated with the computer system are one or more user
terminals 30. These terminals 30 may be conventional in
construction, in that they allow the user to exchange information
with the computer system. Either "intelligent" or "dumb" terminals
may be used. Examples of such dumb terminals are the well-known VT
series 200 and 300 terminals. These terminals are available from
Digital Equipment Corporation or its authorized dealers.
[0083] A particularly useful terminal includes control programming
that is designed with security features in mind, such as is
described in a co-pending U.S. patent application entitled "Method
for Securing Terminal and Terminal Apparatus for Use with the
Method," Ser. No. 456,672, filed by Wei-Ming Hu, Clifford E. Kahn,
Andrew H. Mason, and Peter A. Sichel on Dec. 26, 1989 and to be
commonly assigned with this application.
[0084] A keyboard is attached to each user terminal. Each keyboard
includes a key employed as a secure-attention key. This key is
utilized to establish a trusted path between a user terminal and
the TCB 10. In one embodiment, the secure-attention key is an
"out-of-band key" for the UOS 20. It is desirable that the signal
sent by the terminal upon activation of the secure-attention key
generally not be one used by the applications running in the UOS 20.
An out-of-band key is convenient since it does not restrict the
traditional number of user keys and the signal is sent immediately,
even if other I/O is pending. For example, the secure-attention key
may be the BREAK key on the terminal keyboard.
[0085] In many terminals (such as the VT 2XX and VT 3XX families)
locking of the keyboard prevents the transmission of several key
signals. In these terminals, however, the BREAK key signal may be
transmitted, even from a locked keyboard. Thus it is generally
desirable to employ the BREAK key as the secure-attention key.
Also, the signal generated by activation of the BREAK key comes in
ahead of other characters in the keyboard buffer. A further
advantage of utilizing the BREAK key is that software running in
the UOS 20 cannot get the terminal to send that character.
[0086] 4.3 The Trusted Path and Direct Commands
[0087] The TCB 10 generally operates as a reference monitor.
[0088] Upon initial login or at other times when the user is not
connected to a process in the UOS 20 (e.g., an application
program), the TCB 10 responds to a comparatively limited series of
"direct commands," which may include a CONNECT command to connect
to an untrusted subject. When the user is connected to such an
untrusted subject, the TCB 10 checks for the secure-attention
signal; if a signal other than the secure-attention signal is
received, the TCB 10 passes the signal to the connected untrusted
subject.
[0089] As discussed above, in one specific embodiment the trusted
code in the VT layer 16 monitors all terminal activity. When the
code in the VT 16 notices that the BREAK key has been depressed,
the SSVR is signaled in a conventional manner (such as by advancing
an event count or activating a flag). Code in the SSVR 12
recognizes this signaling and establishes a trusted path between
the user terminal and itself. In this manner, the secure-attention
key signal is trapped by the TCB 10.
[0090] In certain security environments, general security
requirements mandate that the TCB 10 make displays when a trusted
path is established and exited by the user. In one embodiment, when
a trusted path is established, the TCB 10 sends a reset command
sequence to the physical user terminal 30 to assure that it is in a
known state. This is done in part to ensure that the information
displayed to the user by the TCB 10 is in the intended form, (e.g.,
correct font). A particularly useful sequence of commands to
implement the reset command sequence is described in the
aforementioned co-pending U.S. patent application by Hu et al.
[0091] When the secure-attention key is activated, other than in
connection with the issuance of a trusted command, the secure
server 12 displays a particular banner and prompt (for example:
"SSVR>" together with the user's process identifier, discussed
below), to indicate that the user is conversing interactively with
trusted code. If the secure-attention key was activated in
connection with the issuance of a trusted command, the secure
server 12 displays to the user some indication of what is to be
done, as discussed below.
[0092] When command execution is complete, or when the user
establishes or reactivates a session with the UOS 20, the TCB 10
issues a message to inform the user that he/she is breaking the
trusted path. When a session with the TCB is complete, the memory
in the physical terminal may be cleared by the TCB. This is done to
protect trusted information from both untrusted code and
unclassified persons who may later use the terminal. A particularly
advantageous method of clearing the terminal memory is described in
the aforementioned co-pending patent application.
[0093] The number of direct user commands available from the TCB 10
is preferably limited to reduce the complexity of the TCB 10.
Generally, a command is direct if:
[0094] a. one needs the command when one is not connected to an
untrusted subject;
[0095] b. end users depend on the command's correct behavior for
security (e.g., LOGOUT); or
[0096] c. the command must be issued when no user process in the
UOS 20 is running.
[0097] In most cases, the user will converse with the TCB 10
through the SSVR 12. In one embodiment the user conversing with the
SSVR 12 may be restricted to commands which allow him/her to
establish (connect) or end (disconnect) a computing session in the
general UOS 20. Further commands may be added, such as allowing the
user to view the existing UOS 20 sessions or change his or her
password, but it is generally desirable to limit the number of
available commands in the TCB 10. It is further desirable to ensure
simplicity in the syntax of these direct commands.
[0098] The simple syntax of the direct commands, and the
comparatively small amounts of code necessary to interpret these
commands, do not substantially increase the complexity of the TCB
10.
[0099] 4.4 Login Processing by the TCB
[0100] To initiate a computing session in the illustrative UOS 20,
the user may first login to the computing environment through the
SSVR 12. To initiate a login, the user activates the
secure-attention key (or performs some other action to gain the
attention of the SSVR 12). As discussed above, this establishes a
trusted path between the user terminal and the TCB 10. Alternate
embodiments are envisioned wherein other methods are employed to
establish a trusted path.
[0101] Once a trusted path has been established, the SSVR 12
prompts the user for his/her specific user name and password. The
various layers in the TCB 10 perform well-known security checks to
ensure that the user has the necessary access rights to login at
the terminal. If the user possesses the required access rights, a
computing session is established in the SSVR 12.
[0102] At this time, the user's security information is stored by
the TCB 10. The TCB 10 also stores in memory a unique identifier of
the user process. This information is maintained in memory by the
TCB 10 as long as the user has an active session.
[0103] 4.5 Connection to Untrusted Process
[0104] Because the number of direct user commands in the TCB 10 is
preferably limited, the user will generally choose to establish a
computing session with an untrusted subject. Such an untrusted
subject may include application programs residing in the UOS
20.
[0105] In one specific embodiment, when the user establishes a
computing session in the UOS 20, the user's physical terminal 30 is
associated with a virtual terminal of the UOS. The user converses
with the untrusted subject through a computing session within the
UOS 20. When such a connection with an untrusted subject is made,
the VT layer 16 of the TCB 10 stores in memory which physical
terminal 30 is associated with which virtual terminal in an
untrusted subject. In this manner, trusted commands received from
an untrusted subject may be attributed to the actual user at the
physical terminal 30. Once the user has established a computing
session within the UOS 20, normal computing activities may be
carried out.
[0106] A more detailed understanding of these activities may be
obtained through reference to FIGS. 4A and 4B. Activities indicated
by heavily bordered blocks are performed by the TCB 10, or trusted
code. Activities represented by lightly bordered blocks are
performed by untrusted code.
[0107] In FIG. 4A, the user terminal is represented as having
established a computing session 100 within the general UOS 20.
During the session, the user may attempt to execute various
commands. Utilities within the UOS 20 determine whether the issued
commands are trusted commands 50. If the command is not identified
by the UOS 20 as a trusted command, the UOS 20 attempts to perform
the command at process 60. In these cases the operation and
existence of the TCB 10 may have little effect on the command's
behavior as seen by the user.
[0108] 4.6 The Process Identifier
[0109] In one embodiment, a process identifier (process ID) is
associated with each user process at the time of login. This
process ID is a pseudo-randomly generated alpha-numeric tag which
is associated with the user throughout his/her computing
session.
[0110] The process ID is useful if the untrusted code can
temporarily disable or delay the effect of activation of the
secure-attention key. For example, untrusted code can send a reset
command to some terminals. While these terminals are resetting,
activation of the secure-attention key has no effect. Such a
technique may allow untrusted code to trick the user into thinking
a trusted path had been established when it had not been. The
process ID inhibits such trickery by allowing the user to
distinguish between actual and emulated trusted paths.
[0111] When the user first logs in, he/she will not yet know the
process ID and cannot detect a false trusted path. However, to
simulate a login dialogue, untrusted code would have to guess when
the user activated the secure-attention key, or somehow detect the
activation of the key even though the TCB could not detect it.
Guessing the activation time is impracticable, because users start
login dialogues at their own convenience and give no advance signal
to the TCB prior to activation of the secure-attention key.
Detecting the secure-attention key even though the TCB could not
detect it would be possible only by causing the key to yield a
signal different than the TCB expects. Methods to prevent such an
event are described in the co-pending application by Hu et al.,
referenced above.
[0112] With some terminals, the untrusted code may be able to
program the terminal so that the secure-attention key generates a
signal that is recognizable to the untrusted code, but not to the
TCB. In these cases, generation of the process ID at the time of
login would be unsatisfactory. Instead, the process ID may be
generated prior to user login. For example, in these embodiments,
the process ID may be generated when the user is registered with
the TCB 10 and stored in protected memory. In this embodiment the
user may be given a way of changing his/her process ID, just as the
user can generally change his/her password.
[0113] In one particular embodiment, when the user logs in the SSVR
12 pseudo-randomly generates a process ID and assigns this ID to
the specific user at the physical terminal. Methods of generating
pseudo random identifiers are generally known in the art. In one
embodiment, the random number initial seed is based on a hardware
(as opposed to a software) event. The algorithm for generating
successive random numbers should be computationally difficult to
invert.
[0114] This process ID is maintained in the TCB and is never
provided to untrusted code. Thus, untrusted code in the general UOS
20 is prevented from discovering the specific process ID assigned
to the user.
[0115] The user is responsible for noting and retaining his/her
assigned process ID. To promote user retention the process ID may
be generated as a pronounceable group of characters. Methods for
generating such pronounceable passwords are described in "A Random
Word Generator", MITRE Technical Report (MTR-3006), May, 1975, by
Morrie Gasser.
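The syllable-based approach can be illustrated with a minimal sketch. The alphabet, syllable structure, and function name below are illustrative assumptions and are not taken from the Gasser report:

```python
import secrets

CONSONANTS = "bdfgklmnprstvz"
VOWELS = "aeiou"

def pronounceable_id(syllables=3):
    """Generate a pronounceable tag from alternating consonant/vowel pairs.

    The secrets module draws from an OS-provided entropy source, so
    successive IDs are computationally difficult to predict or invert,
    as the specification requires.
    """
    return "".join(
        secrets.choice(CONSONANTS) + secrets.choice(VOWELS)
        for _ in range(syllables)
    )
```

Because each syllable alternates a consonant and a vowel, the resulting tag can be spoken aloud, which promotes user retention of the ID.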
[0116] The process ID is displayed to the user when a trusted path
is established between the user terminal and the SSVR 12.
Displaying the ID serves a useful security function as it inhibits
untrusted code from "spoofing" the user. Absent the process ID it
may be possible for an untrusted subject to emulate the SSVR 12. An
untrusted subject, emulating the SSVR 12, may be able to trick the
user into divulging security-relevant information or otherwise
compromising system security.
[0117] The process ID, known only to the user and the SSVR 12,
inhibits this type of "spoofing." Before the user responds to an
apparent SSVR 12 request, he/she will first verify that the proper
process ID is displayed. Since the process ID is known by only the
user and the SSVR 12, it is extremely difficult for an untrusted
subject, ignorant of the process ID, to fool the user into
believing that he/she is conversing with the SSVR 12.
[0118] FIG. 6 illustrates a flow diagram of this process. When the
user initially logs into the computer system through the SSVR 12, a
randomly generated process ID is assigned to the user. This
activity is represented by process 310. In one embodiment, this
process ID comprises alphabetic characters and is pronounceable.
The pronounceable nature promotes user retention of the ID. The
process ID is then stored in trusted memory by the SSVR 12 and
displayed to the user by the SSVR 12 at process 320.
[0119] During the course of the computing session, the TCB
determines if the user is operating through a trusted path. This
activity occurs at process 330. If the TCB determines that a
trusted path is not established, the process ID will not be
displayed to the user. This is noted at process 350.
[0120] The process ID is maintained in trusted code and memory, and
is inaccessible to untrusted subjects. Each time a trusted path is
established between the SSVR 12 and the user, the SSVR 12 displays
the user's process ID at process 340. By observing the displayed
ID, the user is assured that an actual trusted path has been
established.
[0121] 4.7 The Command Conveyor
[0122] As discussed above, the command conveyor (CC) 14 is a layer
in the TCB 10 which comprises part of the trusted code. The CC 14
is utilized to ensure proper communication of trusted commands
between the SSVR 12 and the process supporting the untrusted
subject. As discussed, such an untrusted subject may include a UOS
20.
[0123] Associated with each physical terminal 30 is a command
buffer. In some embodiments, a status code may be associated with
each terminal. In still other embodiments, one of five command
states is also associated with each physical terminal. The command
buffer is used to store the binary representation of the trusted
command. The status code may be employed to provide the resulting
execution status to the untrusted subject. The command states
generally comprise five possibilities:
[0124] (a) no command,
[0125] (b) command submitted,
[0126] (c) command in progress,
[0127] (d) ignore finish, and
[0128] (e) command done.
[0129] These states are used to indicate the various states that
may be encountered in the execution of a trusted command. FIG. 7 is
a state transition diagram of the command states. A detailed explanation of
these states and the statuses may be obtained from the following
discussion.
[0130] 4.8 Distribution of Processing of Trusted Commands
[0131] As discussed above, the TCB 10 "recognizes" two categories
of commands. For the purposes of this specification "commands"
include user instructions received through the use of command line
interfaces, menu interfaces, direct manipulation interfaces or the
like.
[0132] The first category includes "direct commands" issued while
the user is operating in the SSVR 12 layer of the TCB 10, i.e., is
interactively conversing with the TCB 10. As discussed above, it is
desirable to limit the number of direct commands available to the
user. The execution and processing of these direct commands is
carried out using traditional methods known in the art.
[0133] The second category includes "trusted commands." These
commands are issued by a user operating in a non-trusted computing
environment, such as in the general UOS 20. In one embodiment these
trusted commands are issued by a user operating in an untrusted
virtual machine. FIGS. 4A and 4B illustrate the activity necessary
to execute such a command.
[0134] If the user attempts to execute a trusted command, the
parser residing in the general UOS 20 parses the trusted command
string and converts it into a binary representation at process 70.
In parsing the command, the untrusted code in the UOS 20 checks
syntax and semantics, and prompts for missing parameters. Parsing
strategies are well known in the art and will not be discussed
herein.
[0135] From a security perspective, it is not required that the
code in the UOS 20 perform an accurate translation of the user's
commands into a parsed representation (binary representation). The
confirmation mechanism, described below, is designed to catch such
errors whether benign or malevolent.
[0136] Following parsing of the command, a process supporting the
UOS 20 issues a call to the TCB, submitting to the TCB the
identification of the terminal from which such command was issued,
at process 80. In one embodiment, the virtual terminal from which
the command was issued is submitted to the TCB. Such a call is
processed by the CC 14 in the TCB 10. The CC 14 then determines
which trusted subject submitted the command and copies the parsed
representation into protected memory at process 90. In one
embodiment this determination is made by asking VTerm to identify
the physical terminal associated with the UOS's virtual
terminal.
[0137] Such a submission of the binary representation will be
successful only if the TCB 10 determines that the submitting
terminal is connected to an active process.
[0138] Normally, in embodiments utilizing the command states, the
command state of the submitting physical terminal is no command,
which indicates that no trusted command from that physical terminal
has been submitted or is in progress. This state is represented by
state 400 in FIG. 7. When the state is no command the CC 14 saves
the binary representation in the command buffer, and sets the
physical terminal's state to command submitted, represented by
transition state 410. Otherwise, the submission fails.
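The five command states, together with the submission, cancellation, and fetch transitions described in this specification, may be sketched as follows. This is a simplified, hypothetical model; the class and method names are illustrative:

```python
from enum import Enum, auto

class CmdState(Enum):
    NO_COMMAND = auto()
    COMMAND_SUBMITTED = auto()
    COMMAND_IN_PROGRESS = auto()
    IGNORE_FINISH = auto()
    COMMAND_DONE = auto()

class CommandConveyor:
    """Hypothetical per-terminal command buffer and state machine."""

    def __init__(self):
        self.state = CmdState.NO_COMMAND
        self.buffer = None

    def submit(self, binary_representation):
        # A submission succeeds only when no command is pending.
        if self.state is not CmdState.NO_COMMAND:
            return False
        self.buffer = binary_representation
        self.state = CmdState.COMMAND_SUBMITTED
        return True

    def cancel(self):
        # Untrusted code may cancel only before the SSVR fetches
        # the command; afterward, execution proceeds regardless.
        if self.state is CmdState.COMMAND_SUBMITTED:
            self.state = CmdState.NO_COMMAND
            self.buffer = None
            return True
        return False

    def fetch(self):
        # Called on behalf of the SSVR once a trusted path is up.
        if self.state is not CmdState.COMMAND_SUBMITTED:
            return None
        self.state = CmdState.COMMAND_IN_PROGRESS
        return self.buffer
```

In this sketch a second submit while a command is pending fails, mirroring the rule that the submission fails unless the terminal's state is no command.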
[0139] When the untrusted subject running in untrusted code
receives indication from the TCB 10 that the command has been
submitted, it requests the user to activate the secure-attention
key. In one embodiment, the untrusted code in the UOS 20 will
request that the user activate the BREAK key.
[0140] If the user activates a cancellation key, for example Ctrl-Y
or Ctrl-C, instead of activating the secure-attention key, code in
the general UOS 20 may cancel the trusted command. This can be
accomplished by the untrusted code issuing a kernel call (i.e., a
request by the untrusted subject of the security kernel) and
informing the CC 14 that the command is to be aborted. If the CC 14
finds that the command has not been sent to the SSVR 12, the
command is canceled. Similarly, if the user process in the UOS 20
is killed, the process rundown code may cancel the trusted command
request. However, if the CC 14 finds that the command has already
been sent, it proceeds, because the user is already executing the
command.
[0141] In embodiments employing command states, cancellation of the
trusted command request results in the command state being changed
from command submitted to no command. This process is indicated in
FIG. 7.
[0142] If the user attempts to establish a trusted path at the same
time the trusted command is sent to the TCB 10, then the TCB 10
must arbitrate; either of these requests may be processed first. If
the secure-attention signal is processed first, the TCB 10 may
defer the command submission, and a connection with the SSVR 12 is
established. If the submission is processed first, then the user
sees the confirmation screen as is described below.
[0143] If, when prompted, the user activates the secure-attention
key, a trusted path is established between the user's physical
terminal 30 and the SSVR 12 in the TCB 10. This configuration is
illustrated by the dark line in FIG. 3. When the VT layer 16
detects the secure-attention (BREAK) key, it notifies the SSVR 12
which then establishes a trusted path in the trusted
path-supporting process associated with the user's physical
terminal 30.
[0144] Once the trusted path is established, the TCB 10 maintains
exclusive control over the physical terminal 30. In one embodiment,
untrusted code in the UOS 20 waits to be notified that the command
has been completed. Thus, the user essentially bypasses the
untrusted code and converses only with trusted code. As a result, if the
user's process in the general UOS 20 is killed, or the user's
process terminates, or the general UOS 20 crashes, the trusted
command may still be executed. The user will not be informed of any
of these activities until the trusted command has been completed or
abandoned.
[0145] Following establishment of the trusted path, the security
requirements of the system require interaction between the user and
the SSVR 12. This is necessary to execute commands which require
the establishment of a trusted path. The Orange Book criteria,
and/or the system's security goals, will determine which commands
require this interaction as is generally discussed in Section 4.9
below.
[0146] The establishment of a trusted path in these instances is
useful for several reasons. First, it allows the secure server to
identify the specific user issuing the command. Second, the trusted
path allows the user to review and approve (or reject) the
requested action in a trusted fashion. This reduces the possibility
of malicious untrusted software altering a command request, and
allows for the detection of parsing errors.
[0147] FIG. 4B illustrates the activity occurring after control has
passed to the TCB 10. As discussed above, the TCB 10 first
establishes a trusted path at process 120. The TCB then checks to
see if a trusted command has been submitted by the user. Untrusted
code may then wait for the command to finish.
[0148] Once a trusted path has been established, the SSVR 12 will
call the CC 14 to return the binary representation at process 130.
In one embodiment, the CC 14 will first determine the command state
of the physical user terminal 30. In a still further embodiment if
the command state is command submitted 410 (FIG. 7), the CC 14 will
return to the SSVR 12 the binary representation, and change the
command state to command in progress 420. If the state is no
command 400 or command done 430, the call will fail, and the SSVR
12 will display the SSVR prompt (SSVR>) to the user. An initial
command state of command in progress 420 or ignore finish 440
implies a bug in the SSVR 12 and may result in the CC 14 crashing
the TCB 10.
[0149] In these embodiments, if the command state was command
submitted 410, then the CC 14 will return to the SSVR 12 the binary
representation of the requested command.
[0150] 4.9 Display of Command to User
[0151] Once the parsed command has been returned to the SSVR 12,
the SSVR 12 will determine if the command requires confirmation at
process 140 (FIG. 4B). Confirmation should generally be required
for commands which request modification of information. Commands
that do not alter the information, such as the viewing of
information, should not generally require confirmation. For
purposes of confirmation, redirecting the display into a file
should be considered as viewing information.
[0152] The decision of whether a given command should require
activation of the secure-attention key to execute depends on the
basic security policy of the system. In one embodiment,
secure-attention trusted commands are a mechanism for use by system
managers, security managers, operators, and holders of special
privileges.
[0153] Generally, a command which is not a direct command is a
secure-attention command if:
[0154] a. it modifies a security kernel database in a
security-relevant way; or
[0155] b. it displays information on which a systems manager,
security manager, or operator will base a security decision.
[0156] If the command is one that does not require confirmation, the
TCB 10 enforces the system's security policy and determines whether the
user has the necessary access rights to execute the command at
process 145. If this is the case the TCB 10 attempts to execute the
binary representation at process 150. Since certain commands, e.g.
commands that merely display information, do not change the state
of the system, user confirmation is not required. There is no
security problem in not confirming these commands since the system
only allows users to view information to which they have
access.
[0157] There are certain commands that change the state of the
system yet may not require confirmation. Some commands may allow the
user to turn on a file to receive the user's redirected output.
These files are sometimes known in the art as "listing files."
Generally, the turning on of a listing file is a confirmed
command.
[0158] In situations where a listing file has been turned on (i.e.,
the output is redirected into this file) commands which normally
display information may, by writing to the listing file, alter the
state of the system. In these cases, confirmation of these commands
is generally not required because the flow of information is
constrained by the TCB's security rules.
[0159] When a command requesting the display of information is
executed by the TCB 10, the SSVR 12 responds by displaying a
complete representation of the requested command as well as the
requested information, i.e. the standard human-readable
representation of the command. In one embodiment, the SSVR 12
displays the function code, command modifiers, and command
parameters all in unambiguous notation. If employed, the process ID
may also be displayed.
[0160] Thus, if the user's command is improperly parsed, or
modified in an unauthorized manner, the user will be able to
observe the difference. This inhibits an untrusted subject from
fooling a user into believing that he or she is observing the
results of a submitted command, when in fact that command had
actually been altered.
[0161] If the command requested is one for which confirmation is
required, the SSVR 12 displays a standard human-readable
representation of what is about to be done to the user terminal at
process 170. The code representing each command should produce a
different display, unless the commands have the same effect. As
previously discussed, the re-display of the action that a binary
representation will perform is less complex than the parsing of the
original command. This is in part because by definition there is
only one standard human-readable representation for each parsed
command.
[0162] By parsing the command in untrusted code, and verifying the
request in trusted code, the overall complexity of the TCB 10 is
decreased. The display of "what is about to be done" may of course
vary from one command to another.
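The re-display step can be illustrated with a minimal sketch, assuming a hypothetical packet layout with a function code, modifiers, and parameters. The dictionary keys and formatting conventions here are illustrative, not taken from the specification:

```python
def render_command(packet):
    """Render a parsed (binary) command in one canonical, unambiguous form.

    Re-display is simpler than parsing because, by definition, each
    parsed command has exactly one standard human-readable
    representation; sorting the modifiers and parameters keeps the
    output deterministic.
    """
    parts = [packet["function"].upper()]
    parts += sorted("/" + m.upper() for m in packet["modifiers"])
    parts += [f"{k.upper()}={v}" for k, v in sorted(packet["parameters"].items())]
    return " ".join(parts)
```

For example, a packet with function `adduser`, modifier `audit`, and parameter `name=JOHN.DOE` would always render as `ADDUSER /AUDIT NAME=JOHN.DOE`, so any unauthorized alteration of the binary representation becomes visible to the user.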
[0163] If the command's purpose is to add a record to a database or
create an object, the display may contain what the new record will
contain, or what the attributes of the object will be if the
command is confirmed.
[0164] If a particular database or object is to be modified, the
display may show what the updated data will look like, or what the
attributes of the object will be if the command is confirmed. For
commands which remove an existing object, or delete an existing
file, the display may show the record or object.
[0165] Generally, for both adding/creating and modifying, the data
packet representing the submitted command may contain fields for
the object's attributes, such as its name, its size and its
protection. For each of these fields, the packet may also have a
flag indicating whether the field is meaningful. These flags
indicating meaningful fields may be known as "field selectors."
[0166] In add/create commands, the TCB may supply default values
for unselected fields. In commands which modify a database entry or
object, unselected fields may retain their previous values.
[0167] To aid the user, the selected fields may be highlighted when
displayed for confirmation. Such highlighting may be accomplished
by methods known in the art (e.g., by using escape sequences that
begin and end reverse video). This highlighting draws the user's
attention to the fields that have non-default or changed values and
helps to protect the user from tricks by untrusted code.
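The highlighting of selected fields might be sketched as follows, using the ANSI/DEC reverse-video escape sequences. The field layout and function name are illustrative assumptions:

```python
REVERSE_ON = "\x1b[7m"    # ANSI SGR: begin reverse video
REVERSE_OFF = "\x1b[27m"  # ANSI SGR: end reverse video

def confirmation_lines(fields, selectors):
    """Format each packet field, highlighting the selected ones.

    `fields` maps field names to values; `selectors` is the set of
    field names whose values were explicitly supplied (the "field
    selector" flags in the packet).
    """
    lines = []
    for name, value in fields.items():
        text = f"{name}: {value}"
        if name in selectors:
            text = REVERSE_ON + text + REVERSE_OFF
        lines.append(text)
    return lines
```

Only fields flagged by a selector are wrapped in reverse video, so the user's attention goes directly to the non-default or changed values.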
[0168] When an audit record is produced of the confirmation display
via the sink (discussed below) the reverse-video escape sequences
may be replaced with dashes on the next line. Example:
[0169] Name:JOHN.DOE
            --------
[0170] This change permits the audit record to be properly
displayed on a wide variety of devices, even simple line
printers.
[0171] Alternative embodiments are envisioned in which the TCB
checks each field's value: for add/create commands the fields that
have non-default values are highlighted; and for modify commands
the fields that are changing are highlighted. This "field
comparison" approach generally results in less highlighting.
[0172] Generally, for every command, every packet field is
displayed on the screen, except for fields ignored by the
particular command.
[0173] In the cases when a database will not be modified, the
display generated by the SSVR 12 may express what action is to be
done. In this manner, the possibility of an erroneous confirmation
is reduced since the user is requested to confirm the results, not
the command.
[0174] The SSVR 12 may also display a list of user protection
attributes so that the user knows what role he or she is
exercising. The user is required to confirm explicitly (Yes or No)
the action requested. In a general embodiment, the TCB 10 displays
for confirmation the results which would occur if the command is
executed.
[0175] Generally, any information in the binary representation that
is displayed is first validated. This information is checked once
to ensure that certain requirements are met, e.g., to ensure that
the counts of array elements are within array bounds (since
out-of-bounds conditions could provide an opportunity for entry
into the TCB by illicit software).
[0176] Before displaying the contents of a packet containing the
binary representation of the command, anything that could cause the
formatting code to malfunction may be validated. Generally, such
candidates for validation could include anything used to index into
something else. Specific examples include the PASCAL language's
implementation of varying strings (a count followed by an array of
characters; the count is checked to be less than or equal to the
character array size), and numbers that represent keywords (their
value is checked before it is used to index into an array of
keyword text strings). Further methods of validation are well known
in the art and will not be discussed herein.
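The two validation examples above can be sketched as follows. The function signature and field names are illustrative; this is not the PASCAL implementation described:

```python
def validate_packet(count, chars, keyword_index, keyword_table_size):
    """Validate index-like fields before they are used for formatting.

    A varying string's count must not exceed its character array, and
    a number representing a keyword must index within the keyword text
    table; an out-of-bounds value could otherwise provide an entry
    point for illicit software.
    """
    if not 0 <= count <= len(chars):
        raise ValueError("varying-string count exceeds array bounds")
    if not 0 <= keyword_index < keyword_table_size:
        raise ValueError("keyword number outside keyword table")
    # Only after both checks is the count used to slice the string.
    return chars[:count]
```

The checks run once, before any displayed value is derived from the packet contents.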
[0177] In one embodiment, the replies that the user may make to the
confirmation prompt are "yes", "no", Ctrl-C, Ctrl-Y, or the BREAK
key. No default responses exist in this embodiment. For each user
response, the secure server may display a message restating the
response. If the response was "yes" or "no", the secure server 12
displays an indication of the computing session in the UOS 20 when
it is to be resumed after command completion.
[0178] If, however, the user activates the Ctrl-Y, Ctrl-C or the
BREAK key, the command will be aborted. In one embodiment, the CC
then changes the command status code to negative. This may be done
to inform the submitting process that the command has been aborted.
In these cases a direct session with the SSVR 12 will be
established at process 300. In these instances, the user must
explicitly issue a resume command to return to the session in the
UOS 20.
[0179] If the user confirms the results or command, by replying
with "yes", or if the command does not require confirmation, the
TCB 10 enforces the system's security policy at process 145. At
this process the TCB 10 determines if the user has the necessary
access privileges to perform the requested activity. If the user
has the required access rights, the SSVR 12, along with other code
in the TCB 10 attempts to execute the command in a conventional
manner.
[0180] Upon attempted execution of the command, the SSVR 12 may
inform the CC 14 of the execution status. In some high-security
situations, concern for information flow from trusted to untrusted
code will prevent informing the CC 14. The CC 14 may change the
command status to indicate the SSVR 12 response. In addition, in
alternate embodiments, the CC 14 changes the command state from
command in progress 420 (FIG. 7) to command done 430. In these
cases, the CC 14 notifies the submitting untrusted subject by
traditional methods (e.g., by advancing an eventcount for that
process). The SSVR 12 then reestablishes the terminal connection
with the computing session in the UOS 20 from which the command was
executed.
[0181] In embodiments utilizing the status code, if the user
responds with "no", the secure server informs the session in the
UOS 20 that the command was not executed (negative status code). If
command states are also employed, the SSVR 12 changes the command
state to command done and re-establishes the user's session in the
general UOS 20 at process 160.
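The status-code and command-state bookkeeping described in paragraphs [0180] and [0181] can be illustrated with a minimal sketch. All class, field, and status-code values below are hypothetical; the actual interfaces of the CC 14 and SSVR 12 are implementation-specific:

```python
from enum import Enum

class CommandState(Enum):
    IN_PROGRESS = "command in progress"   # state 420 in FIG. 7
    DONE = "command done"                 # state 430

class TrustedCommand:
    """Hypothetical record tracked by the CC 14 for a submitted command."""
    def __init__(self, name):
        self.name = name
        self.state = CommandState.IN_PROGRESS
        self.status = 0          # 0 = pending; positive = executed; negative = not executed
        self.eventcount = 0      # advanced to wake the submitting untrusted process

def complete_command(cmd, confirmed, has_access):
    """Run the confirm -> policy check -> execute -> notify flow sketched above."""
    if not confirmed:
        cmd.status = -1          # negative status: user replied "no" or aborted
    elif not has_access:
        cmd.status = -2          # hypothetical code: security policy denied the request
    else:
        cmd.status = 1           # command executed (actual execution stubbed out)
    cmd.state = CommandState.DONE
    cmd.eventcount += 1          # traditional wake-up notification for the submitter
    return cmd.status
```

The untrusted submitting process would observe only the eventcount advance and the final status code, consistent with the restricted information flow from trusted to untrusted code noted above.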
[0182] If the trusted command requested by the user is a command
which affects one of the system databases, and the database record
was changed by some other process or user between the time the
confirmation display appeared and the current user confirmed the
command, then the SSVR 12 displays a message indicating that a
change has been made and requests reconfirmation. In this case, the
SSVR 12 waits for a response as if it were the first time the
information was displayed.
[0183] To ensure that if two users attempt to modify the same
element at the same time, only one request is rejected, the TCB may
utilize identifiers known as "modification counts"
(mod-counts).
[0184] Each element of key databases (or registries) may contain a
modification count. This mod-count is advanced after each
modification and thus identifies how many modifications have been
made to the database element. Thus, when a user attempts to modify
a database element, the SSVR 12 first obtains that database
element's mod-count. After the user has confirmed the request, the
TCB 10 checks the mod-count in the request against the current
mod-count.
[0185] A different mod-count indicates that a change has been made
to the database element, and the user is asked to reconfirm the
request since the result previously confirmed by the user is not
what the result would actually be.
[0186] To deter malicious users or subjects from preventing the
execution of trusted commands, two measures may be taken. First,
certain access rights may be required to change certain database
elements. Second, the mod-count should not be advanced when a
modification is requested that does not change the database
element. In this manner, attempts by malicious subjects to prevent
execution of a trusted command, by rapidly and spuriously causing
the mod-count to be incremented, will be inhibited.
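The mod-count scheme of paragraphs [0183] through [0186] is a form of optimistic concurrency control, and can be sketched as follows. The class and function names are hypothetical illustrations, not the actual TCB interfaces:

```python
class DatabaseElement:
    """A database or registry element carrying a modification count (mod-count)."""
    def __init__(self, value):
        self.value = value
        self.mod_count = 0

    def modify(self, new_value):
        # Per paragraph [0186], the mod-count is advanced only when the value
        # actually changes, so a malicious subject cannot force endless
        # reconfirmation by issuing rapid no-op modifications.
        if new_value != self.value:
            self.value = new_value
            self.mod_count += 1

def confirm_and_apply(element, observed_mod_count, new_value):
    """Apply a confirmed change only if no other user or process modified the
    element after the confirmation display was shown; otherwise the user must
    reconfirm, as if the information were displayed for the first time."""
    if element.mod_count != observed_mod_count:
        return "reconfirm"
    element.modify(new_value)
    return "applied"
```

A concurrent modification between display and confirmation is then detected as a mod-count mismatch, and only the later of two simultaneous requests is rejected.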
[0187] Once activity in the SSVR 12 is completed, the user's
session in the general UOS 20 is reestablished. In one embodiment,
this may be accompanied by the CC 14 informing the untrusted
subject of the status of the submitted command. The untrusted code
in the general UOS 20 then wakes up the untrusted process running
in the UOS 20 at process 160. In one embodiment, before the TCB 10
passes the user to the untrusted subject, the SSVR 12 prompts the
user to activate a carriage return. This step is necessary to
inform the user that he/she is being placed into an untrusted
environment.
[0188] 4.10 The SSVR as an Audit Recorder
[0189] In one embodiment, the SSVR 12 audits (i.e., creates an
audit trail by writing to an audit or log file) at the command
level. For direct commands the SSVR 12 audits both the command line
and the final status of the command. For trusted commands
(secure-attention commands), the SSVR 12 audits both the
confirmation display and the final status of the command.
[0190] To record the confirmation displays, the SSVR 12 defines a
portion of memory to store the strings of information sent by the
SSVR 12 to the user's terminal. In one embodiment, this portion of
memory is known as the "sink." Only the strings that comprise the
SSVR's confirmation screens are placed in the sink.
[0191] Strings may be appended to the sink only when the sink is
opened. The sink is opened when the SSVR 12 begins to service a
trusted command. The sink may be closed after the user has
responded to the confirmation prompt, or before the display of
certain requested information. The information stored in the sink
does not include the user input to any prompts.
[0192] The information stored in the sink is maintained for
auditing purposes and may be utilized according to traditional
auditing practices.
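The sink described in paragraphs [0190] through [0192] can be sketched as a simple buffer interposed on the terminal output path. This is an illustrative assumption about its structure; the patent does not specify the actual data layout:

```python
class AuditSink:
    """Hypothetical buffer for the confirmation strings the SSVR 12 sends to
    the user's terminal. User input to prompts is never recorded."""
    def __init__(self):
        self.strings = []
        self.is_open = False

    def open(self):
        # Opened when the SSVR begins to service a trusted command.
        self.is_open = True

    def close(self):
        # Closed after the user responds to the confirmation prompt, or
        # before the display of certain requested information.
        self.is_open = False

    def send_to_terminal(self, text):
        # Output passes through here; only strings comprising the SSVR's
        # confirmation screens are appended, i.e., while the sink is open.
        if self.is_open:
            self.strings.append(text)
        return text
```

On this sketch, the audit trail at the command level would pair the sink's contents (the confirmation display) with the final status of the command.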
[0193] 4.11 Confirmation by Others than the Submitting User
[0194] In some situations it may be desirable for users other than
the submitting user to confirm a secure-attention command. Such a
situation may arise if there is a certain application or command
file that includes a privileged command but needs to be run by
a large number of users. In cases where it may be impracticable
to grant all of the users the necessary access rights, it may be
possible to grant one user the necessary rights, and to allow
him/her to confirm this command for the other users.
[0195] Such non-submitting user confirmation may be accomplished by
having the code in the untrusted subject lie about which user is
submitting the command. For example, the code in the UOS 20 may
ensure that certain commands are always sent to the TCB 10 as if
they had been submitted by a selected user.
[0196] One purpose of this selected user would be to confirm these
certain commands for the other users. After a non-selected user
submits one of these commands, the selected user will be notified
that there is a submitted command waiting. The selected user may
then observe the confirmation display. If the selected user
determines that the submitted command was legitimate, he/she may
confirm the command. Otherwise, the original user's attempt to
execute the command fails.
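The relabeling performed by the untrusted code in paragraphs [0195] and [0196] can be sketched as follows. The command names, user names, and record fields are all hypothetical:

```python
# Untrusted-side table of commands that must always be sent to the TCB as if
# submitted by the selected user (names are illustrative assumptions).
PRIVILEGED_COMMANDS = {"set_clearance", "rotate_keys"}
SELECTED_USER = "security_officer"

pending = []   # queue of commands awaiting the selected user's confirmation

def submit(command, submitting_user):
    """Relabel a privileged command so the selected user must confirm it."""
    effective_user = SELECTED_USER if command in PRIVILEGED_COMMANDS else submitting_user
    entry = {"command": command, "on_behalf_of": submitting_user,
             "submitted_as": effective_user, "confirmed": None}
    if effective_user != submitting_user:
        pending.append(entry)    # selected user is notified of the waiting command
    return entry

def confirm(entry, confirming_user, approve):
    """Only the user the command was submitted as may confirm or reject it."""
    if confirming_user != entry["submitted_as"]:
        return False             # original submitter cannot confirm for themselves
    entry["confirmed"] = approve
    return True
```

This keeps the access-rights grant confined to the one selected user while still letting many users initiate the command.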
[0197] 4.12 Implementation Approaches
[0198] The method for executing trusted commands described above
may be implemented through the use of appropriate software. The
actual design and coding is a matter of routine for those skilled
in the art having the benefit of this disclosure. Such a design
will of course vary with the hardware platform and other factors
associated with the specific desired implementation.
[0199] FIG. 5 illustrates one such method for implementing the
present invention. A programmable machine comprises a central
processing unit (CPU) 210 and an input/output (I/O) unit 220.
The I/O unit 220 is capable of reading and transmitting stored
programming instructions from known storage media 230. The CPU 210
is capable of processing these instructions and executing the
programming steps represented on media 230. Such systems are well
known in the art and will not be described further.
[0200] To implement the invention, the necessary programming steps
are stored on storage media 230 by traditional methods. The storage
media 230 is then read by the I/O unit 220 and the programming
instructions are transmitted to the CPU 210. The CPU 210 reads and
interprets the instructions, and may thereafter carry out the
method of the invention.
* * * * *